Merge branch '1.7' into 1.8
diff --git a/INSTALL.md b/INSTALL.md
index 32f74ca..ae31de3 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -142,9 +142,9 @@
 
     ./bin/accumulo shell -u root
 
-Use your web browser to connect the Accumulo monitor page on port 50095.
+Use your web browser to connect to the Accumulo monitor page on port 9995.
 
-    http://<hostname in conf/monitor>:50095/
+    http://<hostname in conf/monitor>:9995/
 
 When finished, use the following command to stop Accumulo.
 
diff --git a/README.md b/README.md
index f3d7bca..fd57394 100644
--- a/README.md
+++ b/README.md
@@ -29,32 +29,28 @@
 
 To install and run an Accumulo binary distribution, follow the [install][2]
 instructions.
-  
+
 Documentation
 -------------
 
-Accumulo provides the following documentation :
+Accumulo has the following documentation, which is viewable on the [Accumulo website][1]
+using the links below:
 
- * **User Manual** : In-depth developer and administrator documentation.
- * **Examples** : Code with corresponding readme files that give step by step
-                  instructions for running example code.
+* [User Manual][10] - In-depth developer and administrator documentation.
+* [Examples][11] - Code with corresponding README files that give step-by-step
+instructions for running each example.
 
-This documentation is available on the [Accumulo site][1].  In the source and
-binary distributions of Accumulo, the documentation is at different locations.
+This documentation can also be found in Accumulo distributions:
 
-In the Accumulo binary distribution, all documentation is in the `docs`
-directory.  The binary distribution does not include example source code, but
-it does include a jar with the compiled examples.   This examples jar makes it
-easy to step through the example readmes, after following the [install][2]
-instructions.
+* **Binary distribution** - The User Manual can be found in the `docs` directory. The
+Example READMEs can be found in `docs/examples`. While the source for the Examples is
+not included, the distribution has a jar with the compiled examples. This makes it easy
+to run them after following the [install][2] instructions.
 
-In the Accumulo source, documentations is found at the following locations.
+* **Source distribution** - The [Example Source][14], [Example READMEs][15], and
+[User Manual Source][16] can all be found in the source distribution.
 
- * [Example Source](examples/simple/src/main/java/org/apache/accumulo/examples/simple)
- * [Example Readmes](docs/src/main/resources/examples)
- * [User Manual Source](docs/src/main/asciidoc)
-
-Building 
+Building
 --------
 
 Accumulo uses [Maven][9] to compile, [test][3], and package its source.  The
@@ -75,7 +71,7 @@
 The public Accumulo API is composed of :
 
 All public types in the following packages and their subpackages excluding
-those named *impl*, *thrift*, or *crypto*. 
+those named *impl*, *thrift*, or *crypto*.
 
    * org.apache.accumulo.core.client
    * org.apache.accumulo.core.data
@@ -89,7 +85,7 @@
 
 The following regex matches imports that are *not* Accumulo public API.  This
 regex can be used with [RegexpSingleline][13] to automatically find suspicious
-imports in a project using Accumulo. 
+imports in a project using Accumulo.
 
 ```
 import\s+org\.apache\.accumulo\.(.*\.(impl|thrift|crypto)\..*|(?!core|minicluster).*|core\.(?!client|data|security).*)
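The regex can be sanity-checked from the command line before wiring it into Checkstyle. A small sketch, assuming GNU grep built with PCRE support (`-P`), since the `(?!...)` lookaheads are not valid POSIX ERE; the sample import lines are illustrative only:

```shell
# The README's pattern for imports that are outside the Accumulo public API.
REGEX='import\s+org\.apache\.accumulo\.(.*\.(impl|thrift|crypto)\..*|(?!core|minicluster).*|core\.(?!client|data|security).*)'

# Returns success (0) when the given import line is NOT public API.
is_suspicious () {
  printf '%s\n' "$1" | grep -Pq "$REGEX"
}

# impl packages are flagged; core.client is public API and passes.
is_suspicious "import org.apache.accumulo.core.client.impl.ClientContext" && echo suspicious
is_suspicious "import org.apache.accumulo.core.client.Connector" || echo ok
```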
@@ -132,13 +128,18 @@
 [1]: http://accumulo.apache.org
 [2]: INSTALL.md
 [3]: TESTING.md
-[4]: http://research.google.com/archive/bigtable.html
-[5]: http://hadoop.apache.org
-[6]: http://zookeeper.apache.org
-[7]: http://thrift.apache.org/
-[8]: http://accumulo.apache.org/notable_features.html
-[9]: http://maven.apache.org/
-[12]: http://semver.org/spec/v2.0.0.html
+[4]: https://research.google.com/archive/bigtable.html
+[5]: https://hadoop.apache.org
+[6]: https://zookeeper.apache.org
+[7]: https://thrift.apache.org
+[8]: https://accumulo.apache.org/notable_features
+[9]: https://maven.apache.org
+[10]: https://accumulo.apache.org/latest/accumulo_user_manual
+[11]: https://accumulo.apache.org/latest/examples
+[12]: https://semver.org/spec/v2.0.0.html
 [13]: http://checkstyle.sourceforge.net/config_regexp.html
+[14]: examples/simple/src/main/java/org/apache/accumulo/examples/simple
+[15]: docs/src/main/resources/examples
+[16]: docs/src/main/asciidoc
 [java-export]: http://www.oracle.com/us/products/export/export-regulations-345813.html
 [bouncy-faq]: http://www.bouncycastle.org/wiki/display/JA1/Frequently+Asked+Questions
diff --git a/TESTING.md b/TESTING.md
index de484ee..03ee687 100644
--- a/TESTING.md
+++ b/TESTING.md
@@ -47,6 +47,22 @@
 resources, at least another gigabyte of memory over what Maven itself requires. As such, it's recommended to have at
 least 3-4GB of free memory and 10GB of free disk space.
 
+## Performance tests
+
+Performance tests are a small subset of the integration tests which are not activated by default. These tests allow
+developers to write tests that specifically exercise expected performance, which may depend on the resources
+available on the host machine. Normal integration tests should be capable of running anywhere with a lower bound on
+available memory.
+
+These tests are designated using the JUnit Category annotation with the `PerformanceTest` interface in the
+accumulo-test module. See the `PerformanceTest` interface for more information on how to use it to write your
+own performance tests.
+
+To invoke the performance tests, activate the `performanceTests` Maven profile in addition to the integration-test
+or verify Maven lifecycle phase. For example, `mvn verify -PperformanceTests` would invoke all of the integration
+tests: both the normal integration tests and the performance tests. There is presently no way to invoke only the
+performance tests without the rest of the integration tests.
+
 ## Accumulo for testing
 
 The primary reason these tests take so much longer than the unit tests is that most are using an Accumulo instance to
diff --git a/assemble/bin/accumulo b/assemble/bin/accumulo
index a688879..91298d1 100755
--- a/assemble/bin/accumulo
+++ b/assemble/bin/accumulo
@@ -105,8 +105,9 @@
 case "$1" in
 master)  export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} ${ACCUMULO_MASTER_OPTS}" ;;
 gc)      export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} ${ACCUMULO_GC_OPTS}" ;;
-tserver) export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} ${ACCUMULO_TSERVER_OPTS}" ;;
+tserver*) export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} ${ACCUMULO_TSERVER_OPTS}" ;;
 monitor) export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} ${ACCUMULO_MONITOR_OPTS}" ;;
+shell)   export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} ${ACCUMULO_SHELL_OPTS}" ;;
 *)       export ACCUMULO_OPTS="${ACCUMULO_GENERAL_OPTS} ${ACCUMULO_OTHER_OPTS}" ;;
 esac
 
@@ -150,10 +151,23 @@
 # Export the variables just in case they are not exported
 # This makes them available to java
 export JAVA_HOME HADOOP_PREFIX ZOOKEEPER_HOME LD_LIBRARY_PATH DYLD_LIBRARY_PATH
+
+# Strip the instance from $1
+APP=$1
+INSTANCE="1"
+if [[ "$1" =~ .*-.* ]]; then
+  APP=`echo $1 | cut -d"-" -f1`
+  INSTANCE=`echo $1 | cut -d"-" -f2`
+
+  # Rewrite the input arguments
+  set -- "$APP" "${@:2}"
+fi
+
 #
 # app isn't used anywhere, but it makes the process easier to spot when ps/top/snmp truncate the command line
 JAVA="${JAVA_HOME}/bin/java"
 exec "$JAVA" "-Dapp=$1" \
+   "-Dinstance=$INSTANCE" \
    $ACCUMULO_OPTS \
    -classpath "${CLASSPATH}" \
    -XX:OnOutOfMemoryError="${ACCUMULO_KILL_CMD:-kill -9 %p}" \
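The instance-suffix handling added above (e.g. `tserver-2` becomes app `tserver`, instance `2`) can be sketched standalone; `parse` is a hypothetical helper name used only for this illustration:

```shell
# Sketch of the suffix convention: "tserver-2" -> APP=tserver, INSTANCE=2,
# while a bare service name keeps the default instance of 1.
parse () {
  APP=$1
  INSTANCE="1"
  if [[ "$1" =~ .*-.* ]]; then
    APP=$(echo "$1" | cut -d"-" -f1)
    INSTANCE=$(echo "$1" | cut -d"-" -f2)
  fi
}

parse "tserver-2" && echo "$APP/$INSTANCE"   # tserver/2
```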
diff --git a/assemble/bin/bootstrap_config.sh b/assemble/bin/bootstrap_config.sh
index 719aff9..86c827a 100755
--- a/assemble/bin/bootstrap_config.sh
+++ b/assemble/bin/bootstrap_config.sh
@@ -134,6 +134,7 @@
 _1GB_monitor="-Xmx64m -Xms64m"
 _1GB_gc="-Xmx64m -Xms64m"
 _1GB_other="-Xmx128m -Xms64m"
+_1GB_shell="${_1GB_other}"
 
 _1GB_memoryMapMax="256M"
 native_1GB_nativeEnabled="true"
@@ -148,6 +149,7 @@
 _2GB_monitor="-Xmx128m -Xms64m"
 _2GB_gc="-Xmx128m -Xms128m"
 _2GB_other="-Xmx256m -Xms64m"
+_2GB_shell="${_2GB_other}"
 
 _2GB_memoryMapMax="512M"
 native_2GB_nativeEnabled="true"
@@ -162,6 +164,7 @@
 _3GB_monitor="-Xmx1g -Xms256m"
 _3GB_gc="-Xmx256m -Xms256m"
 _3GB_other="-Xmx1g -Xms256m"
+_3GB_shell="${_3GB_other}"
 
 _3GB_memoryMapMax="1G"
 native_3GB_nativeEnabled="true"
@@ -176,6 +179,7 @@
 _512MB_monitor="-Xmx64m -Xms64m"
 _512MB_gc="-Xmx64m -Xms64m"
 _512MB_other="-Xmx128m -Xms64m"
+_512MB_shell="${_512MB_other}"
 
 _512MB_memoryMapMax="80M"
 native_512MB_nativeEnabled="true"
@@ -277,6 +281,7 @@
 MASTER="_${SIZE}_master"
 MONITOR="_${SIZE}_monitor"
 GC="_${SIZE}_gc"
+SHELL="_${SIZE}_shell"
 OTHER="_${SIZE}_other"
 
 MEMORY_MAP_MAX="_${SIZE}_memoryMapMax"
@@ -298,6 +303,7 @@
     -e "s/\${masterHigh_masterLow}/${!MASTER}/" \
     -e "s/\${monitorHigh_monitorLow}/${!MONITOR}/" \
     -e "s/\${gcHigh_gcLow}/${!GC}/" \
+    -e "s/\${shellHigh_shellLow}/${!SHELL}/" \
     -e "s/\${otherHigh_otherLow}/${!OTHER}/" \
     ${TEMPLATE_CONF_DIR}/$ACCUMULO_ENV > ${CONF_DIR}/$ACCUMULO_ENV
 
diff --git a/assemble/bin/config-server.sh b/assemble/bin/config-server.sh
index eb3fd55..92aa4a6 100755
--- a/assemble/bin/config-server.sh
+++ b/assemble/bin/config-server.sh
@@ -15,7 +15,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-# Guarantees that Accumulo and its environment variables are set.
+# Guarantees that Accumulo and its environment variables are set for start
+# and stop scripts.  Should always be run after config.sh.
 #
 # Parameters checked by script
 #  ACCUMULO_VERIFY_ONLY set to skip actions that would alter the local filesystem
@@ -70,3 +71,15 @@
 
 # ACCUMULO-1985 provide a way to use the scripts and still bind to all network interfaces
 export ACCUMULO_MONITOR_BIND_ALL=${ACCUMULO_MONITOR_BIND_ALL:-"false"}
+
+if [[ -z "${ACCUMULO_PID_DIR}" ]]; then
+  export ACCUMULO_PID_DIR="${ACCUMULO_HOME}/run"
+fi
+[[ -z ${ACCUMULO_VERIFY_ONLY} ]] && mkdir -p "${ACCUMULO_PID_DIR}" 2>/dev/null
+
+if [[ -z "${ACCUMULO_IDENT_STRING}" ]]; then
+  export ACCUMULO_IDENT_STRING="$USER"
+fi
+
+# The number of .out and .err files to retain
+export ACCUMULO_NUM_OUT_FILES=${ACCUMULO_NUM_OUT_FILES:-5}
diff --git a/assemble/bin/config.sh b/assemble/bin/config.sh
index 2299a12..ae4b4ef 100755
--- a/assemble/bin/config.sh
+++ b/assemble/bin/config.sh
@@ -118,6 +118,20 @@
   export NUMA_CMD=""
 fi
 
+# NUMA sanity checks
+if [[ -z $NUM_TSERVERS ]]; then
+   echo "NUM_TSERVERS is missing in accumulo-env.sh, please check your configuration."
+   exit 1
+fi
+if [[ $NUM_TSERVERS -eq 1 && -n $TSERVER_NUMA_OPTIONS ]]; then
+   echo "TSERVER_NUMA_OPTIONS declared when NUM_TSERVERS is 1, use ACCUMULO_NUMACTL_OPTIONS instead"
+   exit 1
+fi
+if [[ $NUM_TSERVERS -gt 1 && -n $TSERVER_NUMA_OPTIONS && ${#TSERVER_NUMA_OPTIONS[*]} -ne $NUM_TSERVERS ]]; then
+   echo "TSERVER_NUMA_OPTIONS is declared, but not the same size as NUM_TSERVERS"
+   exit 1
+fi
+
 export HADOOP_HOME=$HADOOP_PREFIX
 export HADOOP_HOME_WARN_SUPPRESS=true
 
diff --git a/assemble/bin/start-all.sh b/assemble/bin/start-all.sh
index b670743..f03a6a8 100755
--- a/assemble/bin/start-all.sh
+++ b/assemble/bin/start-all.sh
@@ -69,7 +69,7 @@
 done
 
 for gc in $(egrep -v '(^#|^\s*$)' "$ACCUMULO_CONF_DIR/gc"); do
-   ${bin}/start-server.sh $gc gc "garbage collector"
+   ${bin}/start-server.sh $gc gc
 done
 
 for tracer in $(egrep -v '(^#|^\s*$)' "$ACCUMULO_CONF_DIR/tracers"); do
diff --git a/assemble/bin/start-daemon.sh b/assemble/bin/start-daemon.sh
new file mode 100755
index 0000000..4df228e
--- /dev/null
+++ b/assemble/bin/start-daemon.sh
@@ -0,0 +1,158 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Start: Resolve Script Directory
+SOURCE="${BASH_SOURCE[0]}"
+while [[ -h "$SOURCE" ]]; do # resolve $SOURCE until the file is no longer a symlink
+   bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+   SOURCE="$(readlink "$SOURCE")"
+   [[ $SOURCE != /* ]] && SOURCE="$bin/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
+done
+bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
+script=$( basename "$SOURCE" )
+# Stop: Resolve Script Directory
+
+usage="Usage: start-daemon.sh <host> <service>"
+
+rotate_log () {
+  logfile=$1;
+  max_retained=$2;
+  if [[ ! $max_retained =~ ^[0-9]+$ ]] || [[ $max_retained -lt 1 ]] ; then
+    echo "ACCUMULO_NUM_OUT_FILES should be a positive number, but was '$max_retained'"
+    exit 1
+  fi
+
+  if [ -f "$logfile" ]; then # rotate logs
+    while [ $max_retained -gt 1 ]; do
+      prev=`expr $max_retained - 1`
+      [ -f "$logfile.$prev" ] && mv -f "$logfile.$prev" "$logfile.$max_retained"
+      max_retained=$prev
+    done
+    mv -f "$logfile" "$logfile.$max_retained";
+  fi
+}
+
+if [[ $# -ne 2 ]]; then
+  echo $usage
+  exit 2
+fi
+
+. "$bin"/config.sh
+. "$bin"/config-server.sh
+
+HOST="$1"
+ADDRESS=$HOST
+host "$1" >/dev/null 2>&1
+if [[ $? != 0 ]]; then
+   LOGHOST=$HOST
+else
+   LOGHOST=$(host "$HOST" | head -1 | cut -d' ' -f1)
+fi
+SERVICE=$2
+
+SLAVES=$(wc -l < "${ACCUMULO_CONF_DIR}/slaves")
+
+# When the hostname provided is the alias/shortname, try to use the FQDN to make
+# sure we send the right address to the Accumulo process.
+if [[ "$HOST" = "$(hostname -s)" ]]; then
+   HOST="$(hostname -f)"
+   ADDRESS="$HOST"
+fi
+
+# ACCUMULO-1985 Allow monitor to bind on all interfaces
+if [[ ${SERVICE} == "monitor" && ${ACCUMULO_MONITOR_BIND_ALL} == "true" ]]; then
+   ADDRESS="0.0.0.0"
+fi
+
+COMMAND="${bin}/accumulo"
+if [ "${ACCUMULO_WATCHER}" = "true" ]; then
+   COMMAND="${bin}/accumulo_watcher.sh ${LOGHOST}"
+fi
+
+OUTFILE="${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out"
+ERRFILE="${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err"
+
+# Rotate the .out and .err files
+rotate_log "$OUTFILE" ${ACCUMULO_NUM_OUT_FILES}
+rotate_log "$ERRFILE" ${ACCUMULO_NUM_OUT_FILES}
+
+if [[ "$SERVICE" != "tserver" || $NUM_TSERVERS -eq 1 ]]; then
+   # Check the pid file to figure out if it's already running.
+   PID_FILE="${ACCUMULO_PID_DIR}/accumulo-${ACCUMULO_IDENT_STRING}-${SERVICE}.pid"
+   if [ -f ${PID_FILE} ]; then
+      PID=`cat ${PID_FILE}`
+      if kill -0 $PID 2>/dev/null; then
+         # Starting an already-started service shouldn't be an error per LSB
+         echo "$HOST : $SERVICE already running (${PID})"
+         exit 0
+      fi
+   fi
+   echo "Starting $SERVICE on $HOST"
+
+   # Fork the process, store the pid
+   nohup ${NUMA_CMD} "$COMMAND" "${SERVICE}" --address "${ADDRESS}" >"$OUTFILE" 2>"$ERRFILE" < /dev/null &
+   echo $! > ${PID_FILE}
+
+else
+
+   S="$SERVICE"
+   for (( t=1; t<=$NUM_TSERVERS; t++)); do
+
+      SERVICE="$S-$t"
+
+      # Check the pid file to figure out if it's already running.
+      PID_FILE="${ACCUMULO_PID_DIR}/accumulo-${ACCUMULO_IDENT_STRING}-${SERVICE}.pid"
+      if [ -f ${PID_FILE} ]; then
+         PID=`cat ${PID_FILE}`
+         if kill -0 $PID 2>/dev/null; then
+            # Starting an already-started service shouldn't be an error per LSB
+            echo "$HOST : $SERVICE already running (${PID})"
+            continue
+         fi
+      fi
+      echo "Starting $SERVICE on $HOST"
+
+      ACCUMULO_NUMACTL_OPTIONS=${ACCUMULO_NUMACTL_OPTIONS:-"--interleave=all"}
+      ACCUMULO_NUMACTL_OPTIONS=${TSERVER_NUMA_OPTIONS[$t]:-$ACCUMULO_NUMACTL_OPTIONS}
+      if [[ "$ACCUMULO_ENABLE_NUMACTL" == "true" ]]; then
+         NUMA=`which numactl 2>/dev/null`
+         NUMACTL_EXISTS=$?
+         if [[ ( ${NUMACTL_EXISTS} -eq 0 ) ]]; then
+            export NUMA_CMD="${NUMA} ${ACCUMULO_NUMACTL_OPTIONS}"
+         else
+            export NUMA_CMD=""
+         fi
+      fi
+
+      # Fork the process, store the pid
+      nohup ${NUMA_CMD} "$COMMAND" "${SERVICE}" --address "${ADDRESS}" >"${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out" 2>"${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err" < /dev/null &
+      echo $! > ${PID_FILE}
+
+   done
+
+fi
+
+# Check the max open files limit and selectively warn
+MAX_FILES_OPEN=$(ulimit -n)
+
+if [[ -n $MAX_FILES_OPEN && -n $SLAVES ]] ; then
+   MAX_FILES_RECOMMENDED=${MAX_FILES_RECOMMENDED:-32768}
+   if (( SLAVES > 10 )) && (( MAX_FILES_OPEN < MAX_FILES_RECOMMENDED ))
+   then
+      echo "WARN : Max open files on $HOST is $MAX_FILES_OPEN, recommend $MAX_FILES_RECOMMENDED" >&2
+   fi
+fi
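The `rotate_log` helper in the new script can be exercised on its own. This sketch duplicates its shifting scheme in a temporary directory; the `svc.out` filename is just an example:

```shell
# Duplicate of the rotation scheme in start-daemon.sh: file -> file.1,
# file.1 -> file.2, ..., keeping at most $2 numbered copies.
rotate_log () {
  logfile=$1
  max_retained=$2
  if [ -f "$logfile" ]; then
    while [ "$max_retained" -gt 1 ]; do
      prev=$((max_retained - 1))
      if [ -f "$logfile.$prev" ]; then
        mv -f "$logfile.$prev" "$logfile.$max_retained"
      fi
      max_retained=$prev
    done
    mv -f "$logfile" "$logfile.$max_retained"
  fi
}

tmp=$(mktemp -d)
echo first  > "$tmp/svc.out"
rotate_log "$tmp/svc.out" 3          # svc.out -> svc.out.1
echo second > "$tmp/svc.out"
rotate_log "$tmp/svc.out" 3          # svc.out -> svc.out.1, old .1 -> .2
```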
diff --git a/assemble/bin/start-here.sh b/assemble/bin/start-here.sh
index 76bcc96..dba1779 100755
--- a/assemble/bin/start-here.sh
+++ b/assemble/bin/start-here.sh
@@ -43,7 +43,7 @@
 HOSTS="$(hostname -a 2> /dev/null) $(hostname) localhost 127.0.0.1 $IP"
 for host in $HOSTS; do
    if grep -q "^${host}\$" "$ACCUMULO_CONF_DIR/slaves"; then
-      "${bin}/start-server.sh" "$host" tserver "tablet server"
+      "${bin}/start-server.sh" "$host" tserver
       break
    fi
 done
@@ -58,7 +58,7 @@
 
 for host in $HOSTS; do
    if grep -q "^${host}\$" "$ACCUMULO_CONF_DIR/gc"; then
-      "${bin}/start-server.sh" "$host" gc "garbage collector"
+      "${bin}/start-server.sh" "$host" gc
       break
    fi
 done
diff --git a/assemble/bin/start-server.sh b/assemble/bin/start-server.sh
index 42613fb..d5b7594 100755
--- a/assemble/bin/start-server.sh
+++ b/assemble/bin/start-server.sh
@@ -26,72 +26,32 @@
 script=$( basename "$SOURCE" )
 # Stop: Resolve Script Directory
 
+# Really, we still support the third <long_name> argument, but let's not tell people that.
+usage="Usage: start-server.sh <host> <service>"
+
+# Support the 3-arg invocation for backwards-compat
+if [[ $# -ne 2 ]] && [[ $# -ne 3 ]]; then
+  echo $usage
+  exit 2
+fi
+
 . "$bin"/config.sh
 . "$bin"/config-server.sh
 
 HOST="$1"
-host "$1" >/dev/null 2>/dev/null
-if [[ $? != 0 ]]; then
-   LOGHOST="$1"
-else
-   LOGHOST=$(host "$1" | head -1 | cut -d' ' -f1)
-fi
-ADDRESS=$1
-SERVICE=$2
-LONGNAME=$3
-[[ -z $LONGNAME ]] && LONGNAME=$2
-
-SLAVES=$(wc -l < "${ACCUMULO_CONF_DIR}/slaves")
+SERVICE="$2"
 
 IFCONFIG=/sbin/ifconfig
 [[ ! -x $IFCONFIG ]] && IFCONFIG='/bin/netstat -ie'
 
-
 IP=$($IFCONFIG 2>/dev/null| grep "inet[^6]" | awk '{print $2}' | sed 's/addr://' | grep -v 0.0.0.0 | grep -v 127.0.0.1 | head -n 1)
 if [[ $? != 0 ]] ; then
    IP=$(python -c 'import socket as s; print s.gethostbyname(s.getfqdn())')
 fi
 
-# When the hostname provided is the alias/shortname, try to use the FQDN to make
-# sure we send the right address to the Accumulo process.
-if [[ "$HOST" = "$(hostname -s)" ]]; then
-    HOST="$(hostname -f)"
-    ADDRESS="$HOST"
-fi
-
-# ACCUMULO-1985 Allow monitor to bind on all interfaces
-if [[ ${SERVICE} == "monitor" && ${ACCUMULO_MONITOR_BIND_ALL} == "true" ]]; then
-    ADDRESS="0.0.0.0"
-fi
-
-if [[ $HOST == localhost || $HOST == "$(hostname -f)" || $HOST = "$IP" ]]; then
-   PID=$(ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
+if [[ $HOST == "localhost" || $HOST == $(hostname -f) || $HOST == $(hostname -s) || $HOST == $IP ]]; then
+   "$bin/start-daemon.sh" "$HOST" "$SERVICE"
 else
-   PID=$($SSH "$HOST" ps -ef | egrep "${ACCUMULO_HOME}/.*/accumulo.*.jar" | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
-fi
-
-if [[ -z "$PID" ]]; then
-   echo "Starting $LONGNAME on $HOST"
-   COMMAND="${bin}/accumulo"
-   if [ "${ACCUMULO_WATCHER}" = "true" ]; then
-      COMMAND="${bin}/accumulo_watcher.sh ${LOGHOST}"
-   fi
-
-   if [[ $HOST == localhost || $HOST == "$(hostname -f)" || $HOST = "$IP" ]]; then
-      nohup ${NUMA_CMD} "$COMMAND" "${SERVICE}" --address "${ADDRESS}" >"${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out" 2>"${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err" &
-      MAX_FILES_OPEN=$(ulimit -n)
-   else
-      $SSH "$HOST" "bash -c 'exec nohup ${NUMA_CMD} $COMMAND ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
-      MAX_FILES_OPEN=$($SSH "$HOST" "/usr/bin/env bash -c 'ulimit -n'")
-   fi
-
-   if [[ -n $MAX_FILES_OPEN && -n $SLAVES ]] ; then
-      MAX_FILES_RECOMMENDED=${MAX_FILES_RECOMMENDED:-32768}
-      if (( SLAVES > 10 )) && (( MAX_FILES_OPEN < MAX_FILES_RECOMMENDED ))
-      then
-         echo "WARN : Max open files on $HOST is $MAX_FILES_OPEN, recommend $MAX_FILES_RECOMMENDED" >&2
-      fi
-   fi
-else
-   echo "$HOST : $LONGNAME already running (${PID})"
+   # Ensure that the provided configuration directory is sent with the command
+   echo $($SSH $HOST "bash -c 'ACCUMULO_CONF_DIR=${ACCUMULO_CONF_DIR} $bin/start-daemon.sh \"$HOST\" \"$SERVICE\"'")
 fi
diff --git a/assemble/bin/stop-here.sh b/assemble/bin/stop-here.sh
index 294f2bc..6f0aa7f 100755
--- a/assemble/bin/stop-here.sh
+++ b/assemble/bin/stop-here.sh
@@ -40,7 +40,7 @@
 if egrep -q localhost\|127.0.0.1 "$ACCUMULO_CONF_DIR/slaves"; then
    "$bin/accumulo" admin stop localhost
 else
-   for host in "$(hostname -a 2> /dev/null) $(hostname)"; do
+   for host in "$(hostname -a 2> /dev/null)" "$(hostname)"; do
       if grep -q ${host} $ACCUMULO_CONF_DIR/slaves; then
          "${bin}/accumulo" admin stop "$host"
       fi
@@ -49,10 +49,6 @@
 
 for signal in TERM KILL; do
    for svc in tserver gc master monitor tracer; do
-      PID=$(ps -ef | egrep "${ACCUMULO}" | grep "Main $svc" | grep -v grep | grep -v stop-here.sh | awk '{print $2}' | head -1)
-      if [[ -n $PID ]]; then
-         echo "Stopping ${svc} on ${HOSTNAME} with signal ${signal}"
-         kill -s ${signal} "${PID}"
-      fi
+      $ACCUMULO_HOME/bin/stop-server.sh $HOSTNAME "$ACCUMULO_HOME/lib/accumulo-start.jar" $svc $signal
    done
 done
diff --git a/assemble/bin/stop-server.sh b/assemble/bin/stop-server.sh
index 78ef783..bba0f1e 100755
--- a/assemble/bin/stop-server.sh
+++ b/assemble/bin/stop-server.sh
@@ -26,6 +26,7 @@
 # Stop: Resolve Script Directory
 
 . "$bin"/config.sh
+. "$bin"/config-server.sh
 
 HOST=$1
 
@@ -40,15 +41,19 @@
 
 # only stop if there's not one already running
 if [[ $HOST == localhost || $HOST = "$(hostname -s)" || $HOST = "$(hostname -f)" || $HOST = "$IP" ]] ; then
-   PID=$(ps -ef | grep "$ACCUMULO_HOME" | egrep ${2} | grep "Main ${3}" | grep -v grep | grep -v ssh | grep -v stop-server.sh | awk {'print $2'} | head -1)
-   if [[ -n $PID ]]; then
-      echo "Stopping ${3} on $1";
-      kill -s "${4}" "${PID}" 2>/dev/null
-   fi;
+   for PID_FILE in ${ACCUMULO_PID_DIR}/accumulo-${ACCUMULO_IDENT_STRING}-${3}*.pid; do
+      if [[ -f ${PID_FILE} ]]; then
+         echo "Stopping $3 on $1";
+         kill -s "$4" `cat ${PID_FILE}` 2>/dev/null
+         rm -f ${PID_FILE} 2>/dev/null
+      fi;
+   done
 else
-   PID=$(ssh -q -o 'ConnectTimeout 8' "$1" "ps -ef | grep \"$ACCUMULO_HOME\" |  egrep '${2}' | grep 'Main ${3}' | grep -v grep | grep -v ssh | grep -v stop-server.sh" | awk {'print $2'} | head -1)
-   if [[ -n $PID ]]; then
-      echo "Stopping ${3} on $1";
-      ssh -q -o 'ConnectTimeout 8' "$1" "kill -s ${4} ${PID} 2>/dev/null"
-   fi;
+   for PID_FILE in $(ssh -q -o 'ConnectTimeout 8' "$1" ls "${ACCUMULO_PID_DIR}/accumulo-${ACCUMULO_IDENT_STRING}-${3}*.pid" 2>/dev/null); do
+      PID=$(ssh -q -o 'ConnectTimeout 8' "$1" cat "${PID_FILE}" 2>/dev/null)
+      if [[ ! -z $PID ]]; then
+         echo "Stopping $3 on $1";
+         ssh -q -o 'ConnectTimeout 8' "$1" "kill -s $4 $PID 2>/dev/null; rm -f ${PID_FILE} 2>/dev/null"
+      fi
+   done
 fi
diff --git a/assemble/bin/tup.sh b/assemble/bin/tup.sh
index d89896d..f01aa77 100755
--- a/assemble/bin/tup.sh
+++ b/assemble/bin/tup.sh
@@ -34,7 +34,7 @@
 count=1
 for server in $(egrep -v '(^#|^\s*$)' "${SLAVES}"); do
    echo -n "."
-   ${bin}/start-server.sh $server tserver "tablet server" &
+   ${bin}/start-server.sh $server tserver &
    if (( ++count % 72 == 0 )) ;
    then
       echo
diff --git a/assemble/conf/examples/vfs-classloader/accumulo-site.xml b/assemble/conf/examples/vfs-classloader/accumulo-site.xml
index d9d85e5..fb66d27 100644
--- a/assemble/conf/examples/vfs-classloader/accumulo-site.xml
+++ b/assemble/conf/examples/vfs-classloader/accumulo-site.xml
@@ -142,32 +142,32 @@
   <!--
   Properties in this category define a classpath for a named context. These properties start with the category prefix, followed by a context name.
   The value is a comma seperated list of URIs. Supports full regex on filename alone. For example 
-  general.vfs.context.classpath.cx1=hdfs://nn1:9902/mylibdir/*.jar.  You can enable post delegation for a context, which will load classes from 
+  general.vfs.context.classpath.cx1=hdfs://nn1:9902/mylibdir/[^.].*.jar.  You can enable post delegation for a context, which will load classes from
   the context first instead of the parent first.  Do this by setting general.vfs.context.classpath.<name>.delegation=post, where <name> 
   is your context name.  If delegation is not specified, it defaults to loading from parent classloader first.
   -->
 
   <property>
     <name>general.vfs.context.classpath.application1</name>
-    <value>hdfs://localhost:8020/application1/classpath/*.jar</value>
+    <value>hdfs://localhost:8020/application1/classpath/[^.].*.jar</value>
     <description>classpath for the application1 context</description>
   </property>
 
   <property>
     <name>general.vfs.context.classpath.application1.delegation=post</name>
-    <value>hdfs://localhost:8020/application1/classpath/*.jar</value>
+    <value>hdfs://localhost:8020/application1/classpath/[^.].*.jar</value>
     <description>classpath for the application1 context, but the classloader parent delegation model is inverted to prefer the jars/classes in this directory
     </description>
   </property>
 
   <property>
     <name>general.vfs.context.classpath.application2</name>
-    <value>hdfs://localhost:8020/application1/classpath/*.jar,hdfs://localhost:8020/application2/classpath/*.jar</value>
+    <value>hdfs://localhost:8020/application1/classpath/[^.].*.jar,hdfs://localhost:8020/application2/classpath/[^.].*.jar</value>
     <description>classpath for the application2 context, includes all of the jars in app1 context</description>
   </property>
   
   <!--
-  Once classpath context are configured, tables can be configured in the shell to use them via the table.classpath.context property.
+  Once classpath contexts are configured, tables can be configured in the shell to use them via the table.classpath.context property.
   For example, all of the tables related to application1 would have the context.classpath property set to 'application1'. 
   -->
 
diff --git a/assemble/conf/templates/accumulo-env.sh b/assemble/conf/templates/accumulo-env.sh
index 42633a7..217465b 100644
--- a/assemble/conf/templates/accumulo-env.sh
+++ b/assemble/conf/templates/accumulo-env.sh
@@ -48,8 +48,10 @@
 test -z "$ACCUMULO_MASTER_OPTS"  && export ACCUMULO_MASTER_OPTS="${POLICY} ${masterHigh_masterLow}"
 test -z "$ACCUMULO_MONITOR_OPTS" && export ACCUMULO_MONITOR_OPTS="${POLICY} ${monitorHigh_monitorLow}"
 test -z "$ACCUMULO_GC_OPTS"      && export ACCUMULO_GC_OPTS="${gcHigh_gcLow}"
+test -z "$ACCUMULO_SHELL_OPTS"   && export ACCUMULO_SHELL_OPTS="${shellHigh_shellLow}"
 test -z "$ACCUMULO_GENERAL_OPTS" && export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true -XX:+CMSClassUnloadingEnabled"
 test -z "$ACCUMULO_OTHER_OPTS"   && export ACCUMULO_OTHER_OPTS="${otherHigh_otherLow}"
+test -z "${ACCUMULO_PID_DIR}"    && export ACCUMULO_PID_DIR="${ACCUMULO_HOME}/run"
 # what do when the JVM runs out of heap memory
 export ACCUMULO_KILL_CMD='kill -9 %p'
 
@@ -75,3 +77,17 @@
 export ZKLOCK_TIMESPAN="600"
 export ZKLOCK_RETRIES="5"
 
+# The number of .out and .err files per process to retain
+# export ACCUMULO_NUM_OUT_FILES=5
+
+export NUM_TSERVERS=1
+
+### Example for configuring multiple tservers per host. Note that the ACCUMULO_NUMACTL_OPTIONS
+### environment variable is used when NUM_TSERVERS is 1 to preserve backwards compatibility.
+### If NUM_TSERVERS is greater than 1, then the TSERVER_NUMA_OPTIONS array is used if defined.
+### If TSERVER_NUMA_OPTIONS is declared but not the correct size, then the service will not start.
+###
+### export NUM_TSERVERS=2
+### declare -a TSERVER_NUMA_OPTIONS
+### TSERVER_NUMA_OPTIONS[1]="--cpunodebind 0"
+### TSERVER_NUMA_OPTIONS[2]="--cpunodebind 1"
diff --git a/assemble/conf/templates/generic_logger.xml b/assemble/conf/templates/generic_logger.xml
index db79efe..833df17 100644
--- a/assemble/conf/templates/generic_logger.xml
+++ b/assemble/conf/templates/generic_logger.xml
@@ -20,7 +20,7 @@
 
   <!-- Write out everything at the DEBUG level to the debug log -->
   <appender name="A2" class="org.apache.log4j.RollingFileAppender">
-     <param name="File"           value="${org.apache.accumulo.core.dir.log}/${org.apache.accumulo.core.application}_${org.apache.accumulo.core.ip.localhost.hostname}.debug.log"/>
+     <param name="File"           value="${org.apache.accumulo.core.dir.log}/${org.apache.accumulo.core.application}_${instance}_${org.apache.accumulo.core.ip.localhost.hostname}.debug.log"/>
      <param name="MaxFileSize"    value="1000MB"/>
      <param name="MaxBackupIndex" value="10"/>
      <param name="Threshold"      value="DEBUG"/>
@@ -31,7 +31,7 @@
 
   <!--  Write out INFO and higher to the regular log -->
   <appender name="A3" class="org.apache.log4j.RollingFileAppender">
-     <param name="File"           value="${org.apache.accumulo.core.dir.log}/${org.apache.accumulo.core.application}_${org.apache.accumulo.core.ip.localhost.hostname}.log"/>
+     <param name="File"           value="${org.apache.accumulo.core.dir.log}/${org.apache.accumulo.core.application}_${instance}_${org.apache.accumulo.core.ip.localhost.hostname}.log"/>
      <param name="MaxFileSize"    value="1000MB"/>
      <param name="MaxBackupIndex" value="10"/>
      <param name="Threshold"      value="INFO"/>
diff --git a/assemble/pom.xml b/assemble/pom.xml
index 6d5b611..022669a 100644
--- a/assemble/pom.xml
+++ b/assemble/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo</artifactId>
   <packaging>pom</packaging>
@@ -147,7 +147,7 @@
     </dependency>
     <dependency>
       <groupId>org.apache.commons</groupId>
-      <artifactId>commons-math</artifactId>
+      <artifactId>commons-math3</artifactId>
       <optional>true</optional>
     </dependency>
     <dependency>
@@ -224,7 +224,7 @@
               <outputScope>false</outputScope>
               <sort>true</sort>
               <!-- this list should match that in src/main/assemblies/component.xml -->
-              <includeArtifactIds>commons-math,commons-vfs2,gson,guava,htrace-core,javax.servlet-api,jcommander,jetty-http,jetty-io,jetty-security,jetty-server,jetty-servlet,jetty-util,jline,libthrift,protobuf-java,slf4j-api,slf4j-log4j12</includeArtifactIds>
+              <includeArtifactIds>commons-math3,commons-vfs2,gson,guava,htrace-core,javax.servlet-api,jcommander,jetty-http,jetty-io,jetty-security,jetty-server,jetty-servlet,jetty-util,jline,libthrift,protobuf-java,slf4j-api,slf4j-log4j12</includeArtifactIds>
               <excludeTransitive>true</excludeTransitive>
             </configuration>
           </execution>
@@ -306,5 +306,22 @@
         </plugins>
       </build>
     </profile>
+    <profile>
+      <!-- create shaded test jar appropriate for running ITs on MapReduce -->
+      <id>mrit</id>
+      <activation>
+        <property>
+          <name>mrit</name>
+        </property>
+      </activation>
+      <dependencies>
+        <dependency>
+          <groupId>org.apache.accumulo</groupId>
+          <artifactId>accumulo-test</artifactId>
+          <classifier>mrit</classifier>
+          <optional>true</optional>
+        </dependency>
+      </dependencies>
+    </profile>
   </profiles>
 </project>
diff --git a/assemble/src/main/assemblies/component.xml b/assemble/src/main/assemblies/component.xml
index 84a3a06..bfcedc8 100644
--- a/assemble/src/main/assemblies/component.xml
+++ b/assemble/src/main/assemblies/component.xml
@@ -36,7 +36,7 @@
         <include>com.google.protobuf:protobuf-java</include>
         <include>javax.servlet:javax.servlet-api</include>
         <include>jline:jline</include>
-        <include>org.apache.commons:commons-math</include>
+        <include>org.apache.commons:commons-math3</include>
         <include>org.apache.commons:commons-vfs2</include>
         <include>org.apache.thrift:libthrift</include>
         <include>org.eclipse.jetty:jetty-http</include>
diff --git a/assemble/src/main/resources/LICENSE b/assemble/src/main/resources/LICENSE
index 9ba98fd..7b7568a 100644
--- a/assemble/src/main/resources/LICENSE
+++ b/assemble/src/main/resources/LICENSE
@@ -349,7 +349,9 @@
 
 **********
 
-This product includes Apache Commons Math:
+This product includes Apache Commons Math3:
+
+    APACHE COMMONS MATH DERIVATIVE WORKS:
 
     The Apache commons-math library includes a number of subcomponents
     whose implementation is derived from original sources written
@@ -359,7 +361,7 @@
     ===============================================================================
     For the lmder, lmpar and qrsolv Fortran routine from minpack and translated in
     the LevenbergMarquardtOptimizer class in package
-    org.apache.commons.math.optimization.general
+    org.apache.commons.math3.optimization.general
     Original source copyright and license statement:
 
     Minpack Copyright Notice (1999) University of Chicago.  All rights reserved
@@ -417,7 +419,7 @@
 
     Copyright and license statement for the odex Fortran routine developed by
     E. Hairer and G. Wanner and translated in GraggBulirschStoerIntegrator class
-    in package org.apache.commons.math.ode.nonstiff:
+    in package org.apache.commons.math3.ode.nonstiff:
 
 
     Copyright (c) 2004, Ernst Hairer
@@ -446,50 +448,9 @@
     SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     ===============================================================================
 
-    Copyright and license statement for the original lapack fortran routines
-    translated in EigenDecompositionImpl class in package
-    org.apache.commons.math.linear:
-
-    Copyright (c) 1992-2008 The University of Tennessee.  All rights reserved.
-
-    $COPYRIGHT$
-
-    Additional copyrights may follow
-
-    $HEADER$
-
-    Redistribution and use in source and binary forms, with or without
-    modification, are permitted provided that the following conditions are
-    met:
-
-    - Redistributions of source code must retain the above copyright
-      notice, this list of conditions and the following disclaimer.
-
-    - Redistributions in binary form must reproduce the above copyright
-      notice, this list of conditions and the following disclaimer listed
-      in this license in the documentation and/or other materials
-      provided with the distribution.
-
-    - Neither the name of the copyright holders nor the names of its
-      contributors may be used to endorse or promote products derived from
-      this software without specific prior written permission.
-
-    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-    ===============================================================================
-
     Copyright and license statement for the original Mersenne twister C
     routines translated in MersenneTwister class in package
-    org.apache.commons.math.random:
+    org.apache.commons.math3.random:
 
        Copyright (C) 1997 - 2002, Makoto Matsumoto and Takuji Nishimura,
        All rights reserved.
@@ -521,6 +482,106 @@
        NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
        SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
+    ===============================================================================
+
+    The initial code for shuffling an array (originally in class
+    "org.apache.commons.math3.random.RandomDataGenerator", now replaced by
+    a method in class "org.apache.commons.math3.util.MathArrays") was
+    inspired from the algorithm description provided in
+    "Algorithms", by Ian Craw and John Pulham (University of Aberdeen 1999).
+    The textbook (containing a proof that the shuffle is uniformly random) is
+    available here:
+      http://citeseerx.ist.psu.edu/viewdoc/download;?doi=10.1.1.173.1898&rep=rep1&type=pdf
+
+    ===============================================================================
+    License statement for the direction numbers in the resource files for Sobol sequences.
+
+    -----------------------------------------------------------------------------
+    Licence pertaining to sobol.cc and the accompanying sets of direction numbers
+
+    -----------------------------------------------------------------------------
+    Copyright (c) 2008, Frances Y. Kuo and Stephen Joe
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions are met:
+
+        * Redistributions of source code must retain the above copyright
+          notice, this list of conditions and the following disclaimer.
+
+        * Redistributions in binary form must reproduce the above copyright
+          notice, this list of conditions and the following disclaimer in the
+          documentation and/or other materials provided with the distribution.
+
+        * Neither the names of the copyright holders nor the names of the
+          University of New South Wales and the University of Waikato
+          and its contributors may be used to endorse or promote products derived
+          from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
+    EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+    WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+    DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
+    DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+    (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+    LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+    ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+    SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+    ===============================================================================
+
+    The initial commit of package "org.apache.commons.math3.ml.neuralnet" is
+    an adapted version of code developed in the context of the Data Processing
+    and Analysis Consortium (DPAC) of the "Gaia" project of the European Space
+    Agency (ESA).
+    ===============================================================================
+
+    The initial commit of the class "org.apache.commons.math3.special.BesselJ" is
+    an adapted version of code translated from the netlib Fortran program, rjbesl
+    http://www.netlib.org/specfun/rjbesl by R.J. Cody at Argonne National
+    Laboratory (USA).  There is no license or copyright statement included with the
+    original Fortran sources.
+    ===============================================================================
+
+    The BracketFinder (package org.apache.commons.math3.optimization.univariate)
+    and PowellOptimizer (package org.apache.commons.math3.optimization.general)
+    classes are based on the Python code in module "optimize.py" (version 0.5)
+    developed by Travis E. Oliphant for the SciPy library (http://www.scipy.org/)
+    Copyright © 2003-2009 SciPy Developers.
+
+    SciPy license
+    Copyright © 2001, 2002 Enthought, Inc.
+    All rights reserved.
+
+    Copyright © 2003-2013 SciPy Developers.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions are met:
+
+        * Redistributions of source code must retain the above copyright
+          notice, this list of conditions and the following disclaimer.
+
+        * Redistributions in binary form must reproduce the above copyright
+          notice, this list of conditions and the following disclaimer in the
+          documentation and/or other materials provided with the distribution.
+
+        * Neither the name of Enthought nor the names of the SciPy Developers may
+          be used to endorse or promote products derived from this software without
+          specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY
+    EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+    WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+    DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY
+    DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+    (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+    LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+    ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+    SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+    ===============================================================================
+
 **********
 
 This product includes Protocol Buffers:
diff --git a/assemble/src/main/resources/NOTICE b/assemble/src/main/resources/NOTICE
index f6370f2..b19c90a 100644
--- a/assemble/src/main/resources/NOTICE
+++ b/assemble/src/main/resources/NOTICE
@@ -11,49 +11,11 @@
 
 **********
 
-This product includes Apache Commons Math (https://commons.apache.org/proper/commons-math/).
-Copyright 2001-2010 The Apache Software Foundation
+From Apache Commons Math3:
 
-From commons-math NOTICE.txt:
-
-    ===============================================================================
-    The LinearConstraint, LinearObjectiveFunction, LinearOptimizer,
-    RelationShip, SimplexSolver and SimplexTableau classes in package
-    org.apache.commons.math.optimization.linear include software developed by
-    Benjamin McCann (http://www.benmccann.com) and distributed with
-    the following copyright: Copyright 2009 Google Inc.
-    ===============================================================================
-
-    This product includes software developed by the
-    University of Chicago, as Operator of Argonne National
-    Laboratory.
-    The LevenbergMarquardtOptimizer class in package
-    org.apache.commons.math.optimization.general includes software
-    translated from the lmder, lmpar and qrsolv Fortran routines
-    from the Minpack package
-    Minpack Copyright Notice (1999) University of Chicago.  All rights reserved
-    ===============================================================================
-
-    The GraggBulirschStoerIntegrator class in package
-    org.apache.commons.math.ode.nonstiff includes software translated
-    from the odex Fortran routine developed by E. Hairer and G. Wanner.
-    Original source copyright:
-    Copyright (c) 2004, Ernst Hairer
-    ===============================================================================
-
-    The EigenDecompositionImpl class in package
-    org.apache.commons.math.linear includes software translated
-    from some LAPACK Fortran routines.  Original source copyright:
-    Copyright (c) 1992-2008 The University of Tennessee.  All rights reserved.
-    ===============================================================================
-
-    The MersenneTwister class in package org.apache.commons.math.random
-    includes software translated from the 2002-01-26 version of
-    the Mersenne-Twister generator written in C by Makoto Matsumoto and Takuji
-    Nishimura. Original source copyright:
-    Copyright (C) 1997 - 2002, Makoto Matsumoto and Takuji Nishimura,
-    All rights reserved
-    ===============================================================================
+    This product includes software developed for Orekit by
+    CS Systèmes d'Information (http://www.c-s.fr/)
+    Copyright 2010-2012 CS Systèmes d'Information
 
 **********
 
diff --git a/core/pom.xml b/core/pom.xml
index 6efd79d..5898b09 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-core</artifactId>
   <name>Apache Accumulo Core</name>
@@ -81,7 +81,7 @@
     </dependency>
     <dependency>
       <groupId>org.apache.commons</groupId>
-      <artifactId>commons-math</artifactId>
+      <artifactId>commons-math3</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.commons</groupId>
diff --git a/core/src/main/findbugs/exclude-filter.xml b/core/src/main/findbugs/exclude-filter.xml
index f5b84d9..58d7a12 100644
--- a/core/src/main/findbugs/exclude-filter.xml
+++ b/core/src/main/findbugs/exclude-filter.xml
@@ -58,6 +58,7 @@
       <Class name="org.apache.accumulo.core.util.AddressUtil" />
       <Class name="org.apache.accumulo.core.zookeeper.ZooUtil" />
       <Class name="org.apache.accumulo.core.security.VisibilityConstraint" />
+      <Class name="org.apache.accumulo.core.client.mock.IteratorAdapter" />
     </Or>
     <Or>
       <Bug code="NM" pattern="NM_SAME_SIMPLE_NAME_AS_SUPERCLASS" />
diff --git a/core/src/main/java/org/apache/accumulo/core/Constants.java b/core/src/main/java/org/apache/accumulo/core/Constants.java
index 94ada7a..eebd81d 100644
--- a/core/src/main/java/org/apache/accumulo/core/Constants.java
+++ b/core/src/main/java/org/apache/accumulo/core/Constants.java
@@ -50,6 +50,7 @@
   public static final String ZMASTER_LOCK = ZMASTERS + "/lock";
   public static final String ZMASTER_GOAL_STATE = ZMASTERS + "/goal_state";
   public static final String ZMASTER_REPLICATION_COORDINATOR_ADDR = ZMASTERS + "/repl_coord_addr";
+  public static final String ZMASTER_TICK = ZMASTERS + "/tick";
 
   public static final String ZGC = "/gc";
   public static final String ZGC_LOCK = ZGC + "/lock";
diff --git a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
index 54e8b53..c07c4cb 100644
--- a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
+++ b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
@@ -34,7 +34,6 @@
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
 import org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.Properties;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
@@ -45,6 +44,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.trace.Trace;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.volume.VolumeConfiguration;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.hadoop.conf.Configuration;
@@ -122,7 +122,7 @@
 
   @DynamicParameter(names = "-l",
       description = "login properties in the format key=value. Reuse -l for each property (prompt for properties if this option is missing")
-  public Map<String,String> loginProps = new LinkedHashMap<String,String>();
+  public Map<String,String> loginProps = new LinkedHashMap<>();
 
   public AuthenticationToken getToken() {
     if (null != tokenClassName) {
@@ -260,7 +260,7 @@
     if (cachedInstance != null)
       return cachedInstance;
     if (mock)
-      return cachedInstance = new MockInstance(instance);
+      return cachedInstance = DeprecationUtil.makeMockInstance(instance);
     return cachedInstance = new ZooKeeperInstance(this.getClientConfiguration());
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/BatchWriter.java b/core/src/main/java/org/apache/accumulo/core/client/BatchWriter.java
index b4d81aa..95d87c5 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriter.java
@@ -29,7 +29,7 @@
  * In the event that an MutationsRejectedException exception is thrown by one of the methods on a BatchWriter instance, the user should close the current
  * instance and create a new instance. This is a known limitation which will be addressed by ACCUMULO-2990 in the future.
  */
-public interface BatchWriter {
+public interface BatchWriter extends AutoCloseable {
 
   /**
    * Queues one mutation to write.
@@ -66,6 +66,7 @@
    * @throws MutationsRejectedException
    *           this could be thrown because current or previous mutations failed
    */
+  @Override
   void close() throws MutationsRejectedException;
 
 }
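With BatchWriter now extending AutoCloseable, writers can participate in try-with-resources so that `close()` (which flushes queued mutations) runs even on error paths. A minimal sketch; the `Connector`, table name, and cell values are illustrative placeholders, not part of this patch:

```java
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.data.Mutation;

public class BatchWriterCloseExample {
  static void writeOne(Connector conn) throws Exception {
    // try-with-resources: close() is invoked automatically when the block
    // exits, flushing the queued mutation (or throwing
    // MutationsRejectedException if writes failed).
    try (BatchWriter bw = conn.createBatchWriter("mytable", new BatchWriterConfig())) {
      Mutation m = new Mutation("row1");
      m.put("family", "qualifier", "value");
      bw.addMutation(m);
    }
  }
}
```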
diff --git a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
index 3421f76..521e0ce 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
@@ -196,7 +196,7 @@
   @Override
   public void write(DataOutput out) throws IOException {
     // write this out in a human-readable way
-    ArrayList<String> fields = new ArrayList<String>();
+    ArrayList<String> fields = new ArrayList<>();
     if (maxMemory != null)
       addField(fields, "maxMemory", maxMemory);
     if (maxLatency != null)
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java b/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java
index d5bf920..1b9b380 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ClientConfiguration.java
@@ -145,7 +145,7 @@
           return prop;
       return null;
     }
-  };
+  }
 
   public ClientConfiguration(String configFile) throws ConfigurationException {
     this(new PropertiesConfiguration(), configFile);
@@ -215,7 +215,7 @@
 
   private static ClientConfiguration loadFromSearchPath(List<String> paths) {
     try {
-      List<Configuration> configs = new LinkedList<Configuration>();
+      List<Configuration> configs = new LinkedList<>();
       for (String path : paths) {
         File conf = new File(path);
         if (conf.isFile() && conf.canRead()) {
@@ -272,7 +272,7 @@
       // ~/.accumulo/config
       // $ACCUMULO_CONF_DIR/client.conf -OR- $ACCUMULO_HOME/conf/client.conf (depending on whether $ACCUMULO_CONF_DIR is set)
       // /etc/accumulo/client.conf
-      clientConfPaths = new LinkedList<String>();
+      clientConfPaths = new LinkedList<>();
       clientConfPaths.add(System.getProperty("user.home") + File.separator + USER_ACCUMULO_DIR_NAME + File.separator + USER_CONF_FILENAME);
       if (System.getenv("ACCUMULO_CONF_DIR") != null) {
         clientConfPaths.add(System.getenv("ACCUMULO_CONF_DIR") + File.separator + GLOBAL_CONF_FILENAME);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java b/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java
index f077573..d4622c6 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ClientSideIteratorScanner.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.client;
 
+import static java.util.Objects.requireNonNull;
+
 import java.io.IOException;
 import java.util.Collection;
 import java.util.Iterator;
@@ -28,7 +30,7 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.impl.ScannerOptions;
-import org.apache.accumulo.core.client.mock.IteratorAdapter;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
@@ -37,6 +39,7 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.data.thrift.IterInfo;
+import org.apache.accumulo.core.iterators.IteratorAdapter;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.IteratorUtil;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
@@ -60,6 +63,7 @@
   private Range range;
   private boolean isolated = false;
   private long readaheadThreshold = Constants.SCANNER_DEFAULT_READAHEAD_THRESHOLD;
+  private SamplerConfiguration iteratorSamplerConfig;
 
   /**
    * @deprecated since 1.7.0 was never intended for public use. However this could have been used by anything extending this class.
@@ -67,7 +71,7 @@
   @Deprecated
   public class ScannerTranslator extends ScannerTranslatorImpl {
     public ScannerTranslator(Scanner scanner) {
-      super(scanner);
+      super(scanner, scanner.getSamplerConfiguration());
     }
 
     @Override
@@ -76,6 +80,62 @@
     }
   }
 
+  private class ClientSideIteratorEnvironment implements IteratorEnvironment {
+
+    private SamplerConfiguration samplerConfig;
+    private boolean sampleEnabled;
+
+    ClientSideIteratorEnvironment(boolean sampleEnabled, SamplerConfiguration samplerConfig) {
+      this.sampleEnabled = sampleEnabled;
+      this.samplerConfig = samplerConfig;
+    }
+
+    @Override
+    public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public AccumuloConfiguration getConfig() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public IteratorScope getIteratorScope() {
+      return IteratorScope.scan;
+    }
+
+    @Override
+    public boolean isFullMajorCompaction() {
+      return false;
+    }
+
+    @Override
+    public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public Authorizations getAuthorizations() {
+      return ClientSideIteratorScanner.this.getAuthorizations();
+    }
+
+    @Override
+    public IteratorEnvironment cloneWithSamplingEnabled() {
+      return new ClientSideIteratorEnvironment(true, samplerConfig);
+    }
+
+    @Override
+    public boolean isSamplingEnabled() {
+      return sampleEnabled;
+    }
+
+    @Override
+    public SamplerConfiguration getSamplerConfiguration() {
+      return samplerConfig;
+    }
+  }
+
   /**
    * A class that wraps a Scanner in a SortedKeyValueIterator so that other accumulo iterators can use it as a source.
    */
@@ -83,6 +143,7 @@
     protected Scanner scanner;
     Iterator<Entry<Key,Value>> iter;
     Entry<Key,Value> top = null;
+    private SamplerConfiguration samplerConfig;
 
     /**
      * Constructs an accumulo iterator from a scanner.
@@ -90,8 +151,9 @@
      * @param scanner
      *          the scanner to iterate over
      */
-    public ScannerTranslatorImpl(final Scanner scanner) {
+    public ScannerTranslatorImpl(final Scanner scanner, SamplerConfiguration samplerConfig) {
       this.scanner = scanner;
+      this.samplerConfig = samplerConfig;
     }
 
     @Override
@@ -122,6 +184,13 @@
       for (ByteSequence colf : columnFamilies) {
         scanner.fetchColumnFamily(new Text(colf.toArray()));
       }
+
+      if (samplerConfig == null) {
+        scanner.clearSamplerConfiguration();
+      } else {
+        scanner.setSamplerConfiguration(samplerConfig);
+      }
+
       iter = scanner.iterator();
       next();
     }
@@ -138,7 +207,7 @@
 
     @Override
     public SortedKeyValueIterator<Key,Value> deepCopy(final IteratorEnvironment env) {
-      return new ScannerTranslatorImpl(scanner);
+      return new ScannerTranslatorImpl(scanner, env.isSamplingEnabled() ? env.getSamplerConfiguration() : null);
     }
   }
 
@@ -151,31 +220,38 @@
    *          the source scanner
    */
   public ClientSideIteratorScanner(final Scanner scanner) {
-    smi = new ScannerTranslatorImpl(scanner);
+    smi = new ScannerTranslatorImpl(scanner, scanner.getSamplerConfiguration());
     this.range = scanner.getRange();
     this.size = scanner.getBatchSize();
     this.timeOut = scanner.getTimeout(TimeUnit.MILLISECONDS);
+    this.batchTimeOut = scanner.getBatchTimeout(TimeUnit.MILLISECONDS);
     this.readaheadThreshold = scanner.getReadaheadThreshold();
+    SamplerConfiguration samplerConfig = scanner.getSamplerConfiguration();
+    if (samplerConfig != null)
+      setSamplerConfiguration(samplerConfig);
   }
 
   /**
    * Sets the source Scanner.
    */
   public void setSource(final Scanner scanner) {
-    smi = new ScannerTranslatorImpl(scanner);
+    smi = new ScannerTranslatorImpl(scanner, scanner.getSamplerConfiguration());
   }
 
   @Override
   public Iterator<Entry<Key,Value>> iterator() {
     smi.scanner.setBatchSize(size);
     smi.scanner.setTimeout(timeOut, TimeUnit.MILLISECONDS);
+    smi.scanner.setBatchTimeout(batchTimeOut, TimeUnit.MILLISECONDS);
     smi.scanner.setReadaheadThreshold(readaheadThreshold);
     if (isolated)
       smi.scanner.enableIsolation();
     else
       smi.scanner.disableIsolation();
 
-    final TreeMap<Integer,IterInfo> tm = new TreeMap<Integer,IterInfo>();
+    smi.samplerConfig = getSamplerConfiguration();
+
+    final TreeMap<Integer,IterInfo> tm = new TreeMap<>();
 
     for (IterInfo iterInfo : serverSideIteratorList) {
       tm.put(iterInfo.getPriority(), iterInfo);
@@ -183,40 +259,13 @@
 
     SortedKeyValueIterator<Key,Value> skvi;
     try {
-      skvi = IteratorUtil.loadIterators(smi, tm.values(), serverSideIteratorOptions, new IteratorEnvironment() {
-        @Override
-        public SortedKeyValueIterator<Key,Value> reserveMapFileReader(final String mapFileName) throws IOException {
-          return null;
-        }
-
-        @Override
-        public AccumuloConfiguration getConfig() {
-          return null;
-        }
-
-        @Override
-        public IteratorScope getIteratorScope() {
-          return null;
-        }
-
-        @Override
-        public boolean isFullMajorCompaction() {
-          return false;
-        }
-
-        @Override
-        public void registerSideChannel(final SortedKeyValueIterator<Key,Value> iter) {}
-
-        @Override
-        public Authorizations getAuthorizations() {
-          return smi.scanner.getAuthorizations();
-        }
-      }, false, null);
+      skvi = IteratorUtil.loadIterators(smi, tm.values(), serverSideIteratorOptions, new ClientSideIteratorEnvironment(getSamplerConfiguration() != null,
+          getIteratorSamplerConfigurationInternal()), false, null);
     } catch (IOException e) {
       throw new RuntimeException(e);
     }
 
-    final Set<ByteSequence> colfs = new TreeSet<ByteSequence>();
+    final Set<ByteSequence> colfs = new TreeSet<>();
     for (Column c : this.getFetchedColumns()) {
       colfs.add(new ArrayByteSequence(c.getColumnFamily()));
     }
@@ -295,4 +344,50 @@
     }
     this.readaheadThreshold = batches;
   }
+
+  private SamplerConfiguration getIteratorSamplerConfigurationInternal() {
+    SamplerConfiguration scannerSamplerConfig = getSamplerConfiguration();
+    if (scannerSamplerConfig != null) {
+      if (iteratorSamplerConfig != null && !iteratorSamplerConfig.equals(scannerSamplerConfig)) {
+        throw new IllegalStateException("Scanner and iterator sampler configuration differ");
+      }
+
+      return scannerSamplerConfig;
+    }
+
+    return iteratorSamplerConfig;
+  }
+
+  /**
+   * This is provided for the case where no sampler configuration is set on the scanner, but there is a need to create iterator deep copies that have sampling
+   * enabled. If sampler configuration is set on the scanner, then this method does not need to be called in order to create deep copies with sampling.
+   *
+   * <p>
+   * Setting this differently from the scanner's sampler configuration may cause exceptions.
+   *
+   * @since 1.8.0
+   */
+  public void setIteratorSamplerConfiguration(SamplerConfiguration sc) {
+    requireNonNull(sc);
+    this.iteratorSamplerConfig = sc;
+  }
+
+  /**
+   * Clear any iterator sampler configuration.
+   *
+   * @since 1.8.0
+   */
+  public void clearIteratorSamplerConfiguration() {
+    this.iteratorSamplerConfig = null;
+  }
+
+  /**
+   * @return currently set iterator sampler configuration.
+   *
+   * @since 1.8.0
+   */
+  public SamplerConfiguration getIteratorSamplerConfiguration() {
+    return iteratorSamplerConfig;
+  }
 }
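The new `ClientSideIteratorEnvironment` and `setIteratorSamplerConfiguration` above let client-side iterators create sample-reading deep copies (via `cloneWithSamplingEnabled()`) even when the wrapped scanner has no sampler configured. A hedged sketch of caller usage; the `RowSampler` options shown ("hasher", "modulus") are illustrative values, not mandated by this patch:

```java
import org.apache.accumulo.core.client.ClientSideIteratorScanner;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.sample.RowSampler;
import org.apache.accumulo.core.client.sample.SamplerConfiguration;

public class IteratorSamplerExample {
  static ClientSideIteratorScanner wrap(Scanner source) {
    // Describe the sample to read from; the sampler must match one
    // already configured on the table for sampling to succeed.
    SamplerConfiguration sc = new SamplerConfiguration(RowSampler.class.getName());
    sc.addOption("hasher", "murmur3_32");
    sc.addOption("modulus", "1009");

    ClientSideIteratorScanner csis = new ClientSideIteratorScanner(source);
    // No sampler is set on the scanner itself, but iterators running
    // client-side can still deep-copy with sampling enabled.
    csis.setIteratorSamplerConfiguration(sc);
    return csis;
  }
}
```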
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriter.java b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriter.java
index 62244e6..d13dc09 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriter.java
@@ -28,7 +28,7 @@
  *
  * @since 1.6.0
  */
-public interface ConditionalWriter {
+public interface ConditionalWriter extends AutoCloseable {
   class Result {
 
     private Status status;
@@ -131,5 +131,6 @@
   /**
    * release any resources (like threads pools) used by conditional writer
    */
+  @Override
   void close();
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
index 77e0134..ae4577d 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.core.client;
 
 import static com.google.common.base.Preconditions.checkArgument;
+import static java.util.Objects.requireNonNull;
 
 import java.util.concurrent.TimeUnit;
 
@@ -38,6 +39,8 @@
 
   private Durability durability = Durability.DEFAULT;
 
+  private String classLoaderContext = null;
+
   /**
    * A set of authorization labels that will be checked against the column visibility of each key in order to filter data. The authorizations passed in must be
    * a subset of the accumulo user's set of authorizations. If the accumulo user has authorizations (A1, A2) and authorizations (A2, A3) are passed, then an
@@ -133,4 +136,39 @@
   public Durability getDurability() {
     return durability;
   }
+
+  /**
+   * Sets the name of the classloader context on this conditional writer. See the administration chapter of the user manual for details on how to configure and use
+   * classloader contexts.
+   *
+   * @param classLoaderContext
+   *          name of the classloader context
+   * @throws NullPointerException
+   *           if context is null
+   * @since 1.8.0
+   */
+  public void setClassLoaderContext(String classLoaderContext) {
+    requireNonNull(classLoaderContext, "context name cannot be null");
+    this.classLoaderContext = classLoaderContext;
+  }
+
+  /**
+   * Clears the current classloader context set on this conditional writer.
+   *
+   * @since 1.8.0
+   */
+  public void clearClassLoaderContext() {
+    this.classLoaderContext = null;
+  }
+
+  /**
+   * Returns the name of the current classloader context set on this conditional writer.
+   *
+   * @return name of the current context
+   * @since 1.8.0
+   */
+  public String getClassLoaderContext() {
+    return this.classLoaderContext;
+  }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java b/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java
index e530100..90e8637 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/IsolatedScanner.java
@@ -28,9 +28,10 @@
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  * A scanner that presents a row isolated view of an accumulo table. Rows are buffered in memory on the client side. If you think your rows may not fit into
  * memory, then you can provide an alternative row buffer factory to the constructor. This would allow rows to be buffered to disk for example.
@@ -111,7 +112,7 @@
           }
 
           // wait a moment before retrying
-          UtilWaitThread.sleep(100);
+          sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
 
           source = newIterator(seekRange);
         }
@@ -193,7 +194,7 @@
 
   public static class MemoryRowBuffer implements RowBuffer {
 
-    private ArrayList<Entry<Key,Value>> buffer = new ArrayList<Entry<Key,Value>>();
+    private ArrayList<Entry<Key,Value>> buffer = new ArrayList<>();
 
     @Override
     public void add(Entry<Key,Value> entry) {
@@ -226,6 +227,7 @@
     this.scanner = scanner;
     this.range = scanner.getRange();
     this.timeOut = scanner.getTimeout(TimeUnit.MILLISECONDS);
+    this.batchTimeOut = scanner.getBatchTimeout(TimeUnit.MILLISECONDS);
     this.batchSize = scanner.getBatchSize();
     this.readaheadThreshold = scanner.getReadaheadThreshold();
     this.bufferFactory = bufferFactory;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/IteratorSetting.java b/core/src/main/java/org/apache/accumulo/core/client/IteratorSetting.java
index e7bbbdf..a6d9a09 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/IteratorSetting.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/IteratorSetting.java
@@ -140,7 +140,7 @@
     setPriority(priority);
     setName(name);
     setIteratorClass(iteratorClass);
-    this.properties = new HashMap<String,String>();
+    this.properties = new HashMap<>();
     addOptions(properties);
   }
 
@@ -209,7 +209,7 @@
    * @since 1.5.0
    */
   public IteratorSetting(DataInput din) throws IOException {
-    this.properties = new HashMap<String,String>();
+    this.properties = new HashMap<>();
     this.readFields(din);
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java b/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java
index e8f675b..676957a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/MutationsRejectedException.java
@@ -47,7 +47,7 @@
   private int unknownErrors;
 
   private static <K,V,L> Map<L,V> transformKeys(Map<K,V> map, Function<K,L> keyFunction) {
-    HashMap<L,V> ret = new HashMap<L,V>();
+    HashMap<L,V> ret = new HashMap<>();
     for (Entry<K,V> entry : map.entrySet()) {
       ret.put(keyFunction.apply(entry.getKey()), entry.getValue());
     }
@@ -125,7 +125,7 @@
   }
 
   private static String format(Map<TabletId,Set<SecurityErrorCode>> hashMap, Instance instance) {
-    Map<String,Set<SecurityErrorCode>> result = new HashMap<String,Set<SecurityErrorCode>>();
+    Map<String,Set<SecurityErrorCode>> result = new HashMap<>();
 
     for (Entry<TabletId,Set<SecurityErrorCode>> entry : hashMap.entrySet()) {
       String tableInfo = Tables.getPrintableTableInfoFromId(instance, entry.getKey().getTableId().toString());
@@ -153,7 +153,7 @@
    */
   @Deprecated
   public List<org.apache.accumulo.core.data.KeyExtent> getAuthorizationFailures() {
-    return new ArrayList<org.apache.accumulo.core.data.KeyExtent>(Collections2.transform(af.keySet(), TabletIdImpl.TID_2_KE_OLD));
+    return new ArrayList<>(Collections2.transform(af.keySet(), TabletIdImpl.TID_2_KE_OLD));
   }
 
   /**
diff --git a/core/src/main/java/org/apache/accumulo/core/client/RowIterator.java b/core/src/main/java/org/apache/accumulo/core/client/RowIterator.java
index 190b0b2..c8dab71 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/RowIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/RowIterator.java
@@ -107,7 +107,7 @@
    * Create an iterator from an (ordered) sequence of KeyValue pairs.
    */
   public RowIterator(Iterator<Entry<Key,Value>> iterator) {
-    this.iter = new PeekingIterator<Entry<Key,Value>>(iterator);
+    this.iter = new PeekingIterator<>(iterator);
   }
 
   /**
diff --git a/core/src/main/java/org/apache/accumulo/core/client/SampleNotPresentException.java b/core/src/main/java/org/apache/accumulo/core/client/SampleNotPresentException.java
new file mode 100644
index 0000000..c70a898
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/SampleNotPresentException.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client;
+
+/**
+ * Exception thrown when a table does not have sampling configured or when sampling is configured but it differs from what was requested.
+ *
+ * @since 1.8.0
+ */
+public class SampleNotPresentException extends RuntimeException {
+
+  public SampleNotPresentException(String message, Exception cause) {
+    super(message, cause);
+  }
+
+  public SampleNotPresentException(String message) {
+    super(message);
+  }
+
+  public SampleNotPresentException() {
+    super();
+  }
+
+  private static final long serialVersionUID = 1L;
+
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java b/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
index b5692d2..2110050 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
@@ -21,6 +21,7 @@
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.IteratorSetting.Column;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
@@ -30,7 +31,7 @@
  * This class hosts configuration methods that are shared between different types of scanners.
  *
  */
-public interface ScannerBase extends Iterable<Entry<Key,Value>> {
+public interface ScannerBase extends Iterable<Entry<Key,Value>>, AutoCloseable {
 
   /**
    * Add a server-side scan iterator.
@@ -65,7 +66,6 @@
   void updateScanIteratorOption(String iteratorName, String key, String value);
 
   /**
-   * <p>
    * Adds a column family to the list of columns that will be fetched by this scanner. By default when no columns have been added the scanner fetches all
    * columns. To fetch multiple column families call this function multiple times.
    *
@@ -82,7 +82,6 @@
   void fetchColumnFamily(Text col);
 
   /**
-   * <p>
    * Adds a column to the list of columns that will be fetched by this scanner. The column is identified by family and qualifier. By default when no columns
    * have been added the scanner fetches all columns.
    *
@@ -161,10 +160,11 @@
   long getTimeout(TimeUnit timeUnit);
 
   /**
-   * Closes any underlying connections on the scanner
+   * Closes any underlying connections on the scanner. This may invalidate any iterators derived from the Scanner, causing them to throw exceptions.
    *
    * @since 1.5.0
    */
+  @Override
   void close();
 
   /**
@@ -174,4 +174,96 @@
    * @return The authorizations set on the scanner instance
    */
   Authorizations getAuthorizations();
+
+  /**
+   * Setting this will cause the scanner to read sample data, as long as that sample data was generated with the given configuration. By default this is not set
+   * and all data is read.
+   *
+   * <p>
+   * One way to use this method is as follows, where the sampler configuration is obtained from the table configuration. Sample data can be generated in many
+   * different ways, so it's important to verify that the sample data configuration meets expectations.
+   *
+   * <pre>
+   * <code>
+   *   // could cache this if creating many scanners to avoid RPCs.
+   *   SamplerConfiguration samplerConfig = connector.tableOperations().getSamplerConfiguration(table);
+   *   // verify table's sample data is generated in an expected way before using
+   *   userCode.verifySamplerConfig(samplerConfig);
+   *   scanner.setSamplerConfiguration(samplerConfig);
+   * </code>
+   * </pre>
+   *
+   * <p>
+   * Of course this is not the only way to obtain a {@link SamplerConfiguration}, it could be a constant, configuration, etc.
+   *
+   * <p>
+   * If sample data is not present or sample data was generated with a different configuration, then the scanner iterator will throw a
+   * {@link SampleNotPresentException}. Also if a table's sampler configuration is changed while a scanner is iterating over a table, a
+   * {@link SampleNotPresentException} may be thrown.
+   *
+   * @since 1.8.0
+   */
+  void setSamplerConfiguration(SamplerConfiguration samplerConfig);
+
+  /**
+   * @return currently set sampler configuration. Returns null if no sampler configuration is set.
+   * @since 1.8.0
+   */
+  SamplerConfiguration getSamplerConfiguration();
+
+  /**
+   * Clears sampler configuration, making the scanner read all data. After calling this, {@link #getSamplerConfiguration()} should return null.
+   *
+   * @since 1.8.0
+   */
+  void clearSamplerConfiguration();
+
+  /**
+   * This setting determines how long a scanner will wait to fill the returned batch. By default, a scanner waits until the batch is full.
+   *
+   * <p>
+   * Setting the timeout to zero (with any time unit) or {@link Long#MAX_VALUE} (with {@link TimeUnit#MILLISECONDS}) means no timeout.
+   *
+   * @param timeOut
+   *          the length of the timeout
+   * @param timeUnit
+   *          the units of the timeout
+   * @since 1.8.0
+   */
+  void setBatchTimeout(long timeOut, TimeUnit timeUnit);
+
+  /**
+   * Returns the timeout to fill a batch in the given TimeUnit.
+   *
+   * @return the batch timeout configured for this scanner
+   * @since 1.8.0
+   */
+  long getBatchTimeout(TimeUnit timeUnit);
+
+  /**
+   * Sets the name of the classloader context on this scanner. See the administration chapter of the user manual for details on how to configure and use
+   * classloader contexts.
+   *
+   * @param classLoaderContext
+   *          name of the classloader context
+   * @throws NullPointerException
+   *           if context is null
+   * @since 1.8.0
+   */
+  void setClassLoaderContext(String classLoaderContext);
+
+  /**
+   * Clears the current classloader context set on this scanner.
+   *
+   * @since 1.8.0
+   */
+  void clearClassLoaderContext();
+
+  /**
+   * Returns the name of the current classloader context set on this scanner.
+   *
+   * @return name of the current context
+   * @since 1.8.0
+   */
+  String getClassLoaderContext();
 }
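The batch timeout Javadoc above states that zero (in any unit) or `Long.MAX_VALUE` milliseconds means no timeout. A self-contained sketch of how an implementation might normalize that setting; the helper name is an assumption for illustration, not part of the ScannerBase API:

```java
import java.util.concurrent.TimeUnit;

public class BatchTimeoutDemo {
  // Hypothetical normalization of the documented setBatchTimeout semantics:
  // zero in any unit, or Long.MAX_VALUE milliseconds, disables the timeout.
  static long toMillis(long timeOut, TimeUnit unit) {
    if (timeOut < 0)
      throw new IllegalArgumentException("timeout must be non-negative");
    long ms = unit.toMillis(timeOut);
    return (ms == 0 || ms == Long.MAX_VALUE) ? Long.MAX_VALUE : ms;
  }

  public static void main(String[] args) {
    System.out.println(toMillis(0, TimeUnit.SECONDS));        // disabled
    System.out.println(toMillis(500, TimeUnit.MILLISECONDS)); // 500
  }
}
```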
diff --git a/core/src/main/java/org/apache/accumulo/core/client/TimedOutException.java b/core/src/main/java/org/apache/accumulo/core/client/TimedOutException.java
index 5ec9c59..e5bba3e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/TimedOutException.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/TimedOutException.java
@@ -34,7 +34,7 @@
       return set.toString();
     }
 
-    return new ArrayList<String>(set).subList(0, 10).toString() + " ... " + (set.size() - 10) + " servers not shown";
+    return new ArrayList<>(set).subList(0, 10).toString() + " ... " + (set.size() - 10) + " servers not shown";
   }
 
   public TimedOutException(Set<String> timedoutServers) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java b/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
index c5cb482..4a4dd5f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java
@@ -23,6 +23,7 @@
 import java.util.Collections;
 import java.util.List;
 import java.util.UUID;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
@@ -43,11 +44,10 @@
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
 import org.apache.commons.configuration.Configuration;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
- * <p>
  * An implementation of instance that looks in zookeeper to find information needed to connect to an instance of accumulo.
  *
  * <p>
@@ -62,7 +62,7 @@
 
 public class ZooKeeperInstance implements Instance {
 
-  private static final Logger log = Logger.getLogger(ZooKeeperInstance.class);
+  private static final Logger log = LoggerFactory.getLogger(ZooKeeperInstance.class);
 
   private String instanceId = null;
   private String instanceName = null;
@@ -187,9 +187,20 @@
   public List<String> getMasterLocations() {
     String masterLocPath = ZooUtil.getRoot(this) + Constants.ZMASTER_LOCK;
 
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Looking up master location in zoocache.");
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Looking up master location in zookeeper.", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
+
     byte[] loc = ZooUtil.getLockData(zooCache, masterLocPath);
-    opTimer.stop("Found master at " + (loc == null ? null : new String(loc, UTF_8)) + " in %DURATION%");
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Found master at {} in {}", Thread.currentThread().getId(), (loc == null ? "null" : new String(loc, UTF_8)),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
 
     if (loc == null) {
       return Collections.emptyList();
@@ -202,9 +213,20 @@
   public String getRootTabletLocation() {
     String zRootLocPath = ZooUtil.getRoot(this) + RootTable.ZROOT_TABLET_LOCATION;
 
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Looking up root tablet location in zookeeper.");
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Looking up root tablet location in zookeeper.", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
+
     byte[] loc = zooCache.get(zRootLocPath);
-    opTimer.stop("Found root tablet at " + (loc == null ? null : new String(loc, UTF_8)) + " in %DURATION%");
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(), (loc == null ? "null" : new String(loc, UTF_8)),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
 
     if (loc == null) {
       return null;
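The change from OpTimer's self-logging to an `isTraceEnabled()` guard is a general pattern: skip the timing work entirely unless TRACE is on. A self-contained sketch using `System.nanoTime` in place of OpTimer; the names are illustrative, not the actual OpTimer API:

```java
import java.util.Locale;

public class GuardedTimerDemo {
  // Stand-in for log.isTraceEnabled(); a real logger decides this at runtime.
  static boolean traceEnabled = true;

  // Formats elapsed nanoseconds the way the diff does with
  // String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)).
  static String formatSecs(long nanos) {
    return String.format(Locale.ROOT, "%.3f secs", nanos / 1e9);
  }

  public static void main(String[] args) {
    Long start = traceEnabled ? System.nanoTime() : null; // no timer unless tracing
    // ... the guarded operation (e.g. a zookeeper lookup) ...
    if (start != null) {
      System.out.println("done in " + formatSecs(System.nanoTime() - start));
    }
  }
}
```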
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java b/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java
index 89d30b1..5228391 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/ActiveCompaction.java
@@ -45,7 +45,7 @@
      * compaction that merges all of a tablets files into one file
      */
     FULL
-  };
+  }
 
   public static enum CompactionReason {
     /**
@@ -68,7 +68,7 @@
     * Compaction initiated to close and unload a tablet
      */
     CLOSE
-  };
+  }
 
   /**
    *
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java b/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java
index 54194c1..79b430c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java
@@ -29,7 +29,9 @@
 
   /**
   * Sets a system property in zookeeper. Tablet servers will pull this setting and override the equivalent setting in accumulo-site.xml. Changes can be seen
-   * using {@link #getSystemConfiguration()}
+   * using {@link #getSystemConfiguration()}.
+   * <p>
+   * Only some properties can be changed by this method; an IllegalArgumentException will be thrown if a read-only property is set.
    *
    * @param property
    *          the name of a per-table property
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/Locations.java b/core/src/main/java/org/apache/accumulo/core/client/admin/Locations.java
new file mode 100644
index 0000000..aaecf33
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/Locations.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.admin;
+
+import java.util.List;
+import java.util.Map;
+
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.TabletId;
+
+/**
+ * A snapshot of metadata describing where each range in a specified set of ranges is located, as returned by {@link TableOperations#locate(String, java.util.Collection)}.
+ *
+ * @since 1.8.0
+ */
+public interface Locations {
+
+  /**
+   * For all of the ranges passed to {@link TableOperations#locate(String, java.util.Collection)}, return a map of the tablets each range overlaps.
+   */
+  public Map<Range,List<TabletId>> groupByRange();
+
+  /**
+   * For all of the ranges passed to {@link TableOperations#locate(String, java.util.Collection)}, return a map of the ranges each tablet overlaps.
+   */
+  public Map<TabletId,List<Range>> groupByTablet();
+
+  /**
+   * For any {@link TabletId} known to this object, the method will return the tablet server location for that tablet.
+   *
+   * @return A tablet server location in the form of {@code <host>:<port>}
+   */
+  public String getTabletLocation(TabletId tabletId);
+}
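`groupByRange()` and `groupByTablet()` are two views of the same range-to-tablet overlap relation, and one can be derived from the other by inverting the multimap. A sketch of that inversion using String stand-ins for Range and TabletId; real code would start from `connector.tableOperations().locate(table, ranges)`:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class LocationsViewsDemo {
  // Inverts a groupByRange()-style map into a groupByTablet()-style map.
  // A range overlapping multiple tablets contributes one entry per tablet,
  // matching the locate() Javadoc's note that such ranges occur multiple times.
  static Map<String,List<String>> groupByTablet(Map<String,List<String>> byRange) {
    Map<String,List<String>> byTablet = new TreeMap<>();
    for (Map.Entry<String,List<String>> e : byRange.entrySet())
      for (String tablet : e.getValue())
        byTablet.computeIfAbsent(tablet, t -> new ArrayList<>()).add(e.getKey());
    return byTablet;
  }

  public static void main(String[] args) {
    Map<String,List<String>> byRange = new TreeMap<>();
    byRange.put("[a,b)", Arrays.asList("t1"));
    byRange.put("[b,d)", Arrays.asList("t1", "t2")); // spans two tablets
    System.out.println(groupByTablet(byRange));
  }
}
```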
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/NewTableConfiguration.java b/core/src/main/java/org/apache/accumulo/core/client/admin/NewTableConfiguration.java
index 4db1d89..4694e1e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/NewTableConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/NewTableConfiguration.java
@@ -17,13 +17,16 @@
 package org.apache.accumulo.core.client.admin;
 
 import static com.google.common.base.Preconditions.checkArgument;
+import static java.util.Objects.requireNonNull;
 
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;
 
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.iterators.IteratorUtil;
 import org.apache.accumulo.core.iterators.user.VersioningIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 
 /**
  * This object stores table creation parameters. Currently includes: {@link TimeType}, whether to include default iterators, and user-specified initial
@@ -38,7 +41,8 @@
 
   private boolean limitVersion = true;
 
-  private Map<String,String> properties = new HashMap<String,String>();
+  private Map<String,String> properties = new HashMap<>();
+  private SamplerConfiguration samplerConfiguration;
 
   /**
    * Configure logical or millisecond time for tables created with this configuration.
@@ -84,8 +88,9 @@
    */
   public NewTableConfiguration setProperties(Map<String,String> prop) {
     checkArgument(prop != null, "properties is null");
+    SamplerConfigurationImpl.checkDisjoint(prop, samplerConfiguration);
 
-    this.properties = new HashMap<String,String>(prop);
+    this.properties = new HashMap<>(prop);
     return this;
   }
 
@@ -101,7 +106,23 @@
       propertyMap.putAll(IteratorUtil.generateInitialTableProperties(limitVersion));
     }
 
+    if (samplerConfiguration != null) {
+      propertyMap.putAll(new SamplerConfigurationImpl(samplerConfiguration).toTablePropertiesMap());
+    }
+
     propertyMap.putAll(properties);
     return Collections.unmodifiableMap(propertyMap);
   }
+
+  /**
+   * Enable building a sample data set on the new table using the given sampler configuration.
+   *
+   * @since 1.8.0
+   */
+  public NewTableConfiguration enableSampling(SamplerConfiguration samplerConfiguration) {
+    requireNonNull(samplerConfiguration);
+    SamplerConfigurationImpl.checkDisjoint(properties, samplerConfiguration);
+    this.samplerConfiguration = samplerConfiguration;
+    return this;
+  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java b/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java
index 45b94a5..3e56736 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/admin/TableOperations.java
@@ -30,6 +30,10 @@
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.TableExistsException;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.TableOfflineException;
+import org.apache.accumulo.core.client.mapreduce.AccumuloFileOutputFormat;
+import org.apache.accumulo.core.client.rfile.RFile;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.security.Authorizations;
@@ -231,6 +235,19 @@
   Collection<Text> listSplits(String tableName, int maxSplits) throws TableNotFoundException, AccumuloSecurityException, AccumuloException;
 
   /**
+   * Locates the tablet servers and tablets that would service a collection of ranges. If a range covers multiple tablets, it will occur multiple times in the
+   * returned map.
+   *
+   * @param ranges
+   *          The input ranges that should be mapped to tablet servers and tablets.
+   *
+   * @throws TableOfflineException
+   *           if the table is offline or goes offline during the operation
+   * @since 1.8.0
+   */
+  Locations locate(String tableName, Collection<Range> ranges) throws AccumuloException, AccumuloSecurityException, TableNotFoundException;
+
+  /**
    * Finds the max row within a given range. To find the max row in a table, pass null for start and end row.
    *
    * @param auths
@@ -523,7 +540,7 @@
   Set<Range> splitRangeByTablets(String tableName, Range range, int maxSplits) throws AccumuloException, AccumuloSecurityException, TableNotFoundException;
 
   /**
-   * Bulk import all the files in a directory into a table.
+   * Bulk import all the files in a directory into a table. Files can be created using {@link AccumuloFileOutputFormat} and {@link RFile#newWriter()}.
    *
    * @param tableName
    *          the name of the table
@@ -762,4 +779,33 @@
    */
   boolean testClassLoad(String tableName, final String className, final String asTypeName) throws AccumuloException, AccumuloSecurityException,
       TableNotFoundException;
+
+  /**
+   * Set or update the sampler configuration for a table. If the table has existing sampler configuration, those properties will be cleared before setting the
+   * new table properties.
+   *
+   * @param tableName
+   *          the name of the table
+   * @since 1.8.0
+   */
+  void setSamplerConfiguration(String tableName, SamplerConfiguration samplerConfiguration) throws TableNotFoundException, AccumuloException,
+      AccumuloSecurityException;
+
+  /**
+   * Clear all sampling configuration properties on the table.
+   *
+   * @param tableName
+   *          the name of the table
+   * @since 1.8.0
+   */
+  void clearSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException;
+
+  /**
+   * Reads the sampling configuration properties for a table.
+   *
+   * @param tableName
+   *          the name of the table
+   * @since 1.8.0
+   */
+  SamplerConfiguration getSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException;
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/AcceptableThriftTableOperationException.java b/core/src/main/java/org/apache/accumulo/core/client/impl/AcceptableThriftTableOperationException.java
new file mode 100644
index 0000000..98c1bf5
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/AcceptableThriftTableOperationException.java
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.client.impl;
+
+import org.apache.accumulo.core.client.impl.thrift.TableOperation;
+import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
+import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
+import org.apache.accumulo.fate.AcceptableException;
+
+/**
+ * Concrete implementation of {@link AcceptableException} for table operations.
+ */
+public class AcceptableThriftTableOperationException extends ThriftTableOperationException implements AcceptableException {
+
+  private static final long serialVersionUID = 1L;
+
+  public AcceptableThriftTableOperationException(String tableId, String tableName, TableOperation op, TableOperationExceptionType type, String description) {
+    super(tableId, tableName, op, type, description);
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java
index 1e429c8..bdd5d51 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveCompactionImpl.java
@@ -28,6 +28,7 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.data.impl.TabletIdImpl;
 import org.apache.accumulo.core.data.thrift.IterInfo;
+import org.apache.hadoop.io.Text;
 
 /**
  *
@@ -45,14 +46,14 @@
 
   @Override
   public String getTable() throws TableNotFoundException {
-    return Tables.getTableName(instance, new KeyExtent(tac.getExtent()).getTableId().toString());
+    return Tables.getTableName(instance, new KeyExtent(tac.getExtent()).getTableId());
   }
 
   @Override
   @Deprecated
   public org.apache.accumulo.core.data.KeyExtent getExtent() {
     KeyExtent ke = new KeyExtent(tac.getExtent());
-    org.apache.accumulo.core.data.KeyExtent oke = new org.apache.accumulo.core.data.KeyExtent(ke.getTableId(), ke.getEndRow(), ke.getPrevEndRow());
+    org.apache.accumulo.core.data.KeyExtent oke = new org.apache.accumulo.core.data.KeyExtent(new Text(ke.getTableId()), ke.getEndRow(), ke.getPrevEndRow());
     return oke;
   }
 
@@ -103,7 +104,7 @@
 
   @Override
   public List<IteratorSetting> getIterators() {
-    ArrayList<IteratorSetting> ret = new ArrayList<IteratorSetting>();
+    ArrayList<IteratorSetting> ret = new ArrayList<>();
 
     for (IterInfo ii : tac.getSsiList()) {
       IteratorSetting settings = new IteratorSetting(ii.getPriority(), ii.getIterName(), ii.getClassName());
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java
index 429f8cd..9021190 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ActiveScanImpl.java
@@ -32,6 +32,7 @@
 import org.apache.accumulo.core.data.thrift.IterInfo;
 import org.apache.accumulo.core.data.thrift.TColumn;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.hadoop.io.Text;
 
 /**
  * A class that contains information about an ActiveScan
@@ -42,7 +43,7 @@
 
   private long scanId;
   private String client;
-  private String table;
+  private String tableName;
   private long age;
   private long idle;
   private ScanType type;
@@ -60,18 +61,18 @@
     this.user = activeScan.user;
     this.age = activeScan.age;
     this.idle = activeScan.idleTime;
-    this.table = Tables.getTableName(instance, activeScan.tableId);
+    this.tableName = Tables.getTableName(instance, activeScan.tableId);
     this.type = ScanType.valueOf(activeScan.getType().name());
     this.state = ScanState.valueOf(activeScan.state.name());
     this.extent = new KeyExtent(activeScan.extent);
     this.authorizations = new Authorizations(activeScan.authorizations);
 
-    this.columns = new ArrayList<Column>(activeScan.columns.size());
+    this.columns = new ArrayList<>(activeScan.columns.size());
 
     for (TColumn tcolumn : activeScan.columns)
       this.columns.add(new Column(tcolumn));
 
-    this.ssiList = new ArrayList<String>();
+    this.ssiList = new ArrayList<>();
     for (IterInfo ii : activeScan.ssiList) {
       this.ssiList.add(ii.iterName + "=" + ii.priority + "," + ii.className);
     }
@@ -95,7 +96,7 @@
 
   @Override
   public String getTable() {
-    return table;
+    return tableName;
   }
 
   @Override
@@ -121,7 +122,7 @@
   @Override
   @Deprecated
   public org.apache.accumulo.core.data.KeyExtent getExtent() {
-    return new org.apache.accumulo.core.data.KeyExtent(extent.getTableId(), extent.getEndRow(), extent.getPrevEndRow());
+    return new org.apache.accumulo.core.data.KeyExtent(new Text(extent.getTableId()), extent.getEndRow(), extent.getPrevEndRow());
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/BaseIteratorEnvironment.java b/core/src/main/java/org/apache/accumulo/core/client/impl/BaseIteratorEnvironment.java
new file mode 100644
index 0000000..7b1c441
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/BaseIteratorEnvironment.java
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.impl;
+
+import java.io.IOException;
+
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.security.Authorizations;
+
+/**
+ * An implementation of {@link IteratorEnvironment} that throws {@link UnsupportedOperationException} for each operation. This is useful for subclasses that
+ * need to override only a subset of the {@link IteratorEnvironment} methods.
+ */
+
+public class BaseIteratorEnvironment implements IteratorEnvironment {
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public AccumuloConfiguration getConfig() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public IteratorScope getIteratorScope() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public boolean isFullMajorCompaction() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public Authorizations getAuthorizations() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public boolean isSamplingEnabled() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public SamplerConfiguration getSamplerConfiguration() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public IteratorEnvironment cloneWithSamplingEnabled() {
+    throw new UnsupportedOperationException();
+  }
+
+}
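The new `BaseIteratorEnvironment` above exists so a subclass overrides only the methods it actually supports. That base-class pattern can be sketched with plain-JDK stand-ins (the `Env`, `BaseEnv`, and `MinimalEnv` names are illustrative, not Accumulo's):

```java
public class ThrowingBaseDemo {
  // Stand-in for IteratorEnvironment: the base implementation throws for
  // every operation, so subclasses implement only the subset they need.
  interface Env {
    boolean isSamplingEnabled();

    String getAuthorizations();
  }

  static class BaseEnv implements Env {
    @Override
    public boolean isSamplingEnabled() {
      throw new UnsupportedOperationException();
    }

    @Override
    public String getAuthorizations() {
      throw new UnsupportedOperationException();
    }
  }

  // A subclass that supports only one of the operations; the rest keep the
  // throwing behavior inherited from the base class.
  static class MinimalEnv extends BaseEnv {
    @Override
    public boolean isSamplingEnabled() {
      return false;
    }
  }
}
```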
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/BatchWriterImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/BatchWriterImpl.java
index c173333..7096187 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/BatchWriterImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/BatchWriterImpl.java
@@ -25,28 +25,28 @@
 
 public class BatchWriterImpl implements BatchWriter {
 
-  private final String table;
+  private final String tableId;
   private final TabletServerBatchWriter bw;
 
-  public BatchWriterImpl(ClientContext context, String table, BatchWriterConfig config) {
+  public BatchWriterImpl(ClientContext context, String tableId, BatchWriterConfig config) {
     checkArgument(context != null, "context is null");
-    checkArgument(table != null, "table is null");
+    checkArgument(tableId != null, "tableId is null");
     if (config == null)
       config = new BatchWriterConfig();
-    this.table = table;
+    this.tableId = tableId;
     this.bw = new TabletServerBatchWriter(context, config);
   }
 
   @Override
   public void addMutation(Mutation m) throws MutationsRejectedException {
     checkArgument(m != null, "m is null");
-    bw.addMutation(table, m);
+    bw.addMutation(tableId, m);
   }
 
   @Override
   public void addMutations(Iterable<Mutation> iterable) throws MutationsRejectedException {
     checkArgument(iterable != null, "iterable is null");
-    bw.addMutation(table, iterable.iterator());
+    bw.addMutation(tableId, iterable.iterator());
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/CompressedIterators.java b/core/src/main/java/org/apache/accumulo/core/client/impl/CompressedIterators.java
index 96d58a7..c227b40 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/CompressedIterators.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/CompressedIterators.java
@@ -32,13 +32,13 @@
   private List<String> symbolTable;
 
   public static class IterConfig {
-    public List<IterInfo> ssiList = new ArrayList<IterInfo>();
-    public Map<String,Map<String,String>> ssio = new HashMap<String,Map<String,String>>();
+    public List<IterInfo> ssiList = new ArrayList<>();
+    public Map<String,Map<String,String>> ssio = new HashMap<>();
   }
 
   public CompressedIterators() {
-    symbolMap = new HashMap<String,Integer>();
-    symbolTable = new ArrayList<String>();
+    symbolMap = new HashMap<>();
+    symbolTable = new ArrayList<>();
   }
 
   public CompressedIterators(List<String> symbols) {
@@ -96,7 +96,7 @@
 
       int numOpts = in.readVInt();
 
-      HashMap<String,String> opts = new HashMap<String,String>();
+      HashMap<String,String> opts = new HashMap<>();
 
       for (int j = 0; j < numOpts; j++) {
         String key = symbolTable.get(in.readVInt());
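The `symbolMap`/`symbolTable` pair in `CompressedIterators` implements a small string-interning scheme: each distinct string gets a small integer id, and iterator configs are serialized as ids rather than repeated strings. A self-contained sketch of that encode/decode step (illustrative names, not the class's actual API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SymbolTableSketch {
  // Forward map: string -> id, assigned in first-seen order.
  private final Map<String,Integer> symbolMap = new HashMap<>();
  // Reverse table: id -> string, indexed by the assigned id.
  private final List<String> symbols = new ArrayList<>();

  int encode(String s) {
    Integer id = symbolMap.get(s);
    if (id == null) {
      id = symbols.size();
      symbolMap.put(s, id);
      symbols.add(s);
    }
    return id;
  }

  String decode(int id) {
    return symbols.get(id);
  }
}
```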
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ConditionalWriterImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ConditionalWriterImpl.java
index c7756ad..98a15ca 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ConditionalWriterImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ConditionalWriterImpl.java
@@ -17,6 +17,7 @@
 
 package org.apache.accumulo.core.client.impl;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.nio.ByteBuffer;
@@ -77,7 +78,6 @@
 import org.apache.accumulo.core.util.BadArgumentException;
 import org.apache.accumulo.core.util.ByteBufferUtil;
 import org.apache.accumulo.core.util.NamingThreadFactory;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.util.LoggingRunnable;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
@@ -113,17 +113,18 @@
   private Map<Text,Boolean> cache = Collections.synchronizedMap(new LRUMap(1000));
   private final ClientContext context;
   private TabletLocator locator;
-  private String tableId;
+  private final String tableId;
   private long timeout;
   private final Durability durability;
+  private final String classLoaderContext;
 
   private static class ServerQueue {
-    BlockingQueue<TabletServerMutations<QCMutation>> queue = new LinkedBlockingQueue<TabletServerMutations<QCMutation>>();
+    BlockingQueue<TabletServerMutations<QCMutation>> queue = new LinkedBlockingQueue<>();
     boolean taskQueued = false;
   }
 
   private Map<String,ServerQueue> serverQueues;
-  private DelayQueue<QCMutation> failedMutations = new DelayQueue<QCMutation>();
+  private DelayQueue<QCMutation> failedMutations = new DelayQueue<>();
   private ScheduledThreadPoolExecutor threadPool;
 
   private class RQIterator implements Iterator<Result> {
@@ -261,7 +262,7 @@
 
       long time = System.currentTimeMillis();
 
-      ArrayList<QCMutation> mutations2 = new ArrayList<ConditionalWriterImpl.QCMutation>(mutations.size());
+      ArrayList<QCMutation> mutations2 = new ArrayList<>(mutations.size());
 
       for (QCMutation qcm : mutations) {
         qcm.resetDelay();
@@ -289,8 +290,8 @@
   }
 
   private void queue(List<QCMutation> mutations) {
-    List<QCMutation> failures = new ArrayList<QCMutation>();
-    Map<String,TabletServerMutations<QCMutation>> binnedMutations = new HashMap<String,TabletLocator.TabletServerMutations<QCMutation>>();
+    List<QCMutation> failures = new ArrayList<>();
+    Map<String,TabletServerMutations<QCMutation>> binnedMutations = new HashMap<>();
 
     try {
       locator.binMutations(context, mutations, binnedMutations, failures);
@@ -354,7 +355,7 @@
   private TabletServerMutations<QCMutation> dequeue(String location) {
     BlockingQueue<TabletServerMutations<QCMutation>> queue = getServerQueue(location).queue;
 
-    ArrayList<TabletServerMutations<QCMutation>> mutations = new ArrayList<TabletLocator.TabletServerMutations<QCMutation>>();
+    ArrayList<TabletServerMutations<QCMutation>> mutations = new ArrayList<>();
     queue.drainTo(mutations);
 
     if (mutations.size() == 0)
@@ -370,7 +371,7 @@
         for (Entry<KeyExtent,List<QCMutation>> entry : mutations.get(i).getMutations().entrySet()) {
           List<QCMutation> list = tsm.getMutations().get(entry.getKey());
           if (list == null) {
-            list = new ArrayList<QCMutation>();
+            list = new ArrayList<>();
             tsm.getMutations().put(entry.getKey(), list);
           }
 
@@ -387,17 +388,18 @@
     this.auths = config.getAuthorizations();
     this.ve = new VisibilityEvaluator(config.getAuthorizations());
     this.threadPool = new ScheduledThreadPoolExecutor(config.getMaxWriteThreads(), new NamingThreadFactory(this.getClass().getSimpleName()));
-    this.locator = TabletLocator.getLocator(context, new Text(tableId));
-    this.serverQueues = new HashMap<String,ServerQueue>();
+    this.locator = new SyncingTabletLocator(context, tableId);
+    this.serverQueues = new HashMap<>();
     this.tableId = tableId;
     this.timeout = config.getTimeout(TimeUnit.MILLISECONDS);
     this.durability = config.getDurability();
+    this.classLoaderContext = config.getClassLoaderContext();
 
     Runnable failureHandler = new Runnable() {
 
       @Override
       public void run() {
-        List<QCMutation> mutations = new ArrayList<QCMutation>();
+        List<QCMutation> mutations = new ArrayList<>();
         failedMutations.drainTo(mutations);
         if (mutations.size() > 0)
           queue(mutations);
@@ -412,9 +414,9 @@
   @Override
   public Iterator<Result> write(Iterator<ConditionalMutation> mutations) {
 
-    BlockingQueue<Result> resultQueue = new LinkedBlockingQueue<Result>();
+    BlockingQueue<Result> resultQueue = new LinkedBlockingQueue<>();
 
-    List<QCMutation> mutationList = new ArrayList<QCMutation>();
+    List<QCMutation> mutationList = new ArrayList<>();
 
     int count = 0;
 
@@ -489,7 +491,7 @@
     }
   }
 
-  private HashMap<HostAndPort,SessionID> cachedSessionIDs = new HashMap<HostAndPort,SessionID>();
+  private HashMap<HostAndPort,SessionID> cachedSessionIDs = new HashMap<>();
 
   private SessionID reserveSessionID(HostAndPort location, TabletClientService.Iface client, TInfo tinfo) throws ThriftSecurityException, TException {
     // avoid cost of repeatedly making RPC to create sessions, reuse sessions
@@ -509,7 +511,7 @@
     }
 
     TConditionalSession tcs = client.startConditionalUpdate(tinfo, context.rpcCreds(), ByteBufferUtil.toByteBuffers(auths.getAuthorizations()), tableId,
-        DurabilityImpl.toThrift(durability));
+        DurabilityImpl.toThrift(durability), this.classLoaderContext);
 
     synchronized (cachedSessionIDs) {
       SessionID sid = new SessionID();
@@ -546,7 +548,7 @@
   }
 
   List<SessionID> getActiveSessions() {
-    ArrayList<SessionID> activeSessions = new ArrayList<SessionID>();
+    ArrayList<SessionID> activeSessions = new ArrayList<>();
     for (SessionID sid : cachedSessionIDs.values())
       if (sid.isActive())
         activeSessions.add(sid);
@@ -567,13 +569,13 @@
 
     TInfo tinfo = Tracer.traceInfo();
 
-    Map<Long,CMK> cmidToCm = new HashMap<Long,CMK>();
+    Map<Long,CMK> cmidToCm = new HashMap<>();
     MutableLong cmid = new MutableLong(0);
 
     SessionID sessionId = null;
 
     try {
-      Map<TKeyExtent,List<TConditionalMutation>> tmutations = new HashMap<TKeyExtent,List<TConditionalMutation>>();
+      Map<TKeyExtent,List<TConditionalMutation>> tmutations = new HashMap<>();
 
       CompressedIterators compressedIters = new CompressedIterators();
       convertMutations(mutations, cmidToCm, cmid, tmutations, compressedIters);
@@ -592,9 +594,9 @@
         }
       }
 
-      HashSet<KeyExtent> extentsToInvalidate = new HashSet<KeyExtent>();
+      HashSet<KeyExtent> extentsToInvalidate = new HashSet<>();
 
-      ArrayList<QCMutation> ignored = new ArrayList<QCMutation>();
+      ArrayList<QCMutation> ignored = new ArrayList<>();
 
       for (TCMResult tcmResult : tresults) {
         if (tcmResult.status == TCMStatus.IGNORED) {
@@ -635,7 +637,7 @@
   }
 
   private void queueRetry(Map<Long,CMK> cmidToCm, HostAndPort location) {
-    ArrayList<QCMutation> ignored = new ArrayList<QCMutation>();
+    ArrayList<QCMutation> ignored = new ArrayList<>();
     for (CMK cmk : cmidToCm.values())
       ignored.add(cmk.cm);
     queueRetry(ignored, location);
@@ -699,7 +701,7 @@
       if ((System.currentTimeMillis() - startTime) + sleepTime > timeout)
         throw new TimedOutException(Collections.singleton(location.toString()));
 
-      UtilWaitThread.sleep(sleepTime);
+      sleepUninterruptibly(sleepTime, TimeUnit.MILLISECONDS);
       sleepTime = Math.min(2 * sleepTime, MAX_SLEEP);
 
     }
@@ -737,7 +739,7 @@
 
     for (Entry<KeyExtent,List<QCMutation>> entry : mutations.getMutations().entrySet()) {
       TKeyExtent tke = entry.getKey().toThrift();
-      ArrayList<TConditionalMutation> tcondMutaions = new ArrayList<TConditionalMutation>();
+      ArrayList<TConditionalMutation> tcondMutaions = new ArrayList<>();
 
       List<QCMutation> condMutations = entry.getValue();
 
@@ -790,7 +792,7 @@
   private static final ConditionComparator CONDITION_COMPARATOR = new ConditionComparator();
 
   private List<TCondition> convertConditions(ConditionalMutation cm, CompressedIterators compressedIters) {
-    List<TCondition> conditions = new ArrayList<TCondition>(cm.getConditions().size());
+    List<TCondition> conditions = new ArrayList<>(cm.getConditions().size());
 
      // sort conditions in order to get better lookup performance. Sort on client side so tserver does not have to do it.
     Condition[] ca = cm.getConditions().toArray(new Condition[cm.getConditions().size()]);
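The retry loop in `ConditionalWriterImpl` (see the `sleepUninterruptibly` hunk above) doubles `sleepTime` on each attempt but caps it at `MAX_SLEEP`. That capped exponential backoff step in isolation, with an assumed placeholder value for the cap:

```java
public class Backoff {
  // Assumed cap for illustration; the actual MAX_SLEEP constant in
  // ConditionalWriterImpl is not shown in this diff.
  static final long MAX_SLEEP = 30_000;

  // One backoff step from the retry loop: double the sleep, capped at MAX_SLEEP.
  static long nextSleep(long sleepTime) {
    return Math.min(2 * sleepTime, MAX_SLEEP);
  }
}
```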
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/InstanceOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/InstanceOperationsImpl.java
index 6383967..d716650 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/InstanceOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/InstanceOperationsImpl.java
@@ -63,7 +63,7 @@
   }
 
   @Override
-  public void setProperty(final String property, final String value) throws AccumuloException, AccumuloSecurityException {
+  public void setProperty(final String property, final String value) throws AccumuloException, AccumuloSecurityException, IllegalArgumentException {
     checkArgument(property != null, "property is null");
     checkArgument(value != null, "value is null");
     MasterClient.execute(context, new ClientExec<MasterClientService.Client>() {
@@ -110,11 +110,11 @@
     Instance instance = context.getInstance();
     ZooCache cache = new ZooCacheFactory().getZooCache(instance.getZooKeepers(), instance.getZooKeepersSessionTimeOut());
     String path = ZooUtil.getRoot(instance) + Constants.ZTSERVERS;
-    List<String> results = new ArrayList<String>();
+    List<String> results = new ArrayList<>();
     for (String candidate : cache.getChildren(path)) {
       List<String> children = cache.getChildren(path + "/" + candidate);
       if (children != null && children.size() > 0) {
-        List<String> copy = new ArrayList<String>(children);
+        List<String> copy = new ArrayList<>(children);
         Collections.sort(copy);
         byte[] data = cache.get(path + "/" + candidate + "/" + copy.get(0));
         if (data != null && !"master".equals(new String(data, UTF_8))) {
@@ -132,7 +132,7 @@
     try {
       client = ThriftUtil.getTServerClient(parsedTserver, context);
 
-      List<ActiveScan> as = new ArrayList<ActiveScan>();
+      List<ActiveScan> as = new ArrayList<>();
       for (org.apache.accumulo.core.tabletserver.thrift.ActiveScan activeScan : client.getActiveScans(Tracer.traceInfo(), context.rpcCreds())) {
         try {
           as.add(new ActiveScanImpl(context.getInstance(), activeScan));
@@ -170,7 +170,7 @@
     try {
       client = ThriftUtil.getTServerClient(parsedTserver, context);
 
-      List<ActiveCompaction> as = new ArrayList<ActiveCompaction>();
+      List<ActiveCompaction> as = new ArrayList<>();
       for (org.apache.accumulo.core.tabletserver.thrift.ActiveCompaction activeCompaction : client.getActiveCompactions(Tracer.traceInfo(), context.rpcCreds())) {
         as.add(new ActiveCompactionImpl(context.getInstance(), activeCompaction));
       }
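In `getTabletServers` above, the current lock holder under each tserver path is found by sorting a copy of the ZooKeeper children and reading the lowest-sequenced node. The selection step on its own (the `zlock-…` node names below are illustrative of sequential znodes, not taken from this diff):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LowestLockNode {
  // Copy the children before sorting (the live list should not be mutated),
  // then the first entry after sorting names the current lock holder.
  static String lockHolder(List<String> children) {
    List<String> copy = new ArrayList<>(children);
    Collections.sort(copy);
    return copy.get(0);
  }
}
```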
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/MasterClient.java b/core/src/main/java/org/apache/accumulo/core/client/impl/MasterClient.java
index 32a71bc..73e7f10 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/MasterClient.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/MasterClient.java
@@ -20,6 +20,7 @@
 
 import java.net.UnknownHostException;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -29,13 +30,13 @@
 import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.master.thrift.MasterClientService;
 import org.apache.accumulo.core.rpc.ThriftUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.thrift.TServiceClient;
 import org.apache.thrift.transport.TTransportException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.net.HostAndPort;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 public class MasterClient {
   private static final Logger log = LoggerFactory.getLogger(MasterClient.class);
@@ -46,7 +47,7 @@
       MasterClientService.Client result = getConnection(context);
       if (result != null)
         return result;
-      UtilWaitThread.sleep(250);
+      sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
     }
   }
 
@@ -97,7 +98,7 @@
         return exec.execute(client);
       } catch (TTransportException tte) {
         log.debug("MasterClient request failed, retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } catch (ThriftSecurityException e) {
         throw new AccumuloSecurityException(e.user, e.code, e);
       } catch (AccumuloException e) {
@@ -130,7 +131,7 @@
         break;
       } catch (TTransportException tte) {
         log.debug("MasterClient request failed, retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } catch (ThriftSecurityException e) {
         throw new AccumuloSecurityException(e.user, e.code, e);
       } catch (AccumuloException e) {
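`UtilWaitThread.sleep` is replaced throughout by Guava's `sleepUninterruptibly`, which keeps sleeping through interrupts and restores the thread's interrupt flag before returning. Its behavior can be approximated with the JDK alone (this is a sketch of the semantics, not Guava's code):

```java
import java.util.concurrent.TimeUnit;

public class SleepApprox {
  // Sleep for the full duration even if interrupted; re-set the interrupt
  // flag at the end so callers can still observe the interruption.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          remainingNanos = 0;
        } catch (InterruptedException e) {
          interrupted = true;
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted)
        Thread.currentThread().interrupt();
    }
  }
}
```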
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/MultiTableBatchWriterImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/MultiTableBatchWriterImpl.java
index 5d13eda..15d1c34 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/MultiTableBatchWriterImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/MultiTableBatchWriterImpl.java
@@ -53,21 +53,21 @@
 
   private class TableBatchWriter implements BatchWriter {
 
-    private String table;
+    private String tableId;
 
-    TableBatchWriter(String table) {
-      this.table = table;
+    TableBatchWriter(String tableId) {
+      this.tableId = tableId;
     }
 
     @Override
     public void addMutation(Mutation m) throws MutationsRejectedException {
       checkArgument(m != null, "m is null");
-      bw.addMutation(table, m);
+      bw.addMutation(tableId, m);
     }
 
     @Override
     public void addMutations(Iterable<Mutation> iterable) throws MutationsRejectedException {
-      bw.addMutation(table, iterable.iterator());
+      bw.addMutation(tableId, iterable.iterator());
     }
 
     @Override
@@ -118,7 +118,7 @@
     checkArgument(cacheTimeUnit != null, "cacheTimeUnit is null");
     this.context = context;
     this.bw = new TabletServerBatchWriter(context, config);
-    tableWriters = new ConcurrentHashMap<String,BatchWriter>();
+    tableWriters = new ConcurrentHashMap<>();
     this.closed = new AtomicBoolean(false);
     this.cacheLastState = new AtomicLong(0);
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsHelper.java b/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsHelper.java
index 2dac1fa..9b3a358 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsHelper.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsHelper.java
@@ -66,7 +66,7 @@
       NamespaceNotFoundException {
     if (!exists(namespace))
       throw new NamespaceNotFoundException(null, namespace, null);
-    Map<String,String> copy = new TreeMap<String,String>();
+    Map<String,String> copy = new TreeMap<>();
     for (Entry<String,String> property : this.getProperties(namespace)) {
       copy.put(property.getKey(), property.getValue());
     }
@@ -86,7 +86,7 @@
       throw new NamespaceNotFoundException(null, namespace, null);
     int priority = -1;
     String classname = null;
-    Map<String,String> settings = new HashMap<String,String>();
+    Map<String,String> settings = new HashMap<>();
 
     String root = String.format("%s%s.%s", Property.TABLE_ITERATOR_PREFIX, scope.name().toLowerCase(), name);
     String opt = root + ".opt.";
@@ -112,7 +112,7 @@
   public Map<String,EnumSet<IteratorScope>> listIterators(String namespace) throws AccumuloSecurityException, AccumuloException, NamespaceNotFoundException {
     if (!exists(namespace))
       throw new NamespaceNotFoundException(null, namespace, null);
-    Map<String,EnumSet<IteratorScope>> result = new TreeMap<String,EnumSet<IteratorScope>>();
+    Map<String,EnumSet<IteratorScope>> result = new TreeMap<>();
     for (Entry<String,String> property : this.getProperties(namespace)) {
       String name = property.getKey();
       String[] parts = name.split("\\.");
@@ -137,7 +137,7 @@
       String scopeStr = String.format("%s%s", Property.TABLE_ITERATOR_PREFIX, scope.name().toLowerCase());
       String nameStr = String.format("%s.%s", scopeStr, setting.getName());
       String optStr = String.format("%s.opt.", nameStr);
-      Map<String,String> optionConflicts = new TreeMap<String,String>();
+      Map<String,String> optionConflicts = new TreeMap<>();
       for (Entry<String,String> property : this.getProperties(namespace)) {
         if (property.getKey().startsWith(scopeStr)) {
           if (property.getKey().equals(nameStr))
@@ -165,8 +165,8 @@
 
   @Override
   public int addConstraint(String namespace, String constraintClassName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException {
-    TreeSet<Integer> constraintNumbers = new TreeSet<Integer>();
-    TreeMap<String,Integer> constraintClasses = new TreeMap<String,Integer>();
+    TreeSet<Integer> constraintNumbers = new TreeSet<>();
+    TreeMap<String,Integer> constraintClasses = new TreeMap<>();
     int i;
     for (Entry<String,String> property : this.getProperties(namespace)) {
       if (property.getKey().startsWith(Property.TABLE_CONSTRAINT_PREFIX.toString())) {
@@ -196,7 +196,7 @@
 
   @Override
   public Map<String,Integer> listConstraints(String namespace) throws AccumuloException, NamespaceNotFoundException, AccumuloSecurityException {
-    Map<String,Integer> constraints = new TreeMap<String,Integer>();
+    Map<String,Integer> constraints = new TreeMap<>();
     for (Entry<String,String> property : this.getProperties(namespace)) {
       if (property.getKey().startsWith(Property.TABLE_CONSTRAINT_PREFIX.toString())) {
         if (constraints.containsKey(property.getValue()))
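The `getIteratorSetting` logic earlier in this file reassembles an iterator setting from flat properties: the root key stores `"priority,classname"` and sibling `.opt.` keys store the options. A minimal stand-alone version of that parse (the helper name, return shape, and property values below are illustrative):

```java
import java.util.Map;
import java.util.TreeMap;

public class IterPropsSketch {
  // Pull "priority,classname" from the root key and each option from
  // "<root>.opt.<key>", mirroring NamespaceOperationsHelper.getIteratorSetting.
  static String parse(Map<String,String> props, String root) {
    String opt = root + ".opt.";
    int priority = -1;
    String classname = null;
    Map<String,String> settings = new TreeMap<>();
    for (Map.Entry<String,String> e : props.entrySet()) {
      if (e.getKey().equals(root)) {
        String[] parts = e.getValue().split(",");
        priority = Integer.parseInt(parts[0]);
        classname = parts[1];
      } else if (e.getKey().startsWith(opt)) {
        settings.put(e.getKey().substring(opt.length()), e.getValue());
      }
    }
    return priority + " " + classname + " " + settings;
  }
}
```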
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsImpl.java
index b087c73..0716122 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/NamespaceOperationsImpl.java
@@ -29,6 +29,7 @@
 import java.util.Map.Entry;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -50,14 +51,14 @@
 import org.apache.accumulo.core.master.thrift.MasterClientService;
 import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.core.util.OpTimer;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class NamespaceOperationsImpl extends NamespaceOperationsHelper {
   private final ClientContext context;
   private TableOperationsImpl tableOps;
 
-  private static final Logger log = Logger.getLogger(TableOperations.class);
+  private static final Logger log = LoggerFactory.getLogger(TableOperations.class);
 
   public NamespaceOperationsImpl(ClientContext context, TableOperationsImpl tableOps) {
     checkArgument(context != null, "context is null");
@@ -67,9 +68,22 @@
 
   @Override
   public SortedSet<String> list() {
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Fetching list of namespaces...");
-    TreeSet<String> namespaces = new TreeSet<String>(Namespaces.getNameToIdMap(context.getInstance()).keySet());
-    opTimer.stop("Fetched " + namespaces.size() + " namespaces in %DURATION%");
+
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Fetching list of namespaces...", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
+
+    TreeSet<String> namespaces = new TreeSet<>(Namespaces.getNameToIdMap(context.getInstance()).keySet());
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Fetched {} namespaces in {}", Thread.currentThread().getId(), namespaces.size(),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
+
     return namespaces;
   }
 
@@ -77,9 +91,20 @@
   public boolean exists(String namespace) {
     checkArgument(namespace != null, "namespace is null");
 
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Checking if namespace " + namespace + " exists...");
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Checking if namespace {} exists", Thread.currentThread().getId(), namespace);
+      timer = new OpTimer().start();
+    }
+
     boolean exists = Namespaces.getNameToIdMap(context.getInstance()).containsKey(namespace);
-    opTimer.stop("Checked existance of " + exists + " in %DURATION%");
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Checked existence of {} in {}", Thread.currentThread().getId(), exists, String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
+
     return exists;
   }
 
@@ -103,7 +128,7 @@
 
     if (namespaceId.equals(Namespaces.ACCUMULO_NAMESPACE_ID) || namespaceId.equals(Namespaces.DEFAULT_NAMESPACE_ID)) {
       Credentials credentials = context.getCredentials();
-      log.debug(credentials.getPrincipal() + " attempted to delete the " + namespaceId + " namespace");
+      log.debug("{} attempted to delete the {} namespace", credentials.getPrincipal(), namespaceId);
       throw new AccumuloSecurityException(credentials.getPrincipal(), SecurityErrorCode.UNSUPPORTED_OPERATION);
     }
 
@@ -112,7 +137,7 @@
     }
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(namespace.getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     try {
       doNamespaceFateOperation(FateOperation.NAMESPACE_DELETE, args, opts, namespace);
@@ -128,7 +153,7 @@
       NamespaceExistsException {
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(oldNamespaceName.getBytes(UTF_8)), ByteBuffer.wrap(newNamespaceName.getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     doNamespaceFateOperation(FateOperation.NAMESPACE_RENAME, args, opts, oldNamespaceName);
   }
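The `list()`/`exists()` refactor above replaces the log4j-level-driven `OpTimer` with a null-guarded timer that is only created when trace logging is enabled, so the disabled-trace path does no timing work at all. The guard pattern in isolation, using a raw `System.nanoTime()` stand-in for `OpTimer`:

```java
public class TraceGuardedTimer {
  // Returns the formatted elapsed time when tracing is enabled, else null.
  // The null check stands in for the "if (timer != null)" guard in the diff.
  static String runTimed(boolean traceEnabled, Runnable op) {
    Long start = traceEnabled ? System.nanoTime() : null;
    op.run();
    if (start == null)
      return null; // trace disabled: no timer was ever created
    double secs = (System.nanoTime() - start) / 1_000_000_000.0;
    return String.format("%.3f secs", secs);
  }
}
```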
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java b/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java
index 3c2bc45..39d5822 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/Namespaces.java
@@ -37,7 +37,7 @@
   public static final String VALID_NAME_REGEX = "^\\w*$";
   public static final Validator<String> VALID_NAME = new Validator<String>() {
     @Override
-    public boolean isValid(String namespace) {
+    public boolean apply(String namespace) {
       return namespace != null && namespace.matches(VALID_NAME_REGEX);
     }
 
@@ -51,7 +51,7 @@
 
   public static final Validator<String> NOT_DEFAULT = new Validator<String>() {
     @Override
-    public boolean isValid(String namespace) {
+    public boolean apply(String namespace) {
       return !Namespaces.DEFAULT_NAMESPACE.equals(namespace);
     }
 
@@ -63,7 +63,7 @@
 
   public static final Validator<String> NOT_ACCUMULO = new Validator<String>() {
     @Override
-    public boolean isValid(String namespace) {
+    public boolean apply(String namespace) {
       return !Namespaces.ACCUMULO_NAMESPACE.equals(namespace);
     }
 
@@ -93,7 +93,7 @@
 
     List<String> namespaceIds = zc.getChildren(ZooUtil.getRoot(instance) + Constants.ZNAMESPACES);
 
-    TreeMap<String,String> namespaceMap = new TreeMap<String,String>();
+    TreeMap<String,String> namespaceMap = new TreeMap<>();
 
     for (String id : namespaceIds) {
       byte[] path = zc.get(ZooUtil.getRoot(instance) + Constants.ZNAMESPACES + "/" + id + Constants.ZNAMESPACE_NAME);
@@ -137,7 +137,7 @@
 
   public static List<String> getTableIds(Instance instance, String namespaceId) throws NamespaceNotFoundException {
     String namespace = getNamespaceName(instance, namespaceId);
-    List<String> names = new LinkedList<String>();
+    List<String> names = new LinkedList<>();
     for (Entry<String,String> nameToId : Tables.getNameToIdMap(instance).entrySet())
       if (namespace.equals(Tables.qualify(nameToId.getKey()).getFirst()))
         names.add(nameToId.getValue());
@@ -146,7 +146,7 @@
 
   public static List<String> getTableNames(Instance instance, String namespaceId) throws NamespaceNotFoundException {
     String namespace = getNamespaceName(instance, namespaceId);
-    List<String> names = new LinkedList<String>();
+    List<String> names = new LinkedList<>();
     for (String name : Tables.getNameToIdMap(instance).keySet())
       if (namespace.equals(Tables.qualify(name).getFirst()))
         names.add(name);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineIterator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineIterator.java
index 69ad41e..9f51704 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/OfflineIterator.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.client.impl;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
@@ -23,18 +25,20 @@
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.RowIterator;
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Column;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.KeyValue;
 import org.apache.accumulo.core.data.PartialKey;
@@ -45,23 +49,19 @@
 import org.apache.accumulo.core.file.FileSKVIterator;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.IteratorUtil;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
-import org.apache.accumulo.core.iterators.system.ColumnQualifierFilter;
-import org.apache.accumulo.core.iterators.system.DeletingIterator;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.system.MultiIterator;
-import org.apache.accumulo.core.iterators.system.VisibilityFilter;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.LocalityGroupUtil;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.volume.VolumeConfiguration;
 import org.apache.commons.lang.NotImplementedException;
 import org.apache.hadoop.conf.Configuration;
@@ -73,9 +73,15 @@
   static class OfflineIteratorEnvironment implements IteratorEnvironment {
 
     private final Authorizations authorizations;
+    private AccumuloConfiguration conf;
+    private boolean useSample;
+    private SamplerConfiguration sampleConf;
 
-    public OfflineIteratorEnvironment(Authorizations auths) {
+    public OfflineIteratorEnvironment(Authorizations auths, AccumuloConfiguration acuTableConf, boolean useSample, SamplerConfiguration samplerConf) {
       this.authorizations = auths;
+      this.conf = acuTableConf;
+      this.useSample = useSample;
+      this.sampleConf = samplerConf;
     }
 
     @Override
@@ -85,7 +91,7 @@
 
     @Override
     public AccumuloConfiguration getConfig() {
-      return AccumuloConfiguration.getDefaultConfiguration();
+      return conf;
     }
 
     @Override
@@ -98,7 +104,7 @@
       return false;
     }
 
-    private ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators = new ArrayList<SortedKeyValueIterator<Key,Value>>();
+    private ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators = new ArrayList<>();
 
     @Override
     public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
@@ -113,10 +119,27 @@
     SortedKeyValueIterator<Key,Value> getTopLevelIterator(SortedKeyValueIterator<Key,Value> iter) {
       if (topLevelIterators.isEmpty())
         return iter;
-      ArrayList<SortedKeyValueIterator<Key,Value>> allIters = new ArrayList<SortedKeyValueIterator<Key,Value>>(topLevelIterators);
+      ArrayList<SortedKeyValueIterator<Key,Value>> allIters = new ArrayList<>(topLevelIterators);
       allIters.add(iter);
       return new MultiIterator(allIters, false);
     }
+
+    @Override
+    public boolean isSamplingEnabled() {
+      return useSample;
+    }
+
+    @Override
+    public SamplerConfiguration getSamplerConfiguration() {
+      return sampleConf;
+    }
+
+    @Override
+    public IteratorEnvironment cloneWithSamplingEnabled() {
+      if (sampleConf == null)
+        throw new SampleNotPresentException();
+      return new OfflineIteratorEnvironment(authorizations, conf, true, sampleConf);
+    }
   }
 
   private SortedKeyValueIterator<Key,Value> iter;
@@ -141,7 +164,7 @@
 
     this.tableId = table.toString();
     this.authorizations = authorizations;
-    this.readers = new ArrayList<SortedKeyValueIterator<Key,Value>>();
+    this.readers = new ArrayList<>();
 
     try {
       conn = instance.getConnector(credentials.getPrincipal(), credentials.getToken());
@@ -152,6 +175,8 @@
         nextTablet();
 
     } catch (Exception e) {
+      if (e instanceof RuntimeException)
+        throw (RuntimeException) e;
       throw new RuntimeException(e);
     }
   }
@@ -191,7 +216,7 @@
       else
         startRow = new Text();
 
-      nextRange = new Range(new KeyExtent(new Text(tableId), startRow, null).getMetadataEntry(), true, null, false);
+      nextRange = new Range(new KeyExtent(tableId, startRow, null).getMetadataEntry(), true, null, false);
     } else {
 
       if (currentExtent.getEndRow() == null) {
@@ -207,7 +232,7 @@
       nextRange = new Range(currentExtent.getMetadataEntry(), false, null, false);
     }
 
-    List<String> relFiles = new ArrayList<String>();
+    List<String> relFiles = new ArrayList<>();
 
     Pair<KeyExtent,String> eloc = getTabletFiles(nextRange, relFiles);
 
@@ -219,14 +244,14 @@
         }
       }
 
-      UtilWaitThread.sleep(250);
+      sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
 
       eloc = getTabletFiles(nextRange, relFiles);
     }
 
     KeyExtent extent = eloc.getFirst();
 
-    if (!extent.getTableId().toString().equals(tableId)) {
+    if (!extent.getTableId().equals(tableId)) {
       throw new AccumuloException(" did not find tablets for table " + tableId + " " + extent);
     }
 
@@ -238,7 +263,7 @@
     @SuppressWarnings("deprecation")
     String tablesDir = config.get(Property.INSTANCE_DFS_DIR) + Constants.HDFS_TABLES_DIR;
 
-    List<String> absFiles = new ArrayList<String>();
+    List<String> absFiles = new ArrayList<>();
     for (String relPath : relFiles) {
       if (relPath.contains(":")) {
         absFiles.add(relPath);
@@ -287,7 +312,7 @@
       }
 
     }
-    return new Pair<KeyExtent,String>(extent, location);
+    return new Pair<>(extent, location);
   }
 
   private SortedKeyValueIterator<Key,Value> createIterator(KeyExtent extent, List<String> absFiles) throws TableNotFoundException, AccumuloException,
@@ -304,29 +329,37 @@
 
     readers.clear();
 
+    SamplerConfiguration scannerSamplerConfig = options.getSamplerConfiguration();
+    SamplerConfigurationImpl scannerSamplerConfigImpl = scannerSamplerConfig == null ? null : new SamplerConfigurationImpl(scannerSamplerConfig);
+    SamplerConfigurationImpl samplerConfImpl = SamplerConfigurationImpl.newSamplerConfig(acuTableConf);
+
+    if (scannerSamplerConfigImpl != null && ((samplerConfImpl != null && !scannerSamplerConfigImpl.equals(samplerConfImpl)) || samplerConfImpl == null)) {
+      throw new SampleNotPresentException();
+    }
+
     // TODO need to close files - ACCUMULO-1303
     for (String file : absFiles) {
       FileSystem fs = VolumeConfiguration.getVolume(file, conf, config).getFileSystem();
-      FileSKVIterator reader = FileOperations.getInstance().openReader(file, false, fs, conf, acuTableConf, null, null);
+      FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder().forFile(file, fs, conf).withTableConfiguration(acuTableConf).build();
+      if (scannerSamplerConfigImpl != null) {
+        reader = reader.getSample(scannerSamplerConfigImpl);
+        if (reader == null)
+          throw new SampleNotPresentException();
+      }
       readers.add(reader);
     }
 
     MultiIterator multiIter = new MultiIterator(readers, extent);
 
-    OfflineIteratorEnvironment iterEnv = new OfflineIteratorEnvironment(authorizations);
-
-    DeletingIterator delIter = new DeletingIterator(multiIter, false);
-
-    ColumnFamilySkippingIterator cfsi = new ColumnFamilySkippingIterator(delIter);
-
-    ColumnQualifierFilter colFilter = new ColumnQualifierFilter(cfsi, new HashSet<Column>(options.fetchedColumns));
+    OfflineIteratorEnvironment iterEnv = new OfflineIteratorEnvironment(authorizations, acuTableConf, false, samplerConfImpl == null ? null
+        : samplerConfImpl.toSamplerConfiguration());
 
     byte[] defaultSecurityLabel;
-
     ColumnVisibility cv = new ColumnVisibility(acuTableConf.get(Property.TABLE_DEFAULT_SCANTIME_VISIBILITY));
     defaultSecurityLabel = cv.getExpression();
 
-    VisibilityFilter visFilter = new VisibilityFilter(colFilter, authorizations, defaultSecurityLabel);
+    SortedKeyValueIterator<Key,Value> visFilter = IteratorUtil.setupSystemScanIterators(multiIter, new HashSet<>(options.fetchedColumns), authorizations,
+        defaultSecurityLabel);
 
     return iterEnv.getTopLevelIterator(IteratorUtil.loadIterators(IteratorScope.scan, visFilter, extent, acuTableConf, options.serverSideIteratorList,
         options.serverSideIteratorOptions, iterEnv, false));
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ReplicationOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ReplicationOperationsImpl.java
index d5a13ad..ab6160e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ReplicationOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ReplicationOperationsImpl.java
@@ -16,12 +16,14 @@
  */
 package org.apache.accumulo.core.client.impl;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.util.Objects.requireNonNull;
 
 import java.util.Collections;
 import java.util.HashSet;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -45,7 +47,6 @@
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
 import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.core.trace.thrift.TInfo;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
@@ -116,22 +117,22 @@
     });
   }
 
-  protected Text getTableId(Connector conn, String tableName) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
+  protected String getTableId(Connector conn, String tableName) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     TableOperations tops = conn.tableOperations();
 
     if (!conn.tableOperations().exists(tableName)) {
       throw new TableNotFoundException(null, tableName, null);
     }
 
-    String strTableId = null;
-    while (null == strTableId) {
-      strTableId = tops.tableIdMap().get(tableName);
-      if (null == strTableId) {
-        UtilWaitThread.sleep(200);
+    String tableId = null;
+    while (null == tableId) {
+      tableId = tops.tableIdMap().get(tableName);
+      if (null == tableId) {
+        sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
       }
     }
 
-    return new Text(strTableId);
+    return tableId;
   }
 
   @Override
@@ -141,21 +142,19 @@
     log.debug("Collecting referenced files for replication of table {}", tableName);
 
     Connector conn = context.getConnector();
-    Text tableId = getTableId(conn, tableName);
+    String tableId = getTableId(conn, tableName);
 
     log.debug("Found id of {} for name {}", tableId, tableName);
 
     // Get the WALs currently referenced by the table
     BatchScanner metaBs = conn.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 4);
-    metaBs.setRanges(Collections.singleton(MetadataSchema.TabletsSection.getRange(tableId.toString())));
+    metaBs.setRanges(Collections.singleton(MetadataSchema.TabletsSection.getRange(tableId)));
     metaBs.fetchColumnFamily(LogColumnFamily.NAME);
     Set<String> wals = new HashSet<>();
     try {
       for (Entry<Key,Value> entry : metaBs) {
         LogEntry logEntry = LogEntry.fromKeyValue(entry.getKey(), entry.getValue());
-        for (String log : logEntry.logSet) {
-          wals.add(new Path(log).toString());
-        }
+        wals.add(new Path(logEntry.filename).toString());
       }
     } finally {
       metaBs.close();
@@ -168,8 +167,7 @@
     try {
       Text buffer = new Text();
       for (Entry<Key,Value> entry : metaBs) {
-        ReplicationSection.getTableId(entry.getKey(), buffer);
-        if (buffer.equals(tableId)) {
+        if (tableId.equals(ReplicationSection.getTableId(entry.getKey()))) {
           ReplicationSection.getFile(entry.getKey(), buffer);
           wals.add(buffer.toString());
         }
@@ -177,7 +175,6 @@
     } finally {
       metaBs.close();
     }
-
     return wals;
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
index 3a0c0d7..9fdbb25 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/RootTabletLocator.java
@@ -20,6 +20,7 @@
 import java.util.Collections;
 import java.util.List;
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -32,13 +33,14 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.util.OpTimer;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 public class RootTabletLocator extends TabletLocator {
 
@@ -59,7 +61,7 @@
       throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     TabletLocation rootTabletLocation = getRootTabletLocation(context);
     if (rootTabletLocation != null) {
-      TabletServerMutations<T> tsm = new TabletServerMutations<T>(rootTabletLocation.tablet_session);
+      TabletServerMutations<T> tsm = new TabletServerMutations<>(rootTabletLocation.tablet_session);
       for (T mutation : mutations) {
         tsm.addMutation(RootTable.EXTENT, mutation);
       }
@@ -104,9 +106,22 @@
     String zRootLocPath = ZooUtil.getRoot(instance) + RootTable.ZROOT_TABLET_LOCATION;
     ZooCache zooCache = zcf.getZooCache(instance.getZooKeepers(), instance.getZooKeepersSessionTimeOut());
 
-    OpTimer opTimer = new OpTimer(Logger.getLogger(this.getClass()), Level.TRACE).start("Looking up root tablet location in zookeeper.");
+    Logger log = LoggerFactory.getLogger(this.getClass());
+
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Looking up root tablet location in zookeeper.", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
+
     byte[] loc = zooCache.get(zRootLocPath);
-    opTimer.stop("Found root tablet at " + (loc == null ? null : new String(loc)) + " in %DURATION%");
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(), (loc == null ? "null" : new String(loc)),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
 
     if (loc == null) {
       return null;
@@ -126,7 +141,7 @@
     TabletLocation location = getRootTabletLocation(context);
     // Always retry when finding the root tablet
     while (retry && location == null) {
-      UtilWaitThread.sleep(500);
+      sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
       location = getRootTabletLocation(context);
     }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java
index 09edc4a..89406f4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerImpl.java
@@ -28,7 +28,6 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.hadoop.io.Text;
 
 /**
  * provides scanner functionality
@@ -47,7 +46,7 @@
 
   private final ClientContext context;
   private Authorizations authorizations;
-  private Text table;
+  private String tableId;
 
   private int size;
 
@@ -55,12 +54,12 @@
   private boolean isolated = false;
   private long readaheadThreshold = Constants.SCANNER_DEFAULT_READAHEAD_THRESHOLD;
 
-  public ScannerImpl(ClientContext context, String table, Authorizations authorizations) {
+  public ScannerImpl(ClientContext context, String tableId, Authorizations authorizations) {
     checkArgument(context != null, "context is null");
-    checkArgument(table != null, "table is null");
+    checkArgument(tableId != null, "tableId is null");
     checkArgument(authorizations != null, "authorizations is null");
     this.context = context;
-    this.table = new Text(table);
+    this.tableId = tableId;
     this.range = new Range((Key) null, (Key) null);
     this.authorizations = authorizations;
 
@@ -93,7 +92,7 @@
 
   @Override
   public synchronized Iterator<Entry<Key,Value>> iterator() {
-    return new ScannerIterator(context, table, authorizations, range, size, getTimeOut(), this, isolated, readaheadThreshold);
+    return new ScannerIterator(context, tableId, authorizations, range, size, getTimeOut(), this, isolated, readaheadThreshold);
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java
index fd91b5a..ae55cc0 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerIterator.java
@@ -28,6 +28,7 @@
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.client.TableDeletedException;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.TableOfflineException;
@@ -39,7 +40,6 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.NamingThreadFactory;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -48,7 +48,6 @@
   private static final Logger log = LoggerFactory.getLogger(ScannerIterator.class);
 
   // scanner options
-  private Text tableId;
   private int timeOut;
 
   // scanner state
@@ -90,7 +89,8 @@
           synchQ.add(currentBatch);
           return;
         }
-      } catch (IsolationException | ScanTimedOutException | AccumuloException | AccumuloSecurityException | TableDeletedException | TableOfflineException e) {
+      } catch (IsolationException | ScanTimedOutException | AccumuloException | AccumuloSecurityException | TableDeletedException | TableOfflineException
+          | SampleNotPresentException e) {
         log.trace("{}", e.getMessage(), e);
         synchQ.add(e);
       } catch (TableNotFoundException e) {
@@ -104,22 +104,21 @@
 
   }
 
-  ScannerIterator(ClientContext context, Text table, Authorizations authorizations, Range range, int size, int timeOut, ScannerOptions options,
+  ScannerIterator(ClientContext context, String tableId, Authorizations authorizations, Range range, int size, int timeOut, ScannerOptions options,
       boolean isolated, long readaheadThreshold) {
-    this.tableId = new Text(table);
     this.timeOut = timeOut;
     this.readaheadThreshold = readaheadThreshold;
 
     this.options = new ScannerOptions(options);
 
-    synchQ = new ArrayBlockingQueue<Object>(1);
+    synchQ = new ArrayBlockingQueue<>(1);
 
     if (this.options.fetchedColumns.size() > 0) {
       range = range.bound(this.options.fetchedColumns.first(), this.options.fetchedColumns.last());
     }
 
     scanState = new ScanState(context, tableId, authorizations, new Range(range), options.fetchedColumns, size, options.serverSideIteratorList,
-        options.serverSideIteratorOptions, isolated, readaheadThreshold);
+        options.serverSideIteratorOptions, isolated, readaheadThreshold, options.getSamplerConfiguration(), options.batchTimeOut, options.classLoaderContext);
 
     // If we want to start readahead immediately, don't wait for hasNext to be called
     if (0l == readaheadThreshold) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerOptions.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerOptions.java
index e455d5a..a986d87 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerOptions.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ScannerOptions.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.core.client.impl;
 
 import static com.google.common.base.Preconditions.checkArgument;
+import static java.util.Objects.requireNonNull;
 
 import java.util.ArrayList;
 import java.util.Collections;
@@ -32,6 +33,7 @@
 
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.ScannerBase;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Column;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
@@ -45,12 +47,18 @@
   protected List<IterInfo> serverSideIteratorList = Collections.emptyList();
   protected Map<String,Map<String,String>> serverSideIteratorOptions = Collections.emptyMap();
 
-  protected SortedSet<Column> fetchedColumns = new TreeSet<Column>();
+  protected SortedSet<Column> fetchedColumns = new TreeSet<>();
 
   protected long timeOut = Long.MAX_VALUE;
 
+  protected long batchTimeOut = Long.MAX_VALUE;
+
   private String regexIterName = null;
 
+  private SamplerConfiguration samplerConfig = null;
+
+  protected String classLoaderContext = null;
+
   protected ScannerOptions() {}
 
   public ScannerOptions(ScannerOptions so) {
@@ -61,7 +69,7 @@
   public synchronized void addScanIterator(IteratorSetting si) {
     checkArgument(si != null, "si is null");
     if (serverSideIteratorList.size() == 0)
-      serverSideIteratorList = new ArrayList<IterInfo>();
+      serverSideIteratorList = new ArrayList<>();
 
     for (IterInfo ii : serverSideIteratorList) {
       if (ii.iterName.equals(si.getName()))
@@ -73,12 +81,12 @@
     serverSideIteratorList.add(new IterInfo(si.getPriority(), si.getIteratorClass(), si.getName()));
 
     if (serverSideIteratorOptions.size() == 0)
-      serverSideIteratorOptions = new HashMap<String,Map<String,String>>();
+      serverSideIteratorOptions = new HashMap<>();
 
     Map<String,String> opts = serverSideIteratorOptions.get(si.getName());
 
     if (opts == null) {
-      opts = new HashMap<String,String>();
+      opts = new HashMap<>();
       serverSideIteratorOptions.put(si.getName(), opts);
     }
     opts.putAll(si.getOptions());
@@ -107,12 +115,12 @@
     checkArgument(key != null, "key is null");
     checkArgument(value != null, "value is null");
     if (serverSideIteratorOptions.size() == 0)
-      serverSideIteratorOptions = new HashMap<String,Map<String,String>>();
+      serverSideIteratorOptions = new HashMap<>();
 
     Map<String,String> opts = serverSideIteratorOptions.get(iteratorName);
 
     if (opts == null) {
-      opts = new HashMap<String,String>();
+      opts = new HashMap<>();
       serverSideIteratorOptions.put(iteratorName, opts);
     }
     opts.put(key, value);
@@ -159,13 +167,17 @@
     synchronized (dst) {
       synchronized (src) {
         dst.regexIterName = src.regexIterName;
-        dst.fetchedColumns = new TreeSet<Column>(src.fetchedColumns);
-        dst.serverSideIteratorList = new ArrayList<IterInfo>(src.serverSideIteratorList);
+        dst.fetchedColumns = new TreeSet<>(src.fetchedColumns);
+        dst.serverSideIteratorList = new ArrayList<>(src.serverSideIteratorList);
+        dst.classLoaderContext = src.classLoaderContext;
 
-        dst.serverSideIteratorOptions = new HashMap<String,Map<String,String>>();
+        dst.serverSideIteratorOptions = new HashMap<>();
         Set<Entry<String,Map<String,String>>> es = src.serverSideIteratorOptions.entrySet();
         for (Entry<String,Map<String,String>> entry : es)
-          dst.serverSideIteratorOptions.put(entry.getKey(), new HashMap<String,String>(entry.getValue()));
+          dst.serverSideIteratorOptions.put(entry.getKey(), new HashMap<>(entry.getValue()));
+
+        dst.samplerConfig = src.samplerConfig;
+        dst.batchTimeOut = src.batchTimeOut;
       }
     }
   }
@@ -176,7 +188,7 @@
   }
 
   @Override
-  public void setTimeout(long timeout, TimeUnit timeUnit) {
+  public synchronized void setTimeout(long timeout, TimeUnit timeUnit) {
     if (timeOut < 0) {
       throw new IllegalArgumentException("TimeOut must be positive : " + timeOut);
     }
@@ -188,7 +200,7 @@
   }
 
   @Override
-  public long getTimeout(TimeUnit timeunit) {
+  public synchronized long getTimeout(TimeUnit timeunit) {
     return timeunit.convert(timeOut, TimeUnit.MILLISECONDS);
   }
 
@@ -198,7 +210,57 @@
   }
 
   @Override
-  public Authorizations getAuthorizations() {
+  public synchronized Authorizations getAuthorizations() {
     throw new UnsupportedOperationException("No authorizations to return");
   }
+
+  @Override
+  public synchronized void setSamplerConfiguration(SamplerConfiguration samplerConfig) {
+    requireNonNull(samplerConfig);
+    this.samplerConfig = samplerConfig;
+  }
+
+  @Override
+  public synchronized SamplerConfiguration getSamplerConfiguration() {
+    return samplerConfig;
+  }
+
+  @Override
+  public synchronized void clearSamplerConfiguration() {
+    this.samplerConfig = null;
+  }
+
+  @Override
+  public void setBatchTimeout(long timeout, TimeUnit timeUnit) {
+    if (timeout < 0) {
+      throw new IllegalArgumentException("Batch timeout must be positive : " + timeout);
+    }
+    if (timeout == 0) {
+      this.batchTimeOut = Long.MAX_VALUE;
+    } else {
+      this.batchTimeOut = timeUnit.toMillis(timeout);
+    }
+  }
+
+  @Override
+  public long getBatchTimeout(TimeUnit timeUnit) {
+    return timeUnit.convert(batchTimeOut, TimeUnit.MILLISECONDS);
+  }
+
+  @Override
+  public void setClassLoaderContext(String classLoaderContext) {
+    requireNonNull(classLoaderContext, "classloader context name cannot be null");
+    this.classLoaderContext = classLoaderContext;
+  }
+
+  @Override
+  public void clearClassLoaderContext() {
+    this.classLoaderContext = null;
+  }
+
+  @Override
+  public String getClassLoaderContext() {
+    return this.classLoaderContext;
+  }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ServerClient.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ServerClient.java
index 9ceb880..501b4df 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ServerClient.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ServerClient.java
@@ -20,6 +20,7 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.util.ArrayList;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -32,7 +33,6 @@
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.ServerServices;
 import org.apache.accumulo.core.util.ServerServices.Service;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
@@ -41,6 +41,8 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class ServerClient {
   private static final Logger log = LoggerFactory.getLogger(ServerClient.class);
 
@@ -79,7 +81,7 @@
         return exec.execute(client);
       } catch (TTransportException tte) {
         log.debug("ClientService request failed " + server + ", retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
         if (client != null)
           ServerClient.close(client);
@@ -99,7 +101,7 @@
         break;
       } catch (TTransportException tte) {
         log.debug("ClientService request failed " + server + ", retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
         if (client != null)
           ServerClient.close(client);
@@ -121,7 +123,7 @@
       throws TTransportException {
     checkArgument(context != null, "context is null");
     // create list of servers
-    ArrayList<ThriftTransportKey> servers = new ArrayList<ThriftTransportKey>();
+    ArrayList<ThriftTransportKey> servers = new ArrayList<>();
 
     // add tservers
     Instance instance = context.getInstance();
@@ -142,7 +144,7 @@
       ClientService.Client client = ThriftUtil.createClient(new ClientService.Client.Factory(), pair.getSecond());
       opened = true;
       warnedAboutTServersBeingDown = false;
-      return new Pair<String,ClientService.Client>(pair.getFirst(), client);
+      return new Pair<>(pair.getFirst(), client);
     } finally {
       if (!opened) {
         if (!warnedAboutTServersBeingDown) {
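The retry loops above switch from `UtilWaitThread.sleep` to Guava's `sleepUninterruptibly`, which sleeps for the full duration even if the thread is interrupted and re-asserts the interrupt flag before returning. A rough pure-JDK sketch of that behavior (the well-known pattern, not the Guava source):

```java
import java.util.concurrent.TimeUnit;

public class Sleeps {
  // Sleep for the full duration even if interrupted; re-assert the
  // interrupt flag afterwards so callers can still observe it.
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true;
          remainingNanos = end - System.nanoTime();
          if (remainingNanos <= 0) {
            return;
          }
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    assert System.nanoTime() - start >= TimeUnit.MILLISECONDS.toNanos(50);
  }
}
```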
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/SyncingTabletLocator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/SyncingTabletLocator.java
new file mode 100644
index 0000000..6e7e072
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/SyncingTabletLocator.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.client.impl;
+
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.Callable;
+
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.hadoop.io.Text;
+import org.apache.log4j.Logger;
+
+/**
+ * Syncs itself with the static collection of TabletLocators, so that when the server clears that collection, this locator automatically picks up the most
+ * up-to-date version. This makes it safe to cache a TabletLocator locally when using SyncingTabletLocator.
+ */
+public class SyncingTabletLocator extends TabletLocator {
+  private static final Logger log = Logger.getLogger(SyncingTabletLocator.class);
+
+  private volatile TabletLocator locator;
+  private final Callable<TabletLocator> getLocatorFunction;
+
+  public SyncingTabletLocator(Callable<TabletLocator> getLocatorFunction) {
+    this.getLocatorFunction = getLocatorFunction;
+    try {
+      this.locator = getLocatorFunction.call();
+    } catch (Exception e) {
+      log.error("Problem obtaining TabletLocator", e);
+      throw new RuntimeException(e);
+    }
+  }
+
+  public SyncingTabletLocator(final ClientContext context, final String tableId) {
+    this(new Callable<TabletLocator>() {
+      @Override
+      public TabletLocator call() throws Exception {
+        return TabletLocator.getLocator(context, tableId);
+      }
+    });
+  }
+
+  private TabletLocator syncLocator() {
+    TabletLocator loc = this.locator;
+    if (!loc.isValid())
+      synchronized (this) {
+        if (locator == loc)
+          try {
+            loc = locator = getLocatorFunction.call();
+          } catch (Exception e) {
+            log.error("Problem obtaining TabletLocator", e);
+            throw new RuntimeException(e);
+          }
+      }
+    return loc;
+  }
+
+  @Override
+  public TabletLocation locateTablet(ClientContext context, Text row, boolean skipRow, boolean retry) throws AccumuloException, AccumuloSecurityException,
+      TableNotFoundException {
+    return syncLocator().locateTablet(context, row, skipRow, retry);
+  }
+
+  @Override
+  public <T extends Mutation> void binMutations(ClientContext context, List<T> mutations, Map<String,TabletServerMutations<T>> binnedMutations, List<T> failures)
+      throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
+    syncLocator().binMutations(context, mutations, binnedMutations, failures);
+  }
+
+  @Override
+  public List<Range> binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges) throws AccumuloException,
+      AccumuloSecurityException, TableNotFoundException {
+    return syncLocator().binRanges(context, ranges, binnedRanges);
+  }
+
+  @Override
+  public void invalidateCache(KeyExtent failedExtent) {
+    syncLocator().invalidateCache(failedExtent);
+  }
+
+  @Override
+  public void invalidateCache(Collection<KeyExtent> keySet) {
+    syncLocator().invalidateCache(keySet);
+  }
+
+  @Override
+  public void invalidateCache() {
+    syncLocator().invalidateCache();
+  }
+
+  @Override
+  public void invalidateCache(Instance instance, String server) {
+    syncLocator().invalidateCache(instance, server);
+  }
+}
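`syncLocator()` above combines a single volatile read with a synchronized double-check so that, when the cached locator becomes invalid, only one thread rebuilds it. The same pattern in isolation, with generic names (purely illustrative, not Accumulo API):

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

// Generic holder for a value that can become stale and must be rebuilt
// at most once per invalidation; mirrors the shape of syncLocator() above.
public class RefreshingHolder<T> {
  private volatile T value;
  private final Callable<T> factory;
  private final Predicate<T> valid;

  public RefreshingHolder(Callable<T> factory, Predicate<T> valid) throws Exception {
    this.factory = factory;
    this.valid = valid;
    this.value = factory.call();
  }

  public T get() {
    T v = value;                 // one volatile read
    if (!valid.test(v)) {
      synchronized (this) {
        if (value == v) {        // only the first thread rebuilds
          try {
            v = value = factory.call();
          } catch (Exception e) {
            throw new RuntimeException(e);
          }
        } else {
          v = value;             // another thread already refreshed
        }
      }
    }
    return v;
  }

  public static void main(String[] args) throws Exception {
    int[] builds = {0};
    RefreshingHolder<Integer> h = new RefreshingHolder<>(() -> ++builds[0], v -> v >= 1);
    h.get();
    h.get();
    assert builds[0] == 1; // a still-valid value is never rebuilt
  }
}
```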
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsHelper.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsHelper.java
index 4068dee..a81241a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsHelper.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsHelper.java
@@ -59,7 +59,7 @@
   @Override
   public void removeIterator(String tableName, String name, EnumSet<IteratorScope> scopes) throws AccumuloSecurityException, AccumuloException,
       TableNotFoundException {
-    Map<String,String> copy = new TreeMap<String,String>();
+    Map<String,String> copy = new TreeMap<>();
     for (Entry<String,String> property : this.getProperties(tableName)) {
       copy.put(property.getKey(), property.getValue());
     }
@@ -80,7 +80,7 @@
     checkArgument(scope != null, "scope is null");
     int priority = -1;
     String classname = null;
-    Map<String,String> settings = new HashMap<String,String>();
+    Map<String,String> settings = new HashMap<>();
 
     String root = String.format("%s%s.%s", Property.TABLE_ITERATOR_PREFIX, scope.name().toLowerCase(), name);
     String opt = root + ".opt.";
@@ -104,7 +104,7 @@
 
   @Override
   public Map<String,EnumSet<IteratorScope>> listIterators(String tableName) throws AccumuloSecurityException, AccumuloException, TableNotFoundException {
-    Map<String,EnumSet<IteratorScope>> result = new TreeMap<String,EnumSet<IteratorScope>>();
+    Map<String,EnumSet<IteratorScope>> result = new TreeMap<>();
     for (Entry<String,String> property : this.getProperties(tableName)) {
       String name = property.getKey();
       String[] parts = name.split("\\.");
@@ -129,7 +129,7 @@
       String scopeStr = String.format("%s%s", Property.TABLE_ITERATOR_PREFIX, scope.name().toLowerCase());
       String nameStr = String.format("%s.%s", scopeStr, setting.getName());
       String optStr = String.format("%s.opt.", nameStr);
-      Map<String,String> optionConflicts = new TreeMap<String,String>();
+      Map<String,String> optionConflicts = new TreeMap<>();
       for (Entry<String,String> property : this.getProperties(tableName)) {
         if (property.getKey().startsWith(scopeStr)) {
           if (property.getKey().equals(nameStr))
@@ -157,8 +157,8 @@
 
   @Override
   public int addConstraint(String tableName, String constraintClassName) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    TreeSet<Integer> constraintNumbers = new TreeSet<Integer>();
-    TreeMap<String,Integer> constraintClasses = new TreeMap<String,Integer>();
+    TreeSet<Integer> constraintNumbers = new TreeSet<>();
+    TreeMap<String,Integer> constraintClasses = new TreeMap<>();
     int i;
     for (Entry<String,String> property : this.getProperties(tableName)) {
       if (property.getKey().startsWith(Property.TABLE_CONSTRAINT_PREFIX.toString())) {
@@ -188,7 +188,7 @@
 
   @Override
   public Map<String,Integer> listConstraints(String tableName) throws AccumuloException, TableNotFoundException {
-    Map<String,Integer> constraints = new TreeMap<String,Integer>();
+    Map<String,Integer> constraints = new TreeMap<>();
     for (Entry<String,String> property : this.getProperties(tableName)) {
       if (property.getKey().startsWith(Property.TABLE_CONSTRAINT_PREFIX.toString())) {
         if (constraints.containsKey(property.getValue()))
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
index 6f9ea29..3d17a85 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TableOperationsImpl.java
@@ -17,9 +17,12 @@
 package org.apache.accumulo.core.client.impl;
 
 import static com.google.common.base.Preconditions.checkArgument;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static java.util.Objects.requireNonNull;
 
 import java.io.BufferedReader;
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.io.InputStreamReader;
 import java.nio.ByteBuffer;
@@ -64,12 +67,14 @@
 import org.apache.accumulo.core.client.admin.CompactionConfig;
 import org.apache.accumulo.core.client.admin.DiskUsage;
 import org.apache.accumulo.core.client.admin.FindMax;
+import org.apache.accumulo.core.client.admin.Locations;
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.client.impl.TabletLocator.TabletLocation;
 import org.apache.accumulo.core.client.impl.thrift.ClientService;
 import org.apache.accumulo.core.client.impl.thrift.ClientService.Client;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.client.impl.thrift.TDiskUsage;
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
 import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
@@ -80,8 +85,10 @@
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.TabletId;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.data.impl.TabletIdImpl;
 import org.apache.accumulo.core.iterators.IteratorUtil;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
@@ -93,6 +100,7 @@
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.rpc.ThriftUtil;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.tabletserver.thrift.NotServingTabletException;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
@@ -104,17 +112,17 @@
 import org.apache.accumulo.core.util.OpTimer;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.TextUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.volume.VolumeConfiguration;
+import org.apache.accumulo.fate.zookeeper.Retry;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
 import org.apache.thrift.TApplicationException;
 import org.apache.thrift.TException;
 import org.apache.thrift.transport.TTransportException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Joiner;
 import com.google.common.net.HostAndPort;
@@ -122,7 +130,7 @@
 public class TableOperationsImpl extends TableOperationsHelper {
 
   public static final String CLONE_EXCLUDE_PREFIX = "!";
-  private static final Logger log = Logger.getLogger(TableOperations.class);
+  private static final Logger log = LoggerFactory.getLogger(TableOperations.class);
   private final ClientContext context;
 
   public TableOperationsImpl(ClientContext context) {
@@ -132,9 +140,22 @@
 
   @Override
   public SortedSet<String> list() {
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Fetching list of tables...");
-    TreeSet<String> tableNames = new TreeSet<String>(Tables.getNameToIdMap(context.getInstance()).keySet());
-    opTimer.stop("Fetched " + tableNames.size() + " table names in %DURATION%");
+
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Fetching list of tables...", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
+
+    TreeSet<String> tableNames = new TreeSet<>(Tables.getNameToIdMap(context.getInstance()).keySet());
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Fetched {} table names in {}", Thread.currentThread().getId(), tableNames.size(),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
+
     return tableNames;
   }
 
@@ -144,9 +165,20 @@
     if (tableName.equals(MetadataTable.NAME) || tableName.equals(RootTable.NAME))
       return true;
 
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Checking if table " + tableName + " exists...");
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Checking if table {} exists...", Thread.currentThread().getId(), tableName);
+      timer = new OpTimer().start();
+    }
+
     boolean exists = Tables.getNameToIdMap(context.getInstance()).containsKey(tableName);
-    opTimer.stop("Checked existance of " + exists + " in %DURATION%");
+
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Checked existence of {} in {}", Thread.currentThread().getId(), exists, String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
+
     return exists;
   }
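The new timing pattern in `list()` and `exists()` only constructs an `OpTimer` when trace logging is enabled, then reports the elapsed time scaled to seconds. A minimal standalone version of that guard-and-scale idea (`OpTimer` itself is Accumulo-internal; this just sketches the shape):

```java
import java.util.concurrent.TimeUnit;

public class TraceTimer {
  private long start;
  private long elapsed;

  public TraceTimer start() { start = System.nanoTime(); return this; }
  public TraceTimer stop()  { elapsed = System.nanoTime() - start; return this; }

  // Like OpTimer.scale(unit) above: elapsed time as a double in the given unit.
  public double scale(TimeUnit unit) {
    return (double) elapsed / unit.toNanos(1);
  }

  public static void main(String[] args) {
    TraceTimer timer = new TraceTimer().start();
    timer.stop();
    // Same formatting as the log.trace calls above.
    String msg = String.format("%.3f secs", timer.scale(TimeUnit.SECONDS));
    assert timer.scale(TimeUnit.SECONDS) >= 0.0;
    assert msg.endsWith("secs");
  }
}
```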
 
@@ -200,7 +232,7 @@
         return client.beginFateOperation(Tracer.traceInfo(), context.rpcCreds());
       } catch (TTransportException tte) {
         log.debug("Failed to call beginFateOperation(), retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
         MasterClient.close(client);
       }
@@ -218,7 +250,7 @@
         break;
       } catch (TTransportException tte) {
         log.debug("Failed to call executeFateOperation(), retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
         MasterClient.close(client);
       }
@@ -233,7 +265,7 @@
         return client.waitForFateOperation(Tracer.traceInfo(), context.rpcCreds(), opid);
       } catch (TTransportException tte) {
         log.debug("Failed to call waitForFateOperation(), retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
         MasterClient.close(client);
       }
@@ -249,7 +281,7 @@
         break;
       } catch (TTransportException tte) {
         log.debug("Failed to call finishFateOperation(), retrying ... ", tte);
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
         MasterClient.close(client);
       }
@@ -308,7 +340,7 @@
         try {
           finishFateOperation(opid);
         } catch (Exception e) {
-          log.warn(e.getMessage(), e);
+          log.warn("Exception thrown while finishing fate table operation", e);
         }
     }
   }
@@ -346,7 +378,7 @@
           return;
 
         if (splits.size() <= 2) {
-          addSplits(env.tableName, new TreeSet<Text>(splits), env.tableId);
+          addSplits(env.tableName, new TreeSet<>(splits), env.tableId);
           for (int i = 0; i < splits.size(); i++)
             env.latch.countDown();
           return;
@@ -356,7 +388,7 @@
 
         // split the middle split point to ensure that child task split different tablets and can therefore
         // run in parallel
-        addSplits(env.tableName, new TreeSet<Text>(splits.subList(mid, mid + 1)), env.tableId);
+        addSplits(env.tableName, new TreeSet<>(splits.subList(mid, mid + 1)), env.tableId);
         env.latch.countDown();
 
         env.executor.submit(new SplitTask(env, splits.subList(0, mid)));
@@ -373,13 +405,13 @@
   public void addSplits(String tableName, SortedSet<Text> partitionKeys) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
     String tableId = Tables.getTableId(context.getInstance(), tableName);
 
-    List<Text> splits = new ArrayList<Text>(partitionKeys);
+    List<Text> splits = new ArrayList<>(partitionKeys);
     // should be sorted because we copied from a sorted set, but that makes assumptions about
     // how the copy was done so resort to be sure.
     Collections.sort(splits);
 
     CountDownLatch latch = new CountDownLatch(splits.size());
-    AtomicReference<Exception> exception = new AtomicReference<Exception>(null);
+    AtomicReference<Exception> exception = new AtomicReference<>(null);
 
     ExecutorService executor = Executors.newFixedThreadPool(16, new NamingThreadFactory("addSplits"));
     try {
@@ -410,7 +442,7 @@
 
   private void addSplits(String tableName, SortedSet<Text> partitionKeys, String tableId) throws AccumuloException, AccumuloSecurityException,
       TableNotFoundException, AccumuloServerException {
-    TabletLocator tabLocator = TabletLocator.getLocator(context, new Text(tableId));
+    TabletLocator tabLocator = TabletLocator.getLocator(context, tableId);
 
     for (Text split : partitionKeys) {
       boolean successful = false;
@@ -420,7 +452,7 @@
       while (!successful) {
 
         if (attempt > 0)
-          UtilWaitThread.sleep(100);
+          sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
 
         attempt++;
 
@@ -439,17 +471,24 @@
         try {
           TabletClientService.Client client = ThriftUtil.getTServerClient(address, context);
           try {
-            OpTimer opTimer = null;
-            if (log.isTraceEnabled())
-              opTimer = new OpTimer(log, Level.TRACE).start("Splitting tablet " + tl.tablet_extent + " on " + address + " at " + split);
+
+            OpTimer timer = null;
+
+            if (log.isTraceEnabled()) {
+              log.trace("tid={} Splitting tablet {} on {} at {}", Thread.currentThread().getId(), tl.tablet_extent, address, split);
+              timer = new OpTimer().start();
+            }
 
             client.splitTablet(Tracer.traceInfo(), context.rpcCreds(), tl.tablet_extent.toThrift(), TextUtil.getByteBuffer(split));
 
             // just split it, might as well invalidate it in the cache
             tabLocator.invalidateCache(tl.tablet_extent);
 
-            if (opTimer != null)
-              opTimer.stop("Split tablet in %DURATION%");
+            if (timer != null) {
+              timer.stop();
+              log.trace("Split tablet in {}", String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+            }
+
           } finally {
             ThriftUtil.returnClient(client);
           }
@@ -468,8 +507,7 @@
           // Do not silently spin when we repeatedly fail to get the location for a tablet
           locationFailures++;
           if (5 == locationFailures || 0 == locationFailures % 50) {
-            log.warn("Having difficulty locating hosting tabletserver for split " + split + " on table " + tableName + ". Seen " + locationFailures
-                + " failures.");
+            log.warn("Having difficulty locating hosting tabletserver for split {} on table {}. Seen {} failures.", split, tableName, locationFailures);
           }
 
           tabLocator.invalidateCache(tl.tablet_extent);
@@ -491,7 +529,7 @@
     ByteBuffer EMPTY = ByteBuffer.allocate(0);
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableName.getBytes(UTF_8)), start == null ? EMPTY : TextUtil.getByteBuffer(start),
         end == null ? EMPTY : TextUtil.getByteBuffer(end));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     try {
       doTableFateOperation(tableName, TableNotFoundException.class, FateOperation.TABLE_MERGE, args, opts);
     } catch (TableExistsException e) {
@@ -507,7 +545,7 @@
     ByteBuffer EMPTY = ByteBuffer.allocate(0);
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableName.getBytes(UTF_8)), start == null ? EMPTY : TextUtil.getByteBuffer(start),
         end == null ? EMPTY : TextUtil.getByteBuffer(end));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     try {
       doTableFateOperation(tableName, TableNotFoundException.class, FateOperation.TABLE_DELETE_RANGE, args, opts);
     } catch (TableExistsException e) {
@@ -523,7 +561,7 @@
 
     String tableId = Tables.getTableId(context.getInstance(), tableName);
 
-    TreeMap<KeyExtent,String> tabletLocations = new TreeMap<KeyExtent,String>();
+    TreeMap<KeyExtent,String> tabletLocations = new TreeMap<>();
 
     while (true) {
       try {
@@ -542,12 +580,12 @@
           throw (AccumuloSecurityException) e.getCause();
         }
 
-        log.info(e.getMessage() + " ... retrying ...");
-        UtilWaitThread.sleep(3000);
+        log.info("{} ... retrying ...", e.getMessage());
+        sleepUninterruptibly(3, TimeUnit.SECONDS);
       }
     }
 
-    ArrayList<Text> endRows = new ArrayList<Text>(tabletLocations.size());
+    ArrayList<Text> endRows = new ArrayList<>(tabletLocations.size());
 
     for (KeyExtent ke : tabletLocations.keySet())
       if (ke.getEndRow() != null)
@@ -576,7 +614,7 @@
     double r = (maxSplits + 1) / (double) (endRows.size());
     double pos = 0;
 
-    ArrayList<Text> subset = new ArrayList<Text>(maxSplits);
+    ArrayList<Text> subset = new ArrayList<>(maxSplits);
 
     int j = 0;
     for (int i = 0; i < endRows.size() && j < maxSplits; i++) {
@@ -606,7 +644,7 @@
     checkArgument(tableName != null, "tableName is null");
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableName.getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     try {
       doTableFateOperation(tableName, TableNotFoundException.class, FateOperation.TABLE_DELETE, args, opts);
@@ -636,7 +674,7 @@
       propertiesToSet = Collections.emptyMap();
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(srcTableId.getBytes(UTF_8)), ByteBuffer.wrap(newTableName.getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     for (Entry<String,String> entry : propertiesToSet.entrySet()) {
       if (entry.getKey().startsWith(CLONE_EXCLUDE_PREFIX))
         throw new IllegalArgumentException("Property can not start with " + CLONE_EXCLUDE_PREFIX);
@@ -655,7 +693,7 @@
       TableExistsException {
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(oldTableName.getBytes(UTF_8)), ByteBuffer.wrap(newTableName.getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     doTableFateOperation(oldTableName, TableNotFoundException.class, FateOperation.TABLE_RENAME, args, opts);
   }
 
@@ -723,7 +761,7 @@
         : TextUtil.getByteBuffer(end), ByteBuffer.wrap(IteratorUtil.encodeIteratorSettings(config.getIterators())), ByteBuffer
         .wrap(CompactionStrategyConfigUtil.encode(config.getCompactionStrategy())));
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     try {
       doFateOperation(FateOperation.TABLE_COMPACT, args, opts, tableName, config.getWait());
     } catch (TableExistsException e) {
@@ -743,7 +781,7 @@
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableId.getBytes(UTF_8)));
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     try {
       doTableFateOperation(tableName, TableNotFoundException.class, FateOperation.TABLE_CANCEL_COMPACT, args, opts);
     } catch (TableExistsException e) {
@@ -769,7 +807,7 @@
           break;
         } catch (TTransportException tte) {
           log.debug("Failed to call initiateFlush, retrying ... ", tte);
-          UtilWaitThread.sleep(100);
+          sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
         } finally {
           MasterClient.close(client);
         }
@@ -784,7 +822,7 @@
           break;
         } catch (TTransportException tte) {
           log.debug("Failed to call initiateFlush, retrying ... ", tte);
-          UtilWaitThread.sleep(100);
+          sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
         } finally {
           MasterClient.close(client);
         }
@@ -794,7 +832,7 @@
         case TABLE_DOESNT_EXIST:
           throw new TableNotFoundException(tableId, null, e.getMessage(), e);
         default:
-          log.debug("flush security exception on table id " + tableId);
+          log.debug("flush security exception on table id {}", tableId);
           throw new AccumuloSecurityException(e.user, e.code, e);
       }
     } catch (ThriftTableOperationException e) {
@@ -872,7 +910,7 @@
   @Override
   public void setLocalityGroups(String tableName, Map<String,Set<Text>> groups) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     // ensure locality groups do not overlap
-    HashSet<Text> all = new HashSet<Text>();
+    HashSet<Text> all = new HashSet<>();
     for (Entry<String,Set<Text>> entry : groups.entrySet()) {
 
       if (!Collections.disjoint(all, entry.getValue())) {
@@ -918,10 +956,10 @@
     AccumuloConfiguration conf = new ConfigurationCopy(this.getProperties(tableName));
     Map<String,Set<ByteSequence>> groups = LocalityGroupUtil.getLocalityGroups(conf);
 
-    Map<String,Set<Text>> groups2 = new HashMap<String,Set<Text>>();
+    Map<String,Set<Text>> groups2 = new HashMap<>();
     for (Entry<String,Set<ByteSequence>> entry : groups.entrySet()) {
 
-      HashSet<Text> colFams = new HashSet<Text>();
+      HashSet<Text> colFams = new HashSet<>();
 
       for (ByteSequence bs : entry.getValue()) {
         colFams.add(new Text(bs.toArray()));
@@ -944,9 +982,9 @@
       return Collections.singleton(range);
 
     Random random = new Random();
-    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
     String tableId = Tables.getTableId(context.getInstance(), tableName);
-    TabletLocator tl = TabletLocator.getLocator(context, new Text(tableId));
+    TabletLocator tl = TabletLocator.getLocator(context, tableId);
     // its possible that the cache could contain complete, but old information about a tables tablets... so clear it
     tl.invalidateCache();
     while (!tl.binRanges(context, Collections.singletonList(range), binnedRanges).isEmpty()) {
@@ -957,14 +995,14 @@
 
       log.warn("Unable to locate bins for specified range. Retrying.");
       // sleep randomly between 100 and 200ms
-      UtilWaitThread.sleep(100 + random.nextInt(100));
+      sleepUninterruptibly(100 + random.nextInt(100), TimeUnit.MILLISECONDS);
       binnedRanges.clear();
       tl.invalidateCache();
     }
 
     // group key extents to get <= maxSplits
-    LinkedList<KeyExtent> unmergedExtents = new LinkedList<KeyExtent>();
-    List<KeyExtent> mergedExtents = new ArrayList<KeyExtent>();
+    LinkedList<KeyExtent> unmergedExtents = new LinkedList<>();
+    List<KeyExtent> mergedExtents = new ArrayList<>();
 
     for (Map<KeyExtent,List<Range>> map : binnedRanges.values())
       unmergedExtents.addAll(map.keySet());
@@ -989,7 +1027,7 @@
 
     mergedExtents.addAll(unmergedExtents);
 
-    Set<Range> ranges = new HashSet<Range>();
+    Set<Range> ranges = new HashSet<>();
     for (KeyExtent k : mergedExtents)
       ranges.add(k.toDataRange().clip(range));
 
@@ -1009,11 +1047,12 @@
       ret = fs.makeQualified(new Path(dir));
     }
 
-    if (!fs.exists(ret))
+    try {
+      if (!fs.getFileStatus(ret).isDirectory()) {
+        throw new AccumuloException(kind + " import " + type + " directory " + dir + " is not a directory!");
+      }
+    } catch (FileNotFoundException fnf) {
       throw new AccumuloException(kind + " import " + type + " directory " + dir + " does not exist!");
-
-    if (!fs.getFileStatus(ret).isDirectory()) {
-      throw new AccumuloException(kind + " import " + type + " directory " + dir + " is not a directory!");
     }
 
     if (type.equals("failure")) {
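The directory check above was reworked to make a single `getFileStatus` call and treat `FileNotFoundException` as "does not exist", rather than paying for a separate `exists()` round trip first. The analogous single-lookup idea with plain `java.nio.file` (a hypothetical helper, not the Hadoop `FileSystem` API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class DirCheck {
  // One metadata lookup answers both "exists?" and "is it a directory?".
  public static void requireDirectory(Path dir) throws IOException {
    try {
      BasicFileAttributes attrs = Files.readAttributes(dir, BasicFileAttributes.class);
      if (!attrs.isDirectory()) {
        throw new IOException(dir + " is not a directory!");
      }
    } catch (NoSuchFileException nsf) {
      throw new IOException(dir + " does not exist!");
    }
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempDirectory("dircheck");
    requireDirectory(tmp); // no exception for an existing directory
    boolean missing = false;
    try {
      requireDirectory(tmp.resolve("nope"));
    } catch (IOException e) {
      missing = true;
    }
    assert missing;
  }
}
```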
@@ -1040,7 +1079,7 @@
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableName.getBytes(UTF_8)), ByteBuffer.wrap(dirPath.toString().getBytes(UTF_8)),
         ByteBuffer.wrap(failPath.toString().getBytes(UTF_8)), ByteBuffer.wrap((setTime + "").getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     try {
       doTableFateOperation(tableName, TableNotFoundException.class, FateOperation.TABLE_BULK_IMPORT, args, opts);
@@ -1067,9 +1106,9 @@
         }
       }
 
-      Range range = new KeyExtent(new Text(tableId), null, null).toMetadataRange();
+      Range range = new KeyExtent(tableId, null, null).toMetadataRange();
       if (startRow == null || lastRow == null)
-        range = new KeyExtent(new Text(tableId), null, null).toMetadataRange();
+        range = new KeyExtent(tableId, null, null).toMetadataRange();
       else
         range = new Range(startRow, lastRow);
 
@@ -1087,7 +1126,7 @@
       int waitFor = 0;
       int holes = 0;
       Text continueRow = null;
-      MapCounter<String> serverCounts = new MapCounter<String>();
+      MapCounter<String> serverCounts = new MapCounter<>();
 
       while (rowIter.hasNext()) {
         Iterator<Entry<Key,Value>> row = rowIter.next();
@@ -1124,7 +1163,7 @@
             serverCounts.increment(future, 1);
         }
 
-        if (!extent.getTableId().toString().equals(tableId)) {
+        if (!extent.getTableId().equals(tableId)) {
           throw new AccumuloException("Saw unexpected table Id " + tableId + " " + extent);
         }
 
@@ -1154,9 +1193,8 @@
           waitTime = waitFor * 10;
         waitTime = Math.max(100, waitTime);
         waitTime = Math.min(5000, waitTime);
-        log.trace("Waiting for " + waitFor + "(" + maxPerServer + ") tablets, startRow = " + startRow + " lastRow = " + lastRow + ", holes=" + holes
-            + " sleeping:" + waitTime + "ms");
-        UtilWaitThread.sleep(waitTime);
+        log.trace("Waiting for {}({}) tablets, startRow = {} lastRow = {}, holes={} sleeping:{}ms", waitFor, maxPerServer, startRow, lastRow, holes, waitTime);
+        sleepUninterruptibly(waitTime, TimeUnit.MILLISECONDS);
       } else {
         break;
       }
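The wait loop above scales its sleep with the number of outstanding tablets and clamps the result to the range [100, 5000] ms before sleeping. A minimal sketch of just that clamping policy (class and method names here are illustrative, not Accumulo's):

```java
// Sketch of the backoff clamp used while waiting for tablets:
// wait 10 ms per outstanding tablet, never less than 100 ms,
// never more than 5000 ms.
public class WaitClampSketch {
  public static long waitTime(int waitFor) {
    long waitTime = waitFor * 10L;
    waitTime = Math.max(100, waitTime); // floor: avoid busy spinning
    waitTime = Math.min(5000, waitTime); // ceiling: stay responsive
    return waitTime;
  }
}
```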
@@ -1187,7 +1225,7 @@
     checkArgument(tableName != null, "tableName is null");
     String tableId = Tables.getTableId(context.getInstance(), tableName);
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableId.getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     try {
       doTableFateOperation(tableName, TableNotFoundException.class, FateOperation.TABLE_OFFLINE, args, opts);
@@ -1210,7 +1248,7 @@
     checkArgument(tableName != null, "tableName is null");
     String tableId = Tables.getTableId(context.getInstance(), tableName);
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableId.getBytes(UTF_8)));
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     try {
       doTableFateOperation(tableName, TableNotFoundException.class, FateOperation.TABLE_ONLINE, args, opts);
@@ -1226,7 +1264,7 @@
   @Override
   public void clearLocatorCache(String tableName) throws TableNotFoundException {
     checkArgument(tableName != null, "tableName is null");
-    TabletLocator tabLocator = TabletLocator.getLocator(context, new Text(Tables.getTableId(context.getInstance(), tableName)));
+    TabletLocator tabLocator = TabletLocator.getLocator(context, Tables.getTableId(context.getInstance(), tableName));
     tabLocator.invalidateCache();
   }
 
@@ -1271,9 +1309,9 @@
         if (pair == null) {
           log.debug("Disk usage request failed.  Pair is null.  Retrying request...", e);
         } else {
-          log.debug("Disk usage request failed " + pair.getFirst() + ", retrying ... ", e);
+          log.debug("Disk usage request failed {}, retrying ... ", pair.getFirst(), e);
         }
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } catch (TException e) {
         // may be a TApplicationException which indicates error on the server side
         throw new AccumuloException(e);
@@ -1284,16 +1322,16 @@
       }
     }
 
-    List<DiskUsage> finalUsages = new ArrayList<DiskUsage>();
+    List<DiskUsage> finalUsages = new ArrayList<>();
     for (TDiskUsage diskUsage : diskUsages) {
-      finalUsages.add(new DiskUsage(new TreeSet<String>(diskUsage.getTables()), diskUsage.getUsage()));
+      finalUsages.add(new DiskUsage(new TreeSet<>(diskUsage.getTables()), diskUsage.getUsage()));
     }
 
     return finalUsages;
   }
 
   public static Map<String,String> getExportedProps(FileSystem fs, Path path) throws IOException {
-    HashMap<String,String> props = new HashMap<String,String>();
+    HashMap<String,String> props = new HashMap<>();
 
     ZipInputStream zis = new ZipInputStream(fs.open(path));
     try {
@@ -1337,13 +1375,13 @@
 
       for (Entry<String,String> entry : props.entrySet()) {
         if (Property.isClassProperty(entry.getKey()) && !entry.getValue().contains(Constants.CORE_PACKAGE_NAME)) {
-          Logger.getLogger(this.getClass()).info(
-              "Imported table sets '" + entry.getKey() + "' to '" + entry.getValue() + "'.  Ensure this class is on Accumulo classpath.");
+          LoggerFactory.getLogger(this.getClass()).info("Imported table sets '{}' to '{}'.  Ensure this class is on Accumulo classpath.", entry.getKey(),
+              entry.getValue());
         }
       }
 
     } catch (IOException ioe) {
-      Logger.getLogger(this.getClass()).warn("Failed to check if imported table references external java classes : " + ioe.getMessage());
+      LoggerFactory.getLogger(this.getClass()).warn("Failed to check if imported table references external java classes : {}", ioe.getMessage());
     }
 
     List<ByteBuffer> args = Arrays.asList(ByteBuffer.wrap(tableName.getBytes(UTF_8)), ByteBuffer.wrap(importDir.getBytes(UTF_8)));
@@ -1443,4 +1481,148 @@
     }
   }
 
+  private void clearSamplerOptions(String tableName) throws AccumuloException, TableNotFoundException, AccumuloSecurityException {
+    String prefix = Property.TABLE_SAMPLER_OPTS.getKey();
+    for (Entry<String,String> entry : getProperties(tableName)) {
+      String property = entry.getKey();
+      if (property.startsWith(prefix)) {
+        removeProperty(tableName, property);
+      }
+    }
+  }
+
+  @Override
+  public void setSamplerConfiguration(String tableName, SamplerConfiguration samplerConfiguration) throws AccumuloException, TableNotFoundException,
+      AccumuloSecurityException {
+    clearSamplerOptions(tableName);
+
+    List<Pair<String,String>> props = new SamplerConfigurationImpl(samplerConfiguration).toTableProperties();
+    for (Pair<String,String> pair : props) {
+      setProperty(tableName, pair.getFirst(), pair.getSecond());
+    }
+  }
+
+  @Override
+  public void clearSamplerConfiguration(String tableName) throws AccumuloException, TableNotFoundException, AccumuloSecurityException {
+    removeProperty(tableName, Property.TABLE_SAMPLER.getKey());
+    clearSamplerOptions(tableName);
+  }
+
+  @Override
+  public SamplerConfiguration getSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException {
+    AccumuloConfiguration conf = new ConfigurationCopy(this.getProperties(tableName));
+    SamplerConfigurationImpl sci = SamplerConfigurationImpl.newSamplerConfig(conf);
+    if (sci == null) {
+      return null;
+    }
+    return sci.toSamplerConfiguration();
+  }
+
+  private static class LoctionsImpl implements Locations {
+
+    private Map<Range,List<TabletId>> groupedByRanges;
+    private Map<TabletId,List<Range>> groupedByTablets;
+    private Map<TabletId,String> tabletLocations;
+
+    public LoctionsImpl(Map<String,Map<KeyExtent,List<Range>>> binnedRanges) {
+      groupedByTablets = new HashMap<>();
+      groupedByRanges = null;
+      tabletLocations = new HashMap<>();
+
+      for (Entry<String,Map<KeyExtent,List<Range>>> entry : binnedRanges.entrySet()) {
+        String location = entry.getKey();
+
+        for (Entry<KeyExtent,List<Range>> entry2 : entry.getValue().entrySet()) {
+          TabletIdImpl tabletId = new TabletIdImpl(entry2.getKey());
+          tabletLocations.put(tabletId, location);
+          List<Range> prev = groupedByTablets.put(tabletId, Collections.unmodifiableList(entry2.getValue()));
+          if (prev != null) {
+            throw new RuntimeException("Unexpected: tablet at multiple locations: " + location + " " + tabletId);
+          }
+        }
+      }
+
+      groupedByTablets = Collections.unmodifiableMap(groupedByTablets);
+    }
+
+    @Override
+    public String getTabletLocation(TabletId tabletId) {
+      return tabletLocations.get(tabletId);
+    }
+
+    @Override
+    public Map<Range,List<TabletId>> groupByRange() {
+      if (groupedByRanges == null) {
+        Map<Range,List<TabletId>> tmp = new HashMap<>();
+
+        for (Entry<TabletId,List<Range>> entry : groupedByTablets.entrySet()) {
+          for (Range range : entry.getValue()) {
+            List<TabletId> tablets = tmp.get(range);
+            if (tablets == null) {
+              tablets = new ArrayList<>();
+              tmp.put(range, tablets);
+            }
+
+            tablets.add(entry.getKey());
+          }
+        }
+
+        Map<Range,List<TabletId>> tmp2 = new HashMap<>();
+        for (Entry<Range,List<TabletId>> entry : tmp.entrySet()) {
+          tmp2.put(entry.getKey(), Collections.unmodifiableList(entry.getValue()));
+        }
+
+        groupedByRanges = Collections.unmodifiableMap(tmp2);
+      }
+
+      return groupedByRanges;
+    }
+
+    @Override
+    public Map<TabletId,List<Range>> groupByTablet() {
+      return groupedByTablets;
+    }
+  }
+
+  @Override
+  public Locations locate(String tableName, Collection<Range> ranges) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
+    requireNonNull(tableName, "tableName must be non null");
+    requireNonNull(ranges, "ranges must be non null");
+
+    String tableId = Tables.getTableId(context.getInstance(), tableName);
+    TabletLocator locator = TabletLocator.getLocator(context, tableId);
+
+    List<Range> rangeList = null;
+    if (ranges instanceof List) {
+      rangeList = (List<Range>) ranges;
+    } else {
+      rangeList = new ArrayList<>(ranges);
+    }
+
+    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
+
+    locator.invalidateCache();
+
+    Retry retry = new Retry(Long.MAX_VALUE, 100, 100, 2000);
+
+    while (!locator.binRanges(context, rangeList, binnedRanges).isEmpty()) {
+
+      if (!Tables.exists(context.getInstance(), tableId))
+        throw new TableNotFoundException(tableId, tableName, null);
+      if (Tables.getTableState(context.getInstance(), tableId) == TableState.OFFLINE)
+        throw new TableOfflineException(context.getInstance(), tableId);
+
+      binnedRanges.clear();
+
+      try {
+        retry.waitForNextAttempt();
+      } catch (InterruptedException e) {
+        throw new RuntimeException(e);
+      }
+
+      locator.invalidateCache();
+    }
+
+    return new LoctionsImpl(binnedRanges);
+  }
 }
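The new `groupByRange()` method above lazily inverts the tablet-to-ranges map into a range-to-tablets map, wrapping every list and the final map unmodifiable. A self-contained sketch of that inversion, using plain `String` keys in place of Accumulo's `Range`/`TabletId` types (all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the map inversion in groupByRange(): for every
// (tablet -> ranges) entry, record the tablet under each of its ranges,
// then freeze the result so callers cannot mutate the cached view.
public class GroupByRangeSketch {
  public static Map<String,List<String>> invert(Map<String,List<String>> byTablet) {
    Map<String,List<String>> tmp = new HashMap<>();
    for (Map.Entry<String,List<String>> entry : byTablet.entrySet()) {
      for (String range : entry.getValue()) {
        tmp.computeIfAbsent(range, k -> new ArrayList<>()).add(entry.getKey());
      }
    }
    Map<String,List<String>> result = new HashMap<>();
    for (Map.Entry<String,List<String>> entry : tmp.entrySet()) {
      result.put(entry.getKey(), Collections.unmodifiableList(entry.getValue()));
    }
    return Collections.unmodifiableMap(result);
  }
}
```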
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/Tables.java b/core/src/main/java/org/apache/accumulo/core/client/impl/Tables.java
index beacea9..9f869ff 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/Tables.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/Tables.java
@@ -60,8 +60,8 @@
     ZooCache zc = getZooCache(instance);
 
     List<String> tableIds = zc.getChildren(ZooUtil.getRoot(instance) + Constants.ZTABLES);
-    TreeMap<String,String> tableMap = new TreeMap<String,String>();
-    Map<String,String> namespaceIdToNameMap = new HashMap<String,String>();
+    TreeMap<String,String> tableMap = new TreeMap<>();
+    Map<String,String> namespaceIdToNameMap = new HashMap<>();
 
     for (String tableId : tableIds) {
       byte[] tableName = zc.get(ZooUtil.getRoot(instance) + Constants.ZTABLES + "/" + tableId + Constants.ZTABLE_NAME);
@@ -211,9 +211,9 @@
       tableName = MetadataTable.NAME;
     if (tableName.contains(".")) {
       String[] s = tableName.split("\\.", 2);
-      return new Pair<String,String>(s[0], s[1]);
+      return new Pair<>(s[0], s[1]);
     }
-    return new Pair<String,String>(defaultNamespace, tableName);
+    return new Pair<>(defaultNamespace, tableName);
   }
 
   /**
@@ -236,7 +236,7 @@
 
     // We might get null out of ZooCache if this tableID doesn't exist
     if (null == n) {
-      throw new IllegalArgumentException("Table with id " + tableId + " does not exist");
+      throw new IllegalArgumentException(new TableNotFoundException(tableId, null, null));
     }
 
     return new String(n, UTF_8);
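The Tables.java hunk above replaces a plain message with `new IllegalArgumentException(new TableNotFoundException(...))`: the method keeps its unchecked signature, but callers can still recover the typed checked exception through `getCause()`. A simplified sketch of that cause-wrapping pattern (the classes below are stand-ins, not the Accumulo types):

```java
// Sketch of wrapping a checked exception as the cause of an unchecked one,
// so a method with an unchecked contract still conveys the typed failure.
public class CauseWrappingSketch {
  static class TableNotFound extends Exception {
    TableNotFound(String id) { super("table id " + id); }
  }

  static String lookupName(String tableId) {
    // stand-in for the ZooCache-miss branch: the id is never found
    throw new IllegalArgumentException(new TableNotFound(tableId));
  }

  public static boolean causeIsTableNotFound(String tableId) {
    try {
      lookupName(tableId);
      return false;
    } catch (IllegalArgumentException e) {
      // callers inspect the cause to distinguish "no such table"
      return e.getCause() instanceof TableNotFound;
    }
  }
}
```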
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java
index 1fbaee8..9229643 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocator.java
@@ -40,6 +40,15 @@
 
 public abstract class TabletLocator {
 
+  /**
+   * Flipped to false on a call to {@link #clearLocators}. Checked by client classes that locally cache Locators.
+   */
+  private volatile boolean isValid = true;
+
+  boolean isValid() {
+    return isValid;
+  }
+
   public abstract TabletLocation locateTablet(ClientContext context, Text row, boolean skipRow, boolean retry) throws AccumuloException,
       AccumuloSecurityException, TableNotFoundException;
 
@@ -65,9 +74,9 @@
 
   private static class LocatorKey {
     String instanceId;
-    Text tableName;
+    String tableName;
 
-    LocatorKey(String instanceId, Text table) {
+    LocatorKey(String instanceId, String table) {
       this.instanceId = instanceId;
       this.tableName = table;
     }
@@ -90,25 +99,28 @@
 
   }
 
-  private static HashMap<LocatorKey,TabletLocator> locators = new HashMap<LocatorKey,TabletLocator>();
+  private static HashMap<LocatorKey,TabletLocator> locators = new HashMap<>();
 
   public static synchronized void clearLocators() {
+    for (TabletLocator locator : locators.values()) {
+      locator.isValid = false;
+    }
     locators.clear();
   }
 
-  public static synchronized TabletLocator getLocator(ClientContext context, Text tableId) {
+  public static synchronized TabletLocator getLocator(ClientContext context, String tableId) {
     Instance instance = context.getInstance();
     LocatorKey key = new LocatorKey(instance.getInstanceID(), tableId);
     TabletLocator tl = locators.get(key);
     if (tl == null) {
       MetadataLocationObtainer mlo = new MetadataLocationObtainer();
 
-      if (tableId.toString().equals(RootTable.ID)) {
+      if (RootTable.ID.equals(tableId)) {
         tl = new RootTabletLocator(new ZookeeperLockChecker(instance));
-      } else if (tableId.toString().equals(MetadataTable.ID)) {
-        tl = new TabletLocatorImpl(new Text(MetadataTable.ID), getLocator(context, new Text(RootTable.ID)), mlo, new ZookeeperLockChecker(instance));
+      } else if (MetadataTable.ID.equals(tableId)) {
+        tl = new TabletLocatorImpl(MetadataTable.ID, getLocator(context, RootTable.ID), mlo, new ZookeeperLockChecker(instance));
       } else {
-        tl = new TabletLocatorImpl(tableId, getLocator(context, new Text(MetadataTable.ID)), mlo, new ZookeeperLockChecker(instance));
+        tl = new TabletLocatorImpl(tableId, getLocator(context, MetadataTable.ID), mlo, new ZookeeperLockChecker(instance));
       }
       locators.put(key, tl);
     }
@@ -136,7 +148,7 @@
   }
 
   public static class TabletLocation implements Comparable<TabletLocation> {
-    private static final WeakHashMap<String,WeakReference<String>> tabletLocs = new WeakHashMap<String,WeakReference<String>>();
+    private static final WeakHashMap<String,WeakReference<String>> tabletLocs = new WeakHashMap<>();
 
     private static String dedupeLocation(String tabletLoc) {
       synchronized (tabletLocs) {
@@ -148,7 +160,7 @@
           }
         }
 
-        tabletLocs.put(tabletLoc, new WeakReference<String>(tabletLoc));
+        tabletLocs.put(tabletLoc, new WeakReference<>(tabletLoc));
         return tabletLoc;
       }
     }
@@ -203,13 +215,13 @@
 
     public TabletServerMutations(String tserverSession) {
       this.tserverSession = tserverSession;
-      this.mutations = new HashMap<KeyExtent,List<T>>();
+      this.mutations = new HashMap<>();
     }
 
     public void addMutation(KeyExtent ke, T m) {
       List<T> mutList = mutations.get(ke);
       if (mutList == null) {
-        mutList = new ArrayList<T>();
+        mutList = new ArrayList<>();
         mutations.put(ke, mutList);
       }
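The new `isValid` field added to TabletLocator above is a volatile flag that `clearLocators()` flips on every cached instance before dropping the map, so a client holding a stale reference can detect invalidation and re-fetch. A simplified model of that pattern (this is a sketch, not the Accumulo classes):

```java
import java.util.HashMap;

// Sketch of the volatile validity flag: instances handed out from a
// synchronized cache are marked invalid when the cache is cleared,
// letting holders of old references notice and ask for a fresh one.
public class ValidityFlagSketch {
  private volatile boolean isValid = true;

  public boolean isValid() {
    return isValid;
  }

  private static final HashMap<String,ValidityFlagSketch> cache = new HashMap<>();

  public static synchronized ValidityFlagSketch get(String key) {
    return cache.computeIfAbsent(key, k -> new ValidityFlagSketch());
  }

  public static synchronized void clearAll() {
    // flip the flag on every cached instance BEFORE dropping the map,
    // mirroring the ordering in clearLocators()
    for (ValidityFlagSketch v : cache.values())
      v.isValid = false;
    cache.clear();
  }
}
```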
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
index c28320d..5932fda 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletLocatorImpl.java
@@ -30,6 +30,7 @@
 import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
@@ -45,15 +46,16 @@
 import org.apache.accumulo.core.util.OpTimer;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.TextUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.WritableComparator;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 public class TabletLocatorImpl extends TabletLocator {
 
-  private static final Logger log = Logger.getLogger(TabletLocatorImpl.class);
+  private static final Logger log = LoggerFactory.getLogger(TabletLocatorImpl.class);
 
   // there seems to be a bug in TreeMap.tailMap related to
   // putting null in the treemap.. therefore instead of
@@ -86,14 +88,14 @@
 
   static final EndRowComparator endRowComparator = new EndRowComparator();
 
-  protected Text tableId;
+  protected String tableId;
   protected TabletLocator parent;
-  protected TreeMap<Text,TabletLocation> metaCache = new TreeMap<Text,TabletLocation>(endRowComparator);
+  protected TreeMap<Text,TabletLocation> metaCache = new TreeMap<>(endRowComparator);
   protected TabletLocationObtainer locationObtainer;
   private TabletServerLockChecker lockChecker;
   protected Text lastTabletRow;
 
-  private TreeSet<KeyExtent> badExtents = new TreeSet<KeyExtent>();
+  private TreeSet<KeyExtent> badExtents = new TreeSet<>();
   private ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
   private final Lock rLock = rwLock.readLock();
   private final Lock wLock = rwLock.writeLock();
@@ -117,8 +119,8 @@
 
   private class LockCheckerSession {
 
-    private HashSet<Pair<String,String>> okLocks = new HashSet<Pair<String,String>>();
-    private HashSet<Pair<String,String>> invalidLocks = new HashSet<Pair<String,String>>();
+    private HashSet<Pair<String,String>> okLocks = new HashSet<>();
+    private HashSet<Pair<String,String>> invalidLocks = new HashSet<>();
 
     private TabletLocation checkLock(TabletLocation tl) {
      // the goal of this class is to minimize calls out to lockChecker under the assumption that it's a resource synchronized among many threads... want to
@@ -128,7 +130,7 @@
       if (tl == null)
         return null;
 
-      Pair<String,String> lock = new Pair<String,String>(tl.tablet_location, tl.tablet_session);
+      Pair<String,String> lock = new Pair<>(tl.tablet_location, tl.tablet_session);
 
       if (okLocks.contains(lock))
         return tl;
@@ -142,7 +144,7 @@
       }
 
       if (log.isTraceEnabled())
-        log.trace("Tablet server " + tl.tablet_location + " " + tl.tablet_session + " no longer holds its lock");
+        log.trace("Tablet server {} {} no longer holds its lock", tl.tablet_location, tl.tablet_session);
 
       invalidLocks.add(lock);
 
@@ -150,8 +152,8 @@
     }
   }
 
-  public TabletLocatorImpl(Text table, TabletLocator parent, TabletLocationObtainer tlo, TabletServerLockChecker tslc) {
-    this.tableId = table;
+  public TabletLocatorImpl(String tableId, TabletLocator parent, TabletLocationObtainer tlo, TabletServerLockChecker tslc) {
+    this.tableId = tableId;
     this.parent = parent;
     this.locationObtainer = tlo;
     this.lockChecker = tslc;
@@ -164,11 +166,14 @@
   public <T extends Mutation> void binMutations(ClientContext context, List<T> mutations, Map<String,TabletServerMutations<T>> binnedMutations, List<T> failures)
       throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
 
-    OpTimer opTimer = null;
-    if (log.isTraceEnabled())
-      opTimer = new OpTimer(log, Level.TRACE).start("Binning " + mutations.size() + " mutations for table " + tableId);
+    OpTimer timer = null;
 
-    ArrayList<T> notInCache = new ArrayList<T>();
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Binning {} mutations for table {}", Thread.currentThread().getId(), mutations.size(), tableId);
+      timer = new OpTimer().start();
+    }
+
+    ArrayList<T> notInCache = new ArrayList<>();
     Text row = new Text();
 
     LockCheckerSession lcSession = new LockCheckerSession();
@@ -226,8 +231,12 @@
       }
     }
 
-    if (opTimer != null)
-      opTimer.stop("Binned " + mutations.size() + " mutations for table " + tableId + " to " + binnedMutations.size() + " tservers in %DURATION%");
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Binned {} mutations for table {} to {} tservers in {}", Thread.currentThread().getId(), mutations.size(), tableId,
+          binnedMutations.size(), String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
+
   }
 
   private <T extends Mutation> boolean addMutation(Map<String,TabletServerMutations<T>> binnedMutations, T mutation, TabletLocation tl,
@@ -238,7 +247,7 @@
       // do lock check once per tserver here to make binning faster
       boolean lockHeld = lcSession.checkLock(tl) != null;
       if (lockHeld) {
-        tsm = new TabletServerMutations<T>(tl.tablet_session);
+        tsm = new TabletServerMutations<>(tl.tablet_session);
         binnedMutations.put(tl.tablet_location, tsm);
       } else {
         return false;
@@ -256,8 +265,8 @@
 
   private List<Range> binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges, boolean useCache,
       LockCheckerSession lcSession) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    List<Range> failures = new ArrayList<Range>();
-    List<TabletLocation> tabletLocations = new ArrayList<TabletLocation>();
+    List<Range> failures = new ArrayList<>();
+    List<TabletLocation> tabletLocations = new ArrayList<>();
 
     boolean lookupFailed = false;
 
@@ -324,9 +333,12 @@
      * should not log.
      */
 
-    OpTimer opTimer = null;
-    if (log.isTraceEnabled())
-      opTimer = new OpTimer(log, Level.TRACE).start("Binning " + ranges.size() + " ranges for table " + tableId);
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Binning {} ranges for table {}", Thread.currentThread().getId(), ranges.size(), tableId);
+      timer = new OpTimer().start();
+    }
 
     LockCheckerSession lcSession = new LockCheckerSession();
 
@@ -358,8 +370,11 @@
       }
     }
 
-    if (opTimer != null)
-      opTimer.stop("Binned " + ranges.size() + " ranges for table " + tableId + " to " + binnedRanges.size() + " tservers in %DURATION%");
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Binned {} ranges for table {} to {} tservers in {}", Thread.currentThread().getId(), ranges.size(), tableId, binnedRanges.size(),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
 
     return failures;
   }
@@ -373,7 +388,7 @@
       wLock.unlock();
     }
     if (log.isTraceEnabled())
-      log.trace("Invalidated extent=" + failedExtent);
+      log.trace("Invalidated extent={}", failedExtent);
   }
 
   @Override
@@ -385,7 +400,7 @@
       wLock.unlock();
     }
     if (log.isTraceEnabled())
-      log.trace("Invalidated " + keySet.size() + " cache entries for table " + tableId);
+      log.trace("Invalidated {} cache entries for table {}", keySet.size(), tableId);
   }
 
   @Override
@@ -406,7 +421,7 @@
     lockChecker.invalidateCache(server);
 
     if (log.isTraceEnabled())
-      log.trace("invalidated " + invalidatedCount + " cache entries  table=" + tableId + " server=" + server);
+      log.trace("invalidated {} cache entries  table={} server={}", invalidatedCount, tableId, server);
 
   }
 
@@ -421,17 +436,19 @@
       wLock.unlock();
     }
     if (log.isTraceEnabled())
-      log.trace("invalidated all " + invalidatedCount + " cache entries for table=" + tableId);
+      log.trace("invalidated all {} cache entries for table={}", invalidatedCount, tableId);
   }
 
   @Override
   public TabletLocation locateTablet(ClientContext context, Text row, boolean skipRow, boolean retry) throws AccumuloException, AccumuloSecurityException,
       TableNotFoundException {
 
-    OpTimer opTimer = null;
-    if (log.isTraceEnabled())
-      opTimer = new OpTimer(log, Level.TRACE).start("Locating tablet  table=" + tableId + " row=" + TextUtil.truncate(row) + "  skipRow=" + skipRow + " retry="
-          + retry);
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Locating tablet  table={} row={} skipRow={} retry={}", Thread.currentThread().getId(), tableId, TextUtil.truncate(row), skipRow, retry);
+      timer = new OpTimer().start();
+    }
 
     while (true) {
 
@@ -439,14 +456,17 @@
       TabletLocation tl = _locateTablet(context, row, skipRow, retry, true, lcSession);
 
       if (retry && tl == null) {
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
         if (log.isTraceEnabled())
-          log.trace("Failed to locate tablet containing row " + TextUtil.truncate(row) + " in table " + tableId + ", will retry...");
+          log.trace("Failed to locate tablet containing row {} in table {}, will retry...", TextUtil.truncate(row), tableId);
         continue;
       }
 
-      if (opTimer != null)
-        opTimer.stop("Located tablet " + (tl == null ? null : tl.tablet_extent) + " at " + (tl == null ? null : tl.tablet_location) + " in %DURATION%");
+      if (timer != null) {
+        timer.stop();
+        log.trace("tid={} Located tablet {} at {} in {}", Thread.currentThread().getId(), (tl == null ? "null" : tl.tablet_extent), (tl == null ? "null"
+            : tl.tablet_location), String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+      }
 
       return tl;
     }
@@ -656,7 +676,7 @@
           return;
       }
 
-      List<Range> lookups = new ArrayList<Range>(badExtents.size());
+      List<Range> lookups = new ArrayList<>(badExtents.size());
 
       for (KeyExtent be : badExtents) {
         lookups.add(be.toMetadataRange());
@@ -665,12 +685,12 @@
 
       lookups = Range.mergeOverlapping(lookups);
 
-      Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+      Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
 
       parent.binRanges(context, lookups, binnedRanges);
 
       // randomize server order
-      ArrayList<String> tabletServers = new ArrayList<String>(binnedRanges.keySet());
+      ArrayList<String> tabletServers = new ArrayList<>(binnedRanges.keySet());
       Collections.shuffle(tabletServers);
 
       for (String tserver : tabletServers) {
@@ -691,13 +711,13 @@
   protected static void addRange(Map<String,Map<KeyExtent,List<Range>>> binnedRanges, String location, KeyExtent ke, Range range) {
     Map<KeyExtent,List<Range>> tablets = binnedRanges.get(location);
     if (tablets == null) {
-      tablets = new HashMap<KeyExtent,List<Range>>();
+      tablets = new HashMap<>();
       binnedRanges.put(location, tablets);
     }
 
     List<Range> tabletsRanges = tablets.get(ke);
     if (tabletsRanges == null) {
-      tabletsRanges = new ArrayList<Range>();
+      tabletsRanges = new ArrayList<>();
       tablets.put(ke, tabletsRanges);
     }
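Throughout TabletLocatorImpl the diff replaces `UtilWaitThread.sleep` with Guava's `sleepUninterruptibly`, which keeps sleeping through interrupts for the full duration and then restores the thread's interrupt flag. A hedged sketch of those semantics (a simplified re-implementation for illustration, not Guava's code):

```java
import java.util.concurrent.TimeUnit;

// Sketch of sleepUninterruptibly semantics: absorb InterruptedException,
// keep sleeping for the remaining time, and re-set the interrupt status
// on exit so callers can still observe that an interrupt happened.
public class SleepSketch {
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true;
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted)
        Thread.currentThread().interrupt(); // preserve interrupt status
    }
  }
}
```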
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReader.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReader.java
index 6d09936..0d4909b 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReader.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReader.java
@@ -36,7 +36,7 @@
 public class TabletServerBatchReader extends ScannerOptions implements BatchScanner {
   private static final Logger log = LoggerFactory.getLogger(TabletServerBatchReader.class);
 
-  private String table;
+  private String tableId;
   private int numThreads;
   private ExecutorService queryThreadPool;
 
@@ -54,13 +54,13 @@
 
   private final int batchReaderInstance = getNextBatchReaderInstance();
 
-  public TabletServerBatchReader(ClientContext context, String table, Authorizations authorizations, int numQueryThreads) {
+  public TabletServerBatchReader(ClientContext context, String tableId, Authorizations authorizations, int numQueryThreads) {
     checkArgument(context != null, "context is null");
-    checkArgument(table != null, "table is null");
+    checkArgument(tableId != null, "tableId is null");
     checkArgument(authorizations != null, "authorizations is null");
     this.context = context;
     this.authorizations = authorizations;
-    this.table = table;
+    this.tableId = tableId;
     this.numThreads = numQueryThreads;
 
     queryThreadPool = new SimpleThreadPool(numQueryThreads, "batch scanner " + batchReaderInstance + "-");
@@ -98,7 +98,7 @@
       throw new IllegalStateException("batch reader closed");
     }
 
-    this.ranges = new ArrayList<Range>(ranges);
+    this.ranges = new ArrayList<>(ranges);
 
   }
 
@@ -112,6 +112,6 @@
       throw new IllegalStateException("batch reader closed");
     }
 
-    return new TabletServerBatchReaderIterator(context, table, authorizations, ranges, numThreads, queryThreadPool, this, timeOut);
+    return new TabletServerBatchReaderIterator(context, tableId, authorizations, ranges, numThreads, queryThreadPool, this, timeOut);
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderIterator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderIterator.java
index 053f2b3..8796d3c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderIterator.java
@@ -34,10 +34,12 @@
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Semaphore;
 import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.client.TableDeletedException;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.TableOfflineException;
@@ -55,30 +57,31 @@
 import org.apache.accumulo.core.data.thrift.TRange;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.rpc.ThriftUtil;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.tabletserver.thrift.NoSuchScanIDException;
+import org.apache.accumulo.core.tabletserver.thrift.TSampleNotPresentException;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.core.util.ByteBufferUtil;
 import org.apache.accumulo.core.util.OpTimer;
-import org.apache.hadoop.io.Text;
 import org.apache.htrace.wrappers.TraceRunnable;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
 import org.apache.thrift.TApplicationException;
 import org.apache.thrift.TException;
 import org.apache.thrift.transport.TTransport;
 import org.apache.thrift.transport.TTransportException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.net.HostAndPort;
 
 public class TabletServerBatchReaderIterator implements Iterator<Entry<Key,Value>> {
 
-  private static final Logger log = Logger.getLogger(TabletServerBatchReaderIterator.class);
+  private static final Logger log = LoggerFactory.getLogger(TabletServerBatchReaderIterator.class);
 
   private final ClientContext context;
   private final Instance instance;
-  private final String table;
+  private final String tableId;
   private Authorizations authorizations = Authorizations.EMPTY;
   private final int numThreads;
   private final ExecutorService queryThreadPool;
@@ -87,7 +90,7 @@
   private ArrayBlockingQueue<List<Entry<Key,Value>>> resultsQueue;
   private Iterator<Entry<Key,Value>> batchIterator;
   private List<Entry<Key,Value>> batch;
-  private static final List<Entry<Key,Value>> LAST_BATCH = new ArrayList<Map.Entry<Key,Value>>();
+  private static final List<Entry<Key,Value>> LAST_BATCH = new ArrayList<>();
   private final Object nextLock = new Object();
 
   private long failSleepTime = 100;
@@ -96,7 +99,7 @@
 
   private Map<String,TimeoutTracker> timeoutTrackers;
   private Set<String> timedoutServers;
-  private long timeout;
+  private final long timeout;
 
   private TabletLocator locator;
 
@@ -104,26 +107,26 @@
     void receive(List<Entry<Key,Value>> entries);
   }
 
-  public TabletServerBatchReaderIterator(ClientContext context, String table, Authorizations authorizations, ArrayList<Range> ranges, int numThreads,
+  public TabletServerBatchReaderIterator(ClientContext context, String tableId, Authorizations authorizations, ArrayList<Range> ranges, int numThreads,
       ExecutorService queryThreadPool, ScannerOptions scannerOptions, long timeout) {
 
     this.context = context;
     this.instance = context.getInstance();
-    this.table = table;
+    this.tableId = tableId;
     this.authorizations = authorizations;
     this.numThreads = numThreads;
     this.queryThreadPool = queryThreadPool;
     this.options = new ScannerOptions(scannerOptions);
-    resultsQueue = new ArrayBlockingQueue<List<Entry<Key,Value>>>(numThreads);
+    resultsQueue = new ArrayBlockingQueue<>(numThreads);
 
-    this.locator = new TimeoutTabletLocator(TabletLocator.getLocator(context, new Text(table)), timeout);
+    this.locator = new TimeoutTabletLocator(timeout, context, tableId);
 
     timeoutTrackers = Collections.synchronizedMap(new HashMap<String,TabletServerBatchReaderIterator.TimeoutTracker>());
     timedoutServers = Collections.synchronizedSet(new HashSet<String>());
     this.timeout = timeout;
 
     if (options.fetchedColumns.size() > 0) {
-      ArrayList<Range> ranges2 = new ArrayList<Range>(ranges.size());
+      ArrayList<Range> ranges2 = new ArrayList<>(ranges.size());
       for (Range range : ranges) {
         ranges2.add(range.bound(options.fetchedColumns.first(), options.fetchedColumns.last()));
       }
@@ -212,10 +215,10 @@
   }
 
   private synchronized void lookup(List<Range> ranges, ResultReceiver receiver) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    List<Column> columns = new ArrayList<Column>(options.fetchedColumns);
+    List<Column> columns = new ArrayList<>(options.fetchedColumns);
     ranges = Range.mergeOverlapping(ranges);
 
-    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
 
     binRanges(locator, ranges, binnedRanges);
 
@@ -238,15 +241,16 @@
         // the table was deleted the tablet locator entries for the deleted table were not cleared... so
         // need to always do the check when failures occur
         if (failures.size() >= lastFailureSize)
-          if (!Tables.exists(instance, table))
-            throw new TableDeletedException(table);
-          else if (Tables.getTableState(instance, table) == TableState.OFFLINE)
-            throw new TableOfflineException(instance, table);
+          if (!Tables.exists(instance, tableId))
+            throw new TableDeletedException(tableId);
+          else if (Tables.getTableState(instance, tableId) == TableState.OFFLINE)
+            throw new TableOfflineException(instance, tableId);
 
         lastFailureSize = failures.size();
 
         if (log.isTraceEnabled())
-          log.trace("Failed to bin " + failures.size() + " ranges, tablet locations were null, retrying in 100ms");
+          log.trace("Failed to bin {} ranges, tablet locations were null, retrying in 100ms", failures.size());
+
         try {
           Thread.sleep(100);
         } catch (InterruptedException e) {
@@ -260,13 +264,13 @@
 
     // truncate the ranges to within the tablets... this makes it easier to know what work
     // needs to be redone when failures occurs and tablets have merged or split
-    Map<String,Map<KeyExtent,List<Range>>> binnedRanges2 = new HashMap<String,Map<KeyExtent,List<Range>>>();
+    Map<String,Map<KeyExtent,List<Range>>> binnedRanges2 = new HashMap<>();
     for (Entry<String,Map<KeyExtent,List<Range>>> entry : binnedRanges.entrySet()) {
-      Map<KeyExtent,List<Range>> tabletMap = new HashMap<KeyExtent,List<Range>>();
+      Map<KeyExtent,List<Range>> tabletMap = new HashMap<>();
       binnedRanges2.put(entry.getKey(), tabletMap);
       for (Entry<KeyExtent,List<Range>> tabletRanges : entry.getValue().entrySet()) {
         Range tabletRange = tabletRanges.getKey().toDataRange();
-        List<Range> clippedRanges = new ArrayList<Range>();
+        List<Range> clippedRanges = new ArrayList<>();
         tabletMap.put(tabletRanges.getKey(), clippedRanges);
         for (Range range : tabletRanges.getValue())
           clippedRanges.add(tabletRange.clip(range));
@@ -280,7 +284,7 @@
   private void processFailures(Map<KeyExtent,List<Range>> failures, ResultReceiver receiver, List<Column> columns) throws AccumuloException,
       AccumuloSecurityException, TableNotFoundException {
     if (log.isTraceEnabled())
-      log.trace("Failed to execute multiscans against " + failures.size() + " tablets, retrying...");
+      log.trace("Failed to execute multiscans against {} tablets, retrying...", failures.size());
 
     try {
       Thread.sleep(failSleepTime);
@@ -294,8 +298,8 @@
 
     failSleepTime = Math.min(5000, failSleepTime * 2);
 
-    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
-    List<Range> allRanges = new ArrayList<Range>();
+    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
+    List<Range> allRanges = new ArrayList<>();
 
     for (List<Range> ranges : failures.values())
       allRanges.addAll(ranges);
@@ -308,7 +312,7 @@
   }
 
   private String getTableInfo() {
-    return Tables.getPrintableTableInfoFromId(instance, table);
+    return Tables.getPrintableTableInfoFromId(instance, tableId);
   }
 
   private class QueryTask implements Runnable {
@@ -338,8 +342,8 @@
     public void run() {
       String threadName = Thread.currentThread().getName();
       Thread.currentThread().setName(threadName + " looking up " + tabletsRanges.size() + " ranges at " + tsLocation);
-      Map<KeyExtent,List<Range>> unscanned = new HashMap<KeyExtent,List<Range>>();
-      Map<KeyExtent,List<Range>> tsFailures = new HashMap<KeyExtent,List<Range>>();
+      Map<KeyExtent,List<Range>> unscanned = new HashMap<>();
+      Map<KeyExtent,List<Range>> tsFailures = new HashMap<>();
       try {
         TimeoutTracker timeoutTracker = timeoutTrackers.get(tsLocation);
         if (timeoutTracker == null) {
@@ -363,21 +367,23 @@
 
           locator.invalidateCache(context.getInstance(), tsLocation);
         }
-        log.debug(e.getMessage(), e);
+        log.debug("IOException thrown", e);
       } catch (AccumuloSecurityException e) {
         e.setTableInfo(getTableInfo());
-        log.debug(e.getMessage(), e);
+        log.debug("AccumuloSecurityException thrown", e);
 
         Tables.clearCache(instance);
-        if (!Tables.exists(instance, table))
-          fatalException = new TableDeletedException(table);
+        if (!Tables.exists(instance, tableId))
+          fatalException = new TableDeletedException(tableId);
         else
           fatalException = e;
+      } catch (SampleNotPresentException e) {
+        fatalException = e;
       } catch (Throwable t) {
         if (queryThreadPool.isShutdown())
-          log.debug(t.getMessage(), t);
+          log.debug("Caught exception, but queryThreadPool is shutdown", t);
         else
-          log.warn(t.getMessage(), t);
+          log.warn("Caught exception, but queryThreadPool is not shutdown", t);
         fatalException = t;
       } finally {
         semaphore.release();
@@ -389,17 +395,17 @@
             try {
               processFailures(failures, receiver, columns);
             } catch (TableNotFoundException e) {
-              log.debug(e.getMessage(), e);
+              log.debug("{}", e.getMessage(), e);
               fatalException = e;
             } catch (AccumuloException e) {
-              log.debug(e.getMessage(), e);
+              log.debug("{}", e.getMessage(), e);
               fatalException = e;
             } catch (AccumuloSecurityException e) {
               e.setTableInfo(getTableInfo());
-              log.debug(e.getMessage(), e);
+              log.debug("{}", e.getMessage(), e);
               fatalException = e;
             } catch (Throwable t) {
-              log.debug(t.getMessage(), t);
+              log.debug("{}", t.getMessage(), t);
               fatalException = t;
             }
 
@@ -456,7 +462,7 @@
 
     }
 
-    Map<KeyExtent,List<Range>> failures = new HashMap<KeyExtent,List<Range>>();
+    Map<KeyExtent,List<Range>> failures = new HashMap<>();
 
     if (timedoutServers.size() > 0) {
       // go ahead and fail any timed out servers
@@ -471,10 +477,10 @@
 
     // randomize tabletserver order... this will help when there are multiple
     // batch readers and writers running against accumulo
-    List<String> locations = new ArrayList<String>(binnedRanges.keySet());
+    List<String> locations = new ArrayList<>(binnedRanges.keySet());
     Collections.shuffle(locations);
 
-    List<QueryTask> queryTasks = new ArrayList<QueryTask>();
+    List<QueryTask> queryTasks = new ArrayList<>();
 
     for (final String tsLocation : locations) {
 
@@ -483,13 +489,13 @@
         QueryTask queryTask = new QueryTask(tsLocation, tabletsRanges, failures, receiver, columns);
         queryTasks.add(queryTask);
       } else {
-        HashMap<KeyExtent,List<Range>> tabletSubset = new HashMap<KeyExtent,List<Range>>();
+        HashMap<KeyExtent,List<Range>> tabletSubset = new HashMap<>();
         for (Entry<KeyExtent,List<Range>> entry : tabletsRanges.entrySet()) {
           tabletSubset.put(entry.getKey(), entry.getValue());
           if (tabletSubset.size() >= maxTabletsPerRequest) {
             QueryTask queryTask = new QueryTask(tsLocation, tabletSubset, failures, receiver, columns);
             queryTasks.add(queryTask);
-            tabletSubset = new HashMap<KeyExtent,List<Range>>();
+            tabletSubset = new HashMap<>();
           }
         }
 
@@ -512,13 +518,12 @@
   static void trackScanning(Map<KeyExtent,List<Range>> failures, Map<KeyExtent,List<Range>> unscanned, MultiScanResult scanResult) {
 
     // translate returned failures, remove them from unscanned, and add them to failures
-    Map<KeyExtent,List<Range>> retFailures = Translator.translate(scanResult.failures, Translators.TKET, new Translator.ListTranslator<TRange,Range>(
-        Translators.TRT));
+    Map<KeyExtent,List<Range>> retFailures = Translator.translate(scanResult.failures, Translators.TKET, new Translator.ListTranslator<>(Translators.TRT));
     unscanned.keySet().removeAll(retFailures.keySet());
     failures.putAll(retFailures);
 
     // translate full scans and remove them from unscanned
-    HashSet<KeyExtent> fullScans = new HashSet<KeyExtent>(Translator.translate(scanResult.fullScans, Translators.TKET));
+    HashSet<KeyExtent> fullScans = new HashSet<>(Translator.translate(scanResult.fullScans, Translators.TKET));
     unscanned.keySet().removeAll(fullScans);
 
     // remove partial scan from unscanned
@@ -606,7 +611,7 @@
 
     // copy requested to unscanned map. we will remove ranges as they are scanned in trackScanning()
     for (Entry<KeyExtent,List<Range>> entry : requested.entrySet()) {
-      ArrayList<Range> ranges = new ArrayList<Range>();
+      ArrayList<Range> ranges = new ArrayList<>();
       for (Range range : entry.getValue()) {
         ranges.add(new Range(range));
       }
@@ -625,28 +630,37 @@
 
       try {
 
-        OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Starting multi scan, tserver=" + server + "  #tablets=" + requested.size() + "  #ranges="
-            + sumSizes(requested.values()) + " ssil=" + options.serverSideIteratorList + " ssio=" + options.serverSideIteratorOptions);
+        OpTimer timer = null;
+
+        if (log.isTraceEnabled()) {
+          log.trace("tid={} Starting multi scan, tserver={}  #tablets={}  #ranges={} ssil={} ssio={}", Thread.currentThread().getId(), server,
+              requested.size(), sumSizes(requested.values()), options.serverSideIteratorList, options.serverSideIteratorOptions);
+
+          timer = new OpTimer().start();
+        }
 
         TabletType ttype = TabletType.type(requested.keySet());
         boolean waitForWrites = !ThriftScanner.serversWaitedForWrites.get(ttype).contains(server);
 
-        Map<TKeyExtent,List<TRange>> thriftTabletRanges = Translator.translate(requested, Translators.KET, new Translator.ListTranslator<Range,TRange>(
-            Translators.RT));
+        Map<TKeyExtent,List<TRange>> thriftTabletRanges = Translator.translate(requested, Translators.KET, new Translator.ListTranslator<>(Translators.RT));
         InitialMultiScan imsr = client.startMultiScan(Tracer.traceInfo(), context.rpcCreds(), thriftTabletRanges,
             Translator.translate(columns, Translators.CT), options.serverSideIteratorList, options.serverSideIteratorOptions,
-            ByteBufferUtil.toByteBuffers(authorizations.getAuthorizations()), waitForWrites);
+            ByteBufferUtil.toByteBuffers(authorizations.getAuthorizations()), waitForWrites,
+            SamplerConfigurationImpl.toThrift(options.getSamplerConfiguration()), options.batchTimeOut, options.classLoaderContext);
         if (waitForWrites)
           ThriftScanner.serversWaitedForWrites.get(ttype).add(server.toString());
 
         MultiScanResult scanResult = imsr.result;
 
-        opTimer.stop("Got 1st multi scan results, #results=" + scanResult.results.size() + (scanResult.more ? "  scanID=" + imsr.scanID : "")
-            + " in %DURATION%");
+        if (timer != null) {
+          timer.stop();
+          log.trace("tid={} Got 1st multi scan results, #results={} {} in {}", Thread.currentThread().getId(), scanResult.results.size(),
+              (scanResult.more ? "scanID=" + imsr.scanID : ""), String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+        }
 
-        ArrayList<Entry<Key,Value>> entries = new ArrayList<Map.Entry<Key,Value>>(scanResult.results.size());
+        ArrayList<Entry<Key,Value>> entries = new ArrayList<>(scanResult.results.size());
         for (TKeyValue kv : scanResult.results) {
-          entries.add(new SimpleImmutableEntry<Key,Value>(new Key(kv.key), new Value(kv.value)));
+          entries.add(new SimpleImmutableEntry<>(new Key(kv.key), new Value(kv.value)));
         }
 
         if (entries.size() > 0)
@@ -657,18 +671,28 @@
 
         trackScanning(failures, unscanned, scanResult);
 
+        AtomicLong nextOpid = new AtomicLong();
+
         while (scanResult.more) {
 
           timeoutTracker.check();
 
-          opTimer.start("Continuing multi scan, scanid=" + imsr.scanID);
-          scanResult = client.continueMultiScan(Tracer.traceInfo(), imsr.scanID);
-          opTimer.stop("Got more multi scan results, #results=" + scanResult.results.size() + (scanResult.more ? "  scanID=" + imsr.scanID : "")
-              + " in %DURATION%");
+          if (timer != null) {
+            log.trace("tid={} oid={} Continuing multi scan, scanid={}", Thread.currentThread().getId(), nextOpid.get(), imsr.scanID);
+            timer.reset().start();
+          }
 
-          entries = new ArrayList<Map.Entry<Key,Value>>(scanResult.results.size());
+          scanResult = client.continueMultiScan(Tracer.traceInfo(), imsr.scanID);
+
+          if (timer != null) {
+            timer.stop();
+            log.trace("tid={} oid={} Got more multi scan results, #results={} {} in {}", Thread.currentThread().getId(), nextOpid.getAndIncrement(),
+                scanResult.results.size(), (scanResult.more ? " scanID=" + imsr.scanID : ""), String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+          }
+
+          entries = new ArrayList<>(scanResult.results.size());
           for (TKeyValue kv : scanResult.results) {
-            entries.add(new SimpleImmutableEntry<Key,Value>(new Key(kv.key), new Value(kv.value)));
+            entries.add(new SimpleImmutableEntry<>(new Key(kv.key), new Value(kv.value)));
           }
 
           if (entries.size() > 0)
@@ -686,20 +710,29 @@
         ThriftUtil.returnClient(client);
       }
     } catch (TTransportException e) {
-      log.debug("Server : " + server + " msg : " + e.getMessage());
+      log.debug("Server : {} msg : {}", server, e.getMessage());
       timeoutTracker.errorOccured(e);
       throw new IOException(e);
     } catch (ThriftSecurityException e) {
-      log.debug("Server : " + server + " msg : " + e.getMessage(), e);
+      log.debug("Server : {} msg : {}", server, e.getMessage(), e);
       throw new AccumuloSecurityException(e.user, e.code, e);
     } catch (TApplicationException e) {
-      log.debug("Server : " + server + " msg : " + e.getMessage(), e);
+      log.debug("Server : {} msg : {}", server, e.getMessage(), e);
       throw new AccumuloServerException(server, e);
     } catch (NoSuchScanIDException e) {
-      log.debug("Server : " + server + " msg : " + e.getMessage(), e);
+      log.debug("Server : {} msg : {}", server, e.getMessage(), e);
       throw new IOException(e);
-    } catch (TException e) {
+    } catch (TSampleNotPresentException e) {
       log.debug("Server : " + server + " msg : " + e.getMessage(), e);
+      String tableInfo = "?";
+      if (e.getExtent() != null) {
+        String tableId = new KeyExtent(e.getExtent()).getTableId();
+        tableInfo = Tables.getPrintableTableInfoFromId(context.getInstance(), tableId);
+      }
+      String message = "Table " + tableInfo + " does not have sampling configured or built";
+      throw new SampleNotPresentException(message, e);
+    } catch (TException e) {
+      log.debug("Server : {} msg : {}", server, e.getMessage(), e);
       timeoutTracker.errorOccured(e);
       throw new IOException(e);
     } finally {
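The retry logic in `processFailures()` above doubles `failSleepTime` on each pass but caps it at five seconds (`failSleepTime = Math.min(5000, failSleepTime * 2)`). A minimal standalone sketch of that capped exponential backoff, with assumed names (`BackoffSketch`, `nextSleep` are illustrative, not part of Accumulo's API):

```java
// Minimal sketch of the capped exponential backoff used in processFailures():
// the sleep interval doubles on each retry but never exceeds 5 seconds.
public class BackoffSketch {
    static final long MAX_SLEEP_MS = 5000;

    // Returns the next sleep interval given the current one.
    static long nextSleep(long current) {
        return Math.min(MAX_SLEEP_MS, current * 2);
    }

    public static void main(String[] args) {
        long sleep = 100; // initial failSleepTime in the diff
        for (int i = 0; i < 8; i++) {
            System.out.println(sleep);
            sleep = nextSleep(sleep);
        }
    }
}
```

Starting from the initial 100 ms, the sequence climbs 100, 200, 400, 800, 1600, 3200 and then holds at 5000, so repeated failures back off quickly without ever stalling a scan thread for more than five seconds per retry.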
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchWriter.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchWriter.java
index a8afa5a..4df2db6 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TabletServerBatchWriter.java
@@ -71,7 +71,6 @@
 import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.core.trace.thrift.TInfo;
 import org.apache.accumulo.core.util.SimpleThreadPool;
-import org.apache.hadoop.io.Text;
 import org.apache.thrift.TApplicationException;
 import org.apache.thrift.TException;
 import org.apache.thrift.TServiceClient;
@@ -151,8 +150,8 @@
 
   // error handling
   private final Violations violations = new Violations();
-  private final Map<KeyExtent,Set<SecurityErrorCode>> authorizationFailures = new HashMap<KeyExtent,Set<SecurityErrorCode>>();
-  private final HashSet<String> serverSideErrors = new HashSet<String>();
+  private final Map<KeyExtent,Set<SecurityErrorCode>> authorizationFailures = new HashMap<>();
+  private final HashSet<String> serverSideErrors = new HashSet<>();
   private final FailedMutations failedMutations = new FailedMutations();
   private int unknownErrors = 0;
   private boolean somethingFailed = false;
@@ -504,7 +503,7 @@
   }
 
   private void updateAuthorizationFailures(Set<KeyExtent> keySet, SecurityErrorCode code) {
-    HashMap<KeyExtent,SecurityErrorCode> map = new HashMap<KeyExtent,SecurityErrorCode>();
+    HashMap<KeyExtent,SecurityErrorCode> map = new HashMap<>();
     for (KeyExtent ke : keySet)
       map.put(ke, code);
 
@@ -515,9 +514,9 @@
     if (authorizationFailures.size() > 0) {
 
       // was a table deleted?
-      HashSet<String> tableIds = new HashSet<String>();
+      HashSet<String> tableIds = new HashSet<>();
       for (KeyExtent ke : authorizationFailures.keySet())
-        tableIds.add(ke.getTableId().toString());
+        tableIds.add(ke.getTableId());
 
       Tables.clearCache(context.getInstance());
       for (String tableId : tableIds)
@@ -536,7 +535,7 @@
     for (Entry<KeyExtent,SecurityErrorCode> entry : addition.entrySet()) {
       Set<SecurityErrorCode> secs = source.get(entry.getKey());
       if (secs == null) {
-        secs = new HashSet<SecurityErrorCode>();
+        secs = new HashSet<>();
         source.put(entry.getKey(), secs);
       }
       secs.add(entry.getValue());
@@ -564,9 +563,9 @@
   private void checkForFailures() throws MutationsRejectedException {
     if (somethingFailed) {
       List<ConstraintViolationSummary> cvsList = violations.asList();
-      HashMap<TabletId,Set<org.apache.accumulo.core.client.security.SecurityErrorCode>> af = new HashMap<TabletId,Set<org.apache.accumulo.core.client.security.SecurityErrorCode>>();
+      HashMap<TabletId,Set<org.apache.accumulo.core.client.security.SecurityErrorCode>> af = new HashMap<>();
       for (Entry<KeyExtent,Set<SecurityErrorCode>> entry : authorizationFailures.entrySet()) {
-        HashSet<org.apache.accumulo.core.client.security.SecurityErrorCode> codes = new HashSet<org.apache.accumulo.core.client.security.SecurityErrorCode>();
+        HashSet<org.apache.accumulo.core.client.security.SecurityErrorCode> codes = new HashSet<>();
 
         for (SecurityErrorCode sce : entry.getValue()) {
           codes.add(org.apache.accumulo.core.client.security.SecurityErrorCode.valueOf(sce.name()));
@@ -621,7 +620,7 @@
     synchronized void add(String location, TabletServerMutations<Mutation> tsm) {
       init();
       for (Entry<KeyExtent,List<Mutation>> entry : tsm.getMutations().entrySet()) {
-        recentFailures.addAll(entry.getKey().getTableId().toString(), entry.getValue());
+        recentFailures.addAll(entry.getKey().getTableId(), entry.getValue());
       }
 
     }
@@ -664,10 +663,10 @@
     private final Map<String,TabletLocator> locators;
 
     public MutationWriter(int numSendThreads) {
-      serversMutations = new HashMap<String,TabletServerMutations<Mutation>>();
-      queued = new HashSet<String>();
+      serversMutations = new HashMap<>();
+      queued = new HashSet<>();
       sendThreadPool = new SimpleThreadPool(numSendThreads, this.getClass().getName());
-      locators = new HashMap<String,TabletLocator>();
+      locators = new HashMap<>();
       binningThreadPool = new SimpleThreadPool(1, "BinMutations", new SynchronousQueue<Runnable>());
       binningThreadPool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
     }
@@ -675,8 +674,7 @@
     private synchronized TabletLocator getLocator(String tableId) {
       TabletLocator ret = locators.get(tableId);
       if (ret == null) {
-        ret = TabletLocator.getLocator(context, new Text(tableId));
-        ret = new TimeoutTabletLocator(ret, timeout);
+        ret = new TimeoutTabletLocator(timeout, context, tableId);
         locators.put(tableId, ret);
       }
 
@@ -695,7 +693,7 @@
           List<Mutation> tableMutations = entry.getValue();
 
           if (tableMutations != null) {
-            ArrayList<Mutation> tableFailures = new ArrayList<Mutation>();
+            ArrayList<Mutation> tableFailures = new ArrayList<>();
             locator.binMutations(context, tableMutations, binnedMutations, tableFailures);
 
             if (tableFailures.size() > 0) {
@@ -717,8 +715,7 @@
         // assume an IOError communicating with metadata tablet
         failedMutations.add(mutationsToProcess);
       } catch (AccumuloSecurityException e) {
-        updateAuthorizationFailures(Collections.singletonMap(new KeyExtent(new Text(tableId), null, null),
-            SecurityErrorCode.valueOf(e.getSecurityErrorCode().name())));
+        updateAuthorizationFailures(Collections.singletonMap(new KeyExtent(tableId, null, null), SecurityErrorCode.valueOf(e.getSecurityErrorCode().name())));
       } catch (TableDeletedException e) {
         updateUnknownErrors(e.getMessage(), e);
       } catch (TableOfflineException e) {
@@ -752,7 +749,7 @@
     }
 
     private void addMutations(MutationSet mutationsToSend) {
-      Map<String,TabletServerMutations<Mutation>> binnedMutations = new HashMap<String,TabletServerMutations<Mutation>>();
+      Map<String,TabletServerMutations<Mutation>> binnedMutations = new HashMap<>();
       Span span = Trace.start("binMutations");
       try {
         long t1 = System.currentTimeMillis();
@@ -795,7 +792,7 @@
         log.trace(String.format("Started sending %,d mutations to %,d tablet servers", count, binnedMutations.keySet().size()));
 
       // randomize order of servers
-      ArrayList<String> servers = new ArrayList<String>(binnedMutations.keySet());
+      ArrayList<String> servers = new ArrayList<>(binnedMutations.keySet());
       Collections.shuffle(servers);
 
       for (String server : servers)
@@ -848,7 +845,7 @@
 
           long count = 0;
 
-          Set<Text> tableIds = new TreeSet<Text>();
+          Set<String> tableIds = new TreeSet<>();
           for (Map.Entry<KeyExtent,List<Mutation>> entry : mutationBatch.entrySet()) {
             count += entry.getValue().size();
             tableIds.add(entry.getKey().getTableId());
@@ -896,12 +893,12 @@
           if (log.isTraceEnabled())
             log.trace("failed to send mutations to {} : {}", location, e.getMessage());
 
-          HashSet<String> tables = new HashSet<String>();
+          HashSet<String> tables = new HashSet<>();
           for (KeyExtent ke : mutationBatch.keySet())
-            tables.add(ke.getTableId().toString());
+            tables.add(ke.getTableId());
 
           for (String table : tables)
-            TabletLocator.getLocator(context, new Text(table)).invalidateCache(context.getInstance(), location);
+            getLocator(table).invalidateCache(context.getInstance(), location);
 
           failedMutations.add(location, tsm);
         } finally {
@@ -937,8 +934,8 @@
             try {
               client.update(tinfo, context.rpcCreds(), entry.getKey().toThrift(), entry.getValue().get(0).toThrift(), DurabilityImpl.toThrift(durability));
             } catch (NotServingTabletException e) {
-              allFailures.addAll(entry.getKey().getTableId().toString(), entry.getValue());
-              TabletLocator.getLocator(context, new Text(entry.getKey().getTableId())).invalidateCache(entry.getKey());
+              allFailures.addAll(entry.getKey().getTableId(), entry.getValue());
+              getLocator(entry.getKey().getTableId()).invalidateCache(entry.getKey());
             } catch (ConstraintViolationException e) {
               updatedConstraintViolations(Translator.translate(e.violationSummaries, Translators.TCVST));
             }
@@ -947,7 +944,7 @@
 
             long usid = client.startUpdate(tinfo, context.rpcCreds(), DurabilityImpl.toThrift(durability));
 
-            List<TMutation> updates = new ArrayList<TMutation>();
+            List<TMutation> updates = new ArrayList<>();
             for (Entry<KeyExtent,List<Mutation>> entry : tabMuts.entrySet()) {
               long size = 0;
               Iterator<Mutation> iter = entry.getValue().iterator();
@@ -977,12 +974,12 @@
               int numCommitted = (int) (long) entry.getValue();
               totalCommitted += numCommitted;
 
-              String table = failedExtent.getTableId().toString();
+              String tableId = failedExtent.getTableId();
 
-              TabletLocator.getLocator(context, new Text(table)).invalidateCache(failedExtent);
+              getLocator(tableId).invalidateCache(failedExtent);
 
               ArrayList<Mutation> mutations = (ArrayList<Mutation>) tabMuts.get(failedExtent);
-              allFailures.addAll(table, mutations.subList(numCommitted, mutations.size()));
+              allFailures.addAll(tableId, mutations.subList(numCommitted, mutations.size()));
             }
 
             if (failures.keySet().containsAll(tabMuts.keySet()) && totalCommitted == 0) {
@@ -1022,13 +1019,13 @@
     private int memoryUsed = 0;
 
     MutationSet() {
-      mutations = new HashMap<String,List<Mutation>>();
+      mutations = new HashMap<>();
     }
 
     void addMutation(String table, Mutation mutation) {
       List<Mutation> tabMutList = mutations.get(table);
       if (tabMutList == null) {
-        tabMutList = new ArrayList<Mutation>();
+        tabMutList = new ArrayList<>();
         mutations.put(table, tabMutList);
       }
 
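`MutationSet.addMutation()` in the hunk above keeps the classic get-or-create idiom: look up the per-table list, allocate it on a `null` result, then append. Since Java 8 the same logic collapses into `Map.computeIfAbsent`. A minimal sketch under assumed names (`MutationSetSketch` is illustrative, and plain strings stand in for Accumulo `Mutation` objects):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the get-or-create pattern in MutationSet.addMutation():
// computeIfAbsent allocates the per-table list only on first use.
public class MutationSetSketch {
    private final Map<String, List<String>> mutations = new HashMap<>();

    void addMutation(String tableId, String mutation) {
        mutations.computeIfAbsent(tableId, k -> new ArrayList<>()).add(mutation);
    }

    List<String> get(String tableId) {
        return mutations.get(tableId); // null if no mutations for this table
    }

    public static void main(String[] args) {
        MutationSetSketch set = new MutationSetSketch();
        set.addMutation("t1", "m1");
        set.addMutation("t1", "m2");
        System.out.println(set.get("t1").size());
    }
}
```

The diff leaves the explicit `null` check in place, which matches the surrounding code style; `computeIfAbsent` would be the idiomatic alternative if the class were being rewritten today.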
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java
index 39d3b32..91b2637 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java
@@ -26,16 +26,19 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.SortedSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.client.TableDeletedException;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.TableOfflineException;
 import org.apache.accumulo.core.client.impl.TabletLocator.TabletLocation;
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Column;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.KeyValue;
@@ -49,9 +52,11 @@
 import org.apache.accumulo.core.data.thrift.TKeyValue;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.rpc.ThriftUtil;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.tabletserver.thrift.NoSuchScanIDException;
 import org.apache.accumulo.core.tabletserver.thrift.NotServingTabletException;
+import org.apache.accumulo.core.tabletserver.thrift.TSampleNotPresentException;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException;
 import org.apache.accumulo.core.trace.Span;
@@ -60,17 +65,17 @@
 import org.apache.accumulo.core.trace.thrift.TInfo;
 import org.apache.accumulo.core.util.OpTimer;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
 import org.apache.thrift.TApplicationException;
 import org.apache.thrift.TException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.net.HostAndPort;
 
 public class ThriftScanner {
-  private static final Logger log = Logger.getLogger(ThriftScanner.class);
+  private static final Logger log = LoggerFactory.getLogger(ThriftScanner.class);
 
-  public static final Map<TabletType,Set<String>> serversWaitedForWrites = new EnumMap<TabletType,Set<String>>(TabletType.class);
+  public static final Map<TabletType,Set<String>> serversWaitedForWrites = new EnumMap<>(TabletType.class);
 
   static {
     for (TabletType ttype : TabletType.values()) {
@@ -80,7 +85,8 @@
 
   public static boolean getBatchFromServer(ClientContext context, Range range, KeyExtent extent, String server, SortedMap<Key,Value> results,
       SortedSet<Column> fetchedColumns, List<IterInfo> serverSideIteratorList, Map<String,Map<String,String>> serverSideIteratorOptions, int size,
-      Authorizations authorizations, boolean retry) throws AccumuloException, AccumuloSecurityException, NotServingTabletException {
+      Authorizations authorizations, boolean retry, long batchTimeOut, String classLoaderContext) throws AccumuloException, AccumuloSecurityException,
+      NotServingTabletException {
     if (server == null)
       throw new AccumuloException(new IOException());
 
@@ -91,13 +97,14 @@
       try {
         // not reading whole rows (or stopping on row boundries) so there is no need to enable isolation below
         ScanState scanState = new ScanState(context, extent.getTableId(), authorizations, range, fetchedColumns, size, serverSideIteratorList,
-            serverSideIteratorOptions, false);
+            serverSideIteratorOptions, false, Constants.SCANNER_DEFAULT_READAHEAD_THRESHOLD, null, batchTimeOut, classLoaderContext);
 
         TabletType ttype = TabletType.type(extent);
         boolean waitForWrites = !serversWaitedForWrites.get(ttype).contains(server);
         InitialScan isr = client.startScan(tinfo, scanState.context.rpcCreds(), extent.toThrift(), scanState.range.toThrift(),
             Translator.translate(scanState.columns, Translators.CT), scanState.size, scanState.serverSideIteratorList, scanState.serverSideIteratorOptions,
-            scanState.authorizations.getAuthorizationsBB(), waitForWrites, scanState.isolated, scanState.readaheadThreshold);
+            scanState.authorizations.getAuthorizationsBB(), waitForWrites, scanState.isolated, scanState.readaheadThreshold, null, scanState.batchTimeOut,
+            classLoaderContext);
         if (waitForWrites)
           serversWaitedForWrites.get(ttype).add(server);
 
@@ -115,12 +122,12 @@
     } catch (TApplicationException tae) {
       throw new AccumuloServerException(server, tae);
     } catch (TooManyFilesException e) {
-      log.debug("Tablet (" + extent + ") has too many files " + server + " : " + e);
+      log.debug("Tablet ({}) has too many files {} : {}", extent, server, e.getMessage());
     } catch (ThriftSecurityException e) {
-      log.warn("Security Violation in scan request to " + server + ": " + e);
+      log.warn("Security Violation in scan request to {}: {}", server, e.getMessage());
       throw new AccumuloSecurityException(e.user, e.code, e);
     } catch (TException e) {
-      log.debug("Error getting transport to " + server + " : " + e);
+      log.debug("Error getting transport to {}: {}", server, e.getMessage());
     }
 
     throw new AccumuloException("getBatchFromServer: failed");
@@ -129,10 +136,11 @@
   public static class ScanState {
 
     boolean isolated;
-    Text tableId;
+    String tableId;
     Text startRow;
     boolean skipStartRow;
     long readaheadThreshold;
+    long batchTimeOut;
 
     Range range;
 
@@ -145,25 +153,25 @@
     TabletLocation prevLoc;
     Long scanID;
 
+    String classLoaderContext;
+
     boolean finished = false;
 
     List<IterInfo> serverSideIteratorList;
 
     Map<String,Map<String,String>> serverSideIteratorOptions;
 
-    public ScanState(ClientContext context, Text tableId, Authorizations authorizations, Range range, SortedSet<Column> fetchedColumns, int size,
-        List<IterInfo> serverSideIteratorList, Map<String,Map<String,String>> serverSideIteratorOptions, boolean isolated) {
-      this(context, tableId, authorizations, range, fetchedColumns, size, serverSideIteratorList, serverSideIteratorOptions, isolated,
-          Constants.SCANNER_DEFAULT_READAHEAD_THRESHOLD);
-    }
+    SamplerConfiguration samplerConfig;
 
-    public ScanState(ClientContext context, Text tableId, Authorizations authorizations, Range range, SortedSet<Column> fetchedColumns, int size,
-        List<IterInfo> serverSideIteratorList, Map<String,Map<String,String>> serverSideIteratorOptions, boolean isolated, long readaheadThreshold) {
+    public ScanState(ClientContext context, String tableId, Authorizations authorizations, Range range, SortedSet<Column> fetchedColumns, int size,
+        List<IterInfo> serverSideIteratorList, Map<String,Map<String,String>> serverSideIteratorOptions, boolean isolated, long readaheadThreshold,
+        SamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) {
       this.context = context;
-      ;
-      this.authorizations = authorizations;
 
-      columns = new ArrayList<Column>(fetchedColumns.size());
+      this.authorizations = authorizations;
+      this.classLoaderContext = classLoaderContext;
+
+      columns = new ArrayList<>(fetchedColumns.size());
       for (Column column : fetchedColumns) {
         columns.add(column);
       }
@@ -187,6 +195,9 @@
       this.isolated = isolated;
       this.readaheadThreshold = readaheadThreshold;
 
+      this.samplerConfig = samplerConfig;
+
+      this.batchTimeOut = batchTimeOut;
     }
   }
 
@@ -234,16 +245,16 @@
             loc = TabletLocator.getLocator(context, scanState.tableId).locateTablet(context, scanState.startRow, scanState.skipStartRow, false);
 
             if (loc == null) {
-              if (!Tables.exists(instance, scanState.tableId.toString()))
-                throw new TableDeletedException(scanState.tableId.toString());
-              else if (Tables.getTableState(instance, scanState.tableId.toString()) == TableState.OFFLINE)
-                throw new TableOfflineException(instance, scanState.tableId.toString());
+              if (!Tables.exists(instance, scanState.tableId))
+                throw new TableDeletedException(scanState.tableId);
+              else if (Tables.getTableState(instance, scanState.tableId) == TableState.OFFLINE)
+                throw new TableOfflineException(instance, scanState.tableId);
 
               error = "Failed to locate tablet for table : " + scanState.tableId + " row : " + scanState.startRow;
               if (!error.equals(lastError))
-                log.debug(error);
+                log.debug("{}", error);
               else if (log.isTraceEnabled())
-                log.trace(error);
+                log.trace("{}", error);
               lastError = error;
               sleepMillis = pause(sleepMillis);
             } else {
@@ -263,14 +274,14 @@
               }
             }
           } catch (AccumuloServerException e) {
-            log.debug("Scan failed, server side exception : " + e.getMessage());
+            log.debug("Scan failed, server side exception : {}", e.getMessage());
             throw e;
           } catch (AccumuloException e) {
             error = "exception from tablet loc " + e.getMessage();
             if (!error.equals(lastError))
-              log.debug(error);
+              log.debug("{}", error);
             else if (log.isTraceEnabled())
-              log.trace(error);
+              log.trace("{}", error);
 
             lastError = error;
             sleepMillis = pause(sleepMillis);
@@ -285,18 +296,21 @@
           results = scan(loc, scanState, context);
         } catch (AccumuloSecurityException e) {
           Tables.clearCache(instance);
-          if (!Tables.exists(instance, scanState.tableId.toString()))
-            throw new TableDeletedException(scanState.tableId.toString());
-          e.setTableInfo(Tables.getPrintableTableInfoFromId(instance, scanState.tableId.toString()));
+          if (!Tables.exists(instance, scanState.tableId))
+            throw new TableDeletedException(scanState.tableId);
+          e.setTableInfo(Tables.getPrintableTableInfoFromId(instance, scanState.tableId));
           throw e;
         } catch (TApplicationException tae) {
           throw new AccumuloServerException(loc.tablet_location, tae);
+        } catch (TSampleNotPresentException tsnpe) {
+          String message = "Table " + Tables.getPrintableTableInfoFromId(instance, scanState.tableId) + " does not have sampling configured or built";
+          throw new SampleNotPresentException(message, tsnpe);
         } catch (NotServingTabletException e) {
           error = "Scan failed, not serving tablet " + loc;
           if (!error.equals(lastError))
-            log.debug(error);
+            log.debug("{}", error);
           else if (log.isTraceEnabled())
-            log.trace(error);
+            log.trace("{}", error);
           lastError = error;
 
           TabletLocator.getLocator(context, scanState.tableId).invalidateCache(loc.tablet_extent);
@@ -312,9 +326,9 @@
         } catch (NoSuchScanIDException e) {
           error = "Scan failed, no such scan id " + scanState.scanID + " " + loc;
           if (!error.equals(lastError))
-            log.debug(error);
+            log.debug("{}", error);
           else if (log.isTraceEnabled())
-            log.trace(error);
+            log.trace("{}", error);
           lastError = error;
 
           if (scanState.isolated)
@@ -324,14 +338,14 @@
         } catch (TooManyFilesException e) {
           error = "Tablet has too many files " + loc + " retrying...";
           if (!error.equals(lastError)) {
-            log.debug(error);
+            log.debug("{}", error);
             tooManyFilesCount = 0;
           } else {
             tooManyFilesCount++;
             if (tooManyFilesCount == 300)
-              log.warn(error);
+              log.warn("{}", error);
             else if (log.isTraceEnabled())
-              log.trace(error);
+              log.trace("{}", error);
           }
           lastError = error;
 
@@ -348,9 +362,9 @@
           TabletLocator.getLocator(context, scanState.tableId).invalidateCache(context.getInstance(), loc.tablet_location);
           error = "Scan failed, thrift error " + e.getClass().getName() + "  " + e.getMessage() + " " + loc;
           if (!error.equals(lastError))
-            log.debug(error);
+            log.debug("{}", error);
           else if (log.isTraceEnabled())
-            log.trace(error);
+            log.trace("{}", error);
           lastError = error;
           loc = null;
 
@@ -380,11 +394,11 @@
   }
 
   private static List<KeyValue> scan(TabletLocation loc, ScanState scanState, ClientContext context) throws AccumuloSecurityException,
-      NotServingTabletException, TException, NoSuchScanIDException, TooManyFilesException {
+      NotServingTabletException, TException, NoSuchScanIDException, TooManyFilesException, TSampleNotPresentException {
     if (scanState.finished)
       return null;
 
-    OpTimer opTimer = new OpTimer(log, Level.TRACE);
+    OpTimer timer = null;
 
     final TInfo tinfo = Tracer.traceInfo();
     final HostAndPort parsedLocation = HostAndPort.fromString(loc.tablet_location);
@@ -403,13 +417,19 @@
         String msg = "Starting scan tserver=" + loc.tablet_location + " tablet=" + loc.tablet_extent + " range=" + scanState.range + " ssil="
             + scanState.serverSideIteratorList + " ssio=" + scanState.serverSideIteratorOptions;
         Thread.currentThread().setName(msg);
-        opTimer.start(msg);
+
+        if (log.isTraceEnabled()) {
+          log.trace("tid={} {}", Thread.currentThread().getId(), msg);
+          timer = new OpTimer().start();
+        }
 
         TabletType ttype = TabletType.type(loc.tablet_extent);
         boolean waitForWrites = !serversWaitedForWrites.get(ttype).contains(loc.tablet_location);
+
         InitialScan is = client.startScan(tinfo, scanState.context.rpcCreds(), loc.tablet_extent.toThrift(), scanState.range.toThrift(),
             Translator.translate(scanState.columns, Translators.CT), scanState.size, scanState.serverSideIteratorList, scanState.serverSideIteratorOptions,
-            scanState.authorizations.getAuthorizationsBB(), waitForWrites, scanState.isolated, scanState.readaheadThreshold);
+            scanState.authorizations.getAuthorizationsBB(), waitForWrites, scanState.isolated, scanState.readaheadThreshold,
+            SamplerConfigurationImpl.toThrift(scanState.samplerConfig), scanState.batchTimeOut, scanState.classLoaderContext);
         if (waitForWrites)
           serversWaitedForWrites.get(ttype).add(loc.tablet_location);
 
@@ -421,10 +441,14 @@
           client.closeScan(tinfo, is.scanID);
 
       } else {
-        // log.debug("Calling continue scan : "+scanState.range+"  loc = "+loc);
+        // log.debug("Calling continue scan : "+scanState.range+" loc = "+loc);
         String msg = "Continuing scan tserver=" + loc.tablet_location + " scanid=" + scanState.scanID;
         Thread.currentThread().setName(msg);
-        opTimer.start(msg);
+
+        if (log.isTraceEnabled()) {
+          log.trace("tid={} {}", Thread.currentThread().getId(), msg);
+          timer = new OpTimer().start();
+        }
 
         sr = client.continueScan(tinfo, scanState.scanID);
         if (!sr.more) {
@@ -437,17 +461,36 @@
         // log.debug("No more : tab end row = "+loc.tablet_extent.getEndRow()+" range = "+scanState.range);
         if (loc.tablet_extent.getEndRow() == null) {
           scanState.finished = true;
-          opTimer.stop("Completely finished scan in %DURATION% #results=" + sr.results.size());
+
+          if (timer != null) {
+            timer.stop();
+            log.trace("tid={} Completely finished scan in {} #results={}", Thread.currentThread().getId(),
+                String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)), sr.results.size());
+          }
+
         } else if (scanState.range.getEndKey() == null || !scanState.range.afterEndKey(new Key(loc.tablet_extent.getEndRow()).followingKey(PartialKey.ROW))) {
           scanState.startRow = loc.tablet_extent.getEndRow();
           scanState.skipStartRow = true;
-          opTimer.stop("Finished scanning tablet in %DURATION% #results=" + sr.results.size());
+
+          if (timer != null) {
+            timer.stop();
+            log.trace("tid={} Finished scanning tablet in {} #results={}", Thread.currentThread().getId(),
+                String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)), sr.results.size());
+          }
         } else {
           scanState.finished = true;
-          opTimer.stop("Completely finished scan in %DURATION% #results=" + sr.results.size());
+          if (timer != null) {
+            timer.stop();
+            log.trace("tid={} Completely finished in {} #results={}", Thread.currentThread().getId(),
+                String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)), sr.results.size());
+          }
         }
       } else {
-        opTimer.stop("Finished scan in %DURATION% #results=" + sr.results.size() + " scanid=" + scanState.scanID);
+        if (timer != null) {
+          timer.stop();
+          log.trace("tid={} Finished scan in {} #results={} scanid={}", Thread.currentThread().getId(),
+              String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)), sr.results.size(), scanState.scanID);
+        }
       }
 
       Key.decompress(sr.results);
@@ -455,7 +498,7 @@
       if (sr.results.size() > 0 && !scanState.finished)
         scanState.range = new Range(new Key(sr.results.get(sr.results.size() - 1).key), false, scanState.range.getEndKey(), scanState.range.isEndKeyInclusive());
 
-      List<KeyValue> results = new ArrayList<KeyValue>(sr.results.size());
+      List<KeyValue> results = new ArrayList<>(sr.results.size());
       for (TKeyValue tkv : sr.results)
         results.add(new KeyValue(new Key(tkv.key), tkv.value));
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java
index ba62cec..682ecbd 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java
@@ -48,10 +48,10 @@
   private static final Random random = new Random();
   private long killTime = 1000 * 3;
 
-  private Map<ThriftTransportKey,List<CachedConnection>> cache = new HashMap<ThriftTransportKey,List<CachedConnection>>();
-  private Map<ThriftTransportKey,Long> errorCount = new HashMap<ThriftTransportKey,Long>();
-  private Map<ThriftTransportKey,Long> errorTime = new HashMap<ThriftTransportKey,Long>();
-  private Set<ThriftTransportKey> serversWarnedAbout = new HashSet<ThriftTransportKey>();
+  private Map<ThriftTransportKey,List<CachedConnection>> cache = new HashMap<>();
+  private Map<ThriftTransportKey,Long> errorCount = new HashMap<>();
+  private Map<ThriftTransportKey,Long> errorTime = new HashMap<>();
+  private Set<ThriftTransportKey> serversWarnedAbout = new HashSet<>();
 
   private CountDownLatch closerExitLatch;
 
@@ -95,7 +95,7 @@
     private void closeConnections() {
       while (true) {
 
-        ArrayList<CachedConnection> connectionsToClose = new ArrayList<CachedConnection>();
+        ArrayList<CachedConnection> connectionsToClose = new ArrayList<>();
 
         synchronized (pool) {
           for (List<CachedConnection> ccl : pool.getCache().values()) {
@@ -394,7 +394,7 @@
       List<CachedConnection> ccl = getCache().get(cacheKey);
 
       if (ccl == null) {
-        ccl = new LinkedList<CachedConnection>();
+        ccl = new LinkedList<>();
         getCache().put(cacheKey, ccl);
       }
 
@@ -413,10 +413,10 @@
   @VisibleForTesting
   public Pair<String,TTransport> getAnyTransport(List<ThriftTransportKey> servers, boolean preferCachedConnection) throws TTransportException {
 
-    servers = new ArrayList<ThriftTransportKey>(servers);
+    servers = new ArrayList<>(servers);
 
     if (preferCachedConnection) {
-      HashSet<ThriftTransportKey> serversSet = new HashSet<ThriftTransportKey>(servers);
+      HashSet<ThriftTransportKey> serversSet = new HashSet<>(servers);
 
       synchronized (this) {
 
@@ -424,7 +424,7 @@
         serversSet.retainAll(getCache().keySet());
 
         if (serversSet.size() > 0) {
-          ArrayList<ThriftTransportKey> cachedServers = new ArrayList<ThriftTransportKey>(serversSet);
+          ArrayList<ThriftTransportKey> cachedServers = new ArrayList<>(serversSet);
           Collections.shuffle(cachedServers, random);
 
           for (ThriftTransportKey ttk : cachedServers) {
@@ -463,7 +463,7 @@
       }
 
       try {
-        return new Pair<String,TTransport>(ttk.getServer().toString(), createNewTransport(ttk));
+        return new Pair<>(ttk.getServer().toString(), createNewTransport(ttk));
       } catch (TTransportException tte) {
         log.debug("Failed to connect to {}", servers.get(index), tte);
         servers.remove(index);
@@ -490,7 +490,7 @@
         List<CachedConnection> ccl = getCache().get(cacheKey);
 
         if (ccl == null) {
-          ccl = new LinkedList<CachedConnection>();
+          ccl = new LinkedList<>();
           getCache().put(cacheKey, ccl);
         }
 
@@ -511,7 +511,7 @@
     boolean existInCache = false;
     CachedTTransport ctsc = (CachedTTransport) tsc;
 
-    ArrayList<CachedConnection> closeList = new ArrayList<ThriftTransportPool.CachedConnection>();
+    ArrayList<CachedConnection> closeList = new ArrayList<>();
 
     synchronized (this) {
       List<CachedConnection> ccl = getCache().get(ctsc.getCacheKey());
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
index c0cb219..6e92b68 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/TimeoutTabletLocator.java
@@ -16,13 +16,8 @@
  */
 package org.apache.accumulo.core.client.impl;
 
-import java.util.Collection;
-import java.util.List;
-import java.util.Map;
-
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.TimedOutException;
 import org.apache.accumulo.core.data.Mutation;
@@ -30,12 +25,16 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.hadoop.io.Text;
 
-/**
- *
- */
-public class TimeoutTabletLocator extends TabletLocator {
+import java.util.List;
+import java.util.Map;
 
-  private TabletLocator locator;
+/**
+ * Throws a {@link TimedOutException} if the specified timeout duration elapses between two failed TabletLocator calls.
+ * <p>
+ * This class is safe to cache locally.
+ */
+public class TimeoutTabletLocator extends SyncingTabletLocator {
+
   private long timeout;
   private Long firstFailTime = null;
 
@@ -51,17 +50,16 @@
     firstFailTime = null;
   }
 
-  public TimeoutTabletLocator(TabletLocator locator, long timeout) {
-    this.locator = locator;
+  public TimeoutTabletLocator(long timeout, final ClientContext context, final String table) {
+    super(context, table);
     this.timeout = timeout;
   }
 
   @Override
   public TabletLocation locateTablet(ClientContext context, Text row, boolean skipRow, boolean retry) throws AccumuloException, AccumuloSecurityException,
       TableNotFoundException {
-
     try {
-      TabletLocation ret = locator.locateTablet(context, row, skipRow, retry);
+      TabletLocation ret = super.locateTablet(context, row, skipRow, retry);
 
       if (ret == null)
         failed();
@@ -79,7 +77,7 @@
   public <T extends Mutation> void binMutations(ClientContext context, List<T> mutations, Map<String,TabletServerMutations<T>> binnedMutations, List<T> failures)
       throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     try {
-      locator.binMutations(context, mutations, binnedMutations, failures);
+      super.binMutations(context, mutations, binnedMutations, failures);
 
       if (failures.size() == mutations.size())
         failed();
@@ -95,9 +93,8 @@
   @Override
   public List<Range> binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges) throws AccumuloException,
       AccumuloSecurityException, TableNotFoundException {
-
     try {
-      List<Range> ret = locator.binRanges(context, ranges, binnedRanges);
+      List<Range> ret = super.binRanges(context, ranges, binnedRanges);
 
       if (ranges.size() == ret.size())
         failed();
@@ -110,25 +107,4 @@
       throw ae;
     }
   }
-
-  @Override
-  public void invalidateCache(KeyExtent failedExtent) {
-    locator.invalidateCache(failedExtent);
-  }
-
-  @Override
-  public void invalidateCache(Collection<KeyExtent> keySet) {
-    locator.invalidateCache(keySet);
-  }
-
-  @Override
-  public void invalidateCache() {
-    locator.invalidateCache();
-  }
-
-  @Override
-  public void invalidateCache(Instance instance, String server) {
-    locator.invalidateCache(instance, server);
-  }
-
 }
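Reviewer note on `TimeoutTabletLocator`: the rewrite keeps the `failed()`/`succeeded()` bookkeeping referenced near the top of the file; the first failure records a timestamp, a later failure past the timeout raises `TimedOutException`, and any success resets the clock. A standalone sketch of that bookkeeping (class and exception names are illustrative stand-ins, not the Accumulo API):

```java
// Minimal model of the failed()/succeeded() timeout bookkeeping: consecutive
// failures only become fatal once they have spanned the configured timeout.
public class FailureTimeout {
  private final long timeoutMillis;
  private Long firstFailTime = null;    // mirrors TimeoutTabletLocator.firstFailTime

  public FailureTimeout(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  /** Called when a lookup fails; throws once failures have persisted past the timeout. */
  public void failed(long nowMillis) {
    if (firstFailTime == null) {
      firstFailTime = nowMillis;                  // start the clock on the first failure
    } else if (nowMillis - firstFailTime > timeoutMillis) {
      throw new RuntimeException("timed out");    // stands in for TimedOutException
    }
  }

  /** Called on success; resets the failure clock. */
  public void succeeded() {
    firstFailTime = null;
  }
}
```

Note that a single slow failure never times out on its own; the exception fires only on a subsequent failure after the window has elapsed, which matches the retry loops in `ThriftScanner` that sleep between attempts.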
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/Translator.java b/core/src/main/java/org/apache/accumulo/core/client/impl/Translator.java
index e5141cf..00c43ac 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/Translator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/Translator.java
@@ -111,7 +111,7 @@
   }
 
   public static <IKT,OKT,T> Map<OKT,T> translate(Map<IKT,T> input, Translator<IKT,OKT> keyTranslator) {
-    HashMap<OKT,T> output = new HashMap<OKT,T>();
+    HashMap<OKT,T> output = new HashMap<>();
 
     for (Entry<IKT,T> entry : input.entrySet())
       output.put(keyTranslator.translate(entry.getKey()), entry.getValue());
@@ -120,7 +120,7 @@
   }
 
   public static <IKT,OKT,IVT,OVT> Map<OKT,OVT> translate(Map<IKT,IVT> input, Translator<IKT,OKT> keyTranslator, Translator<IVT,OVT> valueTranslator) {
-    HashMap<OKT,OVT> output = new HashMap<OKT,OVT>();
+    HashMap<OKT,OVT> output = new HashMap<>();
 
     for (Entry<IKT,IVT> entry : input.entrySet())
       output.put(keyTranslator.translate(entry.getKey()), valueTranslator.translate(entry.getValue()));
@@ -129,7 +129,7 @@
   }
 
   public static <IT,OT> List<OT> translate(Collection<IT> input, Translator<IT,OT> translator) {
-    ArrayList<OT> output = new ArrayList<OT>(input.size());
+    ArrayList<OT> output = new ArrayList<>(input.size());
 
     for (IT in : input)
       output.add(translator.translate(in));
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/Writer.java b/core/src/main/java/org/apache/accumulo/core/client/impl/Writer.java
index cf2d642..90691ef 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/Writer.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/Writer.java
@@ -17,8 +17,11 @@
 package org.apache.accumulo.core.client.impl;
 
 import static com.google.common.base.Preconditions.checkArgument;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
+import java.util.concurrent.TimeUnit;
+
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.TableNotFoundException;
@@ -32,7 +35,6 @@
 import org.apache.accumulo.core.tabletserver.thrift.TDurability;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
 import org.apache.thrift.TServiceClient;
@@ -46,17 +48,13 @@
   private static final Logger log = LoggerFactory.getLogger(Writer.class);
 
   private ClientContext context;
-  private Text table;
+  private String tableId;
 
-  public Writer(ClientContext context, Text table) {
+  public Writer(ClientContext context, String tableId) {
     checkArgument(context != null, "context is null");
-    checkArgument(table != null, "table is null");
+    checkArgument(tableId != null, "tableId is null");
     this.context = context;
-    this.table = table;
-  }
-
-  public Writer(ClientContext context, String table) {
-    this(context, new Text(table));
+    this.tableId = tableId;
   }
 
   private static void updateServer(ClientContext context, Mutation m, KeyExtent extent, HostAndPort server) throws TException, NotServingTabletException,
@@ -85,11 +83,11 @@
       throw new IllegalArgumentException("Can not add empty mutations");
 
     while (true) {
-      TabletLocation tabLoc = TabletLocator.getLocator(context, table).locateTablet(context, new Text(m.getRow()), false, true);
+      TabletLocation tabLoc = TabletLocator.getLocator(context, tableId).locateTablet(context, new Text(m.getRow()), false, true);
 
       if (tabLoc == null) {
         log.trace("No tablet location found for row " + new String(m.getRow(), UTF_8));
-        UtilWaitThread.sleep(500);
+        sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
         continue;
       }
 
@@ -99,18 +97,18 @@
         return;
       } catch (NotServingTabletException e) {
         log.trace("Not serving tablet, server = " + parsedLocation);
-        TabletLocator.getLocator(context, table).invalidateCache(tabLoc.tablet_extent);
+        TabletLocator.getLocator(context, tableId).invalidateCache(tabLoc.tablet_extent);
       } catch (ConstraintViolationException cve) {
         log.error("error sending update to " + parsedLocation + ": " + cve);
         // probably do not need to invalidate cache, but it does not hurt
-        TabletLocator.getLocator(context, table).invalidateCache(tabLoc.tablet_extent);
+        TabletLocator.getLocator(context, tableId).invalidateCache(tabLoc.tablet_extent);
         throw cve;
       } catch (TException e) {
         log.error("error sending update to " + parsedLocation + ": " + e);
-        TabletLocator.getLocator(context, table).invalidateCache(tabLoc.tablet_extent);
+        TabletLocator.getLocator(context, tableId).invalidateCache(tabLoc.tablet_extent);
       }
 
-      UtilWaitThread.sleep(500);
+      sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
     }
 
   }
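Reviewer note on `Writer.java`: swapping `UtilWaitThread.sleep(500)` for Guava's `sleepUninterruptibly(500, TimeUnit.MILLISECONDS)` changes interrupt handling; the sleep now completes its full duration even if the thread is interrupted, and the interrupt flag is re-asserted afterwards so callers still observe it. A self-contained sketch of that behavior using only the JDK (this mirrors the documented Guava semantics, not its exact source):

```java
import java.util.concurrent.TimeUnit;

// Sketch of sleepUninterruptibly: keep sleeping through interrupts until the
// full duration has elapsed, then restore the thread's interrupt status.
public class UninterruptibleSleep {
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          remainingNanos = 0;                        // slept the full duration
        } catch (InterruptedException e) {
          interrupted = true;                        // remember, keep sleeping
          remainingNanos = end - System.nanoTime();  // sleep only the remainder
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();          // restore the interrupt flag
      }
    }
  }
}
```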
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ClientService.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ClientService.java
index 40b28fd..0df0107 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ClientService.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ClientService.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ClientService {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ClientService {
 
   public interface Iface {
 
@@ -4894,7 +4897,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -5205,7 +5210,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -5494,7 +5506,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -5805,7 +5819,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -6094,7 +6115,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -6405,7 +6428,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -7008,7 +7038,7 @@
         return getCredentials();
 
       case TID:
-        return Long.valueOf(getTid());
+        return getTid();
 
       case TABLE_ID:
         return getTableId();
@@ -7020,7 +7050,7 @@
         return getErrorDir();
 
       case SET_TIME:
-        return Boolean.valueOf(isSetTime());
+        return isSetTime();
 
       }
       throw new IllegalStateException();
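The `Long.valueOf(getTid())` → `getTid()` change (and the `Boolean.valueOf`/`Byte.valueOf` twins later in this diff) leans on autoboxing: returning a primitive from a method whose declared return type is `Object` boxes it automatically, so the explicit `valueOf` wrapper was redundant. A small sketch of why the two forms are equivalent (names hypothetical):

```java
public class Autobox {
  private long tid = 42L;

  public long getTid() { return tid; }

  // Declared return type Object, as in the generated getFieldValue():
  // the primitive long is auto-boxed to a Long on return.
  public Object getFieldValue() {
    return getTid(); // equivalent to: return Long.valueOf(getTid());
  }
}
```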
@@ -7132,7 +7162,44 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tid = true;
+      list.add(present_tid);
+      if (present_tid)
+        list.add(tid);
+
+      boolean present_tableId = true && (isSetTableId());
+      list.add(present_tableId);
+      if (present_tableId)
+        list.add(tableId);
+
+      boolean present_files = true && (isSetFiles());
+      list.add(present_files);
+      if (present_files)
+        list.add(files);
+
+      boolean present_errorDir = true && (isSetErrorDir());
+      list.add(present_errorDir);
+      if (present_errorDir)
+        list.add(errorDir);
+
+      boolean present_setTime = true;
+      list.add(present_setTime);
+      if (present_setTime)
+        list.add(setTime);
+
+      return list.hashCode();
     }
 
     @Override
@@ -7370,11 +7437,11 @@
                 {
                   org.apache.thrift.protocol.TList _list8 = iprot.readListBegin();
                   struct.files = new ArrayList<String>(_list8.size);
-                  for (int _i9 = 0; _i9 < _list8.size; ++_i9)
+                  String _elem9;
+                  for (int _i10 = 0; _i10 < _list8.size; ++_i10)
                   {
-                    String _elem10;
-                    _elem10 = iprot.readString();
-                    struct.files.add(_elem10);
+                    _elem9 = iprot.readString();
+                    struct.files.add(_elem9);
                   }
                   iprot.readListEnd();
                 }
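The deserialization hunks here (and the many like them below) only restructure the read loop: the element variable is hoisted out of the loop body and the generated index/element counters are renumbered, so per-iteration behavior is unchanged. The shape, minus the Thrift protocol plumbing (the `readString` wire source is simulated with an iterator):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ReadLoop {
  // Simulated stand-in for repeated iprot.readString() calls.
  public static List<String> readList(Iterator<String> wire, int size) {
    List<String> files = new ArrayList<String>(size);
    String _elem9;                     // hoisted, as in the new generated code
    for (int _i10 = 0; _i10 < size; ++_i10) {
      _elem9 = wire.next();            // was: declared fresh each iteration
      files.add(_elem9);
    }
    return files;
  }
}
```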
@@ -7547,11 +7614,11 @@
           {
             org.apache.thrift.protocol.TList _list13 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.files = new ArrayList<String>(_list13.size);
-            for (int _i14 = 0; _i14 < _list13.size; ++_i14)
+            String _elem14;
+            for (int _i15 = 0; _i15 < _list13.size; ++_i15)
             {
-              String _elem15;
-              _elem15 = iprot.readString();
-              struct.files.add(_elem15);
+              _elem14 = iprot.readString();
+              struct.files.add(_elem14);
             }
           }
           struct.setFilesIsSet(true);
@@ -7899,7 +7966,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -8031,11 +8115,11 @@
                 {
                   org.apache.thrift.protocol.TList _list16 = iprot.readListBegin();
                   struct.success = new ArrayList<String>(_list16.size);
-                  for (int _i17 = 0; _i17 < _list16.size; ++_i17)
+                  String _elem17;
+                  for (int _i18 = 0; _i18 < _list16.size; ++_i18)
                   {
-                    String _elem18;
-                    _elem18 = iprot.readString();
-                    struct.success.add(_elem18);
+                    _elem17 = iprot.readString();
+                    struct.success.add(_elem17);
                   }
                   iprot.readListEnd();
                 }
@@ -8152,11 +8236,11 @@
           {
             org.apache.thrift.protocol.TList _list21 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new ArrayList<String>(_list21.size);
-            for (int _i22 = 0; _i22 < _list21.size; ++_i22)
+            String _elem22;
+            for (int _i23 = 0; _i23 < _list21.size; ++_i23)
             {
-              String _elem23;
-              _elem23 = iprot.readString();
-              struct.success.add(_elem23);
+              _elem22 = iprot.readString();
+              struct.success.add(_elem22);
             }
           }
           struct.setSuccessIsSet(true);
@@ -8375,7 +8459,7 @@
         return getTinfo();
 
       case TID:
-        return Long.valueOf(getTid());
+        return getTid();
 
       }
       throw new IllegalStateException();
@@ -8432,7 +8516,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_tid = true;
+      list.add(present_tid);
+      if (present_tid)
+        list.add(tid);
+
+      return list.hashCode();
     }
 
     @Override
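For required primitive fields like `tid`, the generated pattern hard-codes `present_tid = true` and relies on auto-boxing when the primitive is added to the `List<Object>`, so the field always participates in the hash. Sketch with a hypothetical struct:

```java
import java.util.ArrayList;
import java.util.List;

public class RequiredFieldHash {
  private long tid;

  public RequiredFieldHash(long tid) { this.tid = tid; }

  @Override
  public int hashCode() {
    List<Object> list = new ArrayList<Object>();
    boolean present_tid = true;   // required field: always hashed
    list.add(present_tid);
    if (present_tid)
      list.add(tid);              // long auto-boxes to Long
    return list.hashCode();
  }
}
```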
@@ -8784,7 +8880,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       }
       throw new IllegalStateException();
@@ -8830,7 +8926,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -9182,7 +9285,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -9541,7 +9651,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -9973,7 +10090,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tables = true && (isSetTables());
+      list.add(present_tables);
+      if (present_tables)
+        list.add(tables);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -10090,11 +10219,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set24 = iprot.readSetBegin();
                   struct.tables = new HashSet<String>(2*_set24.size);
-                  for (int _i25 = 0; _i25 < _set24.size; ++_i25)
+                  String _elem25;
+                  for (int _i26 = 0; _i26 < _set24.size; ++_i26)
                   {
-                    String _elem26;
-                    _elem26 = iprot.readString();
-                    struct.tables.add(_elem26);
+                    _elem25 = iprot.readString();
+                    struct.tables.add(_elem25);
                   }
                   iprot.readSetEnd();
                 }
@@ -10191,11 +10320,11 @@
           {
             org.apache.thrift.protocol.TSet _set29 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.tables = new HashSet<String>(2*_set29.size);
-            for (int _i30 = 0; _i30 < _set29.size; ++_i30)
+            String _elem30;
+            for (int _i31 = 0; _i31 < _set29.size; ++_i31)
             {
-              String _elem31;
-              _elem31 = iprot.readString();
-              struct.tables.add(_elem31);
+              _elem30 = iprot.readString();
+              struct.tables.add(_elem30);
             }
           }
           struct.setTablesIsSet(true);
@@ -10543,7 +10672,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_toe = true && (isSetToe());
+      list.add(present_toe);
+      if (present_toe)
+        list.add(toe);
+
+      return list.hashCode();
     }
 
     @Override
@@ -10675,12 +10821,12 @@
                 {
                   org.apache.thrift.protocol.TList _list32 = iprot.readListBegin();
                   struct.success = new ArrayList<TDiskUsage>(_list32.size);
-                  for (int _i33 = 0; _i33 < _list32.size; ++_i33)
+                  TDiskUsage _elem33;
+                  for (int _i34 = 0; _i34 < _list32.size; ++_i34)
                   {
-                    TDiskUsage _elem34;
-                    _elem34 = new TDiskUsage();
-                    _elem34.read(iprot);
-                    struct.success.add(_elem34);
+                    _elem33 = new TDiskUsage();
+                    _elem33.read(iprot);
+                    struct.success.add(_elem33);
                   }
                   iprot.readListEnd();
                 }
@@ -10797,12 +10943,12 @@
           {
             org.apache.thrift.protocol.TList _list37 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
             struct.success = new ArrayList<TDiskUsage>(_list37.size);
-            for (int _i38 = 0; _i38 < _list37.size; ++_i38)
+            TDiskUsage _elem38;
+            for (int _i39 = 0; _i39 < _list37.size; ++_i39)
             {
-              TDiskUsage _elem39;
-              _elem39 = new TDiskUsage();
-              _elem39.read(iprot);
-              struct.success.add(_elem39);
+              _elem38 = new TDiskUsage();
+              _elem38.read(iprot);
+              struct.success.add(_elem38);
             }
           }
           struct.setSuccessIsSet(true);
@@ -11076,7 +11222,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -11557,7 +11715,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -11671,11 +11841,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set40 = iprot.readSetBegin();
                   struct.success = new HashSet<String>(2*_set40.size);
-                  for (int _i41 = 0; _i41 < _set40.size; ++_i41)
+                  String _elem41;
+                  for (int _i42 = 0; _i42 < _set40.size; ++_i42)
                   {
-                    String _elem42;
-                    _elem42 = iprot.readString();
-                    struct.success.add(_elem42);
+                    _elem41 = iprot.readString();
+                    struct.success.add(_elem41);
                   }
                   iprot.readSetEnd();
                 }
@@ -11772,11 +11942,11 @@
           {
             org.apache.thrift.protocol.TSet _set45 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashSet<String>(2*_set45.size);
-            for (int _i46 = 0; _i46 < _set45.size; ++_i46)
+            String _elem46;
+            for (int _i47 = 0; _i47 < _set45.size; ++_i47)
             {
-              String _elem47;
-              _elem47 = iprot.readString();
-              struct.success.add(_elem47);
+              _elem46 = iprot.readString();
+              struct.success.add(_elem46);
             }
           }
           struct.setSuccessIsSet(true);
@@ -11906,7 +12076,7 @@
       this.tinfo = tinfo;
       this.credentials = credentials;
       this.principal = principal;
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     /**
@@ -11924,7 +12094,6 @@
       }
       if (other.isSetPassword()) {
         this.password = org.apache.thrift.TBaseHelper.copyBinary(other.password);
-;
       }
     }
 
@@ -12018,16 +12187,16 @@
     }
 
     public ByteBuffer bufferForPassword() {
-      return password;
+      return org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     public createLocalUser_args setPassword(byte[] password) {
-      setPassword(password == null ? (ByteBuffer)null : ByteBuffer.wrap(password));
+      this.password = password == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(password, password.length));
       return this;
     }
 
     public createLocalUser_args setPassword(ByteBuffer password) {
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
       return this;
     }
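The password hunks make `createLocalUser_args` (and, below, `changeLocalUserPassword_args`) defensive about the caller's buffer: the constructor, both `setPassword` overloads, and `bufferForPassword()` now copy via `TBaseHelper.copyBinary` / `Arrays.copyOf` instead of aliasing the caller's `byte[]`/`ByteBuffer`. The same idea without the Thrift helper (class and field names illustrative):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class DefensiveCopy {
  private ByteBuffer password;

  public DefensiveCopy setPassword(byte[] password) {
    // Copy the array before wrapping so later caller mutations
    // cannot leak into this object's state.
    this.password = password == null
        ? null
        : ByteBuffer.wrap(Arrays.copyOf(password, password.length));
    return this;
  }

  public ByteBuffer bufferForPassword() {
    // Hand out a fresh copy rather than the internal buffer.
    if (password == null) return null;
    ByteBuffer copy = ByteBuffer.allocate(password.remaining());
    copy.put(password.duplicate());
    copy.flip();
    return copy;
  }
}
```

The old accessors returned the live internal buffer, so a caller reusing its `byte[]` (common for passwords that are zeroed after use) could silently corrupt the request.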
 
@@ -12174,7 +12343,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_password = true && (isSetPassword());
+      list.add(present_password);
+      if (present_password)
+        list.add(password);
+
+      return list.hashCode();
     }
 
     @Override
@@ -12661,7 +12852,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13135,7 +13333,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13581,7 +13796,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13857,7 +14079,7 @@
       this.tinfo = tinfo;
       this.credentials = credentials;
       this.principal = principal;
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     /**
@@ -13875,7 +14097,6 @@
       }
       if (other.isSetPassword()) {
         this.password = org.apache.thrift.TBaseHelper.copyBinary(other.password);
-;
       }
     }
 
@@ -13969,16 +14190,16 @@
     }
 
     public ByteBuffer bufferForPassword() {
-      return password;
+      return org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     public changeLocalUserPassword_args setPassword(byte[] password) {
-      setPassword(password == null ? (ByteBuffer)null : ByteBuffer.wrap(password));
+      this.password = password == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(password, password.length));
       return this;
     }
 
     public changeLocalUserPassword_args setPassword(ByteBuffer password) {
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
       return this;
     }
 
@@ -14125,7 +14346,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_password = true && (isSetPassword());
+      list.add(present_password);
+      if (present_password)
+        list.add(password);
+
+      return list.hashCode();
     }
 
     @Override
@@ -14612,7 +14855,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -15027,7 +15277,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -15433,7 +15695,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case SEC:
         return getSec();
@@ -15493,7 +15755,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16006,7 +16280,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_toAuth = true && (isSetToAuth());
+      list.add(present_toAuth);
+      if (present_toAuth)
+        list.add(toAuth);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16458,7 +16749,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case SEC:
         return getSec();
@@ -16518,7 +16809,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17107,7 +17410,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_authorizations = true && (isSetAuthorizations());
+      list.add(present_authorizations);
+      if (present_authorizations)
+        list.add(authorizations);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17206,7 +17531,7 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
       sb.append(")");
@@ -17289,11 +17614,11 @@
                 {
                   org.apache.thrift.protocol.TList _list48 = iprot.readListBegin();
                   struct.authorizations = new ArrayList<ByteBuffer>(_list48.size);
-                  for (int _i49 = 0; _i49 < _list48.size; ++_i49)
+                  ByteBuffer _elem49;
+                  for (int _i50 = 0; _i50 < _list48.size; ++_i50)
                   {
-                    ByteBuffer _elem50;
-                    _elem50 = iprot.readBinary();
-                    struct.authorizations.add(_elem50);
+                    _elem49 = iprot.readBinary();
+                    struct.authorizations.add(_elem49);
                   }
                   iprot.readListEnd();
                 }
@@ -17417,11 +17742,11 @@
           {
             org.apache.thrift.protocol.TList _list53 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.authorizations = new ArrayList<ByteBuffer>(_list53.size);
-            for (int _i54 = 0; _i54 < _list53.size; ++_i54)
+            ByteBuffer _elem54;
+            for (int _i55 = 0; _i55 < _list53.size; ++_i55)
             {
-              ByteBuffer _elem55;
-              _elem55 = iprot.readBinary();
-              struct.authorizations.add(_elem55);
+              _elem54 = iprot.readBinary();
+              struct.authorizations.add(_elem54);
             }
           }
           struct.setAuthorizationsIsSet(true);
@@ -17626,7 +17951,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -18100,7 +18432,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      return list.hashCode();
     }
 
     @Override
@@ -18622,7 +18971,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -18677,7 +19038,7 @@
       if (this.success == null) {
         sb.append("null");
       } else {
-        sb.append(this.success);
+        org.apache.thrift.TBaseHelper.toString(this.success, sb);
       }
       first = false;
       if (!first) sb.append(", ");
@@ -18736,11 +19097,11 @@
                 {
                   org.apache.thrift.protocol.TList _list56 = iprot.readListBegin();
                   struct.success = new ArrayList<ByteBuffer>(_list56.size);
-                  for (int _i57 = 0; _i57 < _list56.size; ++_i57)
+                  ByteBuffer _elem57;
+                  for (int _i58 = 0; _i58 < _list56.size; ++_i58)
                   {
-                    ByteBuffer _elem58;
-                    _elem58 = iprot.readBinary();
-                    struct.success.add(_elem58);
+                    _elem57 = iprot.readBinary();
+                    struct.success.add(_elem57);
                   }
                   iprot.readListEnd();
                 }
@@ -18837,11 +19198,11 @@
           {
             org.apache.thrift.protocol.TList _list61 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new ArrayList<ByteBuffer>(_list61.size);
-            for (int _i62 = 0; _i62 < _list61.size; ++_i62)
+            ByteBuffer _elem62;
+            for (int _i63 = 0; _i63 < _list61.size; ++_i63)
             {
-              ByteBuffer _elem63;
-              _elem63 = iprot.readBinary();
-              struct.success.add(_elem63);
+              _elem62 = iprot.readBinary();
+              struct.success.add(_elem62);
             }
           }
           struct.setSuccessIsSet(true);
@@ -19151,7 +19512,7 @@
         return getPrincipal();
 
       case SYS_PERM:
-        return Byte.valueOf(getSysPerm());
+        return getSysPerm();
 
       }
       throw new IllegalStateException();
@@ -19230,7 +19591,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_sysPerm = true;
+      list.add(present_sysPerm);
+      if (present_sysPerm)
+        list.add(sysPerm);
+
+      return list.hashCode();
     }
 
     @Override
@@ -19714,7 +20097,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case SEC:
         return getSec();
@@ -19774,7 +20157,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -20317,7 +20712,7 @@
         return getTableName();
 
       case TBL_PERM:
-        return Byte.valueOf(getTblPerm());
+        return getTblPerm();
 
       }
       throw new IllegalStateException();
@@ -20407,7 +20802,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_tblPerm = true;
+      list.add(present_tblPerm);
+      if (present_tblPerm)
+        list.add(tblPerm);
+
+      return list.hashCode();
     }
 
     @Override
@@ -20977,7 +21399,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case SEC:
         return getSec();
@@ -21051,7 +21473,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -21637,7 +22076,7 @@
         return getNs();
 
       case TBL_NSPC_PERM:
-        return Byte.valueOf(getTblNspcPerm());
+        return getTblNspcPerm();
 
       }
       throw new IllegalStateException();
@@ -21727,7 +22166,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_ns = true && (isSetNs());
+      list.add(present_ns);
+      if (present_ns)
+        list.add(ns);
+
+      boolean present_tblNspcPerm = true;
+      list.add(present_tblNspcPerm);
+      if (present_tblNspcPerm)
+        list.add(tblNspcPerm);
+
+      return list.hashCode();
     }
 
     @Override
@@ -22297,7 +22763,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case SEC:
         return getSec();
@@ -22371,7 +22837,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -22909,7 +23392,7 @@
         return getPrincipal();
 
       case PERMISSION:
-        return Byte.valueOf(getPermission());
+        return getPermission();
 
       }
       throw new IllegalStateException();
@@ -22988,7 +23471,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_permission = true;
+      list.add(present_permission);
+      if (present_permission)
+        list.add(permission);
+
+      return list.hashCode();
     }
 
     @Override
@@ -23471,7 +23976,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -23927,7 +24439,7 @@
         return getPrincipal();
 
       case PERMISSION:
-        return Byte.valueOf(getPermission());
+        return getPermission();
 
       }
       throw new IllegalStateException();
@@ -24006,7 +24518,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_permission = true;
+      list.add(present_permission);
+      if (present_permission)
+        list.add(permission);
+
+      return list.hashCode();
     }
 
     @Override
@@ -24489,7 +25023,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -24993,7 +25534,7 @@
         return getTableName();
 
       case PERMISSION:
-        return Byte.valueOf(getPermission());
+        return getPermission();
 
       }
       throw new IllegalStateException();
@@ -25083,7 +25624,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_permission = true;
+      list.add(present_permission);
+      if (present_permission)
+        list.add(permission);
+
+      return list.hashCode();
     }
 
     @Override
@@ -25666,7 +26234,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -26213,7 +26793,7 @@
         return getTableName();
 
       case PERMISSION:
-        return Byte.valueOf(getPermission());
+        return getPermission();
 
       }
       throw new IllegalStateException();
@@ -26303,7 +26883,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_permission = true;
+      list.add(present_permission);
+      if (present_permission)
+        list.add(permission);
+
+      return list.hashCode();
     }
 
     @Override
@@ -26886,7 +27493,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -27433,7 +28052,7 @@
         return getNs();
 
       case PERMISSION:
-        return Byte.valueOf(getPermission());
+        return getPermission();
 
       }
       throw new IllegalStateException();
@@ -27523,7 +28142,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_ns = true && (isSetNs());
+      list.add(present_ns);
+      if (present_ns)
+        list.add(ns);
+
+      boolean present_permission = true;
+      list.add(present_permission);
+      if (present_permission)
+        list.add(permission);
+
+      return list.hashCode();
     }
 
     @Override
@@ -28106,7 +28752,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -28653,7 +29311,7 @@
         return getNs();
 
       case PERMISSION:
-        return Byte.valueOf(getPermission());
+        return getPermission();
 
       }
       throw new IllegalStateException();
@@ -28743,7 +29401,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_ns = true && (isSetNs());
+      list.add(present_ns);
+      if (present_ns)
+        list.add(ns);
+
+      boolean present_permission = true;
+      list.add(present_permission);
+      if (present_permission)
+        list.add(permission);
+
+      return list.hashCode();
     }
 
     @Override
@@ -29326,7 +30011,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -29859,7 +30556,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_type = true && (isSetType());
+      list.add(present_type);
+      if (present_type)
+        list.add(type.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -30012,7 +30726,7 @@
               break;
             case 1: // TYPE
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.type = ConfigurationType.findByValue(iprot.readI32());
+                struct.type = org.apache.accumulo.core.client.impl.thrift.ConfigurationType.findByValue(iprot.readI32());
                 struct.setTypeIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -30102,7 +30816,7 @@
           struct.setCredentialsIsSet(true);
         }
         if (incoming.get(2)) {
-          struct.type = ConfigurationType.findByValue(iprot.readI32());
+          struct.type = org.apache.accumulo.core.client.impl.thrift.ConfigurationType.findByValue(iprot.readI32());
           struct.setTypeIsSet(true);
         }
       }
@@ -30319,7 +31033,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -30415,13 +31136,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map64 = iprot.readMapBegin();
                   struct.success = new HashMap<String,String>(2*_map64.size);
-                  for (int _i65 = 0; _i65 < _map64.size; ++_i65)
+                  String _key65;
+                  String _val66;
+                  for (int _i67 = 0; _i67 < _map64.size; ++_i67)
                   {
-                    String _key66;
-                    String _val67;
-                    _key66 = iprot.readString();
-                    _val67 = iprot.readString();
-                    struct.success.put(_key66, _val67);
+                    _key65 = iprot.readString();
+                    _val66 = iprot.readString();
+                    struct.success.put(_key65, _val66);
                   }
                   iprot.readMapEnd();
                 }
@@ -30500,13 +31221,13 @@
           {
             org.apache.thrift.protocol.TMap _map70 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashMap<String,String>(2*_map70.size);
-            for (int _i71 = 0; _i71 < _map70.size; ++_i71)
+            String _key71;
+            String _val72;
+            for (int _i73 = 0; _i73 < _map70.size; ++_i73)
             {
-              String _key72;
-              String _val73;
-              _key72 = iprot.readString();
-              _val73 = iprot.readString();
-              struct.success.put(_key72, _val73);
+              _key71 = iprot.readString();
+              _val72 = iprot.readString();
+              struct.success.put(_key71, _val72);
             }
           }
           struct.setSuccessIsSet(true);
@@ -30829,7 +31550,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -31348,7 +32086,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -31462,13 +32212,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map74 = iprot.readMapBegin();
                   struct.success = new HashMap<String,String>(2*_map74.size);
-                  for (int _i75 = 0; _i75 < _map74.size; ++_i75)
+                  String _key75;
+                  String _val76;
+                  for (int _i77 = 0; _i77 < _map74.size; ++_i77)
                   {
-                    String _key76;
-                    String _val77;
-                    _key76 = iprot.readString();
-                    _val77 = iprot.readString();
-                    struct.success.put(_key76, _val77);
+                    _key75 = iprot.readString();
+                    _val76 = iprot.readString();
+                    struct.success.put(_key75, _val76);
                   }
                   iprot.readMapEnd();
                 }
@@ -31567,13 +32317,13 @@
           {
             org.apache.thrift.protocol.TMap _map80 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashMap<String,String>(2*_map80.size);
-            for (int _i81 = 0; _i81 < _map80.size; ++_i81)
+            String _key81;
+            String _val82;
+            for (int _i83 = 0; _i83 < _map80.size; ++_i83)
             {
-              String _key82;
-              String _val83;
-              _key82 = iprot.readString();
-              _val83 = iprot.readString();
-              struct.success.put(_key82, _val83);
+              _key81 = iprot.readString();
+              _val82 = iprot.readString();
+              struct.success.put(_key81, _val82);
             }
           }
           struct.setSuccessIsSet(true);
@@ -31901,7 +32651,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_ns = true && (isSetNs());
+      list.add(present_ns);
+      if (present_ns)
+        list.add(ns);
+
+      return list.hashCode();
     }
 
     @Override
@@ -32420,7 +33187,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -32534,13 +33313,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map84 = iprot.readMapBegin();
                   struct.success = new HashMap<String,String>(2*_map84.size);
-                  for (int _i85 = 0; _i85 < _map84.size; ++_i85)
+                  String _key85;
+                  String _val86;
+                  for (int _i87 = 0; _i87 < _map84.size; ++_i87)
                   {
-                    String _key86;
-                    String _val87;
-                    _key86 = iprot.readString();
-                    _val87 = iprot.readString();
-                    struct.success.put(_key86, _val87);
+                    _key85 = iprot.readString();
+                    _val86 = iprot.readString();
+                    struct.success.put(_key85, _val86);
                   }
                   iprot.readMapEnd();
                 }
@@ -32639,13 +33418,13 @@
           {
             org.apache.thrift.protocol.TMap _map90 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashMap<String,String>(2*_map90.size);
-            for (int _i91 = 0; _i91 < _map90.size; ++_i91)
+            String _key91;
+            String _val92;
+            for (int _i93 = 0; _i93 < _map90.size; ++_i93)
             {
-              String _key92;
-              String _val93;
-              _key92 = iprot.readString();
-              _val93 = iprot.readString();
-              struct.success.put(_key92, _val93);
+              _key91 = iprot.readString();
+              _val92 = iprot.readString();
+              struct.success.put(_key91, _val92);
             }
           }
           struct.setSuccessIsSet(true);
@@ -33032,7 +33811,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_className = true && (isSetClassName());
+      list.add(present_className);
+      if (present_className)
+        list.add(className);
+
+      boolean present_interfaceMatch = true && (isSetInterfaceMatch());
+      list.add(present_interfaceMatch);
+      if (present_interfaceMatch)
+        list.add(interfaceMatch);
+
+      return list.hashCode();
     }
 
     @Override
@@ -33475,7 +34276,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       }
       throw new IllegalStateException();
@@ -33521,7 +34322,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -34109,7 +34917,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableId = true && (isSetTableId());
+      list.add(present_tableId);
+      if (present_tableId)
+        list.add(tableId);
+
+      boolean present_className = true && (isSetClassName());
+      list.add(present_className);
+      if (present_className)
+        list.add(className);
+
+      boolean present_interfaceMatch = true && (isSetInterfaceMatch());
+      list.add(present_interfaceMatch);
+      if (present_interfaceMatch)
+        list.add(interfaceMatch);
+
+      return list.hashCode();
     }
 
     @Override
@@ -34683,7 +35518,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case SEC:
         return getSec();
@@ -34757,7 +35592,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -35431,7 +36283,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_namespaceId = true && (isSetNamespaceId());
+      list.add(present_namespaceId);
+      if (present_namespaceId)
+        list.add(namespaceId);
+
+      boolean present_className = true && (isSetClassName());
+      list.add(present_className);
+      if (present_className)
+        list.add(className);
+
+      boolean present_interfaceMatch = true && (isSetInterfaceMatch());
+      list.add(present_interfaceMatch);
+      if (present_interfaceMatch)
+        list.add(interfaceMatch);
+
+      return list.hashCode();
     }
 
     @Override
@@ -36005,7 +36884,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case SEC:
         return getSec();
@@ -36079,7 +36958,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ConfigurationType.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ConfigurationType.java
index 7399802..143393e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ConfigurationType.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ConfigurationType.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/SecurityErrorCode.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/SecurityErrorCode.java
index 754f24e..8bc1964 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/SecurityErrorCode.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/SecurityErrorCode.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TDiskUsage.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TDiskUsage.java
index 1ae011b..30dc624 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TDiskUsage.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TDiskUsage.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TDiskUsage implements org.apache.thrift.TBase<TDiskUsage, TDiskUsage._Fields>, java.io.Serializable, Cloneable, Comparable<TDiskUsage> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TDiskUsage implements org.apache.thrift.TBase<TDiskUsage, TDiskUsage._Fields>, java.io.Serializable, Cloneable, Comparable<TDiskUsage> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TDiskUsage");
 
   private static final org.apache.thrift.protocol.TField TABLES_FIELD_DESC = new org.apache.thrift.protocol.TField("tables", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -264,7 +267,7 @@
       return getTables();
 
     case USAGE:
-      return Long.valueOf(getUsage());
+      return getUsage();
 
     }
     throw new IllegalStateException();
@@ -321,7 +324,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_tables = true && (isSetTables());
+    list.add(present_tables);
+    if (present_tables)
+      list.add(tables);
+
+    boolean present_usage = true;
+    list.add(present_usage);
+    if (present_usage)
+      list.add(usage);
+
+    return list.hashCode();
   }
 
   @Override
@@ -433,11 +448,11 @@
               {
                 org.apache.thrift.protocol.TList _list0 = iprot.readListBegin();
                 struct.tables = new ArrayList<String>(_list0.size);
-                for (int _i1 = 0; _i1 < _list0.size; ++_i1)
+                String _elem1;
+                for (int _i2 = 0; _i2 < _list0.size; ++_i2)
                 {
-                  String _elem2;
-                  _elem2 = iprot.readString();
-                  struct.tables.add(_elem2);
+                  _elem1 = iprot.readString();
+                  struct.tables.add(_elem1);
                 }
                 iprot.readListEnd();
               }
@@ -531,11 +546,11 @@
         {
           org.apache.thrift.protocol.TList _list5 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.tables = new ArrayList<String>(_list5.size);
-          for (int _i6 = 0; _i6 < _list5.size; ++_i6)
+          String _elem6;
+          for (int _i7 = 0; _i7 < _list5.size; ++_i7)
           {
-            String _elem7;
-            _elem7 = iprot.readString();
-            struct.tables.add(_elem7);
+            _elem6 = iprot.readString();
+            struct.tables.add(_elem6);
           }
         }
         struct.setTablesIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
index 4730276..5b49952 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperation.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperationExceptionType.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperationExceptionType.java
index 48b5619..52ec63f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperationExceptionType.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/TableOperationExceptionType.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftSecurityException.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftSecurityException.java
index 9e94830..52bd98f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftSecurityException.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftSecurityException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ThriftSecurityException extends TException implements org.apache.thrift.TBase<ThriftSecurityException, ThriftSecurityException._Fields>, java.io.Serializable, Cloneable, Comparable<ThriftSecurityException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ThriftSecurityException extends TException implements org.apache.thrift.TBase<ThriftSecurityException, ThriftSecurityException._Fields>, java.io.Serializable, Cloneable, Comparable<ThriftSecurityException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ThriftSecurityException");
 
   private static final org.apache.thrift.protocol.TField USER_FIELD_DESC = new org.apache.thrift.protocol.TField("user", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -318,7 +321,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_user = true && (isSetUser());
+    list.add(present_user);
+    if (present_user)
+      list.add(user);
+
+    boolean present_code = true && (isSetCode());
+    list.add(present_code);
+    if (present_code)
+      list.add(code.getValue());
+
+    return list.hashCode();
   }
 
   @Override
@@ -437,7 +452,7 @@
             break;
           case 2: // CODE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.code = SecurityErrorCode.findByValue(iprot.readI32());
+              struct.code = org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.findByValue(iprot.readI32());
               struct.setCodeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -510,7 +525,7 @@
         struct.setUserIsSet(true);
       }
       if (incoming.get(1)) {
-        struct.code = SecurityErrorCode.findByValue(iprot.readI32());
+        struct.code = org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.findByValue(iprot.readI32());
         struct.setCodeIsSet(true);
       }
     }
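The hunk above shows the main behavioral change from regenerating with Thrift 0.9.3: the stubbed `return 0` in `hashCode()` is replaced by a hash built from a list of presence flags and field values. A stand-alone sketch of that generated pattern (the class and field names below are illustrative stand-ins, not the actual generated code):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the hashCode pattern emitted by Thrift 0.9.3: for each field,
// add a presence flag and, when present, the field value, then hash the list.
public class ThriftHashCodeDemo {
  private final String user;   // optional string field
  private final Integer code;  // stand-in for an i32-backed enum field

  public ThriftHashCodeDemo(String user, Integer code) {
    this.user = user;
    this.code = code;
  }

  public boolean isSetUser() { return user != null; }
  public boolean isSetCode() { return code != null; }

  @Override
  public int hashCode() {
    List<Object> list = new ArrayList<Object>();

    boolean present_user = isSetUser();
    list.add(present_user);
    if (present_user)
      list.add(user);

    boolean present_code = isSetCode();
    list.add(present_code);
    if (present_code)
      list.add(code);

    // List.hashCode() combines the elements deterministically, so equal
    // structs now hash equally instead of everything hashing to 0.
    return list.hashCode();
  }

  public static void main(String[] args) {
    ThriftHashCodeDemo a = new ThriftHashCodeDemo("root", 1);
    ThriftHashCodeDemo b = new ThriftHashCodeDemo("root", 1);
    System.out.println(a.hashCode() == b.hashCode()); // true
  }
}
```

The old `return 0` stub satisfied the equals/hashCode contract but made every struct collide in hash-based collections; the regenerated version fixes that.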
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTableOperationException.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTableOperationException.java
index fad7ea7..55a5d5a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTableOperationException.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTableOperationException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ThriftTableOperationException extends TException implements org.apache.thrift.TBase<ThriftTableOperationException, ThriftTableOperationException._Fields>, java.io.Serializable, Cloneable, Comparable<ThriftTableOperationException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ThriftTableOperationException extends TException implements org.apache.thrift.TBase<ThriftTableOperationException, ThriftTableOperationException._Fields>, java.io.Serializable, Cloneable, Comparable<ThriftTableOperationException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ThriftTableOperationException");
 
   private static final org.apache.thrift.protocol.TField TABLE_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("tableId", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -511,7 +514,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_tableId = true && (isSetTableId());
+    list.add(present_tableId);
+    if (present_tableId)
+      list.add(tableId);
+
+    boolean present_tableName = true && (isSetTableName());
+    list.add(present_tableName);
+    if (present_tableName)
+      list.add(tableName);
+
+    boolean present_op = true && (isSetOp());
+    list.add(present_op);
+    if (present_op)
+      list.add(op.getValue());
+
+    boolean present_type = true && (isSetType());
+    list.add(present_type);
+    if (present_type)
+      list.add(type.getValue());
+
+    boolean present_description = true && (isSetDescription());
+    list.add(present_description);
+    if (present_description)
+      list.add(description);
+
+    return list.hashCode();
   }
 
   @Override
@@ -692,7 +722,7 @@
             break;
           case 3: // OP
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.op = TableOperation.findByValue(iprot.readI32());
+              struct.op = org.apache.accumulo.core.client.impl.thrift.TableOperation.findByValue(iprot.readI32());
               struct.setOpIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -700,7 +730,7 @@
             break;
           case 4: // TYPE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.type = TableOperationExceptionType.findByValue(iprot.readI32());
+              struct.type = org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType.findByValue(iprot.readI32());
               struct.setTypeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -818,11 +848,11 @@
         struct.setTableNameIsSet(true);
       }
       if (incoming.get(2)) {
-        struct.op = TableOperation.findByValue(iprot.readI32());
+        struct.op = org.apache.accumulo.core.client.impl.thrift.TableOperation.findByValue(iprot.readI32());
         struct.setOpIsSet(true);
       }
       if (incoming.get(3)) {
-        struct.type = TableOperationExceptionType.findByValue(iprot.readI32());
+        struct.type = org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType.findByValue(iprot.readI32());
         struct.setTypeIsSet(true);
       }
       if (incoming.get(4)) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTest.java b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTest.java
index c4af921..aa828d4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTest.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/impl/thrift/ThriftTest.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ThriftTest {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ThriftTest {
 
   public interface Iface {
 
@@ -663,7 +666,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -930,7 +935,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       }
       throw new IllegalStateException();
@@ -976,7 +981,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -1263,7 +1275,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -1530,7 +1544,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       }
       throw new IllegalStateException();
@@ -1576,7 +1590,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -1863,7 +1884,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -2175,7 +2198,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case EX:
         return getEx();
@@ -2235,7 +2258,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ex = true && (isSetEx());
+      list.add(present_ex);
+      if (present_ex)
+        list.add(ex);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BigIntegerLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BigIntegerLexicoder.java
index 8147f18..669d070 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BigIntegerLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BigIntegerLexicoder.java
@@ -16,22 +16,22 @@
  */
 package org.apache.accumulo.core.client.lexicoder;
 
-import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
-import org.apache.accumulo.core.client.lexicoder.impl.FixedByteArrayOutputStream;
-import org.apache.accumulo.core.iterators.ValueFormatException;
-
 import java.io.ByteArrayInputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.math.BigInteger;
 
+import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
+import org.apache.accumulo.core.client.lexicoder.impl.FixedByteArrayOutputStream;
+import org.apache.accumulo.core.iterators.ValueFormatException;
+
 /**
  * A lexicoder to encode/decode a BigInteger to/from bytes that maintain its native Java sort order.
  *
  * @since 1.6.0
  */
-public class BigIntegerLexicoder extends AbstractLexicoder<BigInteger> implements Lexicoder<BigInteger> {
+public class BigIntegerLexicoder extends AbstractLexicoder<BigInteger> {
 
   @Override
   public byte[] encode(BigInteger v) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BytesLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BytesLexicoder.java
index ae4dfbd..c755db6 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BytesLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/BytesLexicoder.java
@@ -24,7 +24,7 @@
  *
  * @since 1.6.0
  */
-public class BytesLexicoder extends AbstractLexicoder<byte[]> implements Lexicoder<byte[]> {
+public class BytesLexicoder extends AbstractLexicoder<byte[]> {
 
   @Override
   public byte[] encode(byte[] data) {
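The lexicoder hunks in this patch all drop a redundant `implements Lexicoder<T>` clause: `AbstractLexicoder` already implements `Lexicoder`, so every subclass inherits the interface through `extends` alone. A minimal illustration of why the clause is dead weight (the type names below are stand-ins, not Accumulo's):

```java
import java.nio.charset.StandardCharsets;

// Minimal stand-in hierarchy: an interface, an abstract base that
// implements it, and a subclass that only declares "extends".
interface Encoder<T> {
  byte[] encode(T v);
}

abstract class AbstractEncoder<T> implements Encoder<T> {
  // shared helper logic would live here
}

// No "implements Encoder<String>" here, yet StringEncoder is still an
// Encoder<String> because the abstract base declares the interface.
class StringEncoder extends AbstractEncoder<String> {
  @Override
  public byte[] encode(String v) {
    return v.getBytes(StandardCharsets.UTF_8);
  }
}

public class RedundantImplementsDemo {
  public static void main(String[] args) {
    Encoder<String> e = new StringEncoder(); // assignable without the clause
    System.out.println(e instanceof Encoder); // true
  }
}
```

Removing the clause changes no runtime behavior or binary compatibility; it only trims a declaration the compiler was already ignoring.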
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DateLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DateLexicoder.java
index 84ad808..8ce9c4d 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DateLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DateLexicoder.java
@@ -16,16 +16,16 @@
  */
 package org.apache.accumulo.core.client.lexicoder;
 
-import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
-
 import java.util.Date;
 
+import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
+
 /**
  * A lexicoder for date objects. It preserves the native Java sort order for Date.
  *
  * @since 1.6.0
  */
-public class DateLexicoder extends AbstractLexicoder<Date> implements Lexicoder<Date> {
+public class DateLexicoder extends AbstractLexicoder<Date> {
 
   private LongLexicoder longEncoder = new LongLexicoder();
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DoubleLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DoubleLexicoder.java
index 67cf4d9..252523f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DoubleLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/DoubleLexicoder.java
@@ -23,7 +23,7 @@
  *
  * @since 1.6.0
  */
-public class DoubleLexicoder extends AbstractLexicoder<Double> implements Lexicoder<Double> {
+public class DoubleLexicoder extends AbstractLexicoder<Double> {
 
   private ULongLexicoder longEncoder = new ULongLexicoder();
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/FloatLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/FloatLexicoder.java
new file mode 100644
index 0000000..50c6205
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/FloatLexicoder.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.client.lexicoder;
+
+import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
+import org.apache.accumulo.core.iterators.ValueFormatException;
+
+/**
+ * A lexicoder for preserving the native Java sort order of Float values.
+ *
+ * @since 1.8.0
+ */
+public class FloatLexicoder extends AbstractLexicoder<Float> {
+
+  private UIntegerLexicoder intEncoder = new UIntegerLexicoder();
+
+  @Override
+  public byte[] encode(Float f) {
+    int i = Float.floatToRawIntBits(f);
+    if (i < 0) {
+      i = ~i;
+    } else {
+      i = i ^ 0x80000000;
+    }
+
+    return intEncoder.encode(i);
+  }
+
+  @Override
+  public Float decode(byte[] b) {
+    // This concrete implementation is provided for binary compatibility with 1.6; it can be removed in 2.0. See ACCUMULO-3789.
+    return super.decode(b);
+  }
+
+  @Override
+  protected Float decodeUnchecked(byte[] b, int offset, int len) throws ValueFormatException {
+    int i = intEncoder.decodeUnchecked(b, offset, len);
+    if (i < 0) {
+      i = i ^ 0x80000000;
+    } else {
+      i = ~i;
+    }
+
+    return Float.intBitsToFloat(i);
+  }
+
+}
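The bit manipulation in the new `FloatLexicoder` is the standard trick for making IEEE-754 values sort byte-wise: invert all bits of a negative float (so more-negative values map lower), and flip only the sign bit of a non-negative float (so positives land above all negatives). The transformed ints then order correctly under *unsigned* comparison, which matches the byte order the `UIntegerLexicoder` produces. A self-contained check of that property (this demo compares the ints directly with `Integer.compareUnsigned` rather than encoding to bytes):

```java
public class FloatKeyDemo {
  // Same transform as FloatLexicoder.encode: map a float to an int whose
  // unsigned ordering matches the float's natural ordering.
  static int toSortableBits(float f) {
    int i = Float.floatToRawIntBits(f);
    if (i < 0) {
      i = ~i;             // negative float: invert all bits
    } else {
      i = i ^ 0x80000000; // non-negative float: flip only the sign bit
    }
    return i;
  }

  public static void main(String[] args) {
    float[] ordered = {Float.NEGATIVE_INFINITY, -1.5f, -0.0f, 0.0f, 1.5f,
        Float.POSITIVE_INFINITY};
    for (int k = 0; k + 1 < ordered.length; k++) {
      // Unsigned comparison of the transformed bits agrees with float order.
      int cmp = Integer.compareUnsigned(toSortableBits(ordered[k]),
          toSortableBits(ordered[k + 1]));
      System.out.println(cmp <= 0); // true for every adjacent pair
    }
  }
}
```

`decodeUnchecked` in the patch simply applies the inverse transform (the branch on `i < 0` picks the opposite operation) before `Float.intBitsToFloat`, so encode and decode round-trip.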
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/IntegerLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/IntegerLexicoder.java
index f113f48..b58e673 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/IntegerLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/IntegerLexicoder.java
@@ -24,7 +24,7 @@
  *
  * @since 1.6.0
  */
-public class IntegerLexicoder extends AbstractLexicoder<Integer> implements Lexicoder<Integer> {
+public class IntegerLexicoder extends AbstractLexicoder<Integer> {
 
   private UIntegerLexicoder uil = new UIntegerLexicoder();
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ListLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ListLexicoder.java
index d78cff2..8a38eb0 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ListLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ListLexicoder.java
@@ -31,7 +31,7 @@
  *
  * @since 1.6.0
  */
-public class ListLexicoder<LT> extends AbstractLexicoder<List<LT>> implements Lexicoder<List<LT>> {
+public class ListLexicoder<LT> extends AbstractLexicoder<List<LT>> {
 
   private Lexicoder<LT> lexicoder;
 
@@ -66,7 +66,7 @@
   protected List<LT> decodeUnchecked(byte[] b, int offset, int len) {
 
     byte[][] escapedElements = split(b, offset, len);
-    ArrayList<LT> ret = new ArrayList<LT>(escapedElements.length);
+    ArrayList<LT> ret = new ArrayList<>(escapedElements.length);
 
     for (byte[] escapedElement : escapedElements) {
       ret.add(lexicoder.decode(unescape(escapedElement)));
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/PairLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/PairLexicoder.java
index 9198b43..b286cf4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/PairLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/PairLexicoder.java
@@ -16,14 +16,14 @@
  */
 package org.apache.accumulo.core.client.lexicoder;
 
-import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
-import org.apache.accumulo.core.util.ComparablePair;
-
 import static org.apache.accumulo.core.client.lexicoder.impl.ByteUtils.concat;
 import static org.apache.accumulo.core.client.lexicoder.impl.ByteUtils.escape;
 import static org.apache.accumulo.core.client.lexicoder.impl.ByteUtils.split;
 import static org.apache.accumulo.core.client.lexicoder.impl.ByteUtils.unescape;
 
+import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
+import org.apache.accumulo.core.util.ComparablePair;
+
 /**
  * This class is a lexicoder that sorts a ComparablePair. Each item in the pair is encoded with the given lexicoder and concatenated together. This makes it
  * easy to construct a sortable key based on two components. There are many examples of this- but a key/value relationship is a great one.
@@ -49,8 +49,7 @@
  * @since 1.6.0
  */
 
-public class PairLexicoder<A extends Comparable<A>,B extends Comparable<B>> extends AbstractLexicoder<ComparablePair<A,B>> implements
-    Lexicoder<ComparablePair<A,B>> {
+public class PairLexicoder<A extends Comparable<A>,B extends Comparable<B>> extends AbstractLexicoder<ComparablePair<A,B>> {
 
   private Lexicoder<A> firstLexicoder;
   private Lexicoder<B> secondLexicoder;
@@ -79,7 +78,7 @@
       throw new RuntimeException("Data does not have 2 fields, it has " + fields.length);
     }
 
-    return new ComparablePair<A,B>(firstLexicoder.decode(unescape(fields[0])), secondLexicoder.decode(unescape(fields[1])));
+    return new ComparablePair<>(firstLexicoder.decode(unescape(fields[0])), secondLexicoder.decode(unescape(fields[1])));
   }
 
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoder.java
index 3a422d5..5243ea9 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoder.java
@@ -16,11 +16,11 @@
  */
 package org.apache.accumulo.core.client.lexicoder;
 
-import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
-
 import static org.apache.accumulo.core.client.lexicoder.impl.ByteUtils.escape;
 import static org.apache.accumulo.core.client.lexicoder.impl.ByteUtils.unescape;
 
+import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
+
 /**
  * A lexicoder that flips the sort order from another lexicoder. If this is applied to {@link DateLexicoder}, the most recent date will be sorted first and the
  * oldest date will be sorted last. If it's applied to {@link LongLexicoder}, the Long.MAX_VALUE will be sorted first and Long.MIN_VALUE will be sorted last,
@@ -29,7 +29,7 @@
  * @since 1.6.0
  */
 
-public class ReverseLexicoder<T> extends AbstractLexicoder<T> implements Lexicoder<T> {
+public class ReverseLexicoder<T> extends AbstractLexicoder<T> {
 
   private Lexicoder<T> lexicoder;
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/StringLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/StringLexicoder.java
index cec78de..17d4578 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/StringLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/StringLexicoder.java
@@ -16,10 +16,10 @@
  */
 package org.apache.accumulo.core.client.lexicoder;
 
-import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
-
 import static java.nio.charset.StandardCharsets.UTF_8;
 
+import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoder;
+
 /**
  * This lexicoder encodes/decodes a given String to/from bytes without further processing. It can be combined with other encoders like the
  * {@link ReverseLexicoder} to flip the default sort order.
@@ -27,7 +27,7 @@
  * @since 1.6.0
  */
 
-public class StringLexicoder extends AbstractLexicoder<String> implements Lexicoder<String> {
+public class StringLexicoder extends AbstractLexicoder<String> {
 
   @Override
   public byte[] encode(String data) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/TextLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/TextLexicoder.java
index 2129ce7..c5a7596 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/TextLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/TextLexicoder.java
@@ -27,7 +27,7 @@
  * @since 1.6.0
  */
 
-public class TextLexicoder extends AbstractLexicoder<Text> implements Lexicoder<Text> {
+public class TextLexicoder extends AbstractLexicoder<Text> {
 
   @Override
   public byte[] encode(Text data) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UIntegerLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UIntegerLexicoder.java
index bd41ab6..c156d4c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UIntegerLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UIntegerLexicoder.java
@@ -24,7 +24,7 @@
  *
  * @since 1.6.0
  */
-public class UIntegerLexicoder extends AbstractLexicoder<Integer> implements Lexicoder<Integer> {
+public class UIntegerLexicoder extends AbstractLexicoder<Integer> {
 
   @Override
   public byte[] encode(Integer i) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ULongLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ULongLexicoder.java
index 0176043..8ebd0b9 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ULongLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/ULongLexicoder.java
@@ -24,7 +24,7 @@
  *
  * @since 1.6.0
  */
-public class ULongLexicoder extends AbstractLexicoder<Long> implements Lexicoder<Long> {
+public class ULongLexicoder extends AbstractLexicoder<Long> {
 
   @Override
   public byte[] encode(Long l) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoder.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoder.java
index 70405f5..2d5b811 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoder.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoder.java
@@ -31,7 +31,7 @@
  *
  * @since 1.6.0
  */
-public class UUIDLexicoder extends AbstractLexicoder<UUID> implements Lexicoder<UUID> {
+public class UUIDLexicoder extends AbstractLexicoder<UUID> {
 
   /**
    * {@inheritDoc}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/impl/ByteUtils.java b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/impl/ByteUtils.java
index b168807..0daceaa 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/lexicoder/impl/ByteUtils.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/lexicoder/impl/ByteUtils.java
@@ -97,7 +97,7 @@
    * Splits a byte array by 0x00
    */
   public static byte[][] split(byte[] data, int dataOffset, int len) {
-    ArrayList<Integer> offsets = new ArrayList<Integer>();
+    ArrayList<Integer> offsets = new ArrayList<>();
 
     for (int i = dataOffset; i < (dataOffset + len); i++) {
       if (data[i] == 0x00) {
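The `ByteUtils.split` hunk above only swaps in the diamond operator, but the surrounding method is the workhorse behind `ListLexicoder` and `PairLexicoder`: it splits a byte range on `0x00` delimiters. A simplified reconstruction of that behavior (the signature mirrors the real method, but this body is a sketch, not the actual implementation):

```java
import java.util.ArrayList;

public class ZeroSplitDemo {
  // Simplified split-by-0x00: record delimiter offsets in one pass,
  // then copy out each field between consecutive delimiters.
  public static byte[][] split(byte[] data, int dataOffset, int len) {
    ArrayList<Integer> offsets = new ArrayList<>();
    for (int i = dataOffset; i < (dataOffset + len); i++) {
      if (data[i] == 0x00) {
        offsets.add(i);
      }
    }

    byte[][] fields = new byte[offsets.size() + 1][];
    int start = dataOffset;
    for (int f = 0; f < fields.length; f++) {
      int end = (f < offsets.size()) ? offsets.get(f) : dataOffset + len;
      fields[f] = new byte[end - start];
      System.arraycopy(data, start, fields[f], 0, end - start);
      start = end + 1; // skip the 0x00 delimiter itself
    }
    return fields;
  }

  public static void main(String[] args) {
    byte[] data = {'a', 'b', 0x00, 'c', 0x00, 'd'};
    byte[][] fields = split(data, 0, data.length);
    System.out.println(fields.length);         // 3
    System.out.println(new String(fields[0])); // ab
  }
}
```

This is why `split` always pairs with `escape`/`unescape` in the lexicoders: element payloads may themselves contain `0x00`, so they are escaped before concatenation and unescaped after splitting.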
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java
index 86a7adf..6165346 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java
@@ -26,6 +26,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -56,7 +57,7 @@
 import org.apache.accumulo.core.client.mapreduce.impl.SplitUtils;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator;
-import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.DelegationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
@@ -67,8 +68,8 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.InputFormat;
 import org.apache.hadoop.mapred.InputSplit;
@@ -78,6 +79,8 @@
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  * An abstract input format to provide shared methods common to all other input format classes. At the very least, any classes inheriting from this class will
  * need to define their own {@link RecordReader}.
@@ -87,6 +90,31 @@
   protected static final Logger log = Logger.getLogger(CLASS);
 
   /**
+   * Sets the name of the classloader context on this scanner
+   *
+   * @param job
+   *          the Hadoop job instance to be configured
+   * @param context
+   *          name of the classloader context
+   * @since 1.8.0
+   */
+  public static void setClassLoaderContext(JobConf job, String context) {
+    InputConfigurator.setClassLoaderContext(CLASS, job, context);
+  }
+
+  /**
+   * Returns the name of the current classloader context set on this scanner
+   *
+   * @param job
+   *          the Hadoop job instance to be configured
+   * @return name of the current context
+   * @since 1.8.0
+   */
+  public static String getClassLoaderContext(JobConf job) {
+    return InputConfigurator.getClassLoaderContext(CLASS, job);
+  }
+
+  /**
    * Sets the connector information needed to communicate with Accumulo in this job.
    *
    * <p>
@@ -227,7 +255,9 @@
    * @param instanceName
    *          the Accumulo instance name
    * @since 1.5.0
+   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
    */
+  @Deprecated
   public static void setMockInstance(JobConf job, String instanceName) {
     InputConfigurator.setMockInstance(CLASS, job, instanceName);
   }
@@ -240,7 +270,6 @@
    * @return an Accumulo instance
    * @since 1.5.0
    * @see #setZooKeeperInstance(JobConf, ClientConfiguration)
-   * @see #setMockInstance(JobConf, String)
    */
   protected static Instance getInstance(JobConf job) {
     return InputConfigurator.getInstance(CLASS, job);
@@ -480,7 +509,7 @@
       if (null == authorizations) {
         authorizations = getScanAuthorizations(job);
       }
-
+      String classLoaderContext = getClassLoaderContext(job);
       String table = baseSplit.getTableName();
 
       // in case the table name changed, we can still use the previous name for terms of configuration,
@@ -500,6 +529,9 @@
           int scanThreads = 1;
           scanner = instance.getConnector(principal, token).createBatchScanner(baseSplit.getTableName(), authorizations, scanThreads);
           setupIterators(job, scanner, baseSplit.getTableName(), baseSplit);
+          if (null != classLoaderContext) {
+            scanner.setClassLoaderContext(classLoaderContext);
+          }
         } catch (Exception e) {
           throw new IOException(e);
         }
@@ -529,7 +561,7 @@
         try {
           if (isOffline) {
             scanner = new OfflineScanner(instance, new Credentials(principal, token), baseSplit.getTableId(), authorizations);
-          } else if (instance instanceof MockInstance) {
+          } else if (DeprecationUtil.isMockInstance(instance)) {
             scanner = instance.getConnector(principal, token).createScanner(baseSplit.getTableName(), authorizations);
           } else {
             ClientConfiguration clientConf = getClientConfiguration(job);
@@ -571,6 +603,15 @@
         }
       }
 
+      SamplerConfiguration samplerConfig = baseSplit.getSamplerConfiguration();
+      if (null == samplerConfig) {
+        samplerConfig = tableConfig.getSamplerConfiguration();
+      }
+
+      if (samplerConfig != null) {
+        scannerBase.setSamplerConfiguration(samplerConfig);
+      }
+
       scannerIterator = scannerBase.iterator();
       numKeysRead = 0;
     }
@@ -621,7 +662,7 @@
     validateOptions(job);
 
     Random random = new Random();
-    LinkedList<InputSplit> splits = new LinkedList<InputSplit>();
+    LinkedList<InputSplit> splits = new LinkedList<>();
     Map<String,InputTableConfig> tableConfigs = getInputTableConfigs(job);
     for (Map.Entry<String,InputTableConfig> tableConfigEntry : tableConfigs.entrySet()) {
       String tableName = tableConfigEntry.getKey();
@@ -630,7 +671,7 @@
       Instance instance = getInstance(job);
       String tableId;
       // resolve table name to id once, and use id from this point forward
-      if (instance instanceof MockInstance) {
+      if (DeprecationUtil.isMockInstance(instance)) {
         tableId = "";
       } else {
         try {
@@ -655,19 +696,20 @@
 
       List<Range> ranges = autoAdjust ? Range.mergeOverlapping(tableConfig.getRanges()) : tableConfig.getRanges();
       if (ranges.isEmpty()) {
-        ranges = new ArrayList<Range>(1);
+        ranges = new ArrayList<>(1);
         ranges.add(new Range());
       }
 
       // get the metadata information for these ranges
-      Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+      Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
       TabletLocator tl;
       try {
         if (tableConfig.isOfflineScan()) {
           binnedRanges = binOfflineTable(job, tableId, ranges);
           while (binnedRanges == null) {
             // Some tablets were still online, try again
-            UtilWaitThread.sleep(100 + random.nextInt(100)); // sleep randomly between 100 and 200 ms
+            // sleep randomly between 100 and 200 ms
+            sleepUninterruptibly(100 + random.nextInt(100), TimeUnit.MILLISECONDS);
             binnedRanges = binOfflineTable(job, tableId, ranges);
           }
         } else {
@@ -678,7 +720,7 @@
           ClientContext context = new ClientContext(getInstance(job), new Credentials(getPrincipal(job), getAuthenticationToken(job)),
               getClientConfiguration(job));
           while (!tl.binRanges(context, ranges, binnedRanges).isEmpty()) {
-            if (!(instance instanceof MockInstance)) {
+            if (!DeprecationUtil.isMockInstance(instance)) {
               if (!Tables.exists(instance, tableId))
                 throw new TableDeletedException(tableId);
               if (Tables.getTableState(instance, tableId) == TableState.OFFLINE)
@@ -686,7 +728,8 @@
             }
             binnedRanges.clear();
             log.warn("Unable to locate bins for specified ranges. Retrying.");
-            UtilWaitThread.sleep(100 + random.nextInt(100)); // sleep randomly between 100 and 200 ms
+            // sleep randomly between 100 and 200 ms
+            sleepUninterruptibly(100 + random.nextInt(100), TimeUnit.MILLISECONDS);
             tl.invalidateCache();
           }
         }
@@ -697,9 +740,9 @@
       HashMap<Range,ArrayList<String>> splitsToAdd = null;
 
       if (!autoAdjust)
-        splitsToAdd = new HashMap<Range,ArrayList<String>>();
+        splitsToAdd = new HashMap<>();
 
-      HashMap<String,String> hostNameCache = new HashMap<String,String>();
+      HashMap<String,String> hostNameCache = new HashMap<>();
       for (Map.Entry<String,Map<KeyExtent,List<Range>>> tserverBin : binnedRanges.entrySet()) {
         String ip = tserverBin.getKey().split(":", 2)[0];
         String location = hostNameCache.get(ip);
@@ -712,7 +755,7 @@
           Range ke = extentRanges.getKey().toDataRange();
           if (batchScan) {
             // group ranges by tablet to be read by a BatchScanner
-            ArrayList<Range> clippedRanges = new ArrayList<Range>();
+            ArrayList<Range> clippedRanges = new ArrayList<>();
             for (Range r : extentRanges.getValue())
               clippedRanges.add(ke.clip(r));
 
@@ -736,7 +779,7 @@
                 // don't divide ranges
                 ArrayList<String> locations = splitsToAdd.get(r);
                 if (locations == null)
-                  locations = new ArrayList<String>(1);
+                  locations = new ArrayList<>(1);
                 locations.add(location);
                 splitsToAdd.put(r, locations);
               }
@@ -759,4 +802,5 @@
 
     return splits.toArray(new InputSplit[splits.size()]);
   }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
index 545908f..f2bc4cd 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
@@ -17,19 +17,16 @@
 package org.apache.accumulo.core.client.mapred;
 
 import java.io.IOException;
-import java.util.Arrays;
 
 import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator;
+import org.apache.accumulo.core.client.rfile.RFile;
+import org.apache.accumulo.core.client.rfile.RFileWriter;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.file.FileOperations;
-import org.apache.accumulo.core.file.FileSKVWriter;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.commons.collections.map.LRUMap;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -140,6 +137,20 @@
     FileOutputConfigurator.setReplication(CLASS, job, replication);
   }
 
+  /**
+   * Specify a sampler to be used when writing out data. This will result in the output file having sample data.
+   *
+   * @param job
+   *          The Hadoop job instance to be configured
+   * @param samplerConfig
+   *          The configuration for creating sample data in the output file.
+   * @since 1.8.0
+   */
+
+  public static void setSampler(JobConf job, SamplerConfiguration samplerConfig) {
+    FileOutputConfigurator.setSampler(CLASS, job, samplerConfig);
+  }
+
   @Override
   public RecordWriter<Key,Value> getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) throws IOException {
     // get the path of the temporary output file
@@ -148,11 +159,10 @@
 
     final String extension = acuConf.get(Property.TABLE_FILE_TYPE);
     final Path file = new Path(getWorkOutputPath(job), getUniqueName(job, "part") + "." + extension);
-
-    final LRUMap validVisibilities = new LRUMap(ConfiguratorBase.getVisibilityCacheSize(conf));
+    final int visCacheSize = ConfiguratorBase.getVisibilityCacheSize(conf);
 
     return new RecordWriter<Key,Value>() {
-      FileSKVWriter out = null;
+      RFileWriter out = null;
 
       @Override
       public void close(Reporter reporter) throws IOException {
@@ -162,16 +172,9 @@
 
       @Override
       public void write(Key key, Value value) throws IOException {
-
-        Boolean wasChecked = (Boolean) validVisibilities.get(key.getColumnVisibilityData());
-        if (wasChecked == null) {
-          byte[] cv = key.getColumnVisibilityData().toArray();
-          new ColumnVisibility(cv);
-          validVisibilities.put(new ArrayByteSequence(Arrays.copyOf(cv, cv.length)), Boolean.TRUE);
-        }
-
         if (out == null) {
-          out = FileOperations.getInstance().openWriter(file.toString(), file.getFileSystem(conf), conf, acuConf);
+          out = RFile.newWriter().to(file.toString()).withFileSystem(file.getFileSystem(conf)).withTableProperties(acuConf)
+              .withVisibilityCacheSize(visCacheSize).build();
           out.startDefaultLocalityGroup();
         }
         out.append(key, value);
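The hunk above adds a `setSampler` hook to the mapred `AccumuloFileOutputFormat`. A minimal sketch of how a job might wire it up, assuming the 1.8 sampling API; the `RowSampler` options (`hasher`, `modulus`) are illustrative values, not requirements of `setSampler` itself:

```java
import org.apache.accumulo.core.client.mapred.AccumuloFileOutputFormat;
import org.apache.accumulo.core.client.sample.RowSampler;
import org.apache.accumulo.core.client.sample.SamplerConfiguration;
import org.apache.hadoop.mapred.JobConf;

public class SampledOutputJobSketch {
  public static void main(String[] args) {
    JobConf job = new JobConf(SampledOutputJobSketch.class);

    // Describe the sampler the output files should carry sample data for.
    SamplerConfiguration samplerConfig = new SamplerConfiguration(RowSampler.class.getName());
    samplerConfig.addOption("hasher", "murmur3_32");
    samplerConfig.addOption("modulus", "7");

    // New in 1.8.0: the produced RFiles will also contain sample data.
    AccumuloFileOutputFormat.setSampler(job, samplerConfig);
  }
}
```

This only configures the job; the record writer in the hunk above is what actually builds the RFile with the sampler applied.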
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormat.java
index 856a11a..5f00ec3 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormat.java
@@ -42,7 +42,7 @@
  * <li>{@link AccumuloInputFormat#setConnectorInfo(JobConf, String, AuthenticationToken)}
  * <li>{@link AccumuloInputFormat#setConnectorInfo(JobConf, String, String)}
  * <li>{@link AccumuloInputFormat#setScanAuthorizations(JobConf, Authorizations)}
- * <li>{@link AccumuloInputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)} OR {@link AccumuloInputFormat#setMockInstance(JobConf, String)}
+ * <li>{@link AccumuloInputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)}
  * </ul>
  *
  * Other static methods are optional.
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormat.java
index 00a79f2..3a2e3fa 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormat.java
@@ -40,7 +40,7 @@
  * <li>{@link AccumuloInputFormat#setConnectorInfo(JobConf, String, org.apache.accumulo.core.client.security.tokens.AuthenticationToken)}
  * <li>{@link AccumuloInputFormat#setConnectorInfo(JobConf, String, String)}
  * <li>{@link AccumuloInputFormat#setScanAuthorizations(JobConf, org.apache.accumulo.core.security.Authorizations)}
- * <li>{@link AccumuloInputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)} OR {@link AccumuloInputFormat#setMockInstance(JobConf, String)}
+ * <li>{@link AccumuloInputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)}
  * <li>{@link AccumuloMultiTableInputFormat#setInputTableConfigs(org.apache.hadoop.mapred.JobConf, java.util.Map)}
  * </ul>
  *
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
index c194cf6..5feadb8 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
@@ -40,7 +40,6 @@
 import org.apache.accumulo.core.client.impl.DelegationTokenImpl;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
@@ -71,7 +70,7 @@
  * <ul>
  * <li>{@link AccumuloOutputFormat#setConnectorInfo(JobConf, String, AuthenticationToken)}
  * <li>{@link AccumuloOutputFormat#setConnectorInfo(JobConf, String, String)}
- * <li>{@link AccumuloOutputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)} OR {@link AccumuloOutputFormat#setMockInstance(JobConf, String)}
+ * <li>{@link AccumuloOutputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)}
  * </ul>
  *
  * Other static methods are optional.
@@ -239,14 +238,16 @@
   }
 
   /**
-   * Configures a {@link MockInstance} for this job.
+   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
    *
    * @param job
    *          the Hadoop job instance to be configured
    * @param instanceName
    *          the Accumulo instance name
    * @since 1.5.0
+   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
    */
+  @Deprecated
   public static void setMockInstance(JobConf job, String instanceName) {
     OutputConfigurator.setMockInstance(CLASS, job, instanceName);
   }
@@ -259,7 +260,6 @@
    * @return an Accumulo instance
    * @since 1.5.0
    * @see #setZooKeeperInstance(JobConf, ClientConfiguration)
-   * @see #setMockInstance(JobConf, String)
    */
   protected static Instance getInstance(JobConf job) {
     return OutputConfigurator.getInstance(CLASS, job);
@@ -429,7 +429,7 @@
       if (simulate)
         log.info("Simulating output only. No writes to tables will occur");
 
-      this.bws = new HashMap<Text,BatchWriter>();
+      this.bws = new HashMap<>();
 
       String tname = getDefaultTableName(job);
       this.defaultTableName = (tname == null) ? null : new Text(tname);
@@ -543,12 +543,13 @@
         mtbw.close();
       } catch (MutationsRejectedException e) {
         if (e.getSecurityErrorCodes().size() >= 0) {
-          HashMap<String,Set<SecurityErrorCode>> tables = new HashMap<String,Set<SecurityErrorCode>>();
+          HashMap<String,Set<SecurityErrorCode>> tables = new HashMap<>();
           for (Entry<TabletId,Set<SecurityErrorCode>> ke : e.getSecurityErrorCodes().entrySet()) {
-            Set<SecurityErrorCode> secCodes = tables.get(ke.getKey().getTableId().toString());
+            String tableId = ke.getKey().getTableId().toString();
+            Set<SecurityErrorCode> secCodes = tables.get(tableId);
             if (secCodes == null) {
-              secCodes = new HashSet<SecurityErrorCode>();
-              tables.put(ke.getKey().getTableId().toString(), secCodes);
+              secCodes = new HashSet<>();
+              tables.put(tableId, secCodes);
             }
             secCodes.addAll(ke.getValue());
           }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloRowInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloRowInputFormat.java
index 6f257ff..5049ef7 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloRowInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloRowInputFormat.java
@@ -43,7 +43,7 @@
  * <li>{@link AccumuloRowInputFormat#setConnectorInfo(JobConf, String, AuthenticationToken)}
  * <li>{@link AccumuloRowInputFormat#setInputTableName(JobConf, String)}
  * <li>{@link AccumuloRowInputFormat#setScanAuthorizations(JobConf, Authorizations)}
- * <li>{@link AccumuloRowInputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)} OR {@link AccumuloRowInputFormat#setMockInstance(JobConf, String)}
+ * <li>{@link AccumuloRowInputFormat#setZooKeeperInstance(JobConf, ClientConfiguration)}
  * </ul>
  *
  * Other static methods are optional.
@@ -78,7 +78,7 @@
 
       @Override
       public PeekingIterator<Entry<Key,Value>> createValue() {
-        return new PeekingIterator<Entry<Key,Value>>();
+        return new PeekingIterator<>();
       }
     };
     recordReader.initialize(split, job);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java
index ffb02a9..0cf57d2 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/InputFormatBase.java
@@ -25,9 +25,11 @@
 import org.apache.accumulo.core.client.IsolatedScanner;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.ScannerBase;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.impl.TabletLocator;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
@@ -249,7 +251,6 @@
   }
 
   /**
-   * <p>
    * Enable reading offline tables. By default, this feature is disabled and only online tables are scanned. This will make the map reduce job directly read the
    * table's files. If the table is not offline, then the job will fail. If the table comes online during the map reduce job, it is likely that the job will
    * fail.
@@ -338,6 +339,23 @@
   }
 
   /**
+   * Causes the input format to read sample data. If the sample data was created using a
+   * different configuration, or the table's sampler configuration changes while data is
+   * being read, then the input format will throw an error.
+   *
+   * @param job
+   *          the Hadoop job instance to be configured
+   * @param samplerConfig
+   *          The sampler configuration that the sample must have been created with in order for reading sample data to succeed.
+   *
+   * @since 1.8.0
+   * @see ScannerBase#setSamplerConfiguration(SamplerConfiguration)
+   */
+  public static void setSamplerConfiguration(JobConf job, SamplerConfiguration samplerConfig) {
+    InputConfigurator.setSamplerConfiguration(CLASS, job, samplerConfig);
+  }
+
+  /**
    * Initializes an Accumulo {@link org.apache.accumulo.core.client.impl.TabletLocator} based on the configuration.
    *
    * @param job
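On the read side, a hedged sketch of using the new `setSamplerConfiguration` (inherited by `AccumuloInputFormat` from `InputFormatBase`); the sampler class and options are illustrative and must match whatever the table's sample was actually generated with:

```java
import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.client.sample.RowSampler;
import org.apache.accumulo.core.client.sample.SamplerConfiguration;
import org.apache.hadoop.mapred.JobConf;

public class SampledScanJobSketch {
  public static void main(String[] args) {
    JobConf job = new JobConf(SampledScanJobSketch.class);

    // Must match the configuration the table's sample was generated with;
    // otherwise the input format throws an error at read time.
    SamplerConfiguration samplerConfig = new SamplerConfiguration(RowSampler.class.getName());
    samplerConfig.addOption("hasher", "murmur3_32");
    samplerConfig.addOption("modulus", "7");

    AccumuloInputFormat.setSamplerConfiguration(job, samplerConfig);
  }
}
```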
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java
index 2575fe5..9ccf78a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AbstractInputFormat.java
@@ -26,6 +26,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -55,7 +56,7 @@
 import org.apache.accumulo.core.client.mapreduce.impl.SplitUtils;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator;
-import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.DelegationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
@@ -66,8 +67,8 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapreduce.InputFormat;
@@ -80,6 +81,8 @@
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  * An abstract input format to provide shared methods common to all other input format classes. At the very least, any classes inheriting from this class will
  * need to define their own {@link RecordReader}.
@@ -90,6 +93,31 @@
   protected static final Logger log = Logger.getLogger(CLASS);
 
   /**
+   * Sets the name of the classloader context to be used by scanners in this job.
+   *
+   * @param job
+   *          the Hadoop job instance to be configured
+   * @param context
+   *          name of the classloader context
+   * @since 1.8.0
+   */
+  public static void setClassLoaderContext(Job job, String context) {
+    InputConfigurator.setClassLoaderContext(CLASS, job.getConfiguration(), context);
+  }
+
+  /**
+   * Returns the name of the classloader context set on this job.
+   *
+   * @param job
+   *          the Hadoop job instance
+   * @return name of the current context
+   * @since 1.8.0
+   */
+  public static String getClassLoaderContext(JobContext job) {
+    return InputConfigurator.getClassLoaderContext(CLASS, job.getConfiguration());
+  }
+
+  /**
    * Sets the connector information needed to communicate with Accumulo in this job.
    *
    * <p>
@@ -253,7 +281,9 @@
    * @param instanceName
    *          the Accumulo instance name
    * @since 1.5.0
+   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
    */
+  @Deprecated
   public static void setMockInstance(Job job, String instanceName) {
     InputConfigurator.setMockInstance(CLASS, job.getConfiguration(), instanceName);
   }
@@ -266,7 +296,6 @@
    * @return an Accumulo instance
    * @since 1.5.0
    * @see #setZooKeeperInstance(Job, ClientConfiguration)
-   * @see #setMockInstance(Job, String)
    */
   protected static Instance getInstance(JobContext context) {
     return InputConfigurator.getInstance(CLASS, context.getConfiguration());
@@ -511,7 +540,7 @@
       if (null == authorizations) {
         authorizations = getScanAuthorizations(attempt);
       }
-
+      String classLoaderContext = getClassLoaderContext(attempt);
       String table = split.getTableName();
 
       // in case the table name changed, we can still use the previous name for terms of configuration,
@@ -531,6 +560,9 @@
           int scanThreads = 1;
           scanner = instance.getConnector(principal, token).createBatchScanner(split.getTableName(), authorizations, scanThreads);
           setupIterators(attempt, scanner, split.getTableName(), split);
+          if (null != classLoaderContext) {
+            scanner.setClassLoaderContext(classLoaderContext);
+          }
         } catch (Exception e) {
           e.printStackTrace();
           throw new IOException(e);
@@ -559,7 +591,7 @@
         try {
           if (isOffline) {
             scanner = new OfflineScanner(instance, new Credentials(principal, token), split.getTableId(), authorizations);
-          } else if (instance instanceof MockInstance) {
+          } else if (DeprecationUtil.isMockInstance(instance)) {
             scanner = instance.getConnector(principal, token).createScanner(split.getTableName(), authorizations);
           } else {
             ClientConfiguration clientConf = getClientConfiguration(attempt);
@@ -601,6 +633,15 @@
         }
       }
 
+      SamplerConfiguration samplerConfig = split.getSamplerConfiguration();
+      if (null == samplerConfig) {
+        samplerConfig = tableConfig.getSamplerConfiguration();
+      }
+
+      if (samplerConfig != null) {
+        scannerBase.setSamplerConfiguration(samplerConfig);
+      }
+
       scannerIterator = scannerBase.iterator();
       numKeysRead = 0;
     }
@@ -667,7 +708,7 @@
     log.setLevel(logLevel);
     validateOptions(context);
     Random random = new Random();
-    LinkedList<InputSplit> splits = new LinkedList<InputSplit>();
+    LinkedList<InputSplit> splits = new LinkedList<>();
     Map<String,InputTableConfig> tableConfigs = getInputTableConfigs(context);
     for (Map.Entry<String,InputTableConfig> tableConfigEntry : tableConfigs.entrySet()) {
 
@@ -677,7 +718,7 @@
       Instance instance = getInstance(context);
       String tableId;
       // resolve table name to id once, and use id from this point forward
-      if (instance instanceof MockInstance) {
+      if (DeprecationUtil.isMockInstance(instance)) {
         tableId = "";
       } else {
         try {
@@ -702,19 +743,20 @@
 
       List<Range> ranges = autoAdjust ? Range.mergeOverlapping(tableConfig.getRanges()) : tableConfig.getRanges();
       if (ranges.isEmpty()) {
-        ranges = new ArrayList<Range>(1);
+        ranges = new ArrayList<>(1);
         ranges.add(new Range());
       }
 
       // get the metadata information for these ranges
-      Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+      Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
       TabletLocator tl;
       try {
         if (tableConfig.isOfflineScan()) {
           binnedRanges = binOfflineTable(context, tableId, ranges);
           while (binnedRanges == null) {
             // Some tablets were still online, try again
-            UtilWaitThread.sleep(100 + random.nextInt(100)); // sleep randomly between 100 and 200 ms
+            // sleep randomly between 100 and 200 ms
+            sleepUninterruptibly(100 + random.nextInt(100), TimeUnit.MILLISECONDS);
             binnedRanges = binOfflineTable(context, tableId, ranges);
 
           }
@@ -726,7 +768,7 @@
           ClientContext clientContext = new ClientContext(getInstance(context), new Credentials(getPrincipal(context), getAuthenticationToken(context)),
               getClientConfiguration(context));
           while (!tl.binRanges(clientContext, ranges, binnedRanges).isEmpty()) {
-            if (!(instance instanceof MockInstance)) {
+            if (!DeprecationUtil.isMockInstance(instance)) {
               if (!Tables.exists(instance, tableId))
                 throw new TableDeletedException(tableId);
               if (Tables.getTableState(instance, tableId) == TableState.OFFLINE)
@@ -734,7 +776,8 @@
             }
             binnedRanges.clear();
             log.warn("Unable to locate bins for specified ranges. Retrying.");
-            UtilWaitThread.sleep(100 + random.nextInt(100)); // sleep randomly between 100 and 200 ms
+            // sleep randomly between 100 and 200 ms
+            sleepUninterruptibly(100 + random.nextInt(100), TimeUnit.MILLISECONDS);
             tl.invalidateCache();
           }
         }
@@ -747,9 +790,9 @@
       HashMap<Range,ArrayList<String>> splitsToAdd = null;
 
       if (!autoAdjust)
-        splitsToAdd = new HashMap<Range,ArrayList<String>>();
+        splitsToAdd = new HashMap<>();
 
-      HashMap<String,String> hostNameCache = new HashMap<String,String>();
+      HashMap<String,String> hostNameCache = new HashMap<>();
       for (Map.Entry<String,Map<KeyExtent,List<Range>>> tserverBin : binnedRanges.entrySet()) {
         String ip = tserverBin.getKey().split(":", 2)[0];
         String location = hostNameCache.get(ip);
@@ -762,7 +805,7 @@
           Range ke = extentRanges.getKey().toDataRange();
           if (batchScan) {
             // group ranges by tablet to be read by a BatchScanner
-            ArrayList<Range> clippedRanges = new ArrayList<Range>();
+            ArrayList<Range> clippedRanges = new ArrayList<>();
             for (Range r : extentRanges.getValue())
               clippedRanges.add(ke.clip(r));
             BatchInputSplit split = new BatchInputSplit(tableName, tableId, clippedRanges, new String[] {location});
@@ -785,7 +828,7 @@
                 // don't divide ranges
                 ArrayList<String> locations = splitsToAdd.get(r);
                 if (locations == null)
-                  locations = new ArrayList<String>(1);
+                  locations = new ArrayList<>(1);
                 locations.add(location);
                 splitsToAdd.put(r, locations);
               }
@@ -807,4 +850,5 @@
     }
     return splits;
   }
+
 }
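Several hunks replace `UtilWaitThread.sleep` with Guava's `sleepUninterruptibly`. A self-contained sketch of the pattern that call implements (names here are my own, not Guava's internals): keep sleeping through interrupts, retry with the remaining time, and restore the interrupt flag before returning, so the retry loops above always wait their full 100-200 ms:

```java
import java.util.concurrent.TimeUnit;

public class UninterruptibleSleepSketch {

  // Sleep for the full duration even if interrupted, then re-assert the
  // interrupt status so callers can still observe it.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true; // remember it, keep sleeping what remains
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore the flag for callers
      }
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt(); // pending interrupt before sleeping
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    // Full sleep elapsed and the interrupt status was preserved.
    System.out.println(elapsedMs >= 45 && Thread.interrupted());
  }
}
```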
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
index 8c3f8cf..75afe2b 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
@@ -17,18 +17,16 @@
 package org.apache.accumulo.core.client.mapreduce;
 
 import java.io.IOException;
-import java.util.Arrays;
 
+import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator;
+import org.apache.accumulo.core.client.rfile.RFile;
+import org.apache.accumulo.core.client.rfile.RFileWriter;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.file.FileOperations;
-import org.apache.accumulo.core.file.FileSKVWriter;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.commons.collections.map.LRUMap;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapreduce.Job;
@@ -138,6 +136,20 @@
     FileOutputConfigurator.setReplication(CLASS, job.getConfiguration(), replication);
   }
 
+  /**
+   * Specify a sampler to be used when writing out data. This will result in the output file having sample data.
+   *
+   * @param job
+   *          The Hadoop job instance to be configured
+   * @param samplerConfig
+   *          The configuration for creating sample data in the output file.
+   * @since 1.8.0
+   */
+
+  public static void setSampler(Job job, SamplerConfiguration samplerConfig) {
+    FileOutputConfigurator.setSampler(CLASS, job.getConfiguration(), samplerConfig);
+  }
+
   @Override
   public RecordWriter<Key,Value> getRecordWriter(TaskAttemptContext context) throws IOException {
     // get the path of the temporary output file
@@ -146,11 +158,10 @@
 
     final String extension = acuConf.get(Property.TABLE_FILE_TYPE);
     final Path file = this.getDefaultWorkFile(context, "." + extension);
-
-    final LRUMap validVisibilities = new LRUMap(1000);
+    final int visCacheSize = ConfiguratorBase.getVisibilityCacheSize(conf);
 
     return new RecordWriter<Key,Value>() {
-      FileSKVWriter out = null;
+      RFileWriter out = null;
 
       @Override
       public void close(TaskAttemptContext context) throws IOException {
@@ -160,21 +171,13 @@
 
       @Override
       public void write(Key key, Value value) throws IOException {
-
-        Boolean wasChecked = (Boolean) validVisibilities.get(key.getColumnVisibilityData());
-        if (wasChecked == null) {
-          byte[] cv = key.getColumnVisibilityData().toArray();
-          new ColumnVisibility(cv);
-          validVisibilities.put(new ArrayByteSequence(Arrays.copyOf(cv, cv.length)), Boolean.TRUE);
-        }
-
         if (out == null) {
-          out = FileOperations.getInstance().openWriter(file.toString(), file.getFileSystem(conf), conf, acuConf);
+          out = RFile.newWriter().to(file.toString()).withFileSystem(file.getFileSystem(conf)).withTableProperties(acuConf)
+              .withVisibilityCacheSize(visCacheSize).build();
           out.startDefaultLocalityGroup();
         }
         out.append(key, value);
       }
     };
   }
-
 }
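The record writer above swaps `FileOperations`/`FileSKVWriter` for the public `RFile` builder. A minimal standalone sketch of that builder API, under the assumption of a local filesystem; the path and cache size are illustrative:

```java
import org.apache.accumulo.core.client.rfile.RFile;
import org.apache.accumulo.core.client.rfile.RFileWriter;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RFileWriteSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());

    // The builder validates column visibilities internally, which is why the
    // record writer no longer needs its own LRUMap cache; it just passes the
    // configured cache size through.
    RFileWriter out = RFile.newWriter().to("/tmp/example.rf").withFileSystem(fs)
        .withVisibilityCacheSize(1000).build();
    try {
      out.startDefaultLocalityGroup();
      // Keys must be appended in sorted order.
      out.append(new Key("row1", "cf", "cq"), new Value("v1".getBytes()));
    } finally {
      out.close();
    }
  }
}
```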
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormat.java
index 33eccc1..837b3fe 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormat.java
@@ -41,7 +41,7 @@
  * <ul>
  * <li>{@link AccumuloInputFormat#setConnectorInfo(Job, String, AuthenticationToken)}
  * <li>{@link AccumuloInputFormat#setScanAuthorizations(Job, Authorizations)}
- * <li>{@link AccumuloInputFormat#setZooKeeperInstance(Job, ClientConfiguration)} OR {@link AccumuloInputFormat#setMockInstance(Job, String)}
+ * <li>{@link AccumuloInputFormat#setZooKeeperInstance(Job, ClientConfiguration)}
  * </ul>
  *
  * Other static methods are optional.
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormat.java
index e8e49f0..e2d13be 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormat.java
@@ -44,7 +44,7 @@
  * <ul>
  * <li>{@link AccumuloMultiTableInputFormat#setConnectorInfo(Job, String, AuthenticationToken)}
  * <li>{@link AccumuloMultiTableInputFormat#setScanAuthorizations(Job, Authorizations)}
- * <li>{@link AccumuloMultiTableInputFormat#setZooKeeperInstance(Job, ClientConfiguration)} OR {@link AccumuloInputFormat#setMockInstance(Job, String)}
+ * <li>{@link AccumuloMultiTableInputFormat#setZooKeeperInstance(Job, ClientConfiguration)}
  * <li>{@link AccumuloMultiTableInputFormat#setInputTableConfigs(Job, Map)}
  * </ul>
  *
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
index 4cb46a3..1e06ca3 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
@@ -40,7 +40,6 @@
 import org.apache.accumulo.core.client.impl.DelegationTokenImpl;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.OutputConfigurator;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
@@ -72,7 +71,7 @@
  * <ul>
  * <li>{@link AccumuloOutputFormat#setConnectorInfo(Job, String, AuthenticationToken)}
  * <li>{@link AccumuloOutputFormat#setConnectorInfo(Job, String, String)}
- * <li>{@link AccumuloOutputFormat#setZooKeeperInstance(Job, ClientConfiguration)} OR {@link AccumuloOutputFormat#setMockInstance(Job, String)}
+ * <li>{@link AccumuloOutputFormat#setZooKeeperInstance(Job, ClientConfiguration)}
  * </ul>
  *
  * Other static methods are optional.
@@ -239,7 +238,7 @@
   }
 
   /**
-   * Configures a {@link MockInstance} for this job.
+   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
    *
    * @param job
    *          the Hadoop job instance to be configured
@@ -247,6 +246,7 @@
    *          the Accumulo instance name
    * @since 1.5.0
    */
+  @Deprecated
   public static void setMockInstance(Job job, String instanceName) {
     OutputConfigurator.setMockInstance(CLASS, job.getConfiguration(), instanceName);
   }
@@ -259,7 +259,6 @@
    * @return an Accumulo instance
    * @since 1.5.0
    * @see #setZooKeeperInstance(Job, ClientConfiguration)
-   * @see #setMockInstance(Job, String)
    */
   protected static Instance getInstance(JobContext context) {
     return OutputConfigurator.getInstance(CLASS, context.getConfiguration());
@@ -429,7 +428,7 @@
       if (simulate)
         log.info("Simulating output only. No writes to tables will occur");
 
-      this.bws = new HashMap<Text,BatchWriter>();
+      this.bws = new HashMap<>();
 
       String tname = getDefaultTableName(context);
       this.defaultTableName = (tname == null) ? null : new Text(tname);
@@ -543,12 +542,13 @@
         mtbw.close();
       } catch (MutationsRejectedException e) {
         if (e.getSecurityErrorCodes().size() >= 0) {
-          HashMap<String,Set<SecurityErrorCode>> tables = new HashMap<String,Set<SecurityErrorCode>>();
+          HashMap<String,Set<SecurityErrorCode>> tables = new HashMap<>();
           for (Entry<TabletId,Set<SecurityErrorCode>> ke : e.getSecurityErrorCodes().entrySet()) {
-            Set<SecurityErrorCode> secCodes = tables.get(ke.getKey().getTableId().toString());
+            String tableId = ke.getKey().getTableId().toString();
+            Set<SecurityErrorCode> secCodes = tables.get(tableId);
             if (secCodes == null) {
-              secCodes = new HashSet<SecurityErrorCode>();
-              tables.put(ke.getKey().getTableId().toString(), secCodes);
+              secCodes = new HashSet<>();
+              tables.put(tableId, secCodes);
             }
             secCodes.addAll(ke.getValue());
           }
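The cleanup above factors the repeated `getTableId().toString()` call into a local and groups security error codes per table with the get/null-check/put idiom. The same grouping pattern, sketched standalone with plain collections (the `String` table ids and `Integer` codes are illustrative stand-ins, not Accumulo's types):

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ErrorCodeGrouping {
  // Groups (tableId, codes) pairs into tableId -> union of codes, using the
  // same get/null-check/put idiom as the rewritten close() above.
  public static Map<String,Set<Integer>> group(List<Map.Entry<String,Set<Integer>>> entries) {
    Map<String,Set<Integer>> tables = new HashMap<>();
    for (Map.Entry<String,Set<Integer>> e : entries) {
      String tableId = e.getKey();
      Set<Integer> codes = tables.get(tableId);
      if (codes == null) {
        codes = new HashSet<>();
        tables.put(tableId, codes);
      }
      codes.addAll(e.getValue());
    }
    return tables;
  }

  public static void main(String[] args) {
    List<Map.Entry<String,Set<Integer>>> in = new ArrayList<>();
    in.add(new AbstractMap.SimpleEntry<>("t1", new HashSet<>(Arrays.asList(1, 2))));
    in.add(new AbstractMap.SimpleEntry<>("t1", new HashSet<>(Arrays.asList(3))));
    in.add(new AbstractMap.SimpleEntry<>("t2", new HashSet<>(Arrays.asList(9))));
    System.out.println(group(in)); // union of codes per table id
  }
}
```

On Java 8 and later, `tables.computeIfAbsent(tableId, k -> new HashSet<>()).addAll(codes)` collapses the null check into a single call.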
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloRowInputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloRowInputFormat.java
index 77081bf..043f88a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloRowInputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloRowInputFormat.java
@@ -43,7 +43,7 @@
  * <li>{@link AccumuloRowInputFormat#setConnectorInfo(Job, String, AuthenticationToken)}
  * <li>{@link AccumuloRowInputFormat#setInputTableName(Job, String)}
  * <li>{@link AccumuloRowInputFormat#setScanAuthorizations(Job, Authorizations)}
- * <li>{@link AccumuloRowInputFormat#setZooKeeperInstance(Job, ClientConfiguration)} OR {@link AccumuloRowInputFormat#setMockInstance(Job, String)}
+ * <li>{@link AccumuloRowInputFormat#setZooKeeperInstance(Job, ClientConfiguration)}
  * </ul>
  *
  * Other static methods are optional.
@@ -68,7 +68,7 @@
       public boolean nextKeyValue() throws IOException, InterruptedException {
         if (!rowIterator.hasNext())
           return false;
-        currentV = new PeekingIterator<Entry<Key,Value>>(rowIterator.next());
+        currentV = new PeekingIterator<>(rowIterator.next());
         numKeysRead = rowIterator.getKVCount();
         currentKey = currentV.peek().getKey();
         currentK = new Text(currentKey.getRow());
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
index 6ab8a19..324d5c7 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputFormatBase.java
@@ -25,9 +25,11 @@
 import org.apache.accumulo.core.client.IsolatedScanner;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.ScannerBase;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.impl.TabletLocator;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
@@ -248,7 +250,6 @@
   }
 
   /**
-   * <p>
    * Enable reading offline tables. By default, this feature is disabled and only online tables are scanned. This will make the map reduce job directly read the
    * table's files. If the table is not offline, then the job will fail. If the table comes online during the map reduce job, it is likely that the job will
    * fail.
@@ -337,6 +338,23 @@
   }
 
   /**
+   * Causes the input format to read sample data. If the sample data was created using a different configuration, or if the table's sampler configuration
+   * changes while reading data, then the input format will throw an error.
+   *
+   * @param job
+   *          the Hadoop job instance to be configured
+   * @param samplerConfig
+   *          The sampler configuration that the sample must have been created with in order for reading sample data to succeed.
+   *
+   * @since 1.8.0
+   * @see ScannerBase#setSamplerConfiguration(SamplerConfiguration)
+   */
+  public static void setSamplerConfiguration(Job job, SamplerConfiguration samplerConfig) {
+    InputConfigurator.setSamplerConfiguration(CLASS, job.getConfiguration(), samplerConfig);
+  }
+
+  /**
    * Initializes an Accumulo {@link org.apache.accumulo.core.client.impl.TabletLocator} based on the configuration.
    *
    * @param context
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java
index 257f6c9..a8724c2 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/InputTableConfig.java
@@ -25,6 +25,8 @@
 import java.util.List;
 
 import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.ScannerBase;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
@@ -43,6 +45,7 @@
   private boolean useLocalIterators = false;
   private boolean useIsolatedScanners = false;
   private boolean offlineScan = false;
+  private SamplerConfiguration samplerConfig = null;
 
   public InputTableConfig() {}
 
@@ -171,7 +174,6 @@
   }
 
   /**
-   * <p>
    * Enable reading offline tables. By default, this feature is disabled and only online tables are scanned. This will make the map reduce job directly read the
    * table's files. If the table is not offline, then the job will fail. If the table comes online during the map reduce job, it is likely that the job will
    * fail.
@@ -241,6 +243,26 @@
     return useIsolatedScanners;
   }
 
+  /**
+   * Sets the sampler configuration to use when reading data from the table.
+   *
+   * @see ScannerBase#setSamplerConfiguration(SamplerConfiguration)
+   * @see InputFormatBase#setSamplerConfiguration(org.apache.hadoop.mapreduce.Job, SamplerConfiguration)
+   *
+   * @since 1.8.0
+   */
+  public void setSamplerConfiguration(SamplerConfiguration samplerConfiguration) {
+    this.samplerConfig = samplerConfiguration;
+  }
+
+  /**
+   * Gets the sampler configuration to use when reading data from the table, or null if none was set.
+   * @since 1.8.0
+   */
+  public SamplerConfiguration getSamplerConfiguration() {
+    return samplerConfig;
+  }
+
   @Override
   public void write(DataOutput dataOutput) throws IOException {
     if (iterators != null) {
@@ -283,13 +305,13 @@
     // load iterators
     long iterSize = dataInput.readInt();
     if (iterSize > 0)
-      iterators = new ArrayList<IteratorSetting>();
+      iterators = new ArrayList<>();
     for (int i = 0; i < iterSize; i++)
       iterators.add(new IteratorSetting(dataInput));
     // load ranges
     long rangeSize = dataInput.readInt();
     if (rangeSize > 0)
-      ranges = new ArrayList<Range>();
+      ranges = new ArrayList<>();
     for (int i = 0; i < rangeSize; i++) {
       Range range = new Range();
       range.readFields(dataInput);
@@ -298,7 +320,7 @@
     // load columns
     long columnSize = dataInput.readInt();
     if (columnSize > 0)
-      columns = new HashSet<Pair<Text,Text>>();
+      columns = new HashSet<>();
     for (int i = 0; i < columnSize; i++) {
       long numPairs = dataInput.readInt();
       Text colFam = new Text();
@@ -308,7 +330,7 @@
       } else if (numPairs == 2) {
         Text colQual = new Text();
         colQual.readFields(dataInput);
-        columns.add(new Pair<Text,Text>(colFam, colQual));
+        columns.add(new Pair<>(colFam, colQual));
       }
     }
     autoAdjustRanges = dataInput.readBoolean();
@@ -340,6 +362,8 @@
       return false;
     if (ranges != null ? !ranges.equals(that.ranges) : that.ranges != null)
       return false;
+    if (samplerConfig != null ? !samplerConfig.equals(that.samplerConfig) : that.samplerConfig != null)
+      return false;
     return true;
   }
 
@@ -352,6 +376,7 @@
     result = 31 * result + (useLocalIterators ? 1 : 0);
     result = 31 * result + (useIsolatedScanners ? 1 : 0);
     result = 31 * result + (offlineScan ? 1 : 0);
+    result = 31 * result + (samplerConfig == null ? 0 : samplerConfig.hashCode());
     return result;
   }
 }
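The new `samplerConfig` clauses in `equals` and `hashCode` follow the class's hand-rolled null-safe pattern. `java.util.Objects` expresses the same contract more compactly; a standalone sketch with stand-in fields (the `String` stands in for `SamplerConfiguration`, not Accumulo's actual class):

```java
import java.util.Objects;

public class SamplerConfigFields {
  // Stand-ins for InputTableConfig's nullable samplerConfig and a boolean flag.
  final String samplerClass;
  final boolean offlineScan;

  SamplerConfigFields(String samplerClass, boolean offlineScan) {
    this.samplerClass = samplerClass;
    this.offlineScan = offlineScan;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof SamplerConfigFields))
      return false;
    SamplerConfigFields that = (SamplerConfigFields) o;
    // Objects.equals covers the (null, null) and (null, non-null) cases that
    // the ternary in InputTableConfig.equals spells out by hand.
    return offlineScan == that.offlineScan && Objects.equals(samplerClass, that.samplerClass);
  }

  @Override
  public int hashCode() {
    // Equivalent to 31 * result + (field == null ? 0 : field.hashCode()).
    return Objects.hash(offlineScan, samplerClass);
  }
}
```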
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
index e337977..1e89500 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplit.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.client.mapreduce;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
+
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
@@ -32,24 +34,24 @@
 import org.apache.accumulo.core.client.ZooKeeperInstance;
 import org.apache.accumulo.core.client.mapreduce.impl.SplitUtils;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase.TokenSource;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator;
-import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.Base64;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.mapreduce.InputSplit;
 import org.apache.log4j.Level;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
 /**
  * The Class RangeInputSplit. Encapsulates an Accumulo range for use in Map Reduce jobs.
  */
@@ -64,6 +66,7 @@
   private Authorizations auths;
   private Set<Pair<Text,Text>> fetchedColumns;
   private List<IteratorSetting> iterators;
+  private SamplerConfiguration samplerConfig;
   private Level level;
 
   public RangeInputSplit() {
@@ -157,7 +160,7 @@
 
     if (in.readBoolean()) {
       int numColumns = in.readInt();
-      List<String> columns = new ArrayList<String>(numColumns);
+      List<String> columns = new ArrayList<>(numColumns);
       for (int i = 0; i < numColumns; i++) {
         columns.add(in.readUTF());
       }
@@ -206,7 +209,7 @@
 
     if (in.readBoolean()) {
       int numIterators = in.readInt();
-      iterators = new ArrayList<IteratorSetting>(numIterators);
+      iterators = new ArrayList<>(numIterators);
       for (int i = 0; i < numIterators; i++) {
         iterators.add(new IteratorSetting(in));
       }
@@ -215,6 +218,10 @@
     if (in.readBoolean()) {
       level = Level.toLevel(in.readInt());
     }
+
+    if (in.readBoolean()) {
+      samplerConfig = new SamplerConfigurationImpl(in).toSamplerConfiguration();
+    }
   }
 
   @Override
@@ -301,6 +308,11 @@
     if (null != level) {
       out.writeInt(level.toInt());
     }
+
+    out.writeBoolean(null != samplerConfig);
+    if (null != samplerConfig) {
+      new SamplerConfigurationImpl(samplerConfig).write(out);
+    }
   }
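The serialized form adds `samplerConfig` using the split's existing presence-flag convention: a boolean, then the payload only when non-null, with `readFields` consuming them in the same order. A minimal round trip of that pattern with plain `java.io` streams (a `String` stands in for the serialized `SamplerConfigurationImpl`):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class PresenceFlag {
  // Write a boolean presence flag, then the payload only when non-null.
  static void write(DataOutput out, String value) throws IOException {
    out.writeBoolean(value != null);
    if (value != null)
      out.writeUTF(value);
  }

  // Read the flag first; the payload is consumed only when the flag is set.
  static String read(DataInput in) throws IOException {
    return in.readBoolean() ? in.readUTF() : null;
  }

  static String roundTrip(String value) {
    try {
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      write(new DataOutputStream(baos), value);
      return read(new DataInputStream(new ByteArrayInputStream(baos.toByteArray())));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

Keeping the writer and reader in lockstep is what makes appending a field at the end of `write`/`readFields` a safe way to extend the wire format.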
 
   /**
@@ -354,7 +366,7 @@
     }
 
     if (isMockInstance()) {
-      return new MockInstance(getInstanceName());
+      return DeprecationUtil.makeMockInstance(getInstanceName());
     }
 
     if (null == zooKeepers) {
@@ -414,10 +426,18 @@
     this.locations = Arrays.copyOf(locations, locations.length);
   }
 
+  /**
+   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
+   */
+  @Deprecated
   public Boolean isMockInstance() {
     return mockInstance;
   }
 
+  /**
+   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
+   */
+  @Deprecated
   public void setMockInstance(Boolean mockInstance) {
     this.mockInstance = mockInstance;
   }
@@ -455,7 +475,7 @@
   }
 
   public void setFetchedColumns(Collection<Pair<Text,Text>> fetchedColumns) {
-    this.fetchedColumns = new HashSet<Pair<Text,Text>>();
+    this.fetchedColumns = new HashSet<>();
     for (Pair<Text,Text> columns : fetchedColumns) {
       this.fetchedColumns.add(columns);
     }
@@ -502,6 +522,15 @@
     sb.append(" fetchColumns: ").append(fetchedColumns);
     sb.append(" iterators: ").append(iterators);
     sb.append(" logLevel: ").append(level);
+    sb.append(" samplerConfig: ").append(samplerConfig);
     return sb.toString();
   }
+
+  public void setSamplerConfiguration(SamplerConfiguration samplerConfiguration) {
+    this.samplerConfig = samplerConfiguration;
+  }
+
+  public SamplerConfiguration getSamplerConfiguration() {
+    return samplerConfig;
+  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplit.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplit.java
index 04875ac..2965788 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplit.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplit.java
@@ -107,7 +107,7 @@
     super.readFields(in);
 
     int numRanges = in.readInt();
-    ranges = new ArrayList<Range>(numRanges);
+    ranges = new ArrayList<>(numRanges);
     for (int i = 0; i < numRanges; ++i) {
       Range r = new Range();
       r.readFields(in);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java
index d19b499..b81b064 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/impl/SplitUtils.java
@@ -23,11 +23,11 @@
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
 import org.apache.accumulo.core.client.mapreduce.RangeInputSplit;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
 
@@ -41,7 +41,7 @@
       Authorizations auths, Level logLevel) {
     split.setInstanceName(instance.getInstanceName());
     split.setZooKeepers(instance.getZooKeepers());
-    split.setMockInstance(instance instanceof MockInstance);
+    DeprecationUtil.setMockInstance(split, DeprecationUtil.isMockInstance(instance));
 
     split.setPrincipal(principal);
     split.setToken(token);
@@ -50,6 +50,8 @@
     split.setFetchedColumns(tableConfig.getFetchedColumns());
     split.setIterators(tableConfig.getIterators());
     split.setLogLevel(logLevel);
+
+    split.setSamplerConfiguration(tableConfig.getSamplerConfiguration());
   }
 
   public static float getProgress(ByteSequence start, ByteSequence end, ByteSequence position) {
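Routing the deprecated `MockInstance` construction through `DeprecationUtil` means most call sites no longer reference the deprecated class at compile time. One common way to build such a shim is reflective construction by class name; a generic sketch of that idea (illustrative only, not DeprecationUtil's actual implementation):

```java
public class ReflectiveFactory {
  // Instantiates className through its one-String constructor without any
  // compile-time reference to the class being constructed.
  static Object make(String className, String arg) {
    try {
      return Class.forName(className).getConstructor(String.class).newInstance(arg);
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(className + " is not available", e);
    }
  }
}
```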
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
index c3a4b4f..67fe2f4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java
@@ -35,10 +35,10 @@
 import org.apache.accumulo.core.client.impl.Credentials;
 import org.apache.accumulo.core.client.impl.DelegationTokenImpl;
 import org.apache.accumulo.core.client.mapreduce.impl.DelegationTokenStub;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.util.Base64;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileSystem;
@@ -81,7 +81,7 @@
   }
 
   /**
-   * Configuration keys for {@link Instance}, {@link ZooKeeperInstance}, and {@link MockInstance}.
+   * Configuration keys for available {@link Instance} types.
    *
    * @since 1.6.0
    */
@@ -321,7 +321,7 @@
   }
 
   /**
-   * Configures a {@link MockInstance} for this job.
+   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
    *
    * @param implementingClass
    *          the class whose name will be used as a prefix for the property configuration key
@@ -330,7 +330,9 @@
    * @param instanceName
    *          the Accumulo instance name
    * @since 1.6.0
+   * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework
    */
+  @Deprecated
   public static void setMockInstance(Class<?> implementingClass, Configuration conf, String instanceName) {
     String key = enumToConfKey(implementingClass, InstanceOpts.TYPE);
     if (!conf.get(key, "").isEmpty())
@@ -351,12 +353,11 @@
    * @return an Accumulo instance
    * @since 1.6.0
    * @see #setZooKeeperInstance(Class, Configuration, ClientConfiguration)
-   * @see #setMockInstance(Class, Configuration, String)
    */
   public static Instance getInstance(Class<?> implementingClass, Configuration conf) {
     String instanceType = conf.get(enumToConfKey(implementingClass, InstanceOpts.TYPE), "");
     if ("MockInstance".equals(instanceType))
-      return new MockInstance(conf.get(enumToConfKey(implementingClass, InstanceOpts.NAME)));
+      return DeprecationUtil.makeMockInstance(conf.get(enumToConfKey(implementingClass, InstanceOpts.NAME)));
     else if ("ZooKeeperInstance".equals(instanceType)) {
       return new ZooKeeperInstance(getClientConfiguration(implementingClass, conf));
     } else if (instanceType.isEmpty())
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
index f0f67b2..049395f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
@@ -17,11 +17,15 @@
 package org.apache.accumulo.core.client.mapreduce.lib.impl;
 
 import java.util.Arrays;
+import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Set;
 
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.hadoop.conf.Configuration;
 
 /**
@@ -97,8 +101,17 @@
     String prefix = enumToConfKey(implementingClass, Opts.ACCUMULO_PROPERTIES) + ".";
     ConfigurationCopy acuConf = new ConfigurationCopy(AccumuloConfiguration.getDefaultConfiguration());
     for (Entry<String,String> entry : conf)
-      if (entry.getKey().startsWith(prefix))
-        acuConf.set(Property.getPropertyByKey(entry.getKey().substring(prefix.length())), entry.getValue());
+      if (entry.getKey().startsWith(prefix)) {
+        String propString = entry.getKey().substring(prefix.length());
+        Property prop = Property.getPropertyByKey(propString);
+        if (prop != null) {
+          acuConf.set(prop, entry.getValue());
+        } else if (Property.isValidTablePropertyKey(propString)) {
+          acuConf.set(propString, entry.getValue());
+        } else {
+          throw new IllegalArgumentException("Unknown accumulo file property " + propString);
+        }
+      }
     return acuConf;
   }
 
@@ -184,4 +197,16 @@
     setAccumuloProperty(implementingClass, conf, Property.TABLE_FILE_REPLICATION, replication);
   }
 
+  /**
+   * @since 1.8.0
+   */
+  public static void setSampler(Class<?> implementingClass, Configuration conf, SamplerConfiguration samplerConfig) {
+    Map<String,String> props = new SamplerConfigurationImpl(samplerConfig).toTablePropertiesMap();
+
+    Set<Entry<String,String>> es = props.entrySet();
+    for (Entry<String,String> entry : es) {
+      conf.set(enumToConfKey(implementingClass, Opts.ACCUMULO_PROPERTIES) + "." + entry.getKey(), entry.getValue());
+    }
+  }
+
 }
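`setSampler` flattens the sampler's table-properties map into the Hadoop `Configuration` under the `ACCUMULO_PROPERTIES` prefix, and `getAccumuloConfiguration` later strips that prefix back off. The round trip can be sketched with plain maps (the prefix string here is illustrative, not Accumulo's actual key):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class PrefixedProps {
  // Copies each entry into conf under prefix + key, as setSampler does with
  // the sampler's table-properties map and the Hadoop Configuration.
  static void flatten(Map<String,String> props, String prefix, Map<String,String> conf) {
    for (Map.Entry<String,String> e : props.entrySet())
      conf.put(prefix + e.getKey(), e.getValue());
  }

  // Recovers the entries whose keys start with the prefix, mirroring the
  // substring(prefix.length()) loop in getAccumuloConfiguration.
  static Map<String,String> extract(Map<String,String> conf, String prefix) {
    Map<String,String> out = new HashMap<>();
    for (Map.Entry<String,String> e : conf.entrySet())
      if (e.getKey().startsWith(prefix))
        out.put(e.getKey().substring(prefix.length()), e.getValue());
    return out;
  }

  public static void main(String[] args) {
    Map<String,String> conf = new HashMap<>();
    flatten(Collections.singletonMap("table.sampler", "RowSampler"), "accumulo.props.", conf);
    System.out.println(extract(conf, "accumulo.props."));
  }
}
```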
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
index 0e640b4..986e071 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java
@@ -18,6 +18,7 @@
 
 import static com.google.common.base.Preconditions.checkArgument;
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static java.util.Objects.requireNonNull;
 
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
@@ -52,7 +53,7 @@
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.TabletLocator;
 import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
-import org.apache.accumulo.core.client.mock.impl.MockTabletLocator;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.PartialKey;
@@ -63,9 +64,11 @@
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.util.Base64;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.hadoop.conf.Configuration;
@@ -87,7 +90,7 @@
    * @since 1.6.0
    */
   public static enum ScanOpts {
-    TABLE_NAME, AUTHORIZATIONS, RANGES, COLUMNS, ITERATORS, TABLE_CONFIGS
+    TABLE_NAME, AUTHORIZATIONS, RANGES, COLUMNS, ITERATORS, TABLE_CONFIGS, SAMPLER_CONFIG, CLASSLOADER_CONTEXT
   }
 
   /**
@@ -100,6 +103,36 @@
   }
 
   /**
+   * Sets the name of the context classloader to use for scans.
+   *
+   * @param implementingClass
+   *          the class whose name will be used as a prefix for the property configuration key
+   * @param conf
+   *          the Hadoop configuration object to configure
+   * @param context
+   *          the name of the context classloader
+   * @since 1.8.0
+   */
+  public static void setClassLoaderContext(Class<?> implementingClass, Configuration conf, String context) {
+    checkArgument(context != null, "context is null");
+    conf.set(enumToConfKey(implementingClass, ScanOpts.CLASSLOADER_CONTEXT), context);
+  }
+
+  /**
+   * Gets the name of the context classloader to use for scans.
+   *
+   * @param implementingClass
+   *          the class whose name will be used as a prefix for the property configuration key
+   * @param conf
+   *          the Hadoop configuration object to read from
+   * @return the classloader context name
+   * @since 1.8.0
+   */
+  public static String getClassLoaderContext(Class<?> implementingClass, Configuration conf) {
+    return conf.get(enumToConfKey(implementingClass, ScanOpts.CLASSLOADER_CONTEXT), null);
+  }
+
+  /**
    * Sets the name of the input table, over which this job will scan.
    *
    * @param implementingClass
@@ -176,7 +209,7 @@
   public static void setRanges(Class<?> implementingClass, Configuration conf, Collection<Range> ranges) {
     checkArgument(ranges != null, "ranges is null");
 
-    ArrayList<String> rangeStrings = new ArrayList<String>(ranges.size());
+    ArrayList<String> rangeStrings = new ArrayList<>(ranges.size());
     try {
       for (Range r : ranges) {
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
@@ -205,7 +238,7 @@
   public static List<Range> getRanges(Class<?> implementingClass, Configuration conf) throws IOException {
 
     Collection<String> encodedRanges = conf.getStringCollection(enumToConfKey(implementingClass, ScanOpts.RANGES));
-    List<Range> ranges = new ArrayList<Range>();
+    List<Range> ranges = new ArrayList<>();
     for (String rangeString : encodedRanges) {
       ByteArrayInputStream bais = new ByteArrayInputStream(Base64.decodeBase64(rangeString.getBytes(UTF_8)));
       Range range = new Range();
@@ -231,11 +264,11 @@
 
     // If no iterators are present, return an empty list
     if (iterators == null || iterators.isEmpty())
-      return new ArrayList<IteratorSetting>();
+      return new ArrayList<>();
 
     // Compose the set of iterators encoded in the job configuration
     StringTokenizer tokens = new StringTokenizer(iterators, StringUtils.COMMA_STR);
-    List<IteratorSetting> list = new ArrayList<IteratorSetting>();
+    List<IteratorSetting> list = new ArrayList<>();
     try {
       while (tokens.hasMoreTokens()) {
         String itstring = tokens.nextToken();
@@ -271,7 +304,7 @@
 
   public static String[] serializeColumns(Collection<Pair<Text,Text>> columnFamilyColumnQualifierPairs) {
     checkArgument(columnFamilyColumnQualifierPairs != null, "columnFamilyColumnQualifierPairs is null");
-    ArrayList<String> columnStrings = new ArrayList<String>(columnFamilyColumnQualifierPairs.size());
+    ArrayList<String> columnStrings = new ArrayList<>(columnFamilyColumnQualifierPairs.size());
     for (Pair<Text,Text> column : columnFamilyColumnQualifierPairs) {
 
       if (column.getFirst() == null)
@@ -300,7 +333,7 @@
   public static Set<Pair<Text,Text>> getFetchedColumns(Class<?> implementingClass, Configuration conf) {
     checkArgument(conf != null, "conf is null");
     String confValue = conf.get(enumToConfKey(implementingClass, ScanOpts.COLUMNS));
-    List<String> serialized = new ArrayList<String>();
+    List<String> serialized = new ArrayList<>();
     if (confValue != null) {
       // Split and include any trailing empty strings to allow empty column families
       for (String val : confValue.split(",", -1)) {
@@ -311,7 +344,7 @@
   }
 
   public static Set<Pair<Text,Text>> deserializeFetchedColumns(Collection<String> serialized) {
-    Set<Pair<Text,Text>> columns = new HashSet<Pair<Text,Text>>();
+    Set<Pair<Text,Text>> columns = new HashSet<>();
 
     if (null == serialized) {
       return columns;
@@ -321,7 +354,7 @@
       int idx = col.indexOf(":");
       Text cf = new Text(idx < 0 ? Base64.decodeBase64(col.getBytes(UTF_8)) : Base64.decodeBase64(col.substring(0, idx).getBytes(UTF_8)));
       Text cq = idx < 0 ? null : new Text(Base64.decodeBase64(col.substring(idx + 1).getBytes(UTF_8)));
-      columns.add(new Pair<Text,Text>(cf, cq));
+      columns.add(new Pair<>(cf, cq));
     }
     return columns;
   }
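The scheme `deserializeFetchedColumns` decodes is: family alone, or family `:` qualifier, with each part Base64-encoded separately so the `:` separator stays unambiguous. A sketch of the same scheme using `java.util.Base64` rather than Accumulo's own `Base64` util (illustrative of the pattern, not a wire-compatibility claim):

```java
import java.nio.charset.StandardCharsets;
import java.util.AbstractMap;
import java.util.Base64;
import java.util.Map;

public class ColumnCodec {
  // base64(family) alone, or base64(family) + ":" + base64(qualifier).
  static String serialize(String family, String qualifier) {
    Base64.Encoder enc = Base64.getEncoder();
    String fam = enc.encodeToString(family.getBytes(StandardCharsets.UTF_8));
    if (qualifier == null)
      return fam;
    return fam + ":" + enc.encodeToString(qualifier.getBytes(StandardCharsets.UTF_8));
  }

  // A missing ":" means only a column family was serialized, so the
  // qualifier comes back null, as in deserializeFetchedColumns.
  static Map.Entry<String,String> deserialize(String col) {
    int idx = col.indexOf(":");
    Base64.Decoder dec = Base64.getDecoder();
    String fam = new String(dec.decode(idx < 0 ? col : col.substring(0, idx)), StandardCharsets.UTF_8);
    String qual = idx < 0 ? null : new String(dec.decode(col.substring(idx + 1)), StandardCharsets.UTF_8);
    return new AbstractMap.SimpleEntry<>(fam, qual);
  }
}
```

The Base64 alphabet never contains `:`, which is why a plain `indexOf(":")` is enough to split family from qualifier.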
@@ -466,7 +499,6 @@
   }
 
   /**
-   * <p>
    * Enable reading offline tables. By default, this feature is disabled and only online tables are scanned. This will make the map reduce job directly read the
    * table's files. If the table is not offline, then the job will fail. If the table comes online during the map reduce job, it is likely that the job will
    * fail.
@@ -589,7 +621,7 @@
    * @since 1.6.0
    */
   public static Map<String,InputTableConfig> getInputTableConfigs(Class<?> implementingClass, Configuration conf) {
-    Map<String,InputTableConfig> configs = new HashMap<String,InputTableConfig>();
+    Map<String,InputTableConfig> configs = new HashMap<>();
     Map.Entry<String,InputTableConfig> defaultConfig = getDefaultInputTableConfig(implementingClass, conf);
     if (defaultConfig != null)
       configs.put(defaultConfig.getKey(), defaultConfig.getValue());
@@ -645,12 +677,12 @@
   public static TabletLocator getTabletLocator(Class<?> implementingClass, Configuration conf, String tableId) throws TableNotFoundException {
     String instanceType = conf.get(enumToConfKey(implementingClass, InstanceOpts.TYPE));
     if ("MockInstance".equals(instanceType))
-      return new MockTabletLocator();
+      return DeprecationUtil.makeMockLocator();
     Instance instance = getInstance(implementingClass, conf);
     ClientConfiguration clientConf = getClientConfiguration(implementingClass, conf);
     ClientContext context = new ClientContext(instance,
         new Credentials(getPrincipal(implementingClass, conf), getAuthenticationToken(implementingClass, conf)), clientConf);
-    return TabletLocator.getLocator(context, new Text(tableId));
+    return TabletLocator.getLocator(context, tableId);
   }
 
   /**
@@ -805,6 +837,11 @@
       if (ranges != null)
         queryConfig.setRanges(ranges);
 
+      SamplerConfiguration samplerConfig = getSamplerConfiguration(implementingClass, conf);
+      if (samplerConfig != null) {
+        queryConfig.setSamplerConfiguration(samplerConfig);
+      }
+
       queryConfig.setAutoAdjustRanges(getAutoAdjustRanges(implementingClass, conf)).setUseIsolatedScanners(isIsolated(implementingClass, conf))
           .setUseLocalIterators(usesLocalIterators(implementingClass, conf)).setOfflineScan(isOfflineScan(implementingClass, conf));
       return Maps.immutableEntry(tableName, queryConfig);
@@ -814,7 +851,7 @@
 
   public static Map<String,Map<KeyExtent,List<Range>>> binOffline(String tableId, List<Range> ranges, Instance instance, Connector conn)
       throws AccumuloException, TableNotFoundException {
-    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
 
     if (Tables.getTableState(instance, tableId) != TableState.OFFLINE) {
       Tables.clearCache(instance);
@@ -831,7 +868,7 @@
       else
         startRow = new Text();
 
-      Range metadataRange = new Range(new KeyExtent(new Text(tableId), startRow, null).getMetadataEntry(), true, null, false);
+      Range metadataRange = new Range(new KeyExtent(tableId, startRow, null).getMetadataEntry(), true, null, false);
       Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
       MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(scanner);
       scanner.fetchColumnFamily(MetadataSchema.TabletsSection.LastLocationColumnFamily.NAME);
@@ -869,7 +906,7 @@
         if (location != null)
           return null;
 
-        if (!extent.getTableId().toString().equals(tableId)) {
+        if (!extent.getTableId().equals(tableId)) {
           throw new AccumuloException("Saw unexpected table Id " + tableId + " " + extent);
         }
 
@@ -879,13 +916,13 @@
 
         Map<KeyExtent,List<Range>> tabletRanges = binnedRanges.get(last);
         if (tabletRanges == null) {
-          tabletRanges = new HashMap<KeyExtent,List<Range>>();
+          tabletRanges = new HashMap<>();
           binnedRanges.put(last, tabletRanges);
         }
 
         List<Range> rangeList = tabletRanges.get(extent);
         if (rangeList == null) {
-          rangeList = new ArrayList<Range>();
+          rangeList = new ArrayList<>();
           tabletRanges.put(extent, rangeList);
         }
 
@@ -901,4 +938,47 @@
     }
     return binnedRanges;
   }
+
+  private static String toBase64(Writable writable) {
+    ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    DataOutputStream dos = new DataOutputStream(baos);
+    try {
+      writable.write(dos);
+      dos.close();
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+
+    return Base64.encodeBase64String(baos.toByteArray());
+  }
+
+  private static <T extends Writable> T fromBase64(T writable, String enc) {
+    ByteArrayInputStream bais = new ByteArrayInputStream(Base64.decodeBase64(enc));
+    DataInputStream dis = new DataInputStream(bais);
+    try {
+      writable.readFields(dis);
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+    return writable;
+  }
+
+  public static void setSamplerConfiguration(Class<?> implementingClass, Configuration conf, SamplerConfiguration samplerConfig) {
+    requireNonNull(samplerConfig);
+
+    String key = enumToConfKey(implementingClass, ScanOpts.SAMPLER_CONFIG);
+    String val = toBase64(new SamplerConfigurationImpl(samplerConfig));
+
+    conf.set(key, val);
+  }
+
+  public static SamplerConfiguration getSamplerConfiguration(Class<?> implementingClass, Configuration conf) {
+    String key = enumToConfKey(implementingClass, ScanOpts.SAMPLER_CONFIG);
+
+    String encodedSC = conf.get(key);
+    if (encodedSC == null)
+      return null;
+
+    return fromBase64(new SamplerConfigurationImpl(), encodedSC).toSamplerConfiguration();
+  }
 }
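The `toBase64`/`fromBase64` helpers added above serialize a Hadoop `Writable` to a Base64 string so the sampler configuration can be stored in the job `Configuration`. A minimal self-contained sketch of the same round-trip pattern, using `java.util.Base64` and a stand-in `Writable` interface in place of the Hadoop and commons-codec types (both stand-ins are assumptions for illustration, not the real Accumulo classes):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Base64;

public class WritableRoundTrip {
  // Stand-in for org.apache.hadoop.io.Writable (hypothetical, for a self-contained example)
  interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
  }

  // Stand-in for a sampler config: a single int field round-tripped through bytes
  static class ConfigStandIn implements Writable {
    int modulus;
    public void write(DataOutput out) throws IOException { out.writeInt(modulus); }
    public void readFields(DataInput in) throws IOException { modulus = in.readInt(); }
  }

  // Serialize the Writable's bytes, then Base64-encode them (mirrors the helper above)
  static String toBase64(Writable w) {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (DataOutputStream dos = new DataOutputStream(baos)) {
      w.write(dos);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
    return Base64.getEncoder().encodeToString(baos.toByteArray());
  }

  // Decode and repopulate an instance supplied by the caller (mirrors fromBase64 above)
  static <T extends Writable> T fromBase64(T w, String enc) {
    DataInputStream dis = new DataInputStream(new ByteArrayInputStream(Base64.getDecoder().decode(enc)));
    try {
      w.readFields(dis);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
    return w;
  }

  public static void main(String[] args) {
    ConfigStandIn original = new ConfigStandIn();
    original.modulus = 7;
    String enc = toBase64(original);
    ConfigStandIn restored = fromBase64(new ConfigStandIn(), enc);
    System.out.println(restored.modulus); // prints 7
  }
}
```

Passing a fresh instance into `fromBase64` (rather than constructing one reflectively) keeps the helper simple and lets the caller choose the concrete `Writable` type, which is why the real helper has the `<T extends Writable> T fromBase64(T, String)` shape.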
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
index c0c0097..fa80831 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/partition/RangePartitioner.java
@@ -89,7 +89,7 @@
       if (cf != null) {
         for (Path path : cf) {
           if (path.toUri().getPath().endsWith(cutFileName.substring(cutFileName.lastIndexOf('/')))) {
-            TreeSet<Text> cutPoints = new TreeSet<Text>();
+            TreeSet<Text> cutPoints = new TreeSet<>();
             Scanner in = new Scanner(new BufferedReader(new InputStreamReader(new FileInputStream(path.toString()), UTF_8)));
             try {
               while (in.hasNextLine())
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/ConfiguratorBase.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/ConfiguratorBase.java
index 20fbbea..6914071 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/ConfiguratorBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/ConfiguratorBase.java
@@ -20,7 +20,6 @@
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.hadoop.conf.Configuration;
@@ -46,7 +45,7 @@
   }
 
   /**
-   * Configuration keys for {@link Instance}, {@link ZooKeeperInstance}, and {@link MockInstance}.
+   * Configuration keys for {@link Instance}, {@link ZooKeeperInstance}, and {@link org.apache.accumulo.core.client.mock.MockInstance}.
    *
    * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
    * @since 1.5.0
@@ -206,7 +205,7 @@
   }
 
   /**
-   * Configures a {@link MockInstance} for this job.
+   * Configures a {@link org.apache.accumulo.core.client.mock.MockInstance} for this job.
    *
    * @param implementingClass
    *          the class whose name will be used as a prefix for the property configuration key
@@ -233,7 +232,6 @@
    * @deprecated since 1.6.0; Configure your job with the appropriate InputFormat or OutputFormat.
    * @since 1.5.0
    * @see #setZooKeeperInstance(Class, Configuration, String, String)
-   * @see #setMockInstance(Class, Configuration, String)
    */
   @Deprecated
   public static Instance getInstance(Class<?> implementingClass, Configuration conf) {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/InputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/InputConfigurator.java
index 8d0c4b1..b85253c 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/InputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/InputConfigurator.java
@@ -367,7 +367,6 @@
   }
 
   /**
-   * <p>
    * Enable reading offline tables. By default, this feature is disabled and only online tables are scanned. This will make the map reduce job directly read the
    * table's files. If the table is not offline, then the job will fail. If the table comes online during the map reduce job, it is likely that the job will
    * fail.
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/IteratorAdapter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/IteratorAdapter.java
index d4d4004..d88dac9 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/IteratorAdapter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/IteratorAdapter.java
@@ -16,42 +16,18 @@
  */
 package org.apache.accumulo.core.client.mock;
 
-import java.io.IOException;
-import java.util.Iterator;
-import java.util.Map.Entry;
-import java.util.NoSuchElementException;
-
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.KeyValue;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 
-public class IteratorAdapter implements Iterator<Entry<Key,Value>> {
-
-  SortedKeyValueIterator<Key,Value> inner;
+/**
+ * @deprecated since 1.8.0; use {@link org.apache.accumulo.core.iterators.IteratorAdapter} instead.
+ */
+@Deprecated
+public class IteratorAdapter extends org.apache.accumulo.core.iterators.IteratorAdapter {
 
   public IteratorAdapter(SortedKeyValueIterator<Key,Value> inner) {
-    this.inner = inner;
+    super(inner);
   }
 
-  @Override
-  public boolean hasNext() {
-    return inner.hasTop();
-  }
-
-  @Override
-  public Entry<Key,Value> next() {
-    try {
-      Entry<Key,Value> result = new KeyValue(new Key(inner.getTopKey()), new Value(inner.getTopValue()).get());
-      inner.next();
-      return result;
-    } catch (IOException ex) {
-      throw new NoSuchElementException();
-    }
-  }
-
-  @Override
-  public void remove() {
-    throw new UnsupportedOperationException();
-  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockAccumulo.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockAccumulo.java
index e1ca768..f362add 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockAccumulo.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockAccumulo.java
@@ -40,19 +40,20 @@
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.io.Text;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockAccumulo {
-  final Map<String,MockTable> tables = new HashMap<String,MockTable>();
-  final Map<String,MockNamespace> namespaces = new HashMap<String,MockNamespace>();
-  final Map<String,String> systemProperties = new HashMap<String,String>();
-  Map<String,MockUser> users = new HashMap<String,MockUser>();
+  final Map<String,MockTable> tables = new HashMap<>();
+  final Map<String,MockNamespace> namespaces = new HashMap<>();
+  final Map<String,String> systemProperties = new HashMap<>();
+  Map<String,MockUser> users = new HashMap<>();
   final FileSystem fs;
   final AtomicInteger tableIdCounter = new AtomicInteger(0);
 
+  @Deprecated
   MockAccumulo(FileSystem fs) {
-    this.fs = fs;
-  }
-
-  {
     MockUser root = new MockUser("root", new PasswordToken(new byte[0]), Authorizations.EMPTY);
     root.permissions.add(SystemPermission.SYSTEM);
     users.put(root.name, root);
@@ -61,6 +62,7 @@
     createTable("root", RootTable.NAME, true, TimeType.LOGICAL);
     createTable("root", MetadataTable.NAME, true, TimeType.LOGICAL);
     createTable("root", ReplicationTable.NAME, true, TimeType.LOGICAL);
+    this.fs = fs;
   }
 
   public FileSystem getFileSystem() {
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchDeleter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchDeleter.java
index bb9f2c8..bacd844 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchDeleter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchDeleter.java
@@ -37,7 +37,10 @@
  * </ol>
  *
  * Otherwise, it behaves as expected.
+ *
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
  */
+@Deprecated
 public class MockBatchDeleter extends MockBatchScanner implements BatchDeleter {
 
   private final MockAccumulo acc;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchScanner.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchScanner.java
index 4034271..1ea27b5 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchScanner.java
@@ -32,6 +32,10 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.commons.collections.iterators.IteratorChain;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockBatchScanner extends MockScannerBase implements BatchScanner {
 
   List<Range> ranges = null;
@@ -46,7 +50,7 @@
       throw new IllegalArgumentException("ranges must be non null and contain at least 1 range");
     }
 
-    this.ranges = new ArrayList<Range>(ranges);
+    this.ranges = new ArrayList<>(ranges);
   }
 
   @SuppressWarnings("unchecked")
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchWriter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchWriter.java
index 163587f..53a0ddc 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockBatchWriter.java
@@ -22,6 +22,10 @@
 import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.data.Mutation;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockBatchWriter implements BatchWriter {
 
   final String tablename;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConfiguration.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockConfiguration.java
index 410105b..244d6f8 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockConfiguration.java
@@ -24,6 +24,10 @@
 
 import com.google.common.base.Predicate;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 class MockConfiguration extends AccumuloConfiguration {
   Map<String,String> map;
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConnector.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockConnector.java
index d348400..9b5601b 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockConnector.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockConnector.java
@@ -40,6 +40,10 @@
 import org.apache.accumulo.core.client.security.tokens.NullToken;
 import org.apache.accumulo.core.security.Authorizations;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockConnector extends Connector {
 
   String username;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstance.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstance.java
index 57cd5ee..50d212f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstance.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstance.java
@@ -49,12 +49,13 @@
  * An alternative to Mock Accumulo called MiniAccumuloCluster was introduced in Accumulo 1.5. MiniAccumuloCluster spins up actual Accumulo server processes, can
  * be used for unit testing, and its behavior should match Accumulo. The drawback of MiniAccumuloCluster is that it starts more slowly than Mock Accumulo.
  *
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
  */
-
+@Deprecated
 public class MockInstance implements Instance {
 
   static final String genericAddress = "localhost:1234";
-  static final Map<String,MockAccumulo> instances = new HashMap<String,MockAccumulo>();
+  static final Map<String,MockAccumulo> instances = new HashMap<>();
   MockAccumulo acu;
   String instanceName;
 
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstanceOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstanceOperations.java
index c1acc04..e264104 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstanceOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockInstanceOperations.java
@@ -29,6 +29,10 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 class MockInstanceOperations implements InstanceOperations {
   private static final Logger log = LoggerFactory.getLogger(MockInstanceOperations.class);
   MockAccumulo acu;
@@ -59,12 +63,12 @@
 
   @Override
   public List<String> getTabletServers() {
-    return new ArrayList<String>();
+    return new ArrayList<>();
   }
 
   @Override
   public List<ActiveScan> getActiveScans(String tserver) throws AccumuloException, AccumuloSecurityException {
-    return new ArrayList<ActiveScan>();
+    return new ArrayList<>();
   }
 
   @Override
@@ -80,7 +84,7 @@
 
   @Override
   public List<ActiveCompaction> getActiveCompactions(String tserver) throws AccumuloException, AccumuloSecurityException {
-    return new ArrayList<ActiveCompaction>();
+    return new ArrayList<>();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockMultiTableBatchWriter.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockMultiTableBatchWriter.java
index 9cc3dfb..5b9bc2b 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockMultiTableBatchWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockMultiTableBatchWriter.java
@@ -26,13 +26,17 @@
 import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.TableNotFoundException;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockMultiTableBatchWriter implements MultiTableBatchWriter {
   MockAccumulo acu = null;
   Map<String,MockBatchWriter> bws = null;
 
   public MockMultiTableBatchWriter(MockAccumulo acu) {
     this.acu = acu;
-    bws = new HashMap<String,MockBatchWriter>();
+    bws = new HashMap<>();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespace.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespace.java
index 955564f..456580b 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespace.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespace.java
@@ -27,13 +27,17 @@
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.security.NamespacePermission;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockNamespace {
 
   final HashMap<String,String> settings;
-  Map<String,EnumSet<NamespacePermission>> userPermissions = new HashMap<String,EnumSet<NamespacePermission>>();
+  Map<String,EnumSet<NamespacePermission>> userPermissions = new HashMap<>();
 
   public MockNamespace() {
-    settings = new HashMap<String,String>();
+    settings = new HashMap<>();
     for (Entry<String,String> entry : AccumuloConfiguration.getDefaultConfiguration()) {
       String key = entry.getKey();
       if (key.startsWith(Property.TABLE_PREFIX.getKey())) {
@@ -43,7 +47,7 @@
   }
 
   public List<String> getTables(MockAccumulo acu) {
-    List<String> l = new LinkedList<String>();
+    List<String> l = new LinkedList<>();
     for (String t : acu.tables.keySet()) {
       if (acu.tables.get(t).getNamespace().equals(this)) {
         l.add(t);
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespaceOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespaceOperations.java
index 004124d..b1cb980 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespaceOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockNamespaceOperations.java
@@ -34,6 +34,10 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 class MockNamespaceOperations extends NamespaceOperationsHelper {
 
   private static final Logger log = LoggerFactory.getLogger(MockNamespaceOperations.class);
@@ -48,7 +52,7 @@
 
   @Override
   public SortedSet<String> list() {
-    return new TreeSet<String>(acu.namespaces.keySet());
+    return new TreeSet<>(acu.namespaces.keySet());
   }
 
   @Override
@@ -112,7 +116,7 @@
 
   @Override
   public Map<String,String> namespaceIdMap() {
-    Map<String,String> result = new HashMap<String,String>();
+    Map<String,String> result = new HashMap<>();
     for (String table : acu.tables.keySet()) {
       result.put(table, table);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScanner.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockScanner.java
index a9b6fd5..1e36964 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockScanner.java
@@ -30,6 +30,10 @@
 import org.apache.accumulo.core.iterators.SortedMapIterator;
 import org.apache.accumulo.core.security.Authorizations;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockScanner extends MockScannerBase implements Scanner {
 
   int batchSize = 0;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScannerBase.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockScannerBase.java
index 3c746e1..ad79ec0 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockScannerBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockScannerBase.java
@@ -23,8 +23,8 @@
 import java.util.Iterator;
 import java.util.Map.Entry;
 
-import org.apache.accumulo.core.client.ScannerBase;
 import org.apache.accumulo.core.client.impl.ScannerOptions;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
@@ -43,7 +43,11 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.commons.lang.NotImplementedException;
 
-public class MockScannerBase extends ScannerOptions implements ScannerBase {
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
+public class MockScannerBase extends ScannerOptions {
 
   protected final MockTable table;
   protected final Authorizations auths;
@@ -54,7 +58,7 @@
   }
 
   static HashSet<ByteSequence> createColumnBSS(Collection<Column> columns) {
-    HashSet<ByteSequence> columnSet = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> columnSet = new HashSet<>();
     for (Column c : columns) {
       columnSet.add(new ArrayByteSequence(c.getColumnFamily()));
     }
@@ -89,7 +93,7 @@
       return false;
     }
 
-    private ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators = new ArrayList<SortedKeyValueIterator<Key,Value>>();
+    private ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators = new ArrayList<>();
 
     @Override
     public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
@@ -104,16 +108,31 @@
     SortedKeyValueIterator<Key,Value> getTopLevelIterator(SortedKeyValueIterator<Key,Value> iter) {
       if (topLevelIterators.isEmpty())
         return iter;
-      ArrayList<SortedKeyValueIterator<Key,Value>> allIters = new ArrayList<SortedKeyValueIterator<Key,Value>>(topLevelIterators);
+      ArrayList<SortedKeyValueIterator<Key,Value>> allIters = new ArrayList<>(topLevelIterators);
       allIters.add(iter);
       return new MultiIterator(allIters, false);
     }
+
+    @Override
+    public boolean isSamplingEnabled() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public SamplerConfiguration getSamplerConfiguration() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public IteratorEnvironment cloneWithSamplingEnabled() {
+      throw new UnsupportedOperationException();
+    }
   }
 
   public SortedKeyValueIterator<Key,Value> createFilter(SortedKeyValueIterator<Key,Value> inner) throws IOException {
     byte[] defaultLabels = {};
     inner = new ColumnFamilySkippingIterator(new DeletingIterator(inner, false));
-    ColumnQualifierFilter cqf = new ColumnQualifierFilter(inner, new HashSet<Column>(fetchedColumns));
+    ColumnQualifierFilter cqf = new ColumnQualifierFilter(inner, new HashSet<>(fetchedColumns));
     VisibilityFilter vf = new VisibilityFilter(cqf, auths, defaultLabels);
     AccumuloConfiguration conf = new MockConfiguration(table.settings);
     MockIteratorEnvironment iterEnv = new MockIteratorEnvironment(auths);
@@ -131,4 +150,9 @@
   public Authorizations getAuthorizations() {
     return auths;
   }
+
+  @Override
+  public void setClassLoaderContext(String context) {
+    throw new UnsupportedOperationException();
+  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockSecurityOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockSecurityOperations.java
index cc51a47..bf4b46e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockSecurityOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockSecurityOperations.java
@@ -32,6 +32,10 @@
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 class MockSecurityOperations implements SecurityOperations {
 
   final private MockAccumulo acu;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTable.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockTable.java
index 6f66c60..1445650 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockTable.java
@@ -40,6 +40,10 @@
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.hadoop.io.Text;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockTable {
 
   static class MockMemKey extends Key {
@@ -81,15 +85,15 @@
       }
       return 0;
     }
-  };
+  }
 
-  final SortedMap<Key,Value> table = new ConcurrentSkipListMap<Key,Value>();
+  final SortedMap<Key,Value> table = new ConcurrentSkipListMap<>();
   int mutationCount = 0;
   final Map<String,String> settings;
-  Map<String,EnumSet<TablePermission>> userPermissions = new HashMap<String,EnumSet<TablePermission>>();
+  Map<String,EnumSet<TablePermission>> userPermissions = new HashMap<>();
   private TimeType timeType;
-  SortedSet<Text> splits = new ConcurrentSkipListSet<Text>();
-  Map<String,Set<Text>> localityGroups = new TreeMap<String,Set<Text>>();
+  SortedSet<Text> splits = new ConcurrentSkipListSet<>();
+  Map<String,Set<Text>> localityGroups = new TreeMap<>();
   private MockNamespace namespace;
   private String namespaceName;
   private String tableId;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperations.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperations.java
index 0da8bd1..de89137 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockTableOperations.java
@@ -16,8 +16,6 @@
  */
 package org.apache.accumulo.core.client.mock;
 
-import static com.google.common.base.Preconditions.checkArgument;
-
 import java.io.DataInputStream;
 import java.io.IOException;
 import java.util.ArrayList;
@@ -41,10 +39,12 @@
 import org.apache.accumulo.core.client.admin.CompactionConfig;
 import org.apache.accumulo.core.client.admin.DiskUsage;
 import org.apache.accumulo.core.client.admin.FindMax;
+import org.apache.accumulo.core.client.admin.Locations;
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.client.impl.TableOperationsHelper;
 import org.apache.accumulo.core.client.impl.Tables;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
@@ -65,7 +65,12 @@
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+import static com.google.common.base.Preconditions.checkArgument;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 class MockTableOperations extends TableOperationsHelper {
   private static final Logger log = LoggerFactory.getLogger(MockTableOperations.class);
   private static final byte[] ZERO = {0};
@@ -79,7 +84,7 @@
 
   @Override
   public SortedSet<String> list() {
-    return new TreeSet<String>(acu.tables.keySet());
+    return new TreeSet<>(acu.tables.keySet());
   }
 
   @Override
@@ -204,7 +209,7 @@
       throw new TableNotFoundException(null, tableName, null);
     }
 
-    Set<Entry<String,String>> props = new HashSet<Entry<String,String>>(acu.namespaces.get(namespace).settings.entrySet());
+    Set<Entry<String,String>> props = new HashSet<>(acu.namespaces.get(namespace).settings.entrySet());
 
     Set<Entry<String,String>> tableProps = acu.tables.get(tableName).settings.entrySet();
     for (Entry<String,String> e : tableProps) {
@@ -283,8 +288,8 @@
      */
     for (FileStatus importStatus : fs.listStatus(importPath)) {
       try {
-        FileSKVIterator importIterator = FileOperations.getInstance().openReader(importStatus.getPath().toString(), true, fs, fs.getConf(),
-            AccumuloConfiguration.getDefaultConfiguration());
+        FileSKVIterator importIterator = FileOperations.getInstance().newReaderBuilder().forFile(importStatus.getPath().toString(), fs, fs.getConf())
+            .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).seekToBeginning().build();
         while (importIterator.hasTop()) {
           Key key = importIterator.getTopKey();
           Value value = importIterator.getTopValue();
@@ -354,7 +359,7 @@
 
   @Override
   public Map<String,String> tableIdMap() {
-    Map<String,String> result = new HashMap<String,String>();
+    Map<String,String> result = new HashMap<>();
     for (Entry<String,MockTable> entry : acu.tables.entrySet()) {
       String table = entry.getKey();
       if (RootTable.NAME.equals(table))
@@ -370,8 +375,8 @@
   @Override
   public List<DiskUsage> getDiskUsage(Set<String> tables) throws AccumuloException, AccumuloSecurityException {
 
-    List<DiskUsage> diskUsages = new ArrayList<DiskUsage>();
-    diskUsages.add(new DiskUsage(new TreeSet<String>(tables), 0l));
+    List<DiskUsage> diskUsages = new ArrayList<>();
+    diskUsages.add(new DiskUsage(new TreeSet<>(tables), 0L));
 
     return diskUsages;
   }
@@ -396,7 +401,7 @@
     Text endText = end != null ? new Text(end) : new Text(t.table.lastKey().getRow().getBytes());
     startText.append(ZERO, 0, 1);
     endText.append(ZERO, 0, 1);
-    Set<Key> keep = new TreeSet<Key>(t.table.subMap(new Key(startText), new Key(endText)).keySet());
+    Set<Key> keep = new TreeSet<>(t.table.subMap(new Key(startText), new Key(endText)).keySet());
     t.table.keySet().removeAll(keep);
   }
 
@@ -476,4 +481,25 @@
     }
     return true;
   }
+
+  @Override
+  public void setSamplerConfiguration(String tableName, SamplerConfiguration samplerConfiguration) throws TableNotFoundException, AccumuloException,
+      AccumuloSecurityException {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public void clearSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public SamplerConfiguration getSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public Locations locate(String tableName, Collection<Range> ranges) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
+    throw new UnsupportedOperationException();
+  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/MockUser.java b/core/src/main/java/org/apache/accumulo/core/client/mock/MockUser.java
index efc896e..e32edad 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/MockUser.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/MockUser.java
@@ -22,6 +22,10 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.SystemPermission;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockUser {
   final EnumSet<SystemPermission> permissions;
   final String name;
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java b/core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
index 1c0c6a9..a52af79 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/impl/MockTabletLocator.java
@@ -32,6 +32,10 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.hadoop.io.Text;
 
+/**
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
 public class MockTabletLocator extends TabletLocator {
   public MockTabletLocator() {}
 
@@ -44,7 +48,7 @@
   @Override
   public <T extends Mutation> void binMutations(ClientContext context, List<T> mutations, Map<String,TabletServerMutations<T>> binnedMutations, List<T> failures)
       throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    TabletServerMutations<T> tsm = new TabletServerMutations<T>("5");
+    TabletServerMutations<T> tsm = new TabletServerMutations<>("5");
     for (T m : mutations)
       tsm.addMutation(new KeyExtent(), m);
     binnedMutations.put("", tsm);
@@ -53,7 +57,7 @@
   @Override
   public List<Range> binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges) throws AccumuloException,
       AccumuloSecurityException, TableNotFoundException {
-    binnedRanges.put("", Collections.singletonMap(new KeyExtent(new Text(), null, null), ranges));
+    binnedRanges.put("", Collections.singletonMap(new KeyExtent("", null, null), ranges));
     return Collections.emptyList();
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/main/java/org/apache/accumulo/core/client/mock/package-info.java
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to core/src/main/java/org/apache/accumulo/core/client/mock/package-info.java
index 01f5fa8..cdd5593 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mock/package-info.java
@@ -14,19 +14,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+/**
+ * Mock framework for Accumulo
+ *
+ * <p>
+ * Deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
+ */
+@Deprecated
+package org.apache.accumulo.core.client.mock;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
-
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
-  }
-}
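
The deprecation notes above point users of the mock framework at MiniAccumuloCluster. As a rough sketch of that migration (class and method names here are taken from the MiniAccumuloCluster public API as I understand it, and the temporary-directory handling is an illustrative assumption, not part of this patch):

```java
import java.io.File;
import java.nio.file.Files;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.minicluster.MiniAccumuloCluster;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    // MiniAccumuloCluster runs real Accumulo processes against a local directory,
    // unlike the deprecated mock classes, which only approximated server behavior.
    File dir = Files.createTempDirectory("mini-accumulo").toFile();
    MiniAccumuloCluster cluster = new MiniAccumuloCluster(dir, "rootPassword");
    cluster.start();
    try {
      Connector conn = cluster.getConnector("root", "rootPassword");
      conn.tableOperations().create("example");
      // ... exercise the same client code previously run against the mock classes ...
    } finally {
      cluster.stop();
    }
  }
}
```

Tests written this way are slower to start than the mock framework but exercise real iterator, visibility, and tablet-server behavior.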
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/FSConfArgs.java
similarity index 60%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to core/src/main/java/org/apache/accumulo/core/client/rfile/FSConfArgs.java
index 01f5fa8..1679e43 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/FSConfArgs.java
@@ -14,19 +14,34 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+package org.apache.accumulo.core.client.rfile;
 
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
+import java.io.IOException;
 
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+
+class FSConfArgs {
+
+  FileSystem fs;
+  Configuration conf;
+
+  FileSystem getFileSystem() throws IOException {
+    if (fs == null) {
+      fs = FileSystem.get(getConf());
     }
+    return fs;
+  }
+
+  Configuration getConf() throws IOException {
+    if (fs != null) {
+      return fs.getConf();
+    }
+
+    if (conf == null) {
+      conf = new Configuration();
+    }
+    return conf;
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFile.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFile.java
new file mode 100644
index 0000000..bc5995e
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFile.java
@@ -0,0 +1,275 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.rfile;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.admin.TableOperations;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.io.Text;
+
+/**
+ * RFile is Accumulo's internal storage format for key/value pairs. This class is a factory that enables creating a {@link Scanner} for reading and an
+ * {@link RFileWriter} for writing RFiles.
+ *
+ * <p>
+ * The {@link Scanner} created by this class makes it easy to experiment with real data from a live system on a developer's workstation. The {@link Scanner}
+ * can also be used to write tools that analyze Accumulo's raw data.
+ *
+ * @since 1.8.0
+ */
+public class RFile {
+
+  /**
+   * This is an intermediate interface in a larger builder pattern. Supports setting the required input sources for reading an RFile.
+   *
+   * @since 1.8.0
+   */
+  public static interface InputArguments {
+    /**
+     * Specify RFiles to read from. When multiple inputs are specified the {@link Scanner} constructed will present a merged view.
+     *
+     * @param inputs
+     *          one or more RFiles to read.
+     * @return this
+     */
+    ScannerOptions from(RFileSource... inputs);
+
+    /**
+     * Specify RFiles to read from. When multiple files are specified the {@link Scanner} constructed will present a merged view.
+     *
+     * @param files
+     *          one or more RFiles to read.
+     * @return this
+     */
+    ScannerFSOptions from(String... files);
+  }
+
+  /**
+   * This is an intermediate interface in a larger builder pattern. Enables optionally setting a FileSystem to read RFile(s) from.
+   *
+   * @since 1.8.0
+   */
+  public static interface ScannerFSOptions extends ScannerOptions {
+    /**
+     * Optionally provide a FileSystem to open RFiles. If not specified, the FileSystem will be constructed using configuration on the classpath.
+     *
+     * @param fs
+     *          use this FileSystem to open files.
+     * @return this
+     */
+    ScannerOptions withFileSystem(FileSystem fs);
+  }
+
+  /**
+   * This is an intermediate interface in a larger builder pattern. Supports setting optional parameters for reading RFile(s) and building a scanner over
+   * RFile(s).
+   *
+   * @since 1.8.0
+   */
+  public static interface ScannerOptions {
+
+    /**
+     * By default the {@link Scanner} created will set up the default Accumulo system iterators. These iterators do the following :
+     *
+     * <ul>
+     * <li>Suppress deleted data</li>
+     * <li>Filter based on {@link Authorizations}</li>
+     * <li>Filter columns specified by functions like {@link Scanner#fetchColumn(Text, Text)} and {@link Scanner#fetchColumnFamily(Text)}</li>
+     * </ul>
+     *
+     * <p>
+     * Calling this method will turn off these system iterators and allow reading the raw data in an RFile. When reading the raw data, deleted data and delete
+     * markers may be seen. Delete markers are {@link Key}s with the delete flag set.
+     *
+     * <p>
+     * Disabling system iterators will cause {@link #withAuthorizations(Authorizations)}, {@link Scanner#fetchColumn(Text, Text)}, and
+     * {@link Scanner#fetchColumnFamily(Text)} to throw runtime exceptions.
+     *
+     * @return this
+     */
+    public ScannerOptions withoutSystemIterators();
+
+    /**
+     * The authorizations passed here will be used to filter Keys from the {@link Scanner} based on the content of the column visibility field.
+     *
+     * @param auths
+     *          scan with these authorizations
+     * @return this
+     */
+    public ScannerOptions withAuthorizations(Authorizations auths);
+
+    /**
+     * Enabling this option will cache RFile data in memory. This option is useful when doing lots of random accesses.
+     *
+     * @param cacheSize
+     *          the size of the data cache in bytes.
+     * @return this
+     */
+    public ScannerOptions withDataCache(long cacheSize);
+
+    /**
+     * Enabling this option will cache RFile indexes in memory. Index data within an RFile is used to find data when seeking to a {@link Key}. This option is
+     * useful when doing lots of random accesses.
+     *
+     * @param cacheSize
+     *          the size of the index cache in bytes.
+     * @return this
+     */
+    public ScannerOptions withIndexCache(long cacheSize);
+
+    /**
+     * This option prevents the {@link Scanner} from reading data outside of a given range. A scanner will not see any data outside of this range even if
+     * the RFile(s) have data outside the range.
+     *
+     * @return this
+     */
+    public ScannerOptions withBounds(Range range);
+
+    /**
+     * Construct the {@link Scanner} with iterators specified in a table's properties. Properties for a table can be obtained by calling
+     * {@link TableOperations#getProperties(String)}.
+     *
+     * @param props
+     *          iterable over Accumulo table key value properties.
+     * @return this
+     */
+    public ScannerOptions withTableProperties(Iterable<Entry<String,String>> props);
+
+    /**
+     * @see #withTableProperties(Iterable)
+     * @param props
+     *          a map instead of an Iterable
+     * @return this
+     */
+    public ScannerOptions withTableProperties(Map<String,String> props);
+
+    /**
+     * @return a Scanner over the RFile(s) using the specified options.
+     */
+    public Scanner build();
+  }
+
+  /**
+   * Entry point for building a new {@link Scanner} over one or more RFiles.
+   */
+  public static InputArguments newScanner() {
+    return new RFileScannerBuilder();
+  }
+
+  /**
+   * This is an intermediate interface in a larger builder pattern. Supports setting the required output sink to write an RFile to.
+   *
+   * @since 1.8.0
+   */
+  public static interface OutputArguments {
+    /**
+     * @param filename
+     *          name of file to write RFile data
+     * @return this
+     */
+    public WriterFSOptions to(String filename);
+
+    /**
+     * @param out
+     *          output stream to write RFile data
+     * @return this
+     */
+    public WriterOptions to(OutputStream out);
+  }
+
+  /**
+   * This is an intermediate interface in a larger builder pattern. Enables optionally setting a FileSystem to write to.
+   *
+   * @since 1.8.0
+   */
+  public static interface WriterFSOptions extends WriterOptions {
+    /**
+     * Optionally provide a FileSystem to open a file to write an RFile. If not specified, the FileSystem will be constructed using configuration on the
+     * classpath.
+     *
+     * @param fs
+     *          use this FileSystem to open files.
+     * @return this
+     */
+    WriterOptions withFileSystem(FileSystem fs);
+  }
+
+  /**
+   * This is an intermediate interface in a larger builder pattern. Supports setting optional parameters for creating an RFile and building an RFileWriter.
+   *
+   * @since 1.8.0
+   */
+  public static interface WriterOptions {
+    /**
+     * An option to store sample data in the generated RFile.
+     *
+     * @param samplerConf
+     *          configuration to use when generating sample data.
+     * @throws IllegalArgumentException
+     *           if table properties were previously specified and the table properties also specify a sampler.
+     * @return this
+     */
+    public WriterOptions withSampler(SamplerConfiguration samplerConf);
+
+    /**
+     * Create an RFile using the same configuration as an Accumulo table. Properties for a table can be obtained by calling
+     * {@link TableOperations#getProperties(String)}.
+     *
+     * @param props
+     *          iterable over Accumulo table key value properties.
+     * @throws IllegalArgumentException
+     *           if sampler was previously specified and the table properties also specify a sampler.
+     * @return this
+     */
+    public WriterOptions withTableProperties(Iterable<Entry<String,String>> props);
+
+    /**
+     * @see #withTableProperties(Iterable)
+     */
+    public WriterOptions withTableProperties(Map<String,String> props);
+
+    /**
+     * @param maxSize
+     *          As keys are added to an RFile, the visibility field is validated. Validating the visibility field requires parsing it. In order to make
+     *          validation faster, previously seen visibilities are cached. This option allows setting the maximum size of this cache.
+     * @return this
+     */
+    public WriterOptions withVisibilityCacheSize(int maxSize);
+
+    /**
+     * @return a new RFileWriter created with the options previously specified.
+     */
+    public RFileWriter build() throws IOException;
+  }
+
+  /**
+   * Entry point for creating a new RFile writer.
+   */
+  public static OutputArguments newWriter() {
+    return new RFileWriterBuilder();
+  }
+}
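
The entry points above compose as a fluent chain: `newWriter().to(...)...build()` and `newScanner().from(...)...build()`. A hedged sketch of round-tripping data through this API (the `RFileWriter.append(Key, Value)` call and try-with-resources support are assumed from the rest of this patch, and the local path is illustrative):

```java
import java.util.Map.Entry;

import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.rfile.RFile;
import org.apache.accumulo.core.client.rfile.RFileWriter;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;

public class RFileRoundTrip {
  public static void main(String[] args) throws Exception {
    String file = "/tmp/example.rf";

    // Write a few key/value pairs. Keys must be appended in sorted order.
    try (RFileWriter writer = RFile.newWriter().to(file).build()) {
      writer.append(new Key("row1", "fam", "qual"), new Value("v1".getBytes()));
      writer.append(new Key("row2", "fam", "qual"), new Value("v2".getBytes()));
    }

    // Read the file back without a live Accumulo instance, caching data blocks.
    Scanner scanner = RFile.newScanner().from(file).withDataCache(10_000_000).build();
    for (Entry<Key,Value> entry : scanner) {
      System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
    scanner.close();
  }
}
```

Because no FileSystem is supplied, `FSConfArgs` falls back to a `Configuration` built from the classpath, so the path resolves against the default filesystem.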
diff --git a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
new file mode 100644
index 0000000..4dfba68
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
@@ -0,0 +1,330 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.rfile;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
+import org.apache.accumulo.core.client.impl.ScannerOptions;
+import org.apache.accumulo.core.client.rfile.RFileScannerBuilder.InputArgs;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Column;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.blockfile.cache.BlockCache;
+import org.apache.accumulo.core.file.blockfile.cache.CacheEntry;
+import org.apache.accumulo.core.file.blockfile.cache.LruBlockCache;
+import org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile;
+import org.apache.accumulo.core.file.rfile.RFile;
+import org.apache.accumulo.core.file.rfile.RFile.Reader;
+import org.apache.accumulo.core.iterators.IteratorAdapter;
+import org.apache.accumulo.core.iterators.IteratorUtil;
+import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.system.MultiIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.LocalityGroupUtil;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.io.Text;
+
+import com.google.common.base.Preconditions;
+
+class RFileScanner extends ScannerOptions implements Scanner {
+
+  private static final byte[] EMPTY_BYTES = new byte[0];
+  private static final Range EMPTY_RANGE = new Range();
+
+  private Range range;
+  private BlockCache dataCache = null;
+  private BlockCache indexCache = null;
+  private Opts opts;
+  private int batchSize = 1000;
+  private long readaheadThreshold = 3;
+
+  private static final long CACHE_BLOCK_SIZE = AccumuloConfiguration.getDefaultConfiguration().getMemoryInBytes(Property.TSERV_DEFAULT_BLOCKSIZE);
+
+  static class Opts {
+    InputArgs in;
+    Authorizations auths = Authorizations.EMPTY;
+    long dataCacheSize;
+    long indexCacheSize;
+    boolean useSystemIterators = true;
+    public HashMap<String,String> tableConfig;
+    Range bounds;
+  }
+
+  // This cache exists as a hack to avoid leaking decompressors. When the RFile code is not given a
+  // cache it reads blocks directly from the decompressor. However if a user does not read all data
+  // for a scan this can leave a BCFile block open and a decompressor allocated.
+  //
+  // By providing a cache to the RFile code it forces each block to be read into memory. When a
+  // block is accessed the entire thing is read into memory immediately allocating and deallocating
+  // a decompressor. If the user does not read all data, no decompressors are left allocated.
+  private static class NoopCache implements BlockCache {
+    @Override
+    public CacheEntry cacheBlock(String blockName, byte[] buf, boolean inMemory) {
+      return null;
+    }
+
+    @Override
+    public CacheEntry cacheBlock(String blockName, byte[] buf) {
+      return null;
+    }
+
+    @Override
+    public CacheEntry getBlock(String blockName) {
+      return null;
+    }
+
+    @Override
+    public long getMaxSize() {
+      return Integer.MAX_VALUE;
+    }
+  }
+
+  RFileScanner(Opts opts) {
+    if (!opts.auths.equals(Authorizations.EMPTY) && !opts.useSystemIterators) {
+      throw new IllegalArgumentException("Set authorizations and specified not to use system iterators");
+    }
+
+    this.opts = opts;
+    if (opts.indexCacheSize > 0) {
+      this.indexCache = new LruBlockCache(opts.indexCacheSize, CACHE_BLOCK_SIZE);
+    } else {
+      this.indexCache = new NoopCache();
+    }
+
+    if (opts.dataCacheSize > 0) {
+      this.dataCache = new LruBlockCache(opts.dataCacheSize, CACHE_BLOCK_SIZE);
+    } else {
+      this.dataCache = new NoopCache();
+    }
+  }
+
+  @Override
+  public synchronized void fetchColumnFamily(Text col) {
+    Preconditions.checkArgument(opts.useSystemIterators, "Can only fetch columns when using system iterators");
+    super.fetchColumnFamily(col);
+  }
+
+  @Override
+  public synchronized void fetchColumn(Text colFam, Text colQual) {
+    Preconditions.checkArgument(opts.useSystemIterators, "Can only fetch columns when using system iterators");
+    super.fetchColumn(colFam, colQual);
+  }
+
+  @Override
+  public void fetchColumn(IteratorSetting.Column column) {
+    Preconditions.checkArgument(opts.useSystemIterators, "Can only fetch columns when using system iterators");
+    super.fetchColumn(column);
+  }
+
+  @Override
+  public void setClassLoaderContext(String classLoaderContext) {
+    throw new UnsupportedOperationException();
+  }
+
+  @Deprecated
+  @Override
+  public void setTimeOut(int timeOut) {
+    if (timeOut == Integer.MAX_VALUE)
+      setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
+    else
+      setTimeout(timeOut, TimeUnit.SECONDS);
+  }
+
+  @Deprecated
+  @Override
+  public int getTimeOut() {
+    long timeout = getTimeout(TimeUnit.SECONDS);
+    if (timeout >= Integer.MAX_VALUE)
+      return Integer.MAX_VALUE;
+    return (int) timeout;
+  }
+
+  @Override
+  public void setRange(Range range) {
+    this.range = range;
+  }
+
+  @Override
+  public Range getRange() {
+    return range;
+  }
+
+  @Override
+  public void setBatchSize(int size) {
+    this.batchSize = size;
+  }
+
+  @Override
+  public int getBatchSize() {
+    return batchSize;
+  }
+
+  @Override
+  public void enableIsolation() {}
+
+  @Override
+  public void disableIsolation() {}
+
+  @Override
+  public synchronized void setReadaheadThreshold(long batches) {
+    Preconditions.checkArgument(batches > 0);
+    readaheadThreshold = batches;
+  }
+
+  @Override
+  public synchronized long getReadaheadThreshold() {
+    return readaheadThreshold;
+  }
+
+  @Override
+  public Authorizations getAuthorizations() {
+    return opts.auths;
+  }
+
+  @Override
+  public void addScanIterator(IteratorSetting cfg) {
+    super.addScanIterator(cfg);
+  }
+
+  @Override
+  public void removeScanIterator(String iteratorName) {
+    super.removeScanIterator(iteratorName);
+  }
+
+  @Override
+  public void updateScanIteratorOption(String iteratorName, String key, String value) {
+    super.updateScanIteratorOption(iteratorName, key, value);
+  }
+
+  private class IterEnv extends BaseIteratorEnvironment {
+    @Override
+    public IteratorScope getIteratorScope() {
+      return IteratorScope.scan;
+    }
+
+    @Override
+    public boolean isFullMajorCompaction() {
+      return false;
+    }
+
+    @Override
+    public Authorizations getAuthorizations() {
+      return opts.auths;
+    }
+
+    @Override
+    public boolean isSamplingEnabled() {
+      return RFileScanner.this.getSamplerConfiguration() != null;
+    }
+
+    @Override
+    public SamplerConfiguration getSamplerConfiguration() {
+      return RFileScanner.this.getSamplerConfiguration();
+    }
+  }
+
+  @Override
+  public Iterator<Entry<Key,Value>> iterator() {
+    try {
+      RFileSource[] sources = opts.in.getSources();
+      List<SortedKeyValueIterator<Key,Value>> readers = new ArrayList<>(sources.length);
+      for (int i = 0; i < sources.length; i++) {
+        FSDataInputStream inputStream = (FSDataInputStream) sources[i].getInputStream();
+        readers.add(new RFile.Reader(new CachableBlockFile.Reader(inputStream, sources[i].getLength(), opts.in.getConf(), dataCache, indexCache,
+            AccumuloConfiguration.getDefaultConfiguration())));
+      }
+
+      if (getSamplerConfiguration() != null) {
+        for (int i = 0; i < readers.size(); i++) {
+          readers.set(i, ((Reader) readers.get(i)).getSample(new SamplerConfigurationImpl(getSamplerConfiguration())));
+        }
+      }
+
+      SortedKeyValueIterator<Key,Value> iterator;
+      if (opts.bounds != null) {
+        iterator = new MultiIterator(readers, opts.bounds);
+      } else {
+        iterator = new MultiIterator(readers, false);
+      }
+
+      Set<ByteSequence> families = Collections.emptySet();
+
+      if (opts.useSystemIterators) {
+        SortedSet<Column> cols = this.getFetchedColumns();
+        families = LocalityGroupUtil.families(cols);
+        iterator = IteratorUtil.setupSystemScanIterators(iterator, cols, getAuthorizations(), EMPTY_BYTES);
+      }
+
+      try {
+        if (opts.tableConfig != null && opts.tableConfig.size() > 0) {
+          ConfigurationCopy conf = new ConfigurationCopy(opts.tableConfig);
+          iterator = IteratorUtil.loadIterators(IteratorScope.scan, iterator, null, conf, serverSideIteratorList, serverSideIteratorOptions, new IterEnv());
+        } else {
+          iterator = IteratorUtil.loadIterators(iterator, serverSideIteratorList, serverSideIteratorOptions, new IterEnv(), false, null);
+        }
+      } catch (IOException e) {
+        throw new RuntimeException(e);
+      }
+
+      iterator.seek(getRange() == null ? EMPTY_RANGE : getRange(), families, !families.isEmpty());
+      return new IteratorAdapter(iterator);
+
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public void close() {
+    if (dataCache instanceof LruBlockCache) {
+      ((LruBlockCache) dataCache).shutdown();
+    }
+
+    if (indexCache instanceof LruBlockCache) {
+      ((LruBlockCache) indexCache).shutdown();
+    }
+
+    try {
+      for (RFileSource source : opts.in.getSources()) {
+        source.getInputStream().close();
+      }
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScannerBuilder.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScannerBuilder.java
new file mode 100644
index 0000000..3a55172
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScannerBuilder.java
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.rfile;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.rfile.RFile.ScannerFSOptions;
+import org.apache.accumulo.core.client.rfile.RFile.ScannerOptions;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import com.google.common.base.Preconditions;
+
+class RFileScannerBuilder implements RFile.InputArguments, RFile.ScannerFSOptions {
+
+  static class InputArgs extends FSConfArgs {
+    private Path[] paths;
+    private RFileSource[] sources;
+
+    InputArgs(String... files) {
+      this.paths = new Path[files.length];
+      for (int i = 0; i < files.length; i++) {
+        this.paths[i] = new Path(files[i]);
+      }
+    }
+
+    InputArgs(RFileSource... sources) {
+      this.sources = sources;
+    }
+
+    RFileSource[] getSources() throws IOException {
+      if (sources == null) {
+        sources = new RFileSource[paths.length];
+        for (int i = 0; i < paths.length; i++) {
+          sources[i] = new RFileSource(getFileSystem().open(paths[i]), getFileSystem().getFileStatus(paths[i]).getLen());
+        }
+      } else {
+        for (int i = 0; i < sources.length; i++) {
+          if (!(sources[i].getInputStream() instanceof FSDataInputStream)) {
+            sources[i] = new RFileSource(new FSDataInputStream(sources[i].getInputStream()), sources[i].getLength());
+          }
+        }
+      }
+
+      return sources;
+    }
+  }
+
+  private RFileScanner.Opts opts = new RFileScanner.Opts();
+
+  @Override
+  public ScannerOptions withoutSystemIterators() {
+    opts.useSystemIterators = false;
+    return this;
+  }
+
+  @Override
+  public ScannerOptions withAuthorizations(Authorizations auths) {
+    Objects.requireNonNull(auths);
+    opts.auths = auths;
+    return this;
+  }
+
+  @Override
+  public ScannerOptions withDataCache(long cacheSize) {
+    Preconditions.checkArgument(cacheSize > 0);
+    opts.dataCacheSize = cacheSize;
+    return this;
+  }
+
+  @Override
+  public ScannerOptions withIndexCache(long cacheSize) {
+    Preconditions.checkArgument(cacheSize > 0);
+    opts.indexCacheSize = cacheSize;
+    return this;
+  }
+
+  @Override
+  public Scanner build() {
+    return new RFileScanner(opts);
+  }
+
+  @Override
+  public ScannerOptions withFileSystem(FileSystem fs) {
+    Objects.requireNonNull(fs);
+    opts.in.fs = fs;
+    return this;
+  }
+
+  @Override
+  public ScannerOptions from(RFileSource... inputs) {
+    opts.in = new InputArgs(inputs);
+    return this;
+  }
+
+  @Override
+  public ScannerFSOptions from(String... files) {
+    opts.in = new InputArgs(files);
+    return this;
+  }
+
+  @Override
+  public ScannerOptions withTableProperties(Iterable<Entry<String,String>> tableConfig) {
+    Objects.requireNonNull(tableConfig);
+    this.opts.tableConfig = new HashMap<>();
+    for (Entry<String,String> entry : tableConfig) {
+      this.opts.tableConfig.put(entry.getKey(), entry.getValue());
+    }
+    return this;
+  }
+
+  @Override
+  public ScannerOptions withTableProperties(Map<String,String> tableConfig) {
+    Objects.requireNonNull(tableConfig);
+    this.opts.tableConfig = new HashMap<>(tableConfig);
+    return this;
+  }
+
+  @Override
+  public ScannerOptions withBounds(Range range) {
+    Objects.requireNonNull(range);
+    this.opts.bounds = range;
+    return this;
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileSource.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileSource.java
new file mode 100644
index 0000000..21298c3
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileSource.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.rfile;
+
+import java.io.InputStream;
+
+/**
+ * RFile metadata is stored at the end of the file. In order to read an RFile, its length must be known. This class provides a way to pass an InputStream and
+ * length for reading an RFile.
+ *
+ * @since 1.8.0
+ */
+public class RFileSource {
+  private final InputStream in;
+  private final long len;
+
+  public RFileSource(InputStream in, long len) {
+    this.in = in;
+    this.len = len;
+  }
+
+  public InputStream getInputStream() {
+    return in;
+  }
+
+  public long getLength() {
+    return len;
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileWriter.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileWriter.java
new file mode 100644
index 0000000..9995888
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileWriter.java
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.rfile;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.Set;
+
+import org.apache.accumulo.core.client.admin.TableOperations;
+import org.apache.accumulo.core.data.ArrayByteSequence;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileSKVWriter;
+import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.commons.collections.map.LRUMap;
+
+import com.google.common.base.Preconditions;
+
+// The formatter was adding spaces that checkstyle did not like, so the formatter is turned off for this comment.
+//@formatter:off
+/**
+ * This class provides an API for writing RFiles. It can be used to create files for bulk import into Accumulo using
+ * {@link TableOperations#importDirectory(String, String, String, boolean)}
+ *
+ * <p>
+ * A RFileWriter has the following constraints. Violating these constraints will result in runtime exceptions.
+ *
+ * <ul>
+ * <li>Keys must be appended in sorted order within a locality group.</li>
+ * <li>Locality groups must have a mutually exclusive set of column families.</li>
+ * <li>The default locality group must be started last.</li>
+ * </ul>
+ *
+ * <p>
+ * Below is an example of using RFileWriter
+ *
+ * <pre>
+ * <code>
+ *     {@code Iterable<Entry<Key, Value>>} localityGroup1Data = ...
+ *     {@code Iterable<Entry<Key, Value>>} localityGroup2Data = ...
+ *     {@code Iterable<Entry<Key, Value>>} defaultGroupData = ...
+ *
+ *     try(RFileWriter writer = RFile.newWriter().to(file).build()) {
+ *
+ *       // Start a locality group before appending data.
+ *       writer.startNewLocalityGroup("groupA", "columnFam1", "columnFam2");
+ *       // Append data to the locality group that was started above. Must append in sorted order.
+ *       writer.append(localityGroup1Data);
+ *
+ *       // Add another locality group.
+ *       writer.startNewLocalityGroup("groupB", "columnFam3", "columnFam4");
+ *       writer.append(localityGroup2Data);
+ *
+ *       // The default locality group must be started last. The column families for the default group do not need to be specified.
+ *       writer.startDefaultLocalityGroup();
+ *       // Data appended here cannot contain any column families specified in previous locality groups.
+ *       writer.append(defaultGroupData);
+ *
+ *       // This is a try-with-resources so the writer is closed here at the end of the code block.
+ *     }
+ * </code>
+ * </pre>
+ *
+ * <p>
+ * Create instances by calling {@link RFile#newWriter()}.
+ *
+ * @since 1.8.0
+ */
+// @formatter:on
+public class RFileWriter implements AutoCloseable {
+
+  private FileSKVWriter writer;
+  private final LRUMap validVisibilities;
+  private boolean startedLG;
+  private boolean startedDefaultLG;
+
+  RFileWriter(FileSKVWriter fileSKVWriter, int visCacheSize) {
+    this.writer = fileSKVWriter;
+    this.validVisibilities = new LRUMap(visCacheSize);
+  }
+
+  private void _startNewLocalityGroup(String name, Set<ByteSequence> columnFamilies) throws IOException {
+    Preconditions.checkState(!startedDefaultLG, "Cannot start a locality group after starting the default locality group");
+    writer.startNewLocalityGroup(name, columnFamilies);
+    startedLG = true;
+  }
+
+  /**
+   * Before appending any data, a locality group must be started. The default locality group must be started last.
+   *
+   * @param name
+   *          locality group name, used for informational purposes
+   * @param families
+   *          the column families the locality group can contain
+   *
+   * @throws IllegalStateException
+   *           When default locality group already started.
+   */
+  public void startNewLocalityGroup(String name, List<byte[]> families) throws IOException {
+    HashSet<ByteSequence> fams = new HashSet<>();
+    for (byte[] family : families) {
+      fams.add(new ArrayByteSequence(family));
+    }
+    _startNewLocalityGroup(name, fams);
+  }
+
+  /**
+   * See javadoc for {@link #startNewLocalityGroup(String, List)}
+   *
+   * @throws IllegalStateException
+   *           When default locality group already started.
+   */
+  public void startNewLocalityGroup(String name, byte[]... families) throws IOException {
+    startNewLocalityGroup(name, Arrays.asList(families));
+  }
+
+  /**
+   * See javadoc for {@link #startNewLocalityGroup(String, List)}.
+   *
+   * @param families
+   *          will be encoded using UTF-8
+   *
+   * @throws IllegalStateException
+   *           When default locality group already started.
+   */
+  public void startNewLocalityGroup(String name, Set<String> families) throws IOException {
+    HashSet<ByteSequence> fams = new HashSet<>();
+    for (String family : families) {
+      fams.add(new ArrayByteSequence(family));
+    }
+    _startNewLocalityGroup(name, fams);
+  }
+
+  /**
+   * See javadoc for {@link #startNewLocalityGroup(String, List)}.
+   *
+   * @param families
+   *          will be encoded using UTF-8
+   *
+   * @throws IllegalStateException
+   *           When default locality group already started.
+   */
+  public void startNewLocalityGroup(String name, String... families) throws IOException {
+    HashSet<ByteSequence> fams = new HashSet<>();
+    for (String family : families) {
+      fams.add(new ArrayByteSequence(family));
+    }
+    _startNewLocalityGroup(name, fams);
+  }
+
+  /**
+   * Starts the default locality group, in which the column families do not need to be specified. The default locality group must be started after all other
+   * locality groups. Data appended to it cannot contain column families used in a previous locality group. If no locality groups were started, then the first
+   * append will start the default locality group.
+   *
+   * @throws IllegalStateException
+   *           When default locality group already started.
+   */
+
+  public void startDefaultLocalityGroup() throws IOException {
+    Preconditions.checkState(!startedDefaultLG);
+    writer.startDefaultLocalityGroup();
+    startedDefaultLG = true;
+    startedLG = true;
+  }
+
+  /**
+   * Append the key and value to the last locality group that was started. If no locality group was started, then the default group will automatically be
+   * started.
+   *
+   * @param key
+   *          This key must be greater than or equal to the last key appended. For non-default locality groups, the key's column family must be one of the
+   *          column families specified when calling startNewLocalityGroup(). Must be non-null.
+   * @param val
+   *          value to append, must be non-null.
+   *
+   * @throws IllegalArgumentException
+   *           This is thrown when data is appended out of order OR when the key contains an invalid visibility OR when a column family is not valid for a
+   *           locality group.
+   */
+  public void append(Key key, Value val) throws IOException {
+    if (!startedLG) {
+      startDefaultLocalityGroup();
+    }
+    Boolean wasChecked = (Boolean) validVisibilities.get(key.getColumnVisibilityData());
+    if (wasChecked == null) {
+      byte[] cv = key.getColumnVisibilityData().toArray();
+      new ColumnVisibility(cv);
+      validVisibilities.put(new ArrayByteSequence(Arrays.copyOf(cv, cv.length)), Boolean.TRUE);
+    }
+    writer.append(key, val);
+  }
+
+  /**
+   * Append the keys and values to the last locality group that was started.
+   *
+   * @param keyValues
+   *          The keys must be in sorted order. The first key returned by the iterable must be greater than or equal to the last key appended. For non-default
+   *          locality groups, each key's column family must be one of the column families specified when calling startNewLocalityGroup(). Must be non-null. If
+   *          no locality group was started, then the default group will automatically be started.
+   *
+   * @throws IllegalArgumentException
+   *           This is thrown when data is appended out of order OR when the key contains an invalid visibility OR when a column family is not valid for a
+   *           locality group.
+   */
+  public void append(Iterable<Entry<Key,Value>> keyValues) throws IOException {
+    for (Entry<Key,Value> entry : keyValues) {
+      append(entry.getKey(), entry.getValue());
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    writer.close();
+  }
+}
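The visibility check in append() above only parses a ColumnVisibility the first time it is seen, caching the result in an LRU map. A minimal sketch of that access-ordered LRU idea using only the JDK's LinkedHashMap (this stand-in class is illustrative; Accumulo uses commons-collections' LRUMap):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Access-ordered map that evicts the least recently used entry once the
// cache exceeds maxSize -- the role the validVisibilities cache plays in
// RFileWriter, where values mark visibilities already validated.
class VisibilityCache extends LinkedHashMap<String,Boolean> {
  private final int maxSize;

  VisibilityCache(int maxSize) {
    super(16, 0.75f, true); // true = order entries by access, not insertion
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<String,Boolean> eldest) {
    return size() > maxSize;
  }
}
```

With a cache of size two, touching "A" before inserting "C" makes "B" the eviction victim, so repeated visibilities stay cheap to re-check while rare ones age out.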
diff --git a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileWriterBuilder.java b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileWriterBuilder.java
new file mode 100644
index 0000000..667cbef
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileWriterBuilder.java
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.rfile;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+
+import org.apache.accumulo.core.client.rfile.RFile.WriterFSOptions;
+import org.apache.accumulo.core.client.rfile.RFile.WriterOptions;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.file.FileOperations;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Iterables;
+
+class RFileWriterBuilder implements RFile.OutputArguments, RFile.WriterFSOptions {
+
+  private static class OutputArgs extends FSConfArgs {
+    private Path path;
+    private OutputStream out;
+
+    OutputArgs(String filename) {
+      this.path = new Path(filename);
+    }
+
+    OutputArgs(OutputStream out) {
+      this.out = out;
+    }
+
+    OutputStream getOutputStream() {
+      return out;
+    }
+  }
+
+  private OutputArgs out;
+  private SamplerConfiguration sampler = null;
+  private Map<String,String> tableConfig = Collections.emptyMap();
+  private int visCacheSize = 1000;
+
+  @Override
+  public WriterOptions withSampler(SamplerConfiguration samplerConf) {
+    Objects.requireNonNull(samplerConf);
+    SamplerConfigurationImpl.checkDisjoint(tableConfig, samplerConf);
+    this.sampler = samplerConf;
+    return this;
+  }
+
+  @Override
+  public RFileWriter build() throws IOException {
+    FileOperations fileops = FileOperations.getInstance();
+    AccumuloConfiguration acuconf = AccumuloConfiguration.getDefaultConfiguration();
+    HashMap<String,String> userProps = new HashMap<>();
+    if (sampler != null) {
+      userProps.putAll(new SamplerConfigurationImpl(sampler).toTablePropertiesMap());
+    }
+    userProps.putAll(tableConfig);
+
+    if (userProps.size() > 0) {
+      acuconf = new ConfigurationCopy(Iterables.concat(acuconf, userProps.entrySet()));
+    }
+
+    if (out.getOutputStream() != null) {
+      FSDataOutputStream fsdo;
+      if (out.getOutputStream() instanceof FSDataOutputStream) {
+        fsdo = (FSDataOutputStream) out.getOutputStream();
+      } else {
+        fsdo = new FSDataOutputStream(out.getOutputStream(), new FileSystem.Statistics("foo"));
+      }
+      return new RFileWriter(fileops.newWriterBuilder().forOutputStream(".rf", fsdo, out.getConf()).withTableConfiguration(acuconf).build(), visCacheSize);
+    } else {
+      return new RFileWriter(fileops.newWriterBuilder().forFile(out.path.toString(), out.getFileSystem(), out.getConf()).withTableConfiguration(acuconf)
+          .build(), visCacheSize);
+    }
+  }
+
+  @Override
+  public WriterOptions withFileSystem(FileSystem fs) {
+    Objects.requireNonNull(fs);
+    out.fs = fs;
+    return this;
+  }
+
+  @Override
+  public WriterFSOptions to(String filename) {
+    Objects.requireNonNull(filename);
+    this.out = new OutputArgs(filename);
+    return this;
+  }
+
+  @Override
+  public WriterOptions to(OutputStream out) {
+    Objects.requireNonNull(out);
+    this.out = new OutputArgs(out);
+    return this;
+  }
+
+  @Override
+  public WriterOptions withTableProperties(Iterable<Entry<String,String>> tableConfig) {
+    Objects.requireNonNull(tableConfig);
+    HashMap<String,String> cfg = new HashMap<>();
+    for (Entry<String,String> entry : tableConfig) {
+      cfg.put(entry.getKey(), entry.getValue());
+    }
+
+    SamplerConfigurationImpl.checkDisjoint(cfg, sampler);
+    this.tableConfig = cfg;
+    return this;
+  }
+
+  @Override
+  public WriterOptions withTableProperties(Map<String,String> tableConfig) {
+    Objects.requireNonNull(tableConfig);
+    return withTableProperties(tableConfig.entrySet());
+  }
+
+  @Override
+  public WriterOptions withVisibilityCacheSize(int maxSize) {
+    Preconditions.checkArgument(maxSize > 0);
+    this.visCacheSize = maxSize;
+    return this;
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/sample/AbstractHashSampler.java b/core/src/main/java/org/apache/accumulo/core/client/sample/AbstractHashSampler.java
new file mode 100644
index 0000000..5c8176a
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/sample/AbstractHashSampler.java
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.sample;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static java.util.Objects.requireNonNull;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Set;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.sample.impl.DataoutputHasher;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.hash.HashFunction;
+import com.google.common.hash.Hasher;
+import com.google.common.hash.Hashing;
+
+/**
+ * A base class that can be used to create Samplers based on hashing. This class offers consistent options for configuring the hash function. The subclass
+ * decides which parts of the key to hash.
+ *
+ * <p>
+ * This class supports two options passed into {@link #init(SamplerConfiguration)}. One option is {@code hasher}, which specifies a hashing algorithm. Valid
+ * values for this option are {@code md5}, {@code sha1}, and {@code murmur3_32}. If you are not sure, then choose {@code murmur3_32}.
+ *
+ * <p>
+ * The second option is {@code modulus} which can have any positive integer as a value.
+ *
+ * <p>
+ * Any data where {@code hash(data) % modulus == 0} will be selected for the sample.
+ *
+ * @since 1.8.0
+ */
+
+public abstract class AbstractHashSampler implements Sampler {
+
+  private HashFunction hashFunction;
+  private int modulus;
+
+  private static final Set<String> VALID_OPTIONS = ImmutableSet.of("hasher", "modulus");
+
+  /**
+   * Subclasses with options should override this method and return true if the option is valid for the subclass or if {@code super.isValidOption(opt)} returns
+   * true.
+   */
+
+  protected boolean isValidOption(String option) {
+    return VALID_OPTIONS.contains(option);
+  }
+
+  /**
+   * Subclasses with options should override this method and call {@code super.init(config)}.
+   */
+
+  @Override
+  public void init(SamplerConfiguration config) {
+    String hasherOpt = config.getOptions().get("hasher");
+    String modulusOpt = config.getOptions().get("modulus");
+
+    requireNonNull(hasherOpt, "Hasher not specified");
+    requireNonNull(modulusOpt, "Modulus not specified");
+
+    for (String option : config.getOptions().keySet()) {
+      checkArgument(isValidOption(option), "Unknown option : %s", option);
+    }
+
+    switch (hasherOpt) {
+      case "murmur3_32":
+        hashFunction = Hashing.murmur3_32();
+        break;
+      case "md5":
+        hashFunction = Hashing.md5();
+        break;
+      case "sha1":
+        hashFunction = Hashing.sha1();
+        break;
+      default:
+        throw new IllegalArgumentException("Unknown hasher " + hasherOpt);
+    }
+
+    modulus = Integer.parseInt(modulusOpt);
+  }
+
+  /**
+   * Subclass must override this method and hash some portion of the key.
+   *
+   * @param hasher
+   *          Data written to this will be used to compute the hash for the key.
+   */
+  protected abstract void hash(DataOutput hasher, Key k) throws IOException;
+
+  @Override
+  public boolean accept(Key k) {
+    Hasher hasher = hashFunction.newHasher();
+    try {
+      hash(new DataoutputHasher(hasher), k);
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+    return hasher.hash().asInt() % modulus == 0;
+  }
+}
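The accept() logic above reduces to "hash the chosen bytes, keep the key when the hash is divisible by the modulus". A self-contained sketch of that decision using the JDK's CRC32 in place of Guava's hash functions (CRC32 is illustrative only; it is not one of the hashers this class accepts):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Deterministic sampler: selects roughly 1/modulus of distinct inputs,
// and always makes the same decision for the same input.
class HashModSampler {
  private final int modulus;

  HashModSampler(int modulus) {
    this.modulus = modulus;
  }

  boolean accept(String data) {
    CRC32 crc = new CRC32();
    crc.update(data.getBytes(StandardCharsets.UTF_8));
    return crc.getValue() % modulus == 0;
  }
}
```

Because the decision is a pure function of the hashed bytes, every rfile built with the same configuration agrees on which keys belong to the sample, which is what makes sample scans across files consistent.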
diff --git a/core/src/main/java/org/apache/accumulo/core/client/sample/RowColumnSampler.java b/core/src/main/java/org/apache/accumulo/core/client/sample/RowColumnSampler.java
new file mode 100644
index 0000000..a0482d9
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/sample/RowColumnSampler.java
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.sample;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Set;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+
+import com.google.common.collect.ImmutableSet;
+
+/**
+ * This sampler can hash any subset of a Key's fields. The fields that are hashed for the sample are determined by the configuration options passed in
+ * {@link #init(SamplerConfiguration)}. The following key values are valid options.
+ *
+ * <ul>
+ * <li>row=true|false
+ * <li>family=true|false
+ * <li>qualifier=true|false
+ * <li>visibility=true|false
+ * </ul>
+ *
+ * <p>
+ * If not specified in the options, fields default to false.
+ *
+ * <p>
+ * To determine what options are valid for hashing see {@link AbstractHashSampler}
+ *
+ * <p>
+ * To configure Accumulo to generate sample data on one thousandth of the column qualifiers, the following SamplerConfiguration could be created and used to
+ * configure a table.
+ *
+ * <p>
+ * {@code new SamplerConfiguration(RowColumnSampler.class.getName()).setOptions(ImmutableMap.of("hasher","murmur3_32","modulus","1009","qualifier","true"))}
+ *
+ * <p>
+ * With this configuration, if a column qualifier is selected then all key values containing that column qualifier will end up in the sample data.
+ *
+ * @since 1.8.0
+ */
+
+public class RowColumnSampler extends AbstractHashSampler {
+
+  private boolean row = true;
+  private boolean family = true;
+  private boolean qualifier = true;
+  private boolean visibility = true;
+
+  private static final Set<String> VALID_OPTIONS = ImmutableSet.of("row", "family", "qualifier", "visibility");
+
+  private boolean hashField(SamplerConfiguration config, String field) {
+    String optValue = config.getOptions().get(field);
+    if (optValue != null) {
+      return Boolean.parseBoolean(optValue);
+    }
+
+    return false;
+  }
+
+  @Override
+  protected boolean isValidOption(String option) {
+    return super.isValidOption(option) || VALID_OPTIONS.contains(option);
+  }
+
+  @Override
+  public void init(SamplerConfiguration config) {
+    super.init(config);
+
+    row = hashField(config, "row");
+    family = hashField(config, "family");
+    qualifier = hashField(config, "qualifier");
+    visibility = hashField(config, "visibility");
+
+    if (!row && !family && !qualifier && !visibility) {
+      throw new IllegalStateException("Must hash at least one key field");
+    }
+  }
+
+  private void putByteSequence(ByteSequence data, DataOutput hasher) throws IOException {
+    hasher.write(data.getBackingArray(), data.offset(), data.length());
+  }
+
+  @Override
+  protected void hash(DataOutput hasher, Key k) throws IOException {
+    if (row) {
+      putByteSequence(k.getRowData(), hasher);
+    }
+
+    if (family) {
+      putByteSequence(k.getColumnFamilyData(), hasher);
+    }
+
+    if (qualifier) {
+      putByteSequence(k.getColumnQualifierData(), hasher);
+    }
+
+    if (visibility) {
+      putByteSequence(k.getColumnVisibilityData(), hasher);
+    }
+  }
+}
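Because the hash input is only the fields enabled in the options, every key sharing those field values gets the same accept/reject decision, which is why selecting a qualifier pulls in all key values containing it. A stdlib-only sketch of that grouping property, hashing only the qualifier (CRC32 stands in for the configured hasher; the class is hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Only the qualifier feeds the hash, so the row has no influence on the
// decision: entries with the same qualifier are all in or all out.
class QualifierSampler {
  private final int modulus;

  QualifierSampler(int modulus) {
    this.modulus = modulus;
  }

  boolean accept(String row, String qualifier) {
    CRC32 crc = new CRC32();
    crc.update(qualifier.getBytes(StandardCharsets.UTF_8));
    return crc.getValue() % modulus == 0;
  }
}
```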
diff --git a/core/src/main/java/org/apache/accumulo/core/client/sample/RowSampler.java b/core/src/main/java/org/apache/accumulo/core/client/sample/RowSampler.java
new file mode 100644
index 0000000..107ba49
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/sample/RowSampler.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.sample;
+
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+
+/**
+ * Builds a sample based on entire rows. If a row is selected for the sample, then all of its columns will be included.
+ *
+ * <p>
+ * To determine what options are valid for hashing see {@link AbstractHashSampler}. This class offers no additional options; it always hashes the row.
+ *
+ * <p>
+ * To configure Accumulo to generate sample data on one thousandth of the rows, the following SamplerConfiguration could be created and used to configure a
+ * table.
+ *
+ * <p>
+ * {@code new SamplerConfiguration(RowSampler.class.getName()).setOptions(ImmutableMap.of("hasher","murmur3_32","modulus","1009"))}
+ *
+ * @since 1.8.0
+ */
+
+public class RowSampler extends AbstractHashSampler {
+
+  @Override
+  protected void hash(DataOutput hasher, Key k) throws IOException {
+    ByteSequence row = k.getRowData();
+    hasher.write(row.getBackingArray(), row.offset(), row.length());
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/sample/Sampler.java b/core/src/main/java/org/apache/accumulo/core/client/sample/Sampler.java
new file mode 100644
index 0000000..8b4db95
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/sample/Sampler.java
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.sample;
+
+import org.apache.accumulo.core.data.Key;
+
+/**
+ * A function that decides which key values are stored in a table's sample. As Accumulo compacts data and creates rfiles, it uses a Sampler to decide what to
+ * store in each rfile's sample section. The class name of the Sampler and the Sampler's configuration are stored in each rfile. A scan of a table's sample
+ * will only succeed if all rfiles were created with the same sampler and sampler configuration.
+ *
+ * <p>
+ * Since the decisions that a Sampler makes are persisted, the behavior of a Sampler for a given configuration should always be the same. One way to offer new
+ * behavior is to offer new options, while still supporting old behavior with a Sampler's existing options.
+ *
+ * <p>
+ * Ideally a sampler that selects a Key k1 would also select updates for k1. For example if a Sampler selects :
+ * {@code row='000989' family='name' qualifier='last' visibility='ADMIN' time=9 value='Doe'}, it would be nice if it also selected :
+ * {@code row='000989' family='name' qualifier='last' visibility='ADMIN' time=20 value='Dough'}. Using hash and modulo on the key fields is a good way to
+ * accomplish this and {@link AbstractHashSampler} provides a good basis for implementation.
+ *
+ * @since 1.8.0
+ */
+public interface Sampler {
+
+  /**
+   * An implementation of Sampler must have a no-arg constructor. After construction, this method is called once to initialize a sampler before it is used.
+   *
+   * @param config
+   *          Configuration options for a sampler.
+   */
+  void init(SamplerConfiguration config);
+
+  /**
+   * @param k
+   *          A key that was written to an rfile.
+   * @return True if the key (and its associated value) should be stored in the rfile's sample. Return false if it should not be included.
+   */
+  boolean accept(Key k);
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/client/sample/SamplerConfiguration.java b/core/src/main/java/org/apache/accumulo/core/client/sample/SamplerConfiguration.java
new file mode 100644
index 0000000..e774ec5
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/client/sample/SamplerConfiguration.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.sample;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static java.util.Objects.requireNonNull;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+
+/**
+ * This class encapsulates the configuration and options needed to set up and use sampling.
+ *
+ * @since 1.8.0
+ */
+
+public class SamplerConfiguration {
+
+  private String className;
+  private Map<String,String> options = new HashMap<>();
+
+  public SamplerConfiguration(Class<? extends Sampler> samplerClass) {
+    this(samplerClass.getName());
+  }
+
+  public SamplerConfiguration(String samplerClassName) {
+    requireNonNull(samplerClassName);
+    this.className = samplerClassName;
+  }
+
+  public SamplerConfiguration setOptions(Map<String,String> options) {
+    requireNonNull(options);
+    this.options = new HashMap<>(options.size());
+
+    for (Entry<String,String> entry : options.entrySet()) {
+      addOption(entry.getKey(), entry.getValue());
+    }
+
+    return this;
+  }
+
+  public SamplerConfiguration addOption(String option, String value) {
+    checkArgument(option != null, "option is null");
+    checkArgument(value != null, "value is null");
+    this.options.put(option, value);
+    return this;
+  }
+
+  public Map<String,String> getOptions() {
+    return Collections.unmodifiableMap(options);
+  }
+
+  public String getSamplerClassName() {
+    return className;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (o instanceof SamplerConfiguration) {
+      SamplerConfiguration osc = (SamplerConfiguration) o;
+
+      return className.equals(osc.className) && options.equals(osc.options);
+    }
+
+    return false;
+  }
+
+  @Override
+  public int hashCode() {
+    return className.hashCode() + 31 * options.hashCode();
+  }
+
+  @Override
+  public String toString() {
+    return className + " " + options;
+  }
+}
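One design point worth noting in `SamplerConfiguration`: `setOptions` copies the caller's map and `getOptions` returns an unmodifiable view, so the configuration cannot be mutated behind its back. A minimal self-contained sketch of that defensive-copy pattern (plain Java, no Accumulo classes):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Demonstrates the defensive-copy pattern SamplerConfiguration uses:
// setOptions copies the caller's map, and getOptions hands back an
// unmodifiable view, so callers cannot mutate internal state.
public class OptionsCopyDemo {
  private Map<String,String> options = new HashMap<>();

  public OptionsCopyDemo setOptions(Map<String,String> opts) {
    this.options = new HashMap<>(opts); // copy, don't alias
    return this;
  }

  public Map<String,String> getOptions() {
    return Collections.unmodifiableMap(options);
  }

  public static void main(String[] args) {
    Map<String,String> callerMap = new HashMap<>();
    callerMap.put("modulus", "1009");
    OptionsCopyDemo cfg = new OptionsCopyDemo().setOptions(callerMap);
    callerMap.put("extra", "ignored"); // later mutation is not seen
    System.out.println(cfg.getOptions().size()); // prints 1
  }
}
```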
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
index 5c20555..a623e0d 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
@@ -129,7 +129,7 @@
   class Properties implements Destroyable, Map<String,char[]> {
 
     private boolean destroyed = false;
-    private HashMap<String,char[]> map = new HashMap<String,char[]>();
+    private HashMap<String,char[]> map = new HashMap<>();
 
     private void checkDestroyed() {
       if (destroyed)
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java
index a34aadf..5ac6f02 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/CredentialProviderToken.java
@@ -75,7 +75,7 @@
 
   @Override
   public Set<TokenProperty> getProperties() {
-    LinkedHashSet<TokenProperty> properties = new LinkedHashSet<TokenProperty>();
+    LinkedHashSet<TokenProperty> properties = new LinkedHashSet<>();
     // Neither name or CPs are sensitive
     properties.add(new TokenProperty(NAME_PROPERTY, "Alias to extract from CredentialProvider", false));
     properties.add(new TokenProperty(CREDENTIAL_PROVIDERS_PROPERTY, "Comma separated list of URLs defining CredentialProvider(s)", false));
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java
index 284a838..1a4869d 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/KerberosToken.java
@@ -59,6 +59,20 @@
   }
 
   /**
+   * Creates a Kerberos token for the specified principal using the provided keytab. The principal and keytab combination is verified by attempting to log in.
+   * <p>
+   * This constructor does not have any side effects.
+   *
+   * @param principal
+   *          The Kerberos principal
+   * @param keytab
+   *          A keytab file containing the principal's credentials.
+   */
+  public KerberosToken(String principal, File keytab) throws IOException {
+    this(principal, keytab, false);
+  }
+
+  /**
    * Creates a token and logs in via {@link UserGroupInformation} using the provided principal and keytab. A key for the principal must exist in the keytab,
    * otherwise login will fail.
    *
@@ -68,7 +82,9 @@
    *          A keytab file
    * @param replaceCurrentUser
    *          Should the current Hadoop user be replaced with this user
+   * @deprecated since 1.8.0; use {@link #KerberosToken(String, File)} instead
    */
+  @Deprecated
   public KerberosToken(String principal, File keytab, boolean replaceCurrentUser) throws IOException {
     requireNonNull(principal, "Principal was null");
     requireNonNull(keytab, "Keytab was null");
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/PasswordToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/PasswordToken.java
index 9cbf914..f4ea78e 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/PasswordToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/PasswordToken.java
@@ -150,7 +150,7 @@
 
   @Override
   public Set<TokenProperty> getProperties() {
-    Set<TokenProperty> internal = new LinkedHashSet<TokenProperty>();
+    Set<TokenProperty> internal = new LinkedHashSet<>();
     internal.add(new TokenProperty("password", "the password for the principal", true));
     return internal;
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/compaction/CompactionSettings.java b/core/src/main/java/org/apache/accumulo/core/compaction/CompactionSettings.java
index 43f8c0f..1c5369e 100644
--- a/core/src/main/java/org/apache/accumulo/core/compaction/CompactionSettings.java
+++ b/core/src/main/java/org/apache/accumulo/core/compaction/CompactionSettings.java
@@ -21,6 +21,7 @@
 
 public enum CompactionSettings {
 
+  SF_NO_SAMPLE(new NullType()),
   SF_GT_ESIZE_OPT(new SizeType()),
   SF_LT_ESIZE_OPT(new SizeType()),
   SF_NAME_RE_OPT(new PatternType()),
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/main/java/org/apache/accumulo/core/compaction/NullType.java
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to core/src/main/java/org/apache/accumulo/core/compaction/NullType.java
index 01f5fa8..fe148ae 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/main/java/org/apache/accumulo/core/compaction/NullType.java
@@ -14,19 +14,16 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+package org.apache.accumulo.core.compaction;
 
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
+import static com.google.common.base.Preconditions.checkArgument;
 
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
+public class NullType implements Type {
+  @Override
+  public String convert(String str) {
+    checkArgument(str == null);
+    return "";
   }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
index 9ff4185..23ad278 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
@@ -29,6 +29,8 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.impl.Tables;
+import org.apache.accumulo.core.conf.PropertyType.PortRange;
+import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -117,7 +119,7 @@
    * @return property value
    */
   public String get(String property) {
-    Map<String,String> propMap = new HashMap<String,String>(1);
+    Map<String,String> propMap = new HashMap<>(1);
     getProperties(propMap, new MatchFilter(property));
     return propMap.get(property);
   }
@@ -150,7 +152,7 @@
   @Override
   public Iterator<Entry<String,String>> iterator() {
     Predicate<String> all = Predicates.alwaysTrue();
-    TreeMap<String,String> entries = new TreeMap<String,String>();
+    TreeMap<String,String> entries = new TreeMap<>();
     getProperties(entries, all);
     return entries.entrySet().iterator();
   }
@@ -176,7 +178,7 @@
   public Map<String,String> getAllPropertiesWithPrefix(Property property) {
     checkType(property, PropertyType.PREFIX);
 
-    Map<String,String> propMap = new HashMap<String,String>();
+    Map<String,String> propMap = new HashMap<>();
     getProperties(propMap, new PrefixFilter(property.getKey()));
     return propMap;
   }
@@ -326,7 +328,7 @@
    * @return interpreted fraction as a decimal value
    */
   public double getFraction(String str) {
-    if (str.charAt(str.length() - 1) == '%')
+    if (str.length() > 0 && str.charAt(str.length() - 1) == '%')
       return Double.parseDouble(str.substring(0, str.length() - 1)) / 100.0;
     return Double.parseDouble(str);
   }
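The added length check matters: on an empty string, the old code called `charAt(-1)` and threw `StringIndexOutOfBoundsException` before ever reaching `parseDouble`. A standalone sketch of the same parsing logic:

```java
// Mirrors AccumuloConfiguration.getFraction: a trailing '%' divides the
// value by 100, and the length check avoids charAt(-1) on an empty string
// (an empty string now fails cleanly inside parseDouble instead).
public class FractionParser {
  public static double getFraction(String str) {
    if (str.length() > 0 && str.charAt(str.length() - 1) == '%')
      return Double.parseDouble(str.substring(0, str.length() - 1)) / 100.0;
    return Double.parseDouble(str);
  }

  public static void main(String[] args) {
    System.out.println(getFraction("5%"));  // 0.05
    System.out.println(getFraction("0.2")); // 0.2
  }
}
```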
@@ -341,18 +343,38 @@
    *           if the property is of the wrong type
    * @see #getTimeInMillis(String)
    */
-  public int getPort(Property property) {
+  public int[] getPort(Property property) {
     checkType(property, PropertyType.PORT);
 
     String portString = get(property);
-    int port = Integer.parseInt(portString);
-    if (port != 0) {
-      if (port < 1024 || port > 65535) {
-        log.error("Invalid port number " + port + "; Using default " + property.getDefaultValue());
-        port = Integer.parseInt(property.getDefaultValue());
+    int[] ports = null;
+    try {
+      Pair<Integer,Integer> portRange = PortRange.parse(portString);
+      int low = portRange.getFirst();
+      int high = portRange.getSecond();
+      ports = new int[high - low + 1];
+      for (int i = 0, j = low; j <= high; i++, j++) {
+        ports[i] = j;
+      }
+    } catch (IllegalArgumentException e) {
+      ports = new int[1];
+      try {
+        int port = Integer.parseInt(portString);
+        if (port != 0) {
+          if (port < 1024 || port > 65535) {
+            log.error("Invalid port number " + port + "; Using default " + property.getDefaultValue());
+            ports[0] = Integer.parseInt(property.getDefaultValue());
+          } else {
+            ports[0] = port;
+          }
+        } else {
+          ports[0] = port;
+        }
+      } catch (NumberFormatException e1) {
+        throw new IllegalArgumentException("Invalid port syntax. Must be a single positive integer or a range (M-N) of positive integers");
       }
     }
-    return port;
+    return ports;
   }
 
   /**
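The reworked `getPort` above now returns every port in an `M-N` range rather than a single value. The expansion logic can be sketched independently; `parsePorts` below is a simplified stand-in that skips the validation done by `PortRange.parse` and the default-value fallback:

```java
// Sketch of the new getPort behavior: accept either a single port or an
// "M-N" range, expanding a range into every port it covers. This omits the
// bounds checks and default handling in the real implementation.
public class PortParser {
  public static int[] parsePorts(String s) {
    int dash = s.indexOf('-');
    if (dash > 0) {
      int low = Integer.parseInt(s.substring(0, dash));
      int high = Integer.parseInt(s.substring(dash + 1));
      int[] ports = new int[high - low + 1];
      for (int i = 0, p = low; p <= high; i++, p++)
        ports[i] = p;
      return ports;
    }
    return new int[] {Integer.parseInt(s)};
  }

  public static void main(String[] args) {
    System.out.println(parsePorts("9995-9998").length); // 4
  }
}
```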
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
index b15cb4c..615d430 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
@@ -57,7 +57,7 @@
       else if (prop.getType() == PropertyType.PREFIX)
         fatal(PREFIX + "incomplete property key (" + key + ")");
       else if (!prop.getType().isValidFormat(value))
-        fatal(PREFIX + "improperly formatted value for key (" + key + ", type=" + prop.getType() + ")");
+        fatal(PREFIX + "improperly formatted value for key (" + key + ", type=" + prop.getType() + ") : " + value);
 
       if (key.equals(Property.INSTANCE_ZK_TIMEOUT.getKey())) {
         instanceZkTimeoutValue = value;
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java
index 7357a9b..1c34b99 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigurationDocGen.java
@@ -305,8 +305,8 @@
 
   ConfigurationDocGen(PrintStream doc) {
     this.doc = doc;
-    this.prefixes = new ArrayList<Property>();
-    this.sortedProps = new TreeMap<String,Property>();
+    this.prefixes = new ArrayList<>();
+    this.sortedProps = new TreeMap<>();
 
     for (Property prop : Property.values()) {
       if (prop.isExperimental())
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShim.java b/core/src/main/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShim.java
index 977097e..eb1920d 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShim.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShim.java
@@ -64,7 +64,7 @@
   private static Boolean hadoopClassesAvailable = null;
 
   // access to cachedProviders should be synchronized when necessary (for example see getCredentialProviders)
-  private static final ConcurrentHashMap<String,List<Object>> cachedProviders = new ConcurrentHashMap<String,List<Object>>();
+  private static final ConcurrentHashMap<String,List<Object>> cachedProviders = new ConcurrentHashMap<>();
 
   /**
    * Determine if we can load the necessary CredentialProvider classes. Only loaded the first time, so subsequent invocations of this method should return fast.
@@ -291,7 +291,7 @@
       return Collections.emptyList();
     }
 
-    ArrayList<String> aliases = new ArrayList<String>();
+    ArrayList<String> aliases = new ArrayList<>();
     for (Object providerObj : providerObjList) {
       if (null != providerObj) {
         Object aliasesObj;
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
index 34d7fd2..e1ff7e1 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/DefaultConfiguration.java
@@ -29,7 +29,7 @@
 public class DefaultConfiguration extends AccumuloConfiguration {
   private final static Map<String,String> resolvedProps;
   static {
-    Map<String,String> m = new HashMap<String,String>();
+    Map<String,String> m = new HashMap<>();
     for (Property prop : Property.values()) {
       if (!prop.getType().equals(PropertyType.PREFIX)) {
         m.put(prop.getKey(), prop.getDefaultValue());
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ObservableConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/ObservableConfiguration.java
index 4c3f932..a1cd4ad 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ObservableConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ObservableConfiguration.java
@@ -69,7 +69,7 @@
   }
 
   private static Collection<ConfigurationObserver> snapshot(Collection<ConfigurationObserver> observers) {
-    Collection<ConfigurationObserver> c = new java.util.ArrayList<ConfigurationObserver>();
+    Collection<ConfigurationObserver> c = new java.util.ArrayList<>();
     synchronized (observers) {
       c.addAll(observers);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/Property.java b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
index c427610..c49457f 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/Property.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
@@ -30,6 +30,7 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.file.rfile.RFile;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
+import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.util.format.DefaultFormatter;
 import org.apache.accumulo.core.util.interpret.DefaultScanInterpreter;
 import org.apache.accumulo.start.classloader.AccumuloClassLoader;
@@ -229,6 +230,10 @@
       "Minimum number of threads dedicated to answering coordinator requests"),
   MASTER_REPLICATION_COORDINATOR_THREADCHECK("master.replication.coordinator.threadcheck.time", "5s", PropertyType.TIMEDURATION,
       "The time between adjustments of the coordinator thread pool"),
+  MASTER_STATUS_THREAD_POOL_SIZE("master.status.threadpool.size", "1", PropertyType.COUNT,
+      "The number of threads to use when fetching the tablet server status for balancing."),
+  MASTER_METADATA_SUSPENDABLE("master.metadata.suspendable", "false", PropertyType.BOOLEAN, "Allow tablets for the " + MetadataTable.NAME
+      + " table to be suspended via table.suspend.duration."),
 
   // properties that are specific to tablet server behavior
   TSERV_PREFIX("tserver.", null, PropertyType.PREFIX, "Properties in this category affect the behavior of the tablet servers"),
@@ -246,7 +251,7 @@
       + "size is ok because of group commit."),
   TSERV_TOTAL_MUTATION_QUEUE_MAX("tserver.total.mutation.queue.max", "50M", PropertyType.MEMORY,
       "The amount of memory used to store write-ahead-log mutations before flushing them."),
-  TSERV_TABLET_SPLIT_FINDMIDPOINT_MAXOPEN("tserver.tablet.split.midpoint.files.max", "30", PropertyType.COUNT,
+  TSERV_TABLET_SPLIT_FINDMIDPOINT_MAXOPEN("tserver.tablet.split.midpoint.files.max", "300", PropertyType.COUNT,
       "To find a tablets split points, all index files are opened. This setting determines how many index "
           + "files can be opened at once. When there are more index files than this setting multiple passes "
           + "must be made, which is slower. However opening too many files at once can cause problems."),
@@ -291,6 +296,8 @@
       "The maximum number of concurrent tablet migrations for a tablet server"),
   TSERV_MAJC_MAXCONCURRENT("tserver.compaction.major.concurrent.max", "3", PropertyType.COUNT,
       "The maximum number of concurrent major compactions for a tablet server"),
+  TSERV_MAJC_THROUGHPUT("tserver.compaction.major.throughput", "0B", PropertyType.MEMORY,
+      "Maximum number of bytes to read or write per second over all major compactions on a TabletServer, or 0B for unlimited."),
   TSERV_MINC_MAXCONCURRENT("tserver.compaction.minor.concurrent.max", "4", PropertyType.COUNT,
       "The maximum number of concurrent minor compactions for a tablet server"),
   TSERV_MAJC_TRACE_PERCENT("tserver.compaction.major.trace.percent", "0.1", PropertyType.FRACTION, "The percent of major compactions to trace"),
@@ -347,6 +354,8 @@
       "Memory to provide to batchwriter to replay mutations for replication"),
   TSERV_ASSIGNMENT_MAXCONCURRENT("tserver.assignment.concurrent.max", "2", PropertyType.COUNT,
       "The number of threads available to load tablets. Recoveries are still performed serially."),
+  TSERV_SLOW_FLUSH_MILLIS("tserver.slow.flush.time", "100ms", PropertyType.TIMEDURATION,
+      "If a flush to the write-ahead log takes longer than this period of time, debugging information will be written, which may result in a log rollover."),
 
   // properties that are specific to logger server behavior
   LOGGER_PREFIX("logger.", null, PropertyType.PREFIX, "Properties in this category affect the behavior of the write-ahead logger servers"),
@@ -360,17 +369,15 @@
   GC_CYCLE_START("gc.cycle.start", "30s", PropertyType.TIMEDURATION, "Time to wait before attempting to garbage collect any old files."),
   GC_CYCLE_DELAY("gc.cycle.delay", "5m", PropertyType.TIMEDURATION, "Time between garbage collection cycles. In each cycle, old files "
       + "no longer in use are removed from the filesystem."),
-  GC_PORT("gc.port.client", "50091", PropertyType.PORT, "The listening port for the garbage collector's monitor service"),
+  GC_PORT("gc.port.client", "9998", PropertyType.PORT, "The listening port for the garbage collector's monitor service"),
   GC_DELETE_THREADS("gc.threads.delete", "16", PropertyType.COUNT, "The number of threads used to delete files"),
-  GC_TRASH_IGNORE("gc.trash.ignore", "false", PropertyType.BOOLEAN, "Do not use the Trash, even if it is configured"),
-  GC_FILE_ARCHIVE("gc.file.archive", "false", PropertyType.BOOLEAN, "Archive any files/directories instead of moving to the HDFS trash or deleting"),
+  GC_TRASH_IGNORE("gc.trash.ignore", "false", PropertyType.BOOLEAN, "Do not use the Trash, even if it is configured."),
+  GC_FILE_ARCHIVE("gc.file.archive", "false", PropertyType.BOOLEAN, "Archive any files/directories instead of moving to the HDFS trash or deleting."),
   GC_TRACE_PERCENT("gc.trace.percent", "0.01", PropertyType.FRACTION, "Percent of gc cycles to trace"),
-  GC_WAL_DEAD_SERVER_WAIT("gc.wal.dead.server.wait", "1h", PropertyType.TIMEDURATION,
-      "Time to wait after a tserver is first seen as dead before removing associated WAL files"),
 
   // properties that are specific to the monitor server behavior
   MONITOR_PREFIX("monitor.", null, PropertyType.PREFIX, "Properties in this category affect the behavior of the monitor web server."),
-  MONITOR_PORT("monitor.port.client", "50095", PropertyType.PORT, "The listening port for the monitor's http service"),
+  MONITOR_PORT("monitor.port.client", "9995", PropertyType.PORT, "The listening port for the monitor's http service"),
   MONITOR_LOG4J_PORT("monitor.port.log4j", "4560", PropertyType.PORT, "The listening port for the monitor's log4j logging collection."),
   MONITOR_BANNER_TEXT("monitor.banner.text", "", PropertyType.STRING, "The banner text displayed on the monitor page."),
   MONITOR_BANNER_COLOR("monitor.banner.color", "#c4c4c4", PropertyType.STRING, "The color of the banner text displayed on the monitor page."),
@@ -380,11 +387,11 @@
   MONITOR_SSL_KEYSTORE("monitor.ssl.keyStore", "", PropertyType.PATH, "The keystore for enabling monitor SSL."),
   @Sensitive
   MONITOR_SSL_KEYSTOREPASS("monitor.ssl.keyStorePassword", "", PropertyType.STRING, "The keystore password for enabling monitor SSL."),
-  MONITOR_SSL_KEYSTORETYPE("monitor.ssl.keyStoreType", "", PropertyType.STRING, "Type of SSL keystore"),
+  MONITOR_SSL_KEYSTORETYPE("monitor.ssl.keyStoreType", "jks", PropertyType.STRING, "Type of SSL keystore"),
   MONITOR_SSL_TRUSTSTORE("monitor.ssl.trustStore", "", PropertyType.PATH, "The truststore for enabling monitor SSL."),
   @Sensitive
   MONITOR_SSL_TRUSTSTOREPASS("monitor.ssl.trustStorePassword", "", PropertyType.STRING, "The truststore password for enabling monitor SSL."),
-  MONITOR_SSL_TRUSTSTORETYPE("monitor.ssl.trustStoreType", "", PropertyType.STRING, "Type of SSL truststore"),
+  MONITOR_SSL_TRUSTSTORETYPE("monitor.ssl.trustStoreType", "jks", PropertyType.STRING, "Type of SSL truststore"),
   MONITOR_SSL_INCLUDE_CIPHERS("monitor.ssl.include.ciphers", "", PropertyType.STRING,
       "A comma-separated list of allows SSL Ciphers, see monitor.ssl.exclude.ciphers to disallow ciphers"),
   MONITOR_SSL_EXCLUDE_CIPHERS("monitor.ssl.exclude.ciphers", "", PropertyType.STRING,
@@ -529,6 +536,19 @@
   @Experimental
   TABLE_VOLUME_CHOOSER("table.volume.chooser", "org.apache.accumulo.server.fs.RandomVolumeChooser", PropertyType.CLASSNAME,
       "The class that will be used to select which volume will be used to create new files for this table."),
+  TABLE_SAMPLER(
+      "table.sampler",
+      "",
+      PropertyType.CLASSNAME,
+      "The name of a class that implements org.apache.accumulo.core.client.sample.Sampler.  Setting this option enables storing a sample of data which can be scanned."
+          + "  Always having a current sample can be useful for query optimization and data comprehension.  After enabling sampling for an existing table, a compaction "
+          + "is needed to compute the sample for existing data.  The compact command in the shell has an option to only compact files without sample data."),
+  TABLE_SAMPLER_OPTS("table.sampler.opt.", null, PropertyType.PREFIX,
+      "This property is used to set options for a sampler.  If a sampler had two options, such as hasher and modulus, then the two properties "
+          + "table.sampler.opt.hasher=${hash algorithm} and table.sampler.opt.modulus=${mod} would be set."),
+  TABLE_SUSPEND_DURATION("table.suspend.duration", "0s", PropertyType.TIMEDURATION,
+      "For tablets belonging to this table: When a tablet server dies, allow the tablet server this duration to revive before reassigning its tablets "
+          + "to other tablet servers."),
 
   // VFS ClassLoader properties
   VFS_CLASSLOADER_SYSTEM_CLASSPATH_PROPERTY(AccumuloVFSClassLoader.VFS_CLASSLOADER_SYSTEM_CLASSPATH_PROPERTY, "", PropertyType.STRING,
@@ -759,8 +779,8 @@
    */
   public synchronized static boolean isValidPropertyKey(String key) {
     if (validProperties == null) {
-      validProperties = new HashSet<String>();
-      validPrefixes = new HashSet<String>();
+      validProperties = new HashSet<>();
+      validPrefixes = new HashSet<>();
 
       for (Property p : Property.values()) {
         if (p.getType().equals(PropertyType.PREFIX)) {
@@ -785,7 +805,7 @@
    */
   public synchronized static boolean isValidTablePropertyKey(String key) {
     if (validTableProperties == null) {
-      validTableProperties = new HashSet<String>();
+      validTableProperties = new HashSet<>();
       for (Property p : Property.values()) {
         if (!p.getType().equals(PropertyType.PREFIX) && p.getKey().startsWith(Property.TABLE_PREFIX.getKey())) {
           validTableProperties.add(p.getKey());
@@ -796,7 +816,7 @@
     return validTableProperties.contains(key) || key.startsWith(Property.TABLE_CONSTRAINT_PREFIX.getKey())
         || key.startsWith(Property.TABLE_ITERATOR_PREFIX.getKey()) || key.startsWith(Property.TABLE_LOCALITY_GROUP_PREFIX.getKey())
         || key.startsWith(Property.TABLE_COMPACTION_STRATEGY_PREFIX.getKey()) || key.startsWith(Property.TABLE_REPLICATION_TARGET.getKey())
-        || key.startsWith(Property.TABLE_ARBITRARY_PROP_PREFIX.getKey());
+        || key.startsWith(Property.TABLE_ARBITRARY_PROP_PREFIX.getKey()) || key.startsWith(TABLE_SAMPLER_OPTS.getKey());
   }
 
   private static final EnumSet<Property> fixedProperties = EnumSet.of(Property.TSERV_CLIENTPORT, Property.TSERV_NATIVEMAP_ENABLED,
@@ -933,7 +953,7 @@
    */
   public static Map<String,String> getCompactionStrategyOptions(AccumuloConfiguration tableConf) {
     Map<String,String> longNames = tableConf.getAllPropertiesWithPrefix(Property.TABLE_COMPACTION_STRATEGY_PREFIX);
-    Map<String,String> result = new HashMap<String,String>();
+    Map<String,String> result = new HashMap<>();
     for (Entry<String,String> entry : longNames.entrySet()) {
       result.put(entry.getKey().substring(Property.TABLE_COMPACTION_STRATEGY_PREFIX.getKey().length()), entry.getValue());
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java b/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java
index 3814b6f..1120b87 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/PropertyType.java
@@ -16,73 +16,93 @@
  */
 package org.apache.accumulo.core.conf;
 
+import static java.util.Objects.requireNonNull;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
 import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.commons.lang.math.IntRange;
 import org.apache.hadoop.fs.Path;
 
+import com.google.common.base.Function;
+import com.google.common.base.Predicate;
+import com.google.common.base.Predicates;
+import com.google.common.collect.Collections2;
+
 /**
  * Types of {@link Property} values. Each type has a short name, a description, and a regex which valid values match. All of these fields are optional.
  */
 public enum PropertyType {
-  PREFIX(null, null, null),
+  PREFIX(null, Predicates.<String> alwaysFalse(), null),
 
-  TIMEDURATION("duration", "\\d{1," + Long.toString(Long.MAX_VALUE).length() + "}(?:ms|s|m|h|d)?",
+  TIMEDURATION("duration", boundedUnits(0, Long.MAX_VALUE, true, "", "ms", "s", "m", "h", "d"),
       "A non-negative integer optionally followed by a unit of time (whitespace disallowed), as in 30s.\n"
           + "If no unit of time is specified, seconds are assumed. Valid units are 'ms', 's', 'm', 'h' for milliseconds, seconds, minutes, and hours.\n"
           + "Examples of valid durations are '600', '30s', '45m', '30000ms', '3d', and '1h'.\n"
           + "Examples of invalid durations are '1w', '1h30m', '1s 200ms', 'ms', '', and 'a'.\n"
-          + "Unless otherwise stated, the max value for the duration represented in milliseconds is " + Long.MAX_VALUE), DATETIME("date/time",
-      "(?:19|20)\\d{12}[A-Z]{3}", "A date/time string in the format: YYYYMMDDhhmmssTTT where TTT is the 3 character time zone"), MEMORY("memory", "\\d{1,"
-      + Long.toString(Long.MAX_VALUE).length() + "}(?:B|K|M|G)?",
+          + "Unless otherwise stated, the max value for the duration represented in milliseconds is " + Long.MAX_VALUE),
+
+  MEMORY("memory", boundedUnits(0, Long.MAX_VALUE, false, "", "B", "K", "M", "G"),
       "A positive integer optionally followed by a unit of memory (whitespace disallowed), as in 2G.\n"
           + "If no unit is specified, bytes are assumed. Valid units are 'B', 'K', 'M', 'G', for bytes, kilobytes, megabytes, and gigabytes.\n"
           + "Examples of valid memories are '1024', '20B', '100K', '1500M', '2G'.\n"
           + "Examples of invalid memories are '1M500K', '1M 2K', '1MB', '1.5G', '1,024K', '', and 'a'.\n"
           + "Unless otherwise stated, the max value for the memory represented in bytes is " + Long.MAX_VALUE),
 
-  HOSTLIST("host list", "[\\w-]+(?:\\.[\\w-]+)*(?:\\:\\d{1,5})?(?:,[\\w-]+(?:\\.[\\w-]+)*(?:\\:\\d{1,5})?)*",
+  HOSTLIST("host list", new Matches("[\\w-]+(?:\\.[\\w-]+)*(?:\\:\\d{1,5})?(?:,[\\w-]+(?:\\.[\\w-]+)*(?:\\:\\d{1,5})?)*"),
       "A comma-separated list of hostnames or ip addresses, with optional port numbers.\n"
           + "Examples of valid host lists are 'localhost:2000,www.example.com,10.10.1.1:500' and 'localhost'.\n"
           + "Examples of invalid host lists are '', ':1000', and 'localhost:80000'"),
 
-  PORT("port", "\\d{1,5}", "An positive integer in the range 1024-65535, not already in use or specified elsewhere in the configuration"), COUNT("count",
-      "\\d{1,10}", "A non-negative integer in the range of 0-" + Integer.MAX_VALUE), FRACTION("fraction/percentage", "\\d*(?:\\.\\d+)?%?",
+  @SuppressWarnings("unchecked")
+  PORT("port", Predicates.or(new Bounds(1024, 65535), in(true, "0"), new PortRange("\\d{4,5}-\\d{4,5}")),
+      "An positive integer in the range 1024-65535 (not already in use or specified elsewhere in the configuration),\n"
+          + "zero to indicate any open ephemeral port, or a range of positive integers specified as M-N"),
+
+  COUNT("count", new Bounds(0, Integer.MAX_VALUE), "A non-negative integer in the range of 0-" + Integer.MAX_VALUE),
+
+  FRACTION("fraction/percentage", new FractionPredicate(),
       "A floating point number that represents either a fraction or, if suffixed with the '%' character, a percentage.\n"
           + "Examples of valid fractions/percentages are '10', '1000%', '0.05', '5%', '0.2%', '0.0005'.\n"
           + "Examples of invalid fractions/percentages are '', '10 percent', 'Hulk Hogan'"),
 
-  PATH("path", ".*",
+  PATH("path", Predicates.<String> alwaysTrue(),
       "A string that represents a filesystem path, which can be either relative or absolute to some directory. The filesystem depends on the property. The "
-          + "following environment variables will be substituted: " + Constants.PATH_PROPERTY_ENV_VARS), ABSOLUTEPATH("absolute path", null,
-      "An absolute filesystem path. The filesystem depends on the property. This is the same as path, but enforces that its root is explicitly specified.") {
+          + "following environment variables will be substituted: " + Constants.PATH_PROPERTY_ENV_VARS),
+
+  ABSOLUTEPATH("absolute path", new Predicate<String>() {
     @Override
-    public boolean isValidFormat(String value) {
-      if (value.trim().equals(""))
-        return true;
-      return new Path(value).isAbsolute();
+    public boolean apply(final String input) {
+      return input == null || input.trim().isEmpty() || new Path(input.trim()).isAbsolute();
     }
-  },
+  }, "An absolute filesystem path. The filesystem depends on the property. This is the same as path, but enforces that its root is explicitly specified."),
 
-  CLASSNAME("java class", "[\\w$.]*", "A fully qualified java class name representing a class on the classpath.\n"
+  CLASSNAME("java class", new Matches("[\\w$.]*"), "A fully qualified java class name representing a class on the classpath.\n"
       + "An example is 'java.lang.String', rather than 'String'"),
 
-  CLASSNAMELIST("java class list", "[\\w$.,]*", "A list of fully qualified java class names representing classes on the classpath.\n"
+  CLASSNAMELIST("java class list", new Matches("[\\w$.,]*"), "A list of fully qualified java class names representing classes on the classpath.\n"
       + "An example is 'java.lang.String', rather than 'String'"),
 
-  DURABILITY("durability", "(?:none|log|flush|sync)", "One of 'none', 'log', 'flush' or 'sync'."),
+  DURABILITY("durability", in(true, null, "none", "log", "flush", "sync"), "One of 'none', 'log', 'flush' or 'sync'."),
 
-  STRING("string", ".*",
-      "An arbitrary string of characters whose format is unspecified and interpreted based on the context of the property to which it applies."), BOOLEAN(
-      "boolean", "(?:True|true|False|false)", "Has a value of either 'true' or 'false'"), URI("uri", ".*", "A valid URI");
+  STRING("string", Predicates.<String> alwaysTrue(),
+      "An arbitrary string of characters whose format is unspecified and interpreted based on the context of the property to which it applies."),
+
+  BOOLEAN("boolean", in(false, null, "true", "false"), "Has a value of either 'true' or 'false' (case-insensitive)"),
+
+  URI("uri", Predicates.<String> alwaysTrue(), "A valid URI");
 
   private String shortname, format;
-  private Pattern regex;
+  private Predicate<String> predicate;
 
-  private PropertyType(String shortname, String regex, String formatDescription) {
+  private PropertyType(String shortname, Predicate<String> predicate, String formatDescription) {
     this.shortname = shortname;
+    this.predicate = predicate;
     this.format = formatDescription;
-    this.regex = regex == null ? null : Pattern.compile(regex, Pattern.DOTALL);
   }
 
   @Override
@@ -105,6 +125,182 @@
    * @return true if value is valid or null, or if this type has no regex
    */
   public boolean isValidFormat(String value) {
-    return (regex == null || value == null) ? true : regex.matcher(value).matches();
+    return predicate.apply(value);
   }
+
+  private static Predicate<String> in(final boolean caseSensitive, final String... strings) {
+    List<String> allowedSet = Arrays.asList(strings);
+    if (caseSensitive) {
+      return Predicates.in(allowedSet);
+    } else {
+      Function<String,String> toLower = new Function<String,String>() {
+        @Override
+        public String apply(final String input) {
+          return input == null ? null : input.toLowerCase();
+        }
+      };
+      return Predicates.compose(Predicates.in(Collections2.transform(allowedSet, toLower)), toLower);
+    }
+  }
+
+  private static Predicate<String> boundedUnits(final long lowerBound, final long upperBound, final boolean caseSensitive, final String... suffixes) {
+    return Predicates.or(Predicates.isNull(),
+        Predicates.and(new HasSuffix(caseSensitive, suffixes), Predicates.compose(new Bounds(lowerBound, upperBound), new StripUnits())));
+  }
+
+  private static class StripUnits implements Function<String,String> {
+    private static Pattern SUFFIX_REGEX = Pattern.compile("[^\\d]*$");
+
+    @Override
+    public String apply(final String input) {
+      requireNonNull(input);
+      return SUFFIX_REGEX.matcher(input.trim()).replaceAll("");
+    }
+  }
+
+  private static class HasSuffix implements Predicate<String> {
+
+    private final Predicate<String> p;
+
+    public HasSuffix(final boolean caseSensitive, final String... suffixes) {
+      p = in(caseSensitive, suffixes);
+    }
+
+    @Override
+    public boolean apply(final String input) {
+      requireNonNull(input);
+      Matcher m = StripUnits.SUFFIX_REGEX.matcher(input);
+      if (m.find()) {
+        if (m.groupCount() != 0) {
+          throw new AssertionError(m.groupCount());
+        }
+        return p.apply(m.group());
+      } else {
+        return true;
+      }
+    }
+  }
+
+  private static class FractionPredicate implements Predicate<String> {
+    @Override
+    public boolean apply(final String input) {
+      if (input == null) {
+        return true;
+      }
+      try {
+        double d;
+        if (input.length() > 0 && input.charAt(input.length() - 1) == '%') {
+          d = Double.parseDouble(input.substring(0, input.length() - 1));
+        } else {
+          d = Double.parseDouble(input);
+        }
+        return d >= 0;
+      } catch (NumberFormatException e) {
+        return false;
+      }
+    }
+  }
+
+  private static class Bounds implements Predicate<String> {
+
+    private final long lowerBound, upperBound;
+    private final boolean lowerInclusive, upperInclusive;
+
+    public Bounds(final long lowerBound, final long upperBound) {
+      this(lowerBound, true, upperBound, true);
+    }
+
+    public Bounds(final long lowerBound, final boolean lowerInclusive, final long upperBound, final boolean upperInclusive) {
+      this.lowerBound = lowerBound;
+      this.lowerInclusive = lowerInclusive;
+      this.upperBound = upperBound;
+      this.upperInclusive = upperInclusive;
+    }
+
+    @Override
+    public boolean apply(final String input) {
+      if (input == null) {
+        return true;
+      }
+      long number;
+      try {
+        number = Long.parseLong(input);
+      } catch (NumberFormatException e) {
+        return false;
+      }
+      if (number < lowerBound || (!lowerInclusive && number == lowerBound)) {
+        return false;
+      }
+      if (number > upperBound || (!upperInclusive && number == upperBound)) {
+        return false;
+      }
+      return true;
+    }
+
+  }
+
+  private static class Matches implements Predicate<String> {
+
+    protected final Pattern pattern;
+
+    public Matches(final String pattern) {
+      this(pattern, Pattern.DOTALL);
+    }
+
+    public Matches(final String pattern, int flags) {
+      this(Pattern.compile(requireNonNull(pattern), flags));
+    }
+
+    public Matches(final Pattern pattern) {
+      requireNonNull(pattern);
+      this.pattern = pattern;
+    }
+
+    @Override
+    public boolean apply(final String input) {
+      // TODO when the input is null, it just means that the property wasn't set
+      // we can add checks for not null for required properties with Predicates.and(Predicates.notNull(), ...),
+      // or we can stop assuming that null is always okay for a Matches predicate, and do that explicitly with Predicates.or(Predicates.isNull(), ...)
+      return input == null || pattern.matcher(input).matches();
+    }
+
+  }
+
+  public static class PortRange extends Matches {
+
+    private static final IntRange VALID_RANGE = new IntRange(1024, 65535);
+
+    public PortRange(final String pattern) {
+      super(pattern);
+    }
+
+    @Override
+    public boolean apply(final String input) {
+      if (super.apply(input)) {
+        try {
+          PortRange.parse(input);
+          return true;
+        } catch (IllegalArgumentException e) {
+          return false;
+        }
+      } else {
+        return false;
+      }
+    }
+
+    public static Pair<Integer,Integer> parse(String portRange) {
+      int idx = portRange.indexOf('-');
+      if (idx != -1) {
+        int low = Integer.parseInt(portRange.substring(0, idx));
+        int high = Integer.parseInt(portRange.substring(idx + 1));
+        if (!VALID_RANGE.containsInteger(low) || !VALID_RANGE.containsInteger(high) || !(low <= high)) {
+          throw new IllegalArgumentException("Invalid port range specified, only 1024 to 65535 supported.");
+        }
+        return new Pair<>(low, high);
+      }
+      throw new IllegalArgumentException("Invalid port range specification, must use M-N notation.");
+    }
+
+  }
+
 }
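As a standalone illustration of the new PORT behavior, here is a minimal sketch of the M-N check performed by PortRange.parse above. PortRangeSketch, its parsePortRange method, and the int[] return type are hypothetical stand-ins for the real class and its Pair<Integer,Integer>:

```java
// Sketch of the M-N port-range validation shown above.
// PortRangeSketch/parsePortRange are hypothetical stand-ins.
public class PortRangeSketch {
  static int[] parsePortRange(String portRange) {
    int idx = portRange.indexOf('-');
    if (idx == -1) {
      throw new IllegalArgumentException("Invalid port range specification, must use M-N notation.");
    }
    int low = Integer.parseInt(portRange.substring(0, idx));
    int high = Integer.parseInt(portRange.substring(idx + 1));
    // Both ends must fall in 1024-65535 and the range must be ascending.
    if (low < 1024 || high > 65535 || low > high) {
      throw new IllegalArgumentException("Invalid port range specified, only 1024 to 65535 supported.");
    }
    return new int[] {low, high};
  }

  public static void main(String[] args) {
    int[] r = parsePortRange("9997-9999");
    System.out.println(r[0] + "-" + r[1]); // prints 9997-9999
  }
}
```

Note that in the real code the "\d{4,5}-\d{4,5}" Matches predicate runs first, so malformed input such as a leading '-' never reaches the parse step; this sketch omits that filter. Single ports are handled separately by the Bounds(1024, 65535) predicate.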
diff --git a/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java b/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java
index 78ec0ac..a936ef5 100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/Constraint.java
@@ -24,24 +24,19 @@
 import org.apache.accumulo.core.security.Authorizations;
 
 /**
- * <p>
  * Constraint objects are used to determine if mutations will be applied to a table.
- * </p>
  *
  * <p>
  * This interface expects implementers to return violation codes. The reason codes are returned instead of arbitrary strings to encourage conciseness.
  * Conciseness is needed because violations are aggregated. If a user sends a batch of 10,000 mutations to Accumulo, only aggregated counts about which
  * violations occurred are returned. If the constraint implementer were allowed to return arbitrary violation strings like the following:
- * </p>
  *
  * <p>
  * Value "abc" is not a number<br>
  * Value "vbg" is not a number
- * </p>
  *
  * <p>
  * This would not aggregate very well, because the same violation is represented with two different strings.
- * </p>
  */
 public interface Constraint {
 
diff --git a/core/src/main/java/org/apache/accumulo/core/constraints/DefaultKeySizeConstraint.java b/core/src/main/java/org/apache/accumulo/core/constraints/DefaultKeySizeConstraint.java
index 7cc42c1..a2cc337 100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/DefaultKeySizeConstraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/DefaultKeySizeConstraint.java
@@ -41,7 +41,7 @@
     return null;
   }
 
-  final static List<Short> NO_VIOLATIONS = new ArrayList<Short>();
+  final static List<Short> NO_VIOLATIONS = new ArrayList<>();
 
   @Override
   public List<Short> check(Environment env, Mutation mutation) {
@@ -50,7 +50,7 @@
     if (mutation.numBytes() < maxSize)
       return NO_VIOLATIONS;
 
-    List<Short> violations = new ArrayList<Short>();
+    List<Short> violations = new ArrayList<>();
 
     for (ColumnUpdate cu : mutation.getUpdates()) {
       int size = mutation.getRow().length;
diff --git a/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java b/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java
index 1db13cd..65c372f 100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/Violations.java
@@ -61,7 +61,7 @@
    * Creates a new empty object.
    */
   public Violations() {
-    cvsmap = new HashMap<CVSKey,ConstraintViolationSummary>();
+    cvsmap = new HashMap<>();
   }
 
   /**
@@ -128,7 +128,7 @@
    * @return list of violation summaries
    */
   public List<ConstraintViolationSummary> asList() {
-    return new ArrayList<ConstraintViolationSummary>(cvsmap.values());
+    return new ArrayList<>(cvsmap.values());
   }
 
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java b/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
index d03f66e..6a962c2 100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
@@ -56,7 +56,7 @@
 
     HashSet<String> ok = null;
     if (updates.size() > 1)
-      ok = new HashSet<String>();
+      ok = new HashSet<>();
 
     VisibilityEvaluator ve = null;
 
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Condition.java b/core/src/main/java/org/apache/accumulo/core/data/Condition.java
index 2dc2a0f..1ca5d06 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Condition.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Condition.java
@@ -265,8 +265,8 @@
     checkArgument(iterators != null, "iterators is null");
 
     if (iterators.length > 1) {
-      HashSet<String> names = new HashSet<String>();
-      HashSet<Integer> prios = new HashSet<Integer>();
+      HashSet<String> names = new HashSet<>();
+      HashSet<Integer> prios = new HashSet<>();
 
       for (IteratorSetting iteratorSetting : iterators) {
         if (!names.add(iteratorSetting.getName()))
diff --git a/core/src/main/java/org/apache/accumulo/core/data/ConditionalMutation.java b/core/src/main/java/org/apache/accumulo/core/data/ConditionalMutation.java
index ccec325..a10f9b7 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/ConditionalMutation.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/ConditionalMutation.java
@@ -34,7 +34,7 @@
  */
 public class ConditionalMutation extends Mutation {
 
-  private List<Condition> conditions = new ArrayList<Condition>();
+  private List<Condition> conditions = new ArrayList<>();
 
   public ConditionalMutation(byte[] row, Condition... conditions) {
     super(row);
@@ -64,7 +64,7 @@
 
   public ConditionalMutation(ConditionalMutation cm) {
     super(cm);
-    this.conditions = new ArrayList<Condition>(cm.conditions);
+    this.conditions = new ArrayList<>(cm.conditions);
   }
 
   private void init(Condition... conditions) {
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Key.java b/core/src/main/java/org/apache/accumulo/core/data/Key.java
index 758436d..66ad5ca 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Key.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Key.java
@@ -107,6 +107,19 @@
   }
 
   /**
+   * Creates a key with the specified row, empty column family, empty column qualifier, empty column visibility, timestamp {@link Long#MAX_VALUE}, and delete
+   * marker false. This constructor creates a copy of row. If you don't want to create a copy of row, you should call
+   * {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
+   *
+   * @param row
+   *          row ID
+   * @since 1.8.0
+   */
+  public Key(byte[] row) {
+    init(row, 0, row.length, EMPTY_BYTES, 0, 0, EMPTY_BYTES, 0, 0, EMPTY_BYTES, 0, 0, Long.MAX_VALUE, false, true);
+  }
+
+  /**
    * Creates a key with the specified row, empty column family, empty column qualifier, empty column visibility, the specified timestamp, and delete marker
    * false.
    *
@@ -121,7 +134,24 @@
   }
 
   /**
-   * Creates a key. The delete marker defaults to false.
+   * Creates a key with the specified row, empty column family, empty column qualifier, empty column visibility, the specified timestamp, and delete marker
+   * false. This constructor creates a copy of row. If you don't want to create a copy, you should call
+   * {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
+   *
+   * @param row
+   *          row ID
+   * @param ts
+   *          timestamp
+   * @since 1.8.0
+   */
+  public Key(byte[] row, long ts) {
+    this(row);
+    timestamp = ts;
+  }
+
+  /**
+   * Creates a key. The delete marker defaults to false. This constructor creates a copy of each specified array. If you don't want to create a copy of the
+   * arrays, you should call {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
    *
    * @param row
    *          bytes containing row ID
@@ -155,7 +185,8 @@
   }
 
   /**
-   * Creates a key. The delete marker defaults to false.
+   * Creates a key. The delete marker defaults to false. This constructor creates a copy of each specified array. If you don't want to create a copy of the
+   * arrays, you should call {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
    *
    * @param row
    *          row ID
@@ -173,7 +204,8 @@
   }
 
   /**
-   * Creates a key.
+   * Creates a key. This constructor creates a copy of each specified arrays. If you don't want to create a copy, you should call
+   * {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
    *
    * @param row
    *          row ID
@@ -223,6 +255,17 @@
   }
 
   /**
+   * Creates a key with the specified row, the specified column family, empty column qualifier, empty column visibility, timestamp {@link Long#MAX_VALUE}, and
+   * delete marker false. This constructor creates a copy of each specified array. If you don't want to create a copy of the arrays, you should call
+   * {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
+   *
+   * @since 1.8.0
+   */
+  public Key(byte[] row, byte[] cf) {
+    init(row, 0, row.length, cf, 0, cf.length, EMPTY_BYTES, 0, 0, EMPTY_BYTES, 0, 0, Long.MAX_VALUE, false, true);
+  }
+
+  /**
    * Creates a key with the specified row, the specified column family, the specified column qualifier, empty column visibility, timestamp
    * {@link Long#MAX_VALUE}, and delete marker false.
    */
@@ -231,6 +274,17 @@
   }
 
   /**
+   * Creates a key with the specified row, the specified column family, the specified column qualifier, empty column visibility, timestamp
+   * {@link Long#MAX_VALUE}, and delete marker false. This constructor creates a copy of each specified array. If you don't want to create a copy of the arrays,
+   * you should call {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
+   *
+   * @since 1.8.0
+   */
+  public Key(byte[] row, byte[] cf, byte[] cq) {
+    init(row, 0, row.length, cf, 0, cf.length, cq, 0, cq.length, EMPTY_BYTES, 0, 0, Long.MAX_VALUE, false, true);
+  }
+
+  /**
    * Creates a key with the specified row, the specified column family, the specified column qualifier, the specified column visibility, timestamp
    * {@link Long#MAX_VALUE}, and delete marker false.
    */
@@ -240,6 +294,17 @@
   }
 
   /**
+   * Creates a key with the specified row, the specified column family, the specified column qualifier, the specified column visibility, timestamp
+   * {@link Long#MAX_VALUE}, and delete marker false. This constructor creates a copy of each specified array. If you don't want to create a copy of the arrays,
+   * you should call {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
+   *
+   * @since 1.8.0
+   */
+  public Key(byte[] row, byte[] cf, byte[] cq, byte[] cv) {
+    init(row, 0, row.length, cf, 0, cf.length, cq, 0, cq.length, cv, 0, cv.length, Long.MAX_VALUE, false, true);
+  }
+
+  /**
    * Creates a key with the specified row, the specified column family, the specified column qualifier, empty column visibility, the specified timestamp, and
    * delete marker false.
    */
@@ -248,6 +313,17 @@
   }
 
   /**
+   * Creates a key with the specified row, the specified column family, the specified column qualifier, empty column visibility, the specified timestamp, and
+   * delete marker false. This constructor creates a copy of each specified array. If you don't want to create a copy of the arrays, you should call
+   * {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
+   *
+   * @since 1.8.0
+   */
+  public Key(byte[] row, byte[] cf, byte[] cq, long ts) {
+    init(row, 0, row.length, cf, 0, cf.length, cq, 0, cq.length, EMPTY_BYTES, 0, 0, ts, false, true);
+  }
+
+  /**
    * Creates a key with the specified row, the specified column family, the specified column qualifier, the specified column visibility, the specified
    * timestamp, and delete marker false.
    */
@@ -266,6 +342,18 @@
   }
 
   /**
+   * Creates a key with the specified row, the specified column family, the specified column qualifier, the specified column visibility, the specified
+   * timestamp, and delete marker false. This constructor creates a copy of each specified array. If you don't want to create a copy of the arrays, you should
+   * call {@link Key#Key(byte[] row, byte[] cf, byte[] cq, byte[] cv, long ts, boolean deleted, boolean copy)} instead.
+   *
+   * @since 1.8.0
+   */
+  public Key(byte[] row, byte[] cf, byte[] cq, ColumnVisibility cv, long ts) {
+    byte[] expr = cv.getExpression();
+    init(row, 0, row.length, cf, 0, cf.length, cq, 0, cq.length, expr, 0, expr.length, ts, false, true);
+  }
+
+  /**
    * Converts CharSequence to Text and creates a Key using {@link #Key(Text)}.
    */
   public Key(CharSequence row) {
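The javadoc added above repeatedly notes that the new byte[] constructors copy their arguments (the final boolean passed to init is true). A minimal sketch of why that defensive copy matters; CopyingKey is a hypothetical stand-in for Key(byte[] row), not Accumulo code:

```java
import java.util.Arrays;

// Without a copy, later changes to the caller's array would reflect into
// the key. CopyingKey is a hypothetical stand-in for Key(byte[] row).
public class DefensiveCopySketch {
  static final class CopyingKey {
    private final byte[] row;

    CopyingKey(byte[] row) {
      this.row = Arrays.copyOf(row, row.length); // copy, as the new constructors do
    }

    byte[] getRow() {
      return row;
    }
  }

  public static void main(String[] args) {
    byte[] row = {'r', '1'};
    CopyingKey key = new CopyingKey(row);
    row[1] = '2'; // mutate the caller's array after construction
    System.out.println(new String(key.getRow())); // prints r1, not r2
  }
}
```

Callers who want to avoid the extra allocation can use the documented Key(byte[], byte[], byte[], byte[], long, boolean, boolean) constructor with copy set to false, at the cost of sharing the arrays.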
diff --git a/core/src/main/java/org/apache/accumulo/core/data/KeyExtent.java b/core/src/main/java/org/apache/accumulo/core/data/KeyExtent.java
index 7bbb0c2..4e3d058 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/KeyExtent.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/KeyExtent.java
@@ -51,11 +51,11 @@
   }
 
   public KeyExtent(Text table, Text endRow, Text prevEndRow) {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(table, endRow, prevEndRow);
+    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(table.toString(), endRow, prevEndRow);
   }
 
   public KeyExtent(KeyExtent extent) {
-    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(extent.getTableId(), extent.getEndRow(), extent.getPrevEndRow());
+    this.wrapped = new org.apache.accumulo.core.data.impl.KeyExtent(extent.getTableId().toString(), extent.getEndRow(), extent.getPrevEndRow());
   }
 
   public KeyExtent(TKeyExtent tke) {
@@ -78,11 +78,11 @@
   }
 
   public void setTableId(Text tId) {
-    wrapped.setTableId(tId);
+    wrapped.setTableId(tId.toString());
   }
 
   public Text getTableId() {
-    return wrapped.getTableId();
+    return new Text(wrapped.getTableId());
   }
 
   public void setEndRow(Text endRow) {
@@ -189,7 +189,7 @@
   }
 
   private static KeyExtent wrap(org.apache.accumulo.core.data.impl.KeyExtent ke) {
-    return new KeyExtent(ke.getTableId(), ke.getEndRow(), ke.getPrevEndRow());
+    return new KeyExtent(new Text(ke.getTableId()), ke.getEndRow(), ke.getPrevEndRow());
   }
 
   private static SortedSet<KeyExtent> wrap(Collection<org.apache.accumulo.core.data.impl.KeyExtent> unwrapped) {
@@ -202,7 +202,7 @@
   }
 
   public static Text getMetadataEntry(Text tableId, Text endRow) {
-    return MetadataSchema.TabletsSection.getRow(tableId, endRow);
+    return MetadataSchema.TabletsSection.getRow(tableId.toString(), endRow);
   }
 
   /**
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Mutation.java b/core/src/main/java/org/apache/accumulo/core/data/Mutation.java
index 1d15ef4..ebc72f5 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Mutation.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Mutation.java
@@ -38,7 +38,6 @@
 import org.apache.hadoop.io.WritableUtils;
 
 /**
- * <p>
  * Mutation represents an action that manipulates a row in a table. A mutation holds a list of column/value pairs that represent an atomic set of modifications
  * to make to a row.
  *
@@ -53,13 +52,11 @@
  * <p>
  * All of the put methods append data to the mutation; they do not overwrite anything that was previously put. The mutation holds a list of all columns/values
  * that were put into it.
- * </p>
  *
  * <p>
  * The putDelete() methods do not remove something that was previously added to the mutation; rather, they indicate that Accumulo should insert a delete marker
  * for that row column. A delete marker effectively hides entries for that row column with a timestamp earlier than the marker's. (The hidden data is eventually
  * removed during Accumulo garbage collection.)
- * </p>
  */
 public class Mutation implements Writable {
 
@@ -74,7 +71,7 @@
    */
   public static enum SERIALIZED_FORMAT {
     VERSION1, VERSION2
-  };
+  }
 
   private boolean useOldDeserialize = false;
   private byte[] row;
@@ -322,7 +319,7 @@
       put(val, valLength);
     } else {
       if (values == null) {
-        values = new ArrayList<byte[]>();
+        values = new ArrayList<>();
       }
       byte copy[] = new byte[valLength];
       System.arraycopy(val, 0, copy, 0, valLength);
@@ -972,7 +969,7 @@
     if (!valuesPresent) {
       values = null;
     } else {
-      values = new ArrayList<byte[]>();
+      values = new ArrayList<>();
       int numValues = WritableUtils.readVInt(in);
       for (int i = 0; i < numValues; i++) {
         len = WritableUtils.readVInt(in);
@@ -1012,7 +1009,7 @@
     if (!valuesPresent) {
       localValues = null;
     } else {
-      localValues = new ArrayList<byte[]>();
+      localValues = new ArrayList<>();
       int numValues = in.readInt();
       for (int i = 0; i < numValues; i++) {
         len = in.readInt();
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Range.java b/core/src/main/java/org/apache/accumulo/core/data/Range.java
index c114e2b..306ee2d 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Range.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Range.java
@@ -427,10 +427,10 @@
     if (ranges.size() == 1)
       return Collections.singletonList(ranges.iterator().next());
 
-    List<Range> ral = new ArrayList<Range>(ranges);
+    List<Range> ral = new ArrayList<>(ranges);
     Collections.sort(ral);
 
-    ArrayList<Range> ret = new ArrayList<Range>(ranges.size());
+    ArrayList<Range> ret = new ArrayList<>(ranges.size());
 
     Range currentRange = ral.get(0);
     boolean currentStartKeyInclusive = ral.get(0).startKeyInclusive;
diff --git a/core/src/main/java/org/apache/accumulo/core/data/TabletId.java b/core/src/main/java/org/apache/accumulo/core/data/TabletId.java
index 113183d3..8680760 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/TabletId.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/TabletId.java
@@ -30,4 +30,11 @@
   public Text getEndRow();
 
   public Text getPrevEndRow();
+
+  /**
+   * @return a range based on the row range of the tablet. The range will cover {@code (<prev end row>, <end row>]}.
+   * @since 1.8.0
+   */
+  public Range toRange();
+
 }
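The (prevEndRow, endRow] contract documented for toRange() can be sketched with plain strings standing in for Text; containsRow is a hypothetical helper, not part of the API:

```java
// Sketch of the (prevEndRow, endRow] row-range contract documented for
// TabletId.toRange(): a row belongs to a tablet when it sorts strictly
// after the previous end row and at or before the end row. A null edge
// means the tablet is unbounded on that side.
public class TabletRangeSketch {
  static boolean containsRow(String prevEndRow, String endRow, String row) {
    boolean afterPrev = prevEndRow == null || row.compareTo(prevEndRow) > 0; // exclusive lower bound
    boolean beforeEnd = endRow == null || row.compareTo(endRow) <= 0;        // inclusive upper bound
    return afterPrev && beforeEnd;
  }

  public static void main(String[] args) {
    System.out.println(containsRow("g", "m", "h")); // true
    System.out.println(containsRow("g", "m", "g")); // false: prev end row is excluded
    System.out.println(containsRow("g", "m", "m")); // true: end row is included
  }
}
```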
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Value.java b/core/src/main/java/org/apache/accumulo/core/data/Value.java
index 6883885..95c3c70 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Value.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Value.java
@@ -25,9 +25,11 @@
 import java.io.DataOutput;
 import java.io.IOException;
 import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
 import java.util.List;
 
 import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.WritableComparable;
 import org.apache.hadoop.io.WritableComparator;
 
@@ -48,6 +50,30 @@
   }
 
   /**
+   * Creates a value using the UTF-8 encoding of the CharSequence
+   *
+   * @param cs
+   *          may not be null
+   *
+   * @since 1.8.0
+   */
+  public Value(CharSequence cs) {
+    this(cs.toString().getBytes(StandardCharsets.UTF_8));
+  }
+
+  /**
+   * Creates a Value using the bytes of the Text. Makes a copy, does not use the byte array from the Text.
+   *
+   * @param text
+   *          may not be null
+   *
+   * @since 1.8.0
+   */
+  public Value(Text text) {
+    this(text.getBytes(), 0, text.getLength());
+  }
+
+  /**
    * Creates a Value using a byte array as the initial value. The given byte array is used directly as the backing array, so later changes made to the array
    * reflect into the new Value.
    *
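The Value(Text) constructor added above copies only the first getLength() bytes because Text.getBytes() exposes a backing array that may be longer than the logical length when the Text has been reused. A self-contained sketch of that pitfall; GrowOnlyBuffer is a hypothetical stand-in for Text:

```java
import java.util.Arrays;

// Why Value(Text) passes (text.getBytes(), 0, text.getLength()): a reused
// buffer's backing array can retain stale bytes past the logical length.
// GrowOnlyBuffer is a hypothetical stand-in for org.apache.hadoop.io.Text.
public class TextCopySketch {
  static final class GrowOnlyBuffer {
    private byte[] bytes = new byte[0];
    private int length;

    void set(byte[] b) {
      if (b.length > bytes.length) {
        bytes = new byte[b.length]; // grow, never shrink
      }
      System.arraycopy(b, 0, bytes, 0, b.length);
      length = b.length;
    }

    byte[] getBytes() { // backing array; may have a stale tail
      return bytes;
    }

    int getLength() { // logical length
      return length;
    }
  }

  public static void main(String[] args) {
    GrowOnlyBuffer t = new GrowOnlyBuffer();
    t.set("longer".getBytes());
    t.set("ok".getBytes()); // backing array now holds "oknger"
    byte[] copy = Arrays.copyOfRange(t.getBytes(), 0, t.getLength());
    System.out.println(new String(copy)); // prints ok
  }
}
```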
diff --git a/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java b/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java
index d2ae00b..dcb8eb7 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/impl/KeyExtent.java
@@ -16,12 +16,15 @@
  */
 package org.apache.accumulo.core.data.impl;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
+
 import java.io.ByteArrayOutputStream;
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.lang.ref.WeakReference;
+import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
@@ -55,25 +58,24 @@
 
 public class KeyExtent implements WritableComparable<KeyExtent> {
 
-  private static final WeakHashMap<Text,WeakReference<Text>> tableIds = new WeakHashMap<Text,WeakReference<Text>>();
+  private static final WeakHashMap<String,WeakReference<String>> tableIds = new WeakHashMap<>();
 
-  private static Text dedupeTableId(Text tableId) {
+  private static String dedupeTableId(String tableId) {
     synchronized (tableIds) {
-      WeakReference<Text> etir = tableIds.get(tableId);
+      WeakReference<String> etir = tableIds.get(tableId);
       if (etir != null) {
-        Text eti = etir.get();
+        String eti = etir.get();
         if (eti != null) {
           return eti;
         }
       }
 
-      tableId = new Text(tableId);
-      tableIds.put(tableId, new WeakReference<Text>(tableId));
+      tableIds.put(tableId, new WeakReference<>(tableId));
       return tableId;
     }
   }
 
-  private Text textTableId;
+  private String tableId;
   private Text textEndRow;
   private Text textPrevEndRow;
 
@@ -95,12 +97,12 @@
    *
    */
   public KeyExtent() {
-    this.setTableId(new Text());
+    this.setTableId("");
     this.setEndRow(new Text(), false, false);
     this.setPrevEndRow(new Text(), false, false);
   }
 
-  public KeyExtent(Text table, Text endRow, Text prevEndRow) {
+  public KeyExtent(String table, Text endRow, Text prevEndRow) {
     this.setTableId(table);
     this.setEndRow(endRow, false, true);
     this.setPrevEndRow(prevEndRow, false, true);
@@ -110,7 +112,7 @@
 
   public KeyExtent(KeyExtent extent) {
     // extent has already deduped table id, so there is no need to do it again
-    this.textTableId = extent.textTableId;
+    this.tableId = extent.tableId;
     this.setEndRow(extent.getEndRow(), false, true);
     this.setPrevEndRow(extent.getPrevEndRow(), false, true);
 
@@ -118,7 +120,7 @@
   }
 
   public KeyExtent(TKeyExtent tke) {
-    this.setTableId(new Text(ByteBufferUtil.toBytes(tke.table)));
+    this.setTableId(dedupeTableId(new String(ByteBufferUtil.toBytes(tke.table), UTF_8)));
     this.setEndRow(tke.endRow == null ? null : new Text(ByteBufferUtil.toBytes(tke.endRow)), false, false);
     this.setPrevEndRow(tke.prevEndRow == null ? null : new Text(ByteBufferUtil.toBytes(tke.prevEndRow)), false, false);
 
@@ -133,7 +135,7 @@
     return getMetadataEntry(getTableId(), getEndRow());
   }
 
-  public static Text getMetadataEntry(Text tableId, Text endRow) {
+  public static Text getMetadataEntry(String tableId, Text endRow) {
     return MetadataSchema.TabletsSection.getRow(tableId, endRow);
   }
 
@@ -164,12 +166,12 @@
    * Sets the extents table id
    *
    */
-  public void setTableId(Text tId) {
+  public void setTableId(String tId) {
 
     if (tId == null)
       throw new IllegalArgumentException("null table name not allowed");
 
-    this.textTableId = dedupeTableId(tId);
+    this.tableId = dedupeTableId(tId);
 
     hashCode = 0;
   }
@@ -178,8 +180,8 @@
    * Returns the extent's table id
    *
    */
-  public Text getTableId() {
-    return textTableId;
+  public String getTableId() {
+    return tableId;
   }
 
   private void setEndRow(Text endRow, boolean check, boolean copy) {
@@ -246,7 +248,7 @@
   public void readFields(DataInput in) throws IOException {
     Text tid = new Text();
     tid.readFields(in);
-    setTableId(tid);
+    setTableId(tid.toString());
     boolean hasRow = in.readBoolean();
     if (hasRow) {
       Text er = new Text();
@@ -270,7 +272,7 @@
 
   @Override
   public void write(DataOutput out) throws IOException {
-    getTableId().write(out);
+    new Text(getTableId()).write(out);
     if (getEndRow() != null) {
       out.writeBoolean(true);
       getEndRow().write(out);
@@ -307,7 +309,7 @@
       startRow = new Text();
     if (endRow == null)
       endRow = new Text();
-    Collection<KeyExtent> keys = new ArrayList<KeyExtent>();
+    Collection<KeyExtent> keys = new ArrayList<>();
     for (KeyExtent ckes : kes) {
       if (ckes.getPrevEndRow() == null) {
         if (ckes.getEndRow() == null) {
@@ -453,14 +455,14 @@
     if (!(o instanceof KeyExtent))
       return false;
     KeyExtent oke = (KeyExtent) o;
-    return textTableId.equals(oke.textTableId) && equals(textEndRow, oke.textEndRow) && equals(textPrevEndRow, oke.textPrevEndRow);
+    return tableId.equals(oke.tableId) && equals(textEndRow, oke.textEndRow) && equals(textPrevEndRow, oke.textPrevEndRow);
   }
 
   @Override
   public String toString() {
     String endRowString;
     String prevEndRowString;
-    String tableIdString = getTableId().toString().replaceAll(";", "\\\\;").replaceAll("\\\\", "\\\\\\\\");
+    String tableIdString = getTableId().replaceAll(";", "\\\\;").replaceAll("\\\\", "\\\\\\\\");
 
     if (getEndRow() == null)
       endRowString = "<";
@@ -526,14 +528,12 @@
         throw new IllegalArgumentException("< must come at end of Metadata row  " + flattenedExtent);
       }
 
-      Text tableId = new Text();
-      tableId.set(flattenedExtent.getBytes(), 0, flattenedExtent.getLength() - 1);
+      String tableId = new String(flattenedExtent.getBytes(), 0, flattenedExtent.getLength() - 1, UTF_8);
       this.setTableId(tableId);
       this.setEndRow(null, false, false);
     } else {
 
-      Text tableId = new Text();
-      tableId.set(flattenedExtent.getBytes(), 0, semiPos);
+      String tableId = new String(flattenedExtent.getBytes(), 0, semiPos, UTF_8);
 
       Text endRow = new Text();
       endRow.set(flattenedExtent.getBytes(), semiPos + 1, flattenedExtent.getLength() - (semiPos + 1));
@@ -547,7 +547,7 @@
   public static byte[] tableOfMetadataRow(Text row) {
     KeyExtent ke = new KeyExtent();
     ke.decodeMetadataRow(row);
-    return TextUtil.getBytes(ke.getTableId());
+    return ke.getTableId().getBytes(UTF_8);
   }
 
   public boolean contains(final ByteSequence bsrow) {
@@ -611,7 +611,7 @@
 
       if (ke.getPrevEndRow() == tabletKe.getPrevEndRow() || ke.getPrevEndRow() != null && tabletKe.getPrevEndRow() != null
           && tabletKe.getPrevEndRow().compareTo(ke.getPrevEndRow()) == 0) {
-        children = new TreeSet<KeyExtent>();
+        children = new TreeSet<>();
       }
 
       if (children != null) {
@@ -624,7 +624,7 @@
       }
     }
 
-    return new TreeSet<KeyExtent>();
+    return new TreeSet<>();
   }
 
   public static KeyExtent findContainingExtent(KeyExtent extent, SortedSet<KeyExtent> extents) {
@@ -690,7 +690,7 @@
       start = extents.tailSet(lookupKey);
     }
 
-    TreeSet<KeyExtent> result = new TreeSet<KeyExtent>();
+    TreeSet<KeyExtent> result = new TreeSet<>();
     for (KeyExtent ke : start) {
       if (startsAfter(nke, ke)) {
         break;
@@ -701,7 +701,7 @@
   }
 
   public boolean overlaps(KeyExtent other) {
-    SortedSet<KeyExtent> set = new TreeSet<KeyExtent>();
+    SortedSet<KeyExtent> set = new TreeSet<>();
     set.add(other);
     return !findOverlapping(this, set).isEmpty();
   }
@@ -722,7 +722,7 @@
       start = extents.tailMap(lookupKey);
     }
 
-    TreeSet<KeyExtent> result = new TreeSet<KeyExtent>();
+    TreeSet<KeyExtent> result = new TreeSet<>();
     for (Entry<KeyExtent,?> entry : start.entrySet()) {
       KeyExtent ke = entry.getKey();
       if (startsAfter(nke, ke)) {
@@ -738,8 +738,8 @@
   }
 
   public TKeyExtent toThrift() {
-    return new TKeyExtent(TextUtil.getByteBuffer(textTableId), textEndRow == null ? null : TextUtil.getByteBuffer(textEndRow), textPrevEndRow == null ? null
-        : TextUtil.getByteBuffer(textPrevEndRow));
+    return new TKeyExtent(ByteBuffer.wrap(tableId.getBytes(UTF_8)), textEndRow == null ? null : TextUtil.getByteBuffer(textEndRow),
+        textPrevEndRow == null ? null : TextUtil.getByteBuffer(textPrevEndRow));
   }
 
   public boolean isPreviousExtent(KeyExtent prevExtent) {
@@ -759,10 +759,10 @@
   }
 
   public boolean isMeta() {
-    return getTableId().toString().equals(MetadataTable.ID) || isRootTablet();
+    return getTableId().equals(MetadataTable.ID) || isRootTablet();
   }
 
   public boolean isRootTablet() {
-    return getTableId().toString().equals(RootTable.ID);
+    return getTableId().equals(RootTable.ID);
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java b/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java
index 61e882a..24a7141 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/impl/TabletIdImpl.java
@@ -17,6 +17,7 @@
 
 package org.apache.accumulo.core.data.impl;
 
+import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.TabletId;
 import org.apache.hadoop.io.Text;
 
@@ -52,7 +53,7 @@
 
   @Deprecated
   public TabletIdImpl(org.apache.accumulo.core.data.KeyExtent ke) {
-    this.ke = new KeyExtent(ke.getTableId(), ke.getEndRow(), ke.getPrevEndRow());
+    this.ke = new KeyExtent(ke.getTableId().toString(), ke.getEndRow(), ke.getPrevEndRow());
   }
 
   public TabletIdImpl(KeyExtent ke) {
@@ -66,7 +67,7 @@
 
   @Override
   public Text getTableId() {
-    return ke.getTableId();
+    return new Text(ke.getTableId());
   }
 
   @Override
@@ -97,4 +98,9 @@
   public String toString() {
     return ke.toString();
   }
+
+  @Override
+  public Range toRange() {
+    return ke.toDataRange();
+  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialMultiScan.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialMultiScan.java
index d124c14..fd59d4b 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialMultiScan.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialMultiScan.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class InitialMultiScan implements org.apache.thrift.TBase<InitialMultiScan, InitialMultiScan._Fields>, java.io.Serializable, Cloneable, Comparable<InitialMultiScan> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class InitialMultiScan implements org.apache.thrift.TBase<InitialMultiScan, InitialMultiScan._Fields>, java.io.Serializable, Cloneable, Comparable<InitialMultiScan> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("InitialMultiScan");
 
   private static final org.apache.thrift.protocol.TField SCAN_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("scanID", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -244,7 +247,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case SCAN_ID:
-      return Long.valueOf(getScanID());
+      return getScanID();
 
     case RESULT:
       return getResult();
@@ -304,7 +307,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_scanID = true;
+    list.add(present_scanID);
+    if (present_scanID)
+      list.add(scanID);
+
+    boolean present_result = true && (isSetResult());
+    list.add(present_result);
+    if (present_result)
+      list.add(result);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialScan.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialScan.java
index 38239d7..879bdcb 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialScan.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/InitialScan.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class InitialScan implements org.apache.thrift.TBase<InitialScan, InitialScan._Fields>, java.io.Serializable, Cloneable, Comparable<InitialScan> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class InitialScan implements org.apache.thrift.TBase<InitialScan, InitialScan._Fields>, java.io.Serializable, Cloneable, Comparable<InitialScan> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("InitialScan");
 
   private static final org.apache.thrift.protocol.TField SCAN_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("scanID", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -244,7 +247,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case SCAN_ID:
-      return Long.valueOf(getScanID());
+      return getScanID();
 
     case RESULT:
       return getResult();
@@ -304,7 +307,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_scanID = true;
+    list.add(present_scanID);
+    if (present_scanID)
+      list.add(scanID);
+
+    boolean present_result = true && (isSetResult());
+    list.add(present_result);
+    if (present_result)
+      list.add(result);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/IterInfo.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/IterInfo.java
index 890a6b8..3edc5d7 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/IterInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/IterInfo.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class IterInfo implements org.apache.thrift.TBase<IterInfo, IterInfo._Fields>, java.io.Serializable, Cloneable, Comparable<IterInfo> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class IterInfo implements org.apache.thrift.TBase<IterInfo, IterInfo._Fields>, java.io.Serializable, Cloneable, Comparable<IterInfo> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("IterInfo");
 
   private static final org.apache.thrift.protocol.TField PRIORITY_FIELD_DESC = new org.apache.thrift.protocol.TField("priority", org.apache.thrift.protocol.TType.I32, (short)1);
@@ -289,7 +292,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case PRIORITY:
-      return Integer.valueOf(getPriority());
+      return getPriority();
 
     case CLASS_NAME:
       return getClassName();
@@ -363,7 +366,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_priority = true;
+    list.add(present_priority);
+    if (present_priority)
+      list.add(priority);
+
+    boolean present_className = true && (isSetClassName());
+    list.add(present_className);
+    if (present_className)
+      list.add(className);
+
+    boolean present_iterName = true && (isSetIterName());
+    list.add(present_iterName);
+    if (present_iterName)
+      list.add(iterName);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/MapFileInfo.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/MapFileInfo.java
index 0fbf04f..ab11e18 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/MapFileInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/MapFileInfo.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class MapFileInfo implements org.apache.thrift.TBase<MapFileInfo, MapFileInfo._Fields>, java.io.Serializable, Cloneable, Comparable<MapFileInfo> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class MapFileInfo implements org.apache.thrift.TBase<MapFileInfo, MapFileInfo._Fields>, java.io.Serializable, Cloneable, Comparable<MapFileInfo> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("MapFileInfo");
 
   private static final org.apache.thrift.protocol.TField ESTIMATED_SIZE_FIELD_DESC = new org.apache.thrift.protocol.TField("estimatedSize", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -199,7 +202,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case ESTIMATED_SIZE:
-      return Long.valueOf(getEstimatedSize());
+      return getEstimatedSize();
 
     }
     throw new IllegalStateException();
@@ -245,7 +248,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_estimatedSize = true;
+    list.add(present_estimatedSize);
+    if (present_estimatedSize)
+      list.add(estimatedSize);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/MultiScanResult.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/MultiScanResult.java
index f1d9e61..eba7661 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/MultiScanResult.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/MultiScanResult.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class MultiScanResult implements org.apache.thrift.TBase<MultiScanResult, MultiScanResult._Fields>, java.io.Serializable, Cloneable, Comparable<MultiScanResult> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class MultiScanResult implements org.apache.thrift.TBase<MultiScanResult, MultiScanResult._Fields>, java.io.Serializable, Cloneable, Comparable<MultiScanResult> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("MultiScanResult");
 
   private static final org.apache.thrift.protocol.TField RESULTS_FIELD_DESC = new org.apache.thrift.protocol.TField("results", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -535,10 +538,10 @@
       return getPartNextKey();
 
     case PART_NEXT_KEY_INCLUSIVE:
-      return Boolean.valueOf(isPartNextKeyInclusive());
+      return isPartNextKeyInclusive();
 
     case MORE:
-      return Boolean.valueOf(isMore());
+      return isMore();
 
     }
     throw new IllegalStateException();
@@ -650,7 +653,44 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_results = true && (isSetResults());
+    list.add(present_results);
+    if (present_results)
+      list.add(results);
+
+    boolean present_failures = true && (isSetFailures());
+    list.add(present_failures);
+    if (present_failures)
+      list.add(failures);
+
+    boolean present_fullScans = true && (isSetFullScans());
+    list.add(present_fullScans);
+    if (present_fullScans)
+      list.add(fullScans);
+
+    boolean present_partScan = true && (isSetPartScan());
+    list.add(present_partScan);
+    if (present_partScan)
+      list.add(partScan);
+
+    boolean present_partNextKey = true && (isSetPartNextKey());
+    list.add(present_partNextKey);
+    if (present_partNextKey)
+      list.add(partNextKey);
+
+    boolean present_partNextKeyInclusive = true;
+    list.add(present_partNextKeyInclusive);
+    if (present_partNextKeyInclusive)
+      list.add(partNextKeyInclusive);
+
+    boolean present_more = true;
+    list.add(present_more);
+    if (present_more)
+      list.add(more);
+
+    return list.hashCode();
   }
 
   @Override
@@ -854,12 +894,12 @@
               {
                 org.apache.thrift.protocol.TList _list24 = iprot.readListBegin();
                 struct.results = new ArrayList<TKeyValue>(_list24.size);
-                for (int _i25 = 0; _i25 < _list24.size; ++_i25)
+                TKeyValue _elem25;
+                for (int _i26 = 0; _i26 < _list24.size; ++_i26)
                 {
-                  TKeyValue _elem26;
-                  _elem26 = new TKeyValue();
-                  _elem26.read(iprot);
-                  struct.results.add(_elem26);
+                  _elem25 = new TKeyValue();
+                  _elem25.read(iprot);
+                  struct.results.add(_elem25);
                 }
                 iprot.readListEnd();
               }
@@ -873,25 +913,25 @@
               {
                 org.apache.thrift.protocol.TMap _map27 = iprot.readMapBegin();
                 struct.failures = new HashMap<TKeyExtent,List<TRange>>(2*_map27.size);
-                for (int _i28 = 0; _i28 < _map27.size; ++_i28)
+                TKeyExtent _key28;
+                List<TRange> _val29;
+                for (int _i30 = 0; _i30 < _map27.size; ++_i30)
                 {
-                  TKeyExtent _key29;
-                  List<TRange> _val30;
-                  _key29 = new TKeyExtent();
-                  _key29.read(iprot);
+                  _key28 = new TKeyExtent();
+                  _key28.read(iprot);
                   {
                     org.apache.thrift.protocol.TList _list31 = iprot.readListBegin();
-                    _val30 = new ArrayList<TRange>(_list31.size);
-                    for (int _i32 = 0; _i32 < _list31.size; ++_i32)
+                    _val29 = new ArrayList<TRange>(_list31.size);
+                    TRange _elem32;
+                    for (int _i33 = 0; _i33 < _list31.size; ++_i33)
                     {
-                      TRange _elem33;
-                      _elem33 = new TRange();
-                      _elem33.read(iprot);
-                      _val30.add(_elem33);
+                      _elem32 = new TRange();
+                      _elem32.read(iprot);
+                      _val29.add(_elem32);
                     }
                     iprot.readListEnd();
                   }
-                  struct.failures.put(_key29, _val30);
+                  struct.failures.put(_key28, _val29);
                 }
                 iprot.readMapEnd();
               }
@@ -905,12 +945,12 @@
               {
                 org.apache.thrift.protocol.TList _list34 = iprot.readListBegin();
                 struct.fullScans = new ArrayList<TKeyExtent>(_list34.size);
-                for (int _i35 = 0; _i35 < _list34.size; ++_i35)
+                TKeyExtent _elem35;
+                for (int _i36 = 0; _i36 < _list34.size; ++_i36)
                 {
-                  TKeyExtent _elem36;
-                  _elem36 = new TKeyExtent();
-                  _elem36.read(iprot);
-                  struct.fullScans.add(_elem36);
+                  _elem35 = new TKeyExtent();
+                  _elem35.read(iprot);
+                  struct.fullScans.add(_elem35);
                 }
                 iprot.readListEnd();
               }
@@ -1124,12 +1164,12 @@
         {
           org.apache.thrift.protocol.TList _list45 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.results = new ArrayList<TKeyValue>(_list45.size);
-          for (int _i46 = 0; _i46 < _list45.size; ++_i46)
+          TKeyValue _elem46;
+          for (int _i47 = 0; _i47 < _list45.size; ++_i47)
           {
-            TKeyValue _elem47;
-            _elem47 = new TKeyValue();
-            _elem47.read(iprot);
-            struct.results.add(_elem47);
+            _elem46 = new TKeyValue();
+            _elem46.read(iprot);
+            struct.results.add(_elem46);
           }
         }
         struct.setResultsIsSet(true);
@@ -1138,24 +1178,24 @@
         {
           org.apache.thrift.protocol.TMap _map48 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
           struct.failures = new HashMap<TKeyExtent,List<TRange>>(2*_map48.size);
-          for (int _i49 = 0; _i49 < _map48.size; ++_i49)
+          TKeyExtent _key49;
+          List<TRange> _val50;
+          for (int _i51 = 0; _i51 < _map48.size; ++_i51)
           {
-            TKeyExtent _key50;
-            List<TRange> _val51;
-            _key50 = new TKeyExtent();
-            _key50.read(iprot);
+            _key49 = new TKeyExtent();
+            _key49.read(iprot);
             {
               org.apache.thrift.protocol.TList _list52 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-              _val51 = new ArrayList<TRange>(_list52.size);
-              for (int _i53 = 0; _i53 < _list52.size; ++_i53)
+              _val50 = new ArrayList<TRange>(_list52.size);
+              TRange _elem53;
+              for (int _i54 = 0; _i54 < _list52.size; ++_i54)
               {
-                TRange _elem54;
-                _elem54 = new TRange();
-                _elem54.read(iprot);
-                _val51.add(_elem54);
+                _elem53 = new TRange();
+                _elem53.read(iprot);
+                _val50.add(_elem53);
               }
             }
-            struct.failures.put(_key50, _val51);
+            struct.failures.put(_key49, _val50);
           }
         }
         struct.setFailuresIsSet(true);
@@ -1164,12 +1204,12 @@
         {
           org.apache.thrift.protocol.TList _list55 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.fullScans = new ArrayList<TKeyExtent>(_list55.size);
-          for (int _i56 = 0; _i56 < _list55.size; ++_i56)
+          TKeyExtent _elem56;
+          for (int _i57 = 0; _i57 < _list55.size; ++_i57)
           {
-            TKeyExtent _elem57;
-            _elem57 = new TKeyExtent();
-            _elem57.read(iprot);
-            struct.fullScans.add(_elem57);
+            _elem56 = new TKeyExtent();
+            _elem56.read(iprot);
+            struct.fullScans.add(_elem56);
           }
         }
         struct.setFullScansIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/ScanResult.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/ScanResult.java
index 3035db2..d147578 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/ScanResult.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/ScanResult.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ScanResult implements org.apache.thrift.TBase<ScanResult, ScanResult._Fields>, java.io.Serializable, Cloneable, Comparable<ScanResult> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ScanResult implements org.apache.thrift.TBase<ScanResult, ScanResult._Fields>, java.io.Serializable, Cloneable, Comparable<ScanResult> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ScanResult");
 
   private static final org.apache.thrift.protocol.TField RESULTS_FIELD_DESC = new org.apache.thrift.protocol.TField("results", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -267,7 +270,7 @@
       return getResults();
 
     case MORE:
-      return Boolean.valueOf(isMore());
+      return isMore();
 
     }
     throw new IllegalStateException();
@@ -324,7 +327,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_results = true && (isSetResults());
+    list.add(present_results);
+    if (present_results)
+      list.add(results);
+
+    boolean present_more = true;
+    list.add(present_more);
+    if (present_more)
+      list.add(more);
+
+    return list.hashCode();
   }
 
   @Override
@@ -436,12 +451,12 @@
               {
                 org.apache.thrift.protocol.TList _list16 = iprot.readListBegin();
                 struct.results = new ArrayList<TKeyValue>(_list16.size);
-                for (int _i17 = 0; _i17 < _list16.size; ++_i17)
+                TKeyValue _elem17;
+                for (int _i18 = 0; _i18 < _list16.size; ++_i18)
                 {
-                  TKeyValue _elem18;
-                  _elem18 = new TKeyValue();
-                  _elem18.read(iprot);
-                  struct.results.add(_elem18);
+                  _elem17 = new TKeyValue();
+                  _elem17.read(iprot);
+                  struct.results.add(_elem17);
                 }
                 iprot.readListEnd();
               }
@@ -535,12 +550,12 @@
         {
           org.apache.thrift.protocol.TList _list21 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.results = new ArrayList<TKeyValue>(_list21.size);
-          for (int _i22 = 0; _i22 < _list21.size; ++_i22)
+          TKeyValue _elem22;
+          for (int _i23 = 0; _i23 < _list21.size; ++_i23)
           {
-            TKeyValue _elem23;
-            _elem23 = new TKeyValue();
-            _elem23.read(iprot);
-            struct.results.add(_elem23);
+            _elem22 = new TKeyValue();
+            _elem22.read(iprot);
+            struct.results.add(_elem22);
           }
         }
         struct.setResultsIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMResult.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMResult.java
index 7c60cd1..40ba6b1 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMResult.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMResult.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TCMResult implements org.apache.thrift.TBase<TCMResult, TCMResult._Fields>, java.io.Serializable, Cloneable, Comparable<TCMResult> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TCMResult implements org.apache.thrift.TBase<TCMResult, TCMResult._Fields>, java.io.Serializable, Cloneable, Comparable<TCMResult> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TCMResult");
 
   private static final org.apache.thrift.protocol.TField CMID_FIELD_DESC = new org.apache.thrift.protocol.TField("cmid", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -260,7 +263,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case CMID:
-      return Long.valueOf(getCmid());
+      return getCmid();
 
     case STATUS:
       return getStatus();
@@ -320,7 +323,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_cmid = true;
+    list.add(present_cmid);
+    if (present_cmid)
+      list.add(cmid);
+
+    boolean present_status = true && (isSetStatus());
+    list.add(present_status);
+    if (present_status)
+      list.add(status.getValue());
+
+    return list.hashCode();
   }
 
   @Override
@@ -437,7 +452,7 @@
             break;
           case 2: // STATUS
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.status = TCMStatus.findByValue(iprot.readI32());
+              struct.status = org.apache.accumulo.core.data.thrift.TCMStatus.findByValue(iprot.readI32());
               struct.setStatusIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -508,7 +523,7 @@
         struct.setCmidIsSet(true);
       }
       if (incoming.get(1)) {
-        struct.status = TCMStatus.findByValue(iprot.readI32());
+        struct.status = org.apache.accumulo.core.data.thrift.TCMStatus.findByValue(iprot.readI32());
         struct.setStatusIsSet(true);
       }
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMStatus.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMStatus.java
index 993e9e2..6cbe5af 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMStatus.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TCMStatus.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TColumn.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TColumn.java
index 127f7c6..839d801 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TColumn.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TColumn.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TColumn implements org.apache.thrift.TBase<TColumn, TColumn._Fields>, java.io.Serializable, Cloneable, Comparable<TColumn> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TColumn implements org.apache.thrift.TBase<TColumn, TColumn._Fields>, java.io.Serializable, Cloneable, Comparable<TColumn> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TColumn");
 
   private static final org.apache.thrift.protocol.TField COLUMN_FAMILY_FIELD_DESC = new org.apache.thrift.protocol.TField("columnFamily", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -152,9 +155,9 @@
     ByteBuffer columnVisibility)
   {
     this();
-    this.columnFamily = columnFamily;
-    this.columnQualifier = columnQualifier;
-    this.columnVisibility = columnVisibility;
+    this.columnFamily = org.apache.thrift.TBaseHelper.copyBinary(columnFamily);
+    this.columnQualifier = org.apache.thrift.TBaseHelper.copyBinary(columnQualifier);
+    this.columnVisibility = org.apache.thrift.TBaseHelper.copyBinary(columnVisibility);
   }
 
   /**
@@ -163,15 +166,12 @@
   public TColumn(TColumn other) {
     if (other.isSetColumnFamily()) {
       this.columnFamily = org.apache.thrift.TBaseHelper.copyBinary(other.columnFamily);
-;
     }
     if (other.isSetColumnQualifier()) {
       this.columnQualifier = org.apache.thrift.TBaseHelper.copyBinary(other.columnQualifier);
-;
     }
     if (other.isSetColumnVisibility()) {
       this.columnVisibility = org.apache.thrift.TBaseHelper.copyBinary(other.columnVisibility);
-;
     }
   }
 
@@ -192,16 +192,16 @@
   }
 
   public ByteBuffer bufferForColumnFamily() {
-    return columnFamily;
+    return org.apache.thrift.TBaseHelper.copyBinary(columnFamily);
   }
 
   public TColumn setColumnFamily(byte[] columnFamily) {
-    setColumnFamily(columnFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(columnFamily));
+    this.columnFamily = columnFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(columnFamily, columnFamily.length));
     return this;
   }
 
   public TColumn setColumnFamily(ByteBuffer columnFamily) {
-    this.columnFamily = columnFamily;
+    this.columnFamily = org.apache.thrift.TBaseHelper.copyBinary(columnFamily);
     return this;
   }
 
@@ -226,16 +226,16 @@
   }
 
   public ByteBuffer bufferForColumnQualifier() {
-    return columnQualifier;
+    return org.apache.thrift.TBaseHelper.copyBinary(columnQualifier);
   }
 
   public TColumn setColumnQualifier(byte[] columnQualifier) {
-    setColumnQualifier(columnQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(columnQualifier));
+    this.columnQualifier = columnQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(columnQualifier, columnQualifier.length));
     return this;
   }
 
   public TColumn setColumnQualifier(ByteBuffer columnQualifier) {
-    this.columnQualifier = columnQualifier;
+    this.columnQualifier = org.apache.thrift.TBaseHelper.copyBinary(columnQualifier);
     return this;
   }
 
@@ -260,16 +260,16 @@
   }
 
   public ByteBuffer bufferForColumnVisibility() {
-    return columnVisibility;
+    return org.apache.thrift.TBaseHelper.copyBinary(columnVisibility);
   }
 
   public TColumn setColumnVisibility(byte[] columnVisibility) {
-    setColumnVisibility(columnVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(columnVisibility));
+    this.columnVisibility = columnVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(columnVisibility, columnVisibility.length));
     return this;
   }
 
   public TColumn setColumnVisibility(ByteBuffer columnVisibility) {
-    this.columnVisibility = columnVisibility;
+    this.columnVisibility = org.apache.thrift.TBaseHelper.copyBinary(columnVisibility);
     return this;
   }
 
@@ -394,7 +394,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_columnFamily = true && (isSetColumnFamily());
+    list.add(present_columnFamily);
+    if (present_columnFamily)
+      list.add(columnFamily);
+
+    boolean present_columnQualifier = true && (isSetColumnQualifier());
+    list.add(present_columnQualifier);
+    if (present_columnQualifier)
+      list.add(columnQualifier);
+
+    boolean present_columnVisibility = true && (isSetColumnVisibility());
+    list.add(present_columnVisibility);
+    if (present_columnVisibility)
+      list.add(columnVisibility);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TCondition.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TCondition.java
index a4ae6df..2a4e131 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TCondition.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TCondition.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TCondition implements org.apache.thrift.TBase<TCondition, TCondition._Fields>, java.io.Serializable, Cloneable, Comparable<TCondition> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TCondition implements org.apache.thrift.TBase<TCondition, TCondition._Fields>, java.io.Serializable, Cloneable, Comparable<TCondition> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TCondition");
 
   private static final org.apache.thrift.protocol.TField CF_FIELD_DESC = new org.apache.thrift.protocol.TField("cf", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -187,15 +190,15 @@
     ByteBuffer iterators)
   {
     this();
-    this.cf = cf;
-    this.cq = cq;
-    this.cv = cv;
+    this.cf = org.apache.thrift.TBaseHelper.copyBinary(cf);
+    this.cq = org.apache.thrift.TBaseHelper.copyBinary(cq);
+    this.cv = org.apache.thrift.TBaseHelper.copyBinary(cv);
     this.ts = ts;
     setTsIsSet(true);
     this.hasTimestamp = hasTimestamp;
     setHasTimestampIsSet(true);
-    this.val = val;
-    this.iterators = iterators;
+    this.val = org.apache.thrift.TBaseHelper.copyBinary(val);
+    this.iterators = org.apache.thrift.TBaseHelper.copyBinary(iterators);
   }
 
   /**
@@ -205,25 +208,20 @@
     __isset_bitfield = other.__isset_bitfield;
     if (other.isSetCf()) {
       this.cf = org.apache.thrift.TBaseHelper.copyBinary(other.cf);
-;
     }
     if (other.isSetCq()) {
       this.cq = org.apache.thrift.TBaseHelper.copyBinary(other.cq);
-;
     }
     if (other.isSetCv()) {
       this.cv = org.apache.thrift.TBaseHelper.copyBinary(other.cv);
-;
     }
     this.ts = other.ts;
     this.hasTimestamp = other.hasTimestamp;
     if (other.isSetVal()) {
       this.val = org.apache.thrift.TBaseHelper.copyBinary(other.val);
-;
     }
     if (other.isSetIterators()) {
       this.iterators = org.apache.thrift.TBaseHelper.copyBinary(other.iterators);
-;
     }
   }
 
@@ -250,16 +248,16 @@
   }
 
   public ByteBuffer bufferForCf() {
-    return cf;
+    return org.apache.thrift.TBaseHelper.copyBinary(cf);
   }
 
   public TCondition setCf(byte[] cf) {
-    setCf(cf == null ? (ByteBuffer)null : ByteBuffer.wrap(cf));
+    this.cf = cf == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(cf, cf.length));
     return this;
   }
 
   public TCondition setCf(ByteBuffer cf) {
-    this.cf = cf;
+    this.cf = org.apache.thrift.TBaseHelper.copyBinary(cf);
     return this;
   }
 
@@ -284,16 +282,16 @@
   }
 
   public ByteBuffer bufferForCq() {
-    return cq;
+    return org.apache.thrift.TBaseHelper.copyBinary(cq);
   }
 
   public TCondition setCq(byte[] cq) {
-    setCq(cq == null ? (ByteBuffer)null : ByteBuffer.wrap(cq));
+    this.cq = cq == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(cq, cq.length));
     return this;
   }
 
   public TCondition setCq(ByteBuffer cq) {
-    this.cq = cq;
+    this.cq = org.apache.thrift.TBaseHelper.copyBinary(cq);
     return this;
   }
 
@@ -318,16 +316,16 @@
   }
 
   public ByteBuffer bufferForCv() {
-    return cv;
+    return org.apache.thrift.TBaseHelper.copyBinary(cv);
   }
 
   public TCondition setCv(byte[] cv) {
-    setCv(cv == null ? (ByteBuffer)null : ByteBuffer.wrap(cv));
+    this.cv = cv == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(cv, cv.length));
     return this;
   }
 
   public TCondition setCv(ByteBuffer cv) {
-    this.cv = cv;
+    this.cv = org.apache.thrift.TBaseHelper.copyBinary(cv);
     return this;
   }
 
@@ -398,16 +396,16 @@
   }
 
   public ByteBuffer bufferForVal() {
-    return val;
+    return org.apache.thrift.TBaseHelper.copyBinary(val);
   }
 
   public TCondition setVal(byte[] val) {
-    setVal(val == null ? (ByteBuffer)null : ByteBuffer.wrap(val));
+    this.val = val == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(val, val.length));
     return this;
   }
 
   public TCondition setVal(ByteBuffer val) {
-    this.val = val;
+    this.val = org.apache.thrift.TBaseHelper.copyBinary(val);
     return this;
   }
 
@@ -432,16 +430,16 @@
   }
 
   public ByteBuffer bufferForIterators() {
-    return iterators;
+    return org.apache.thrift.TBaseHelper.copyBinary(iterators);
   }
 
   public TCondition setIterators(byte[] iterators) {
-    setIterators(iterators == null ? (ByteBuffer)null : ByteBuffer.wrap(iterators));
+    this.iterators = iterators == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(iterators, iterators.length));
     return this;
   }
 
   public TCondition setIterators(ByteBuffer iterators) {
-    this.iterators = iterators;
+    this.iterators = org.apache.thrift.TBaseHelper.copyBinary(iterators);
     return this;
   }
 
@@ -533,10 +531,10 @@
       return getCv();
 
     case TS:
-      return Long.valueOf(getTs());
+      return getTs();
 
     case HAS_TIMESTAMP:
-      return Boolean.valueOf(isHasTimestamp());
+      return isHasTimestamp();
 
     case VAL:
       return getVal();
@@ -654,7 +652,44 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_cf = true && (isSetCf());
+    list.add(present_cf);
+    if (present_cf)
+      list.add(cf);
+
+    boolean present_cq = true && (isSetCq());
+    list.add(present_cq);
+    if (present_cq)
+      list.add(cq);
+
+    boolean present_cv = true && (isSetCv());
+    list.add(present_cv);
+    if (present_cv)
+      list.add(cv);
+
+    boolean present_ts = true;
+    list.add(present_ts);
+    if (present_ts)
+      list.add(ts);
+
+    boolean present_hasTimestamp = true;
+    list.add(present_hasTimestamp);
+    if (present_hasTimestamp)
+      list.add(hasTimestamp);
+
+    boolean present_val = true && (isSetVal());
+    list.add(present_val);
+    if (present_val)
+      list.add(val);
+
+    boolean present_iterators = true && (isSetIterators());
+    list.add(present_iterators);
+    if (present_iterators)
+      list.add(iterators);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalMutation.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalMutation.java
index 431a81c..98b9e97 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalMutation.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalMutation.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TConditionalMutation implements org.apache.thrift.TBase<TConditionalMutation, TConditionalMutation._Fields>, java.io.Serializable, Cloneable, Comparable<TConditionalMutation> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TConditionalMutation implements org.apache.thrift.TBase<TConditionalMutation, TConditionalMutation._Fields>, java.io.Serializable, Cloneable, Comparable<TConditionalMutation> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TConditionalMutation");
 
   private static final org.apache.thrift.protocol.TField CONDITIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("conditions", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -315,7 +318,7 @@
       return getMutation();
 
     case ID:
-      return Long.valueOf(getId());
+      return getId();
 
     }
     throw new IllegalStateException();
@@ -383,7 +386,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_conditions = true && (isSetConditions());
+    list.add(present_conditions);
+    if (present_conditions)
+      list.add(conditions);
+
+    boolean present_mutation = true && (isSetMutation());
+    list.add(present_mutation);
+    if (present_mutation)
+      list.add(mutation);
+
+    boolean present_id = true;
+    list.add(present_id);
+    if (present_id)
+      list.add(id);
+
+    return list.hashCode();
   }
 
   @Override
@@ -516,12 +536,12 @@
               {
                 org.apache.thrift.protocol.TList _list86 = iprot.readListBegin();
                 struct.conditions = new ArrayList<TCondition>(_list86.size);
-                for (int _i87 = 0; _i87 < _list86.size; ++_i87)
+                TCondition _elem87;
+                for (int _i88 = 0; _i88 < _list86.size; ++_i88)
                 {
-                  TCondition _elem88;
-                  _elem88 = new TCondition();
-                  _elem88.read(iprot);
-                  struct.conditions.add(_elem88);
+                  _elem87 = new TCondition();
+                  _elem87.read(iprot);
+                  struct.conditions.add(_elem87);
                 }
                 iprot.readListEnd();
               }
@@ -635,12 +655,12 @@
         {
           org.apache.thrift.protocol.TList _list91 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.conditions = new ArrayList<TCondition>(_list91.size);
-          for (int _i92 = 0; _i92 < _list91.size; ++_i92)
+          TCondition _elem92;
+          for (int _i93 = 0; _i93 < _list91.size; ++_i93)
           {
-            TCondition _elem93;
-            _elem93 = new TCondition();
-            _elem93.read(iprot);
-            struct.conditions.add(_elem93);
+            _elem92 = new TCondition();
+            _elem92.read(iprot);
+            struct.conditions.add(_elem92);
           }
         }
         struct.setConditionsIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalSession.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalSession.java
index eb08623..8bab8cb 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalSession.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TConditionalSession.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TConditionalSession implements org.apache.thrift.TBase<TConditionalSession, TConditionalSession._Fields>, java.io.Serializable, Cloneable, Comparable<TConditionalSession> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TConditionalSession implements org.apache.thrift.TBase<TConditionalSession, TConditionalSession._Fields>, java.io.Serializable, Cloneable, Comparable<TConditionalSession> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TConditionalSession");
 
   private static final org.apache.thrift.protocol.TField SESSION_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("sessionId", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -289,13 +292,13 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case SESSION_ID:
-      return Long.valueOf(getSessionId());
+      return getSessionId();
 
     case TSERVER_LOCK:
       return getTserverLock();
 
     case TTL:
-      return Long.valueOf(getTtl());
+      return getTtl();
 
     }
     throw new IllegalStateException();
@@ -363,7 +366,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_sessionId = true;
+    list.add(present_sessionId);
+    if (present_sessionId)
+      list.add(sessionId);
+
+    boolean present_tserverLock = true && (isSetTserverLock());
+    list.add(present_tserverLock);
+    if (present_tserverLock)
+      list.add(tserverLock);
+
+    boolean present_ttl = true;
+    list.add(present_ttl);
+    if (present_ttl)
+      list.add(ttl);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TConstraintViolationSummary.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TConstraintViolationSummary.java
index 1dc51e3..824fb5f 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TConstraintViolationSummary.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TConstraintViolationSummary.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TConstraintViolationSummary implements org.apache.thrift.TBase<TConstraintViolationSummary, TConstraintViolationSummary._Fields>, java.io.Serializable, Cloneable, Comparable<TConstraintViolationSummary> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TConstraintViolationSummary implements org.apache.thrift.TBase<TConstraintViolationSummary, TConstraintViolationSummary._Fields>, java.io.Serializable, Cloneable, Comparable<TConstraintViolationSummary> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TConstraintViolationSummary");
 
   private static final org.apache.thrift.protocol.TField CONSTRAIN_CLASS_FIELD_DESC = new org.apache.thrift.protocol.TField("constrainClass", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -337,13 +340,13 @@
       return getConstrainClass();
 
     case VIOLATION_CODE:
-      return Short.valueOf(getViolationCode());
+      return getViolationCode();
 
     case VIOLATION_DESCRIPTION:
       return getViolationDescription();
 
     case NUMBER_OF_VIOLATING_MUTATIONS:
-      return Long.valueOf(getNumberOfViolatingMutations());
+      return getNumberOfViolatingMutations();
 
     }
     throw new IllegalStateException();
@@ -422,7 +425,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_constrainClass = true && (isSetConstrainClass());
+    list.add(present_constrainClass);
+    if (present_constrainClass)
+      list.add(constrainClass);
+
+    boolean present_violationCode = true;
+    list.add(present_violationCode);
+    if (present_violationCode)
+      list.add(violationCode);
+
+    boolean present_violationDescription = true && (isSetViolationDescription());
+    list.add(present_violationDescription);
+    if (present_violationDescription)
+      list.add(violationDescription);
+
+    boolean present_numberOfViolatingMutations = true;
+    list.add(present_numberOfViolatingMutations);
+    if (present_numberOfViolatingMutations)
+      list.add(numberOfViolatingMutations);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TKey.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TKey.java
index 2b81651..fc66459 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TKey.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TKey.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TKey implements org.apache.thrift.TBase<TKey, TKey._Fields>, java.io.Serializable, Cloneable, Comparable<TKey> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TKey implements org.apache.thrift.TBase<TKey, TKey._Fields>, java.io.Serializable, Cloneable, Comparable<TKey> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TKey");
 
   private static final org.apache.thrift.protocol.TField ROW_FIELD_DESC = new org.apache.thrift.protocol.TField("row", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -170,10 +173,10 @@
     long timestamp)
   {
     this();
-    this.row = row;
-    this.colFamily = colFamily;
-    this.colQualifier = colQualifier;
-    this.colVisibility = colVisibility;
+    this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
+    this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
     this.timestamp = timestamp;
     setTimestampIsSet(true);
   }
@@ -185,19 +188,15 @@
     __isset_bitfield = other.__isset_bitfield;
     if (other.isSetRow()) {
       this.row = org.apache.thrift.TBaseHelper.copyBinary(other.row);
-;
     }
     if (other.isSetColFamily()) {
       this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(other.colFamily);
-;
     }
     if (other.isSetColQualifier()) {
       this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(other.colQualifier);
-;
     }
     if (other.isSetColVisibility()) {
       this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(other.colVisibility);
-;
     }
     this.timestamp = other.timestamp;
   }
@@ -222,16 +221,16 @@
   }
 
   public ByteBuffer bufferForRow() {
-    return row;
+    return org.apache.thrift.TBaseHelper.copyBinary(row);
   }
 
   public TKey setRow(byte[] row) {
-    setRow(row == null ? (ByteBuffer)null : ByteBuffer.wrap(row));
+    this.row = row == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(row, row.length));
     return this;
   }
 
   public TKey setRow(ByteBuffer row) {
-    this.row = row;
+    this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
     return this;
   }
 
@@ -256,16 +255,16 @@
   }
 
   public ByteBuffer bufferForColFamily() {
-    return colFamily;
+    return org.apache.thrift.TBaseHelper.copyBinary(colFamily);
   }
 
   public TKey setColFamily(byte[] colFamily) {
-    setColFamily(colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(colFamily));
+    this.colFamily = colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colFamily, colFamily.length));
     return this;
   }
 
   public TKey setColFamily(ByteBuffer colFamily) {
-    this.colFamily = colFamily;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
     return this;
   }
 
@@ -290,16 +289,16 @@
   }
 
   public ByteBuffer bufferForColQualifier() {
-    return colQualifier;
+    return org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
   }
 
   public TKey setColQualifier(byte[] colQualifier) {
-    setColQualifier(colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(colQualifier));
+    this.colQualifier = colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colQualifier, colQualifier.length));
     return this;
   }
 
   public TKey setColQualifier(ByteBuffer colQualifier) {
-    this.colQualifier = colQualifier;
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
     return this;
   }
 
@@ -324,16 +323,16 @@
   }
 
   public ByteBuffer bufferForColVisibility() {
-    return colVisibility;
+    return org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
   }
 
   public TKey setColVisibility(byte[] colVisibility) {
-    setColVisibility(colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(colVisibility));
+    this.colVisibility = colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colVisibility, colVisibility.length));
     return this;
   }
 
   public TKey setColVisibility(ByteBuffer colVisibility) {
-    this.colVisibility = colVisibility;
+    this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
     return this;
   }
 
@@ -435,7 +434,7 @@
       return getColVisibility();
 
     case TIMESTAMP:
-      return Long.valueOf(getTimestamp());
+      return getTimestamp();
 
     }
     throw new IllegalStateException();
@@ -525,7 +524,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_row = true && (isSetRow());
+    list.add(present_row);
+    if (present_row)
+      list.add(row);
+
+    boolean present_colFamily = true && (isSetColFamily());
+    list.add(present_colFamily);
+    if (present_colFamily)
+      list.add(colFamily);
+
+    boolean present_colQualifier = true && (isSetColQualifier());
+    list.add(present_colQualifier);
+    if (present_colQualifier)
+      list.add(colQualifier);
+
+    boolean present_colVisibility = true && (isSetColVisibility());
+    list.add(present_colVisibility);
+    if (present_colVisibility)
+      list.add(colVisibility);
+
+    boolean present_timestamp = true;
+    list.add(present_timestamp);
+    if (present_timestamp)
+      list.add(timestamp);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyExtent.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyExtent.java
index 20f1fea..9aaa4dc 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyExtent.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyExtent.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TKeyExtent implements org.apache.thrift.TBase<TKeyExtent, TKeyExtent._Fields>, java.io.Serializable, Cloneable, Comparable<TKeyExtent> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TKeyExtent implements org.apache.thrift.TBase<TKeyExtent, TKeyExtent._Fields>, java.io.Serializable, Cloneable, Comparable<TKeyExtent> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TKeyExtent");
 
   private static final org.apache.thrift.protocol.TField TABLE_FIELD_DESC = new org.apache.thrift.protocol.TField("table", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -152,9 +155,9 @@
     ByteBuffer prevEndRow)
   {
     this();
-    this.table = table;
-    this.endRow = endRow;
-    this.prevEndRow = prevEndRow;
+    this.table = org.apache.thrift.TBaseHelper.copyBinary(table);
+    this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
+    this.prevEndRow = org.apache.thrift.TBaseHelper.copyBinary(prevEndRow);
   }
 
   /**
@@ -163,15 +166,12 @@
   public TKeyExtent(TKeyExtent other) {
     if (other.isSetTable()) {
       this.table = org.apache.thrift.TBaseHelper.copyBinary(other.table);
-;
     }
     if (other.isSetEndRow()) {
       this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
     }
     if (other.isSetPrevEndRow()) {
       this.prevEndRow = org.apache.thrift.TBaseHelper.copyBinary(other.prevEndRow);
-;
     }
   }
 
@@ -192,16 +192,16 @@
   }
 
   public ByteBuffer bufferForTable() {
-    return table;
+    return org.apache.thrift.TBaseHelper.copyBinary(table);
   }
 
   public TKeyExtent setTable(byte[] table) {
-    setTable(table == null ? (ByteBuffer)null : ByteBuffer.wrap(table));
+    this.table = table == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(table, table.length));
     return this;
   }
 
   public TKeyExtent setTable(ByteBuffer table) {
-    this.table = table;
+    this.table = org.apache.thrift.TBaseHelper.copyBinary(table);
     return this;
   }
 
@@ -226,16 +226,16 @@
   }
 
   public ByteBuffer bufferForEndRow() {
-    return endRow;
+    return org.apache.thrift.TBaseHelper.copyBinary(endRow);
   }
 
   public TKeyExtent setEndRow(byte[] endRow) {
-    setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+    this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
     return this;
   }
 
   public TKeyExtent setEndRow(ByteBuffer endRow) {
-    this.endRow = endRow;
+    this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
     return this;
   }
 
@@ -260,16 +260,16 @@
   }
 
   public ByteBuffer bufferForPrevEndRow() {
-    return prevEndRow;
+    return org.apache.thrift.TBaseHelper.copyBinary(prevEndRow);
   }
 
   public TKeyExtent setPrevEndRow(byte[] prevEndRow) {
-    setPrevEndRow(prevEndRow == null ? (ByteBuffer)null : ByteBuffer.wrap(prevEndRow));
+    this.prevEndRow = prevEndRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(prevEndRow, prevEndRow.length));
     return this;
   }
 
   public TKeyExtent setPrevEndRow(ByteBuffer prevEndRow) {
-    this.prevEndRow = prevEndRow;
+    this.prevEndRow = org.apache.thrift.TBaseHelper.copyBinary(prevEndRow);
     return this;
   }
 
@@ -394,7 +394,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_table = true && (isSetTable());
+    list.add(present_table);
+    if (present_table)
+      list.add(table);
+
+    boolean present_endRow = true && (isSetEndRow());
+    list.add(present_endRow);
+    if (present_endRow)
+      list.add(endRow);
+
+    boolean present_prevEndRow = true && (isSetPrevEndRow());
+    list.add(present_prevEndRow);
+    if (present_prevEndRow)
+      list.add(prevEndRow);
+
+    return list.hashCode();
   }
 
   @Override
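The `hashCode` hunks above replace the old `return 0;` stub with Thrift 0.9.3's list-based hash: for each field, a "present" flag is recorded, and the value only when present, then the assembled list is hashed. A minimal sketch of the pattern, with illustrative field names rather than the generated ones (`ByteBuffer.hashCode()` is content-based, so equal buffers hash equally):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of the Thrift 0.9.3 generated hashCode pattern: present flag,
// then value if present, for every field, hashed together as one list.
class ExtentLike {
  ByteBuffer table;  // optional binary field
  ByteBuffer endRow; // optional binary field

  @Override
  public int hashCode() {
    List<Object> list = new ArrayList<Object>();

    boolean present_table = (table != null);
    list.add(present_table);
    if (present_table)
      list.add(table);

    boolean present_endRow = (endRow != null);
    list.add(present_endRow);
    if (present_endRow)
      list.add(endRow);

    return list.hashCode();
  }
}
```

Because the present flags participate in the hash, an unset field and a set field with a coincidentally equal value produce different hashes, keeping `hashCode` consistent with the generated `equals`.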
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyValue.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyValue.java
index 75161b0..01a3820 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyValue.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TKeyValue.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TKeyValue implements org.apache.thrift.TBase<TKeyValue, TKeyValue._Fields>, java.io.Serializable, Cloneable, Comparable<TKeyValue> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TKeyValue implements org.apache.thrift.TBase<TKeyValue, TKeyValue._Fields>, java.io.Serializable, Cloneable, Comparable<TKeyValue> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TKeyValue");
 
   private static final org.apache.thrift.protocol.TField KEY_FIELD_DESC = new org.apache.thrift.protocol.TField("key", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -145,7 +148,7 @@
   {
     this();
     this.key = key;
-    this.value = value;
+    this.value = org.apache.thrift.TBaseHelper.copyBinary(value);
   }
 
   /**
@@ -157,7 +160,6 @@
     }
     if (other.isSetValue()) {
       this.value = org.apache.thrift.TBaseHelper.copyBinary(other.value);
-;
     }
   }
 
@@ -201,16 +203,16 @@
   }
 
   public ByteBuffer bufferForValue() {
-    return value;
+    return org.apache.thrift.TBaseHelper.copyBinary(value);
   }
 
   public TKeyValue setValue(byte[] value) {
-    setValue(value == null ? (ByteBuffer)null : ByteBuffer.wrap(value));
+    this.value = value == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(value, value.length));
     return this;
   }
 
   public TKeyValue setValue(ByteBuffer value) {
-    this.value = value;
+    this.value = org.apache.thrift.TBaseHelper.copyBinary(value);
     return this;
   }
 
@@ -313,7 +315,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_key = true && (isSetKey());
+    list.add(present_key);
+    if (present_key)
+      list.add(key);
+
+    boolean present_value = true && (isSetValue());
+    list.add(present_value);
+    if (present_value)
+      list.add(value);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TMutation.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TMutation.java
index c13e989..1a153a9 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TMutation.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TMutation.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TMutation implements org.apache.thrift.TBase<TMutation, TMutation._Fields>, java.io.Serializable, Cloneable, Comparable<TMutation> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TMutation implements org.apache.thrift.TBase<TMutation, TMutation._Fields>, java.io.Serializable, Cloneable, Comparable<TMutation> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TMutation");
 
   private static final org.apache.thrift.protocol.TField ROW_FIELD_DESC = new org.apache.thrift.protocol.TField("row", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -142,7 +145,7 @@
   // isset id assignments
   private static final int __ENTRIES_ISSET_ID = 0;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.SOURCES};
+  private static final _Fields optionals[] = {_Fields.SOURCES};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -172,8 +175,8 @@
     int entries)
   {
     this();
-    this.row = row;
-    this.data = data;
+    this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
+    this.data = org.apache.thrift.TBaseHelper.copyBinary(data);
     this.values = values;
     this.entries = entries;
     setEntriesIsSet(true);
@@ -186,11 +189,9 @@
     __isset_bitfield = other.__isset_bitfield;
     if (other.isSetRow()) {
       this.row = org.apache.thrift.TBaseHelper.copyBinary(other.row);
-;
     }
     if (other.isSetData()) {
       this.data = org.apache.thrift.TBaseHelper.copyBinary(other.data);
-;
     }
     if (other.isSetValues()) {
       List<ByteBuffer> __this__values = new ArrayList<ByteBuffer>(other.values);
@@ -223,16 +224,16 @@
   }
 
   public ByteBuffer bufferForRow() {
-    return row;
+    return org.apache.thrift.TBaseHelper.copyBinary(row);
   }
 
   public TMutation setRow(byte[] row) {
-    setRow(row == null ? (ByteBuffer)null : ByteBuffer.wrap(row));
+    this.row = row == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(row, row.length));
     return this;
   }
 
   public TMutation setRow(ByteBuffer row) {
-    this.row = row;
+    this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
     return this;
   }
 
@@ -257,16 +258,16 @@
   }
 
   public ByteBuffer bufferForData() {
-    return data;
+    return org.apache.thrift.TBaseHelper.copyBinary(data);
   }
 
   public TMutation setData(byte[] data) {
-    setData(data == null ? (ByteBuffer)null : ByteBuffer.wrap(data));
+    this.data = data == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(data, data.length));
     return this;
   }
 
   public TMutation setData(ByteBuffer data) {
-    this.data = data;
+    this.data = org.apache.thrift.TBaseHelper.copyBinary(data);
     return this;
   }
 
@@ -443,7 +444,7 @@
       return getValues();
 
     case ENTRIES:
-      return Integer.valueOf(getEntries());
+      return getEntries();
 
     case SOURCES:
       return getSources();
@@ -536,7 +537,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_row = true && (isSetRow());
+    list.add(present_row);
+    if (present_row)
+      list.add(row);
+
+    boolean present_data = true && (isSetData());
+    list.add(present_data);
+    if (present_data)
+      list.add(data);
+
+    boolean present_values = true && (isSetValues());
+    list.add(present_values);
+    if (present_values)
+      list.add(values);
+
+    boolean present_entries = true;
+    list.add(present_entries);
+    if (present_entries)
+      list.add(entries);
+
+    boolean present_sources = true && (isSetSources());
+    list.add(present_sources);
+    if (present_sources)
+      list.add(sources);
+
+    return list.hashCode();
   }
 
   @Override
@@ -637,7 +665,7 @@
     if (this.values == null) {
       sb.append("null");
     } else {
-      sb.append(this.values);
+      org.apache.thrift.TBaseHelper.toString(this.values, sb);
     }
     first = false;
     if (!first) sb.append(", ");
@@ -720,11 +748,11 @@
               {
                 org.apache.thrift.protocol.TList _list0 = iprot.readListBegin();
                 struct.values = new ArrayList<ByteBuffer>(_list0.size);
-                for (int _i1 = 0; _i1 < _list0.size; ++_i1)
+                ByteBuffer _elem1;
+                for (int _i2 = 0; _i2 < _list0.size; ++_i2)
                 {
-                  ByteBuffer _elem2;
-                  _elem2 = iprot.readBinary();
-                  struct.values.add(_elem2);
+                  _elem1 = iprot.readBinary();
+                  struct.values.add(_elem1);
                 }
                 iprot.readListEnd();
               }
@@ -746,11 +774,11 @@
               {
                 org.apache.thrift.protocol.TList _list3 = iprot.readListBegin();
                 struct.sources = new ArrayList<String>(_list3.size);
-                for (int _i4 = 0; _i4 < _list3.size; ++_i4)
+                String _elem4;
+                for (int _i5 = 0; _i5 < _list3.size; ++_i5)
                 {
-                  String _elem5;
-                  _elem5 = iprot.readString();
-                  struct.sources.add(_elem5);
+                  _elem4 = iprot.readString();
+                  struct.sources.add(_elem4);
                 }
                 iprot.readListEnd();
               }
@@ -892,11 +920,11 @@
         {
           org.apache.thrift.protocol.TList _list10 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.values = new ArrayList<ByteBuffer>(_list10.size);
-          for (int _i11 = 0; _i11 < _list10.size; ++_i11)
+          ByteBuffer _elem11;
+          for (int _i12 = 0; _i12 < _list10.size; ++_i12)
           {
-            ByteBuffer _elem12;
-            _elem12 = iprot.readBinary();
-            struct.values.add(_elem12);
+            _elem11 = iprot.readBinary();
+            struct.values.add(_elem11);
           }
         }
         struct.setValuesIsSet(true);
@@ -909,11 +937,11 @@
         {
           org.apache.thrift.protocol.TList _list13 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.sources = new ArrayList<String>(_list13.size);
-          for (int _i14 = 0; _i14 < _list13.size; ++_i14)
+          String _elem14;
+          for (int _i15 = 0; _i15 < _list13.size; ++_i15)
           {
-            String _elem15;
-            _elem15 = iprot.readString();
-            struct.sources.add(_elem15);
+            _elem14 = iprot.readString();
+            struct.sources.add(_elem14);
           }
         }
         struct.setSourcesIsSet(true);
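The setter hunks above (`setRow`, `setData`, and the constructor) switch from storing the caller's buffer directly to storing a copy, via `Arrays.copyOf` for `byte[]` inputs and `TBaseHelper.copyBinary` for `ByteBuffer` inputs. The idiom can be sketched as below; this simplified stand-in assumes a plain heap array with no position/offset bookkeeping, which the real `copyBinary` handles:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Defensive-copy idiom from the regenerated setters: the struct keeps its
// own copy, so mutating the caller's array afterwards cannot alias into it.
class RowHolder {
  private ByteBuffer row;

  public RowHolder setRow(byte[] row) {
    this.row = (row == null) ? null : ByteBuffer.wrap(Arrays.copyOf(row, row.length));
    return this;
  }

  public byte[] getRow() {
    return (row == null) ? null : row.array();
  }
}
```

The matching `bufferForRow`-style accessors now copy on the way out as well, so neither side of the API can mutate the struct's bytes in place.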
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/TRange.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/TRange.java
index 0ad791c..d4df3d3 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/TRange.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/TRange.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TRange implements org.apache.thrift.TBase<TRange, TRange._Fields>, java.io.Serializable, Cloneable, Comparable<TRange> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TRange implements org.apache.thrift.TBase<TRange, TRange._Fields>, java.io.Serializable, Cloneable, Comparable<TRange> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TRange");
 
   private static final org.apache.thrift.protocol.TField START_FIELD_DESC = new org.apache.thrift.protocol.TField("start", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -430,16 +433,16 @@
       return getStop();
 
     case START_KEY_INCLUSIVE:
-      return Boolean.valueOf(isStartKeyInclusive());
+      return isStartKeyInclusive();
 
     case STOP_KEY_INCLUSIVE:
-      return Boolean.valueOf(isStopKeyInclusive());
+      return isStopKeyInclusive();
 
     case INFINITE_START_KEY:
-      return Boolean.valueOf(isInfiniteStartKey());
+      return isInfiniteStartKey();
 
     case INFINITE_STOP_KEY:
-      return Boolean.valueOf(isInfiniteStopKey());
+      return isInfiniteStopKey();
 
     }
     throw new IllegalStateException();
@@ -540,7 +543,39 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_start = true && (isSetStart());
+    list.add(present_start);
+    if (present_start)
+      list.add(start);
+
+    boolean present_stop = true && (isSetStop());
+    list.add(present_stop);
+    if (present_stop)
+      list.add(stop);
+
+    boolean present_startKeyInclusive = true;
+    list.add(present_startKeyInclusive);
+    if (present_startKeyInclusive)
+      list.add(startKeyInclusive);
+
+    boolean present_stopKeyInclusive = true;
+    list.add(present_stopKeyInclusive);
+    if (present_stopKeyInclusive)
+      list.add(stopKeyInclusive);
+
+    boolean present_infiniteStartKey = true;
+    list.add(present_infiniteStartKey);
+    if (present_infiniteStartKey)
+      list.add(infiniteStartKey);
+
+    boolean present_infiniteStopKey = true;
+    list.add(present_infiniteStopKey);
+    if (present_infiniteStopKey)
+      list.add(infiniteStopKey);
+
+    return list.hashCode();
   }
 
   @Override
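The `getFieldValue` hunks above drop the explicit `Boolean.valueOf(...)` and `Integer.valueOf(...)` wrappers and return the primitive directly, letting the compiler autobox it. A small sketch showing the two forms are equivalent (autoboxing compiles to the same `valueOf` call, which returns the cached `Boolean.TRUE`/`Boolean.FALSE` instances):

```java
// Explicit boxing vs. compiler autoboxing when a primitive is returned
// as Object, as in the regenerated getFieldValue switch.
class BoxDemo {
  static Object explicitBox(boolean b) {
    return Boolean.valueOf(b);
  }

  static Object autoBox(boolean b) {
    return b; // the compiler inserts Boolean.valueOf(b)
  }
}
```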
diff --git a/core/src/main/java/org/apache/accumulo/core/data/thrift/UpdateErrors.java b/core/src/main/java/org/apache/accumulo/core/data/thrift/UpdateErrors.java
index 58f2f02..f3da03f 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/thrift/UpdateErrors.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/thrift/UpdateErrors.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class UpdateErrors implements org.apache.thrift.TBase<UpdateErrors, UpdateErrors._Fields>, java.io.Serializable, Cloneable, Comparable<UpdateErrors> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class UpdateErrors implements org.apache.thrift.TBase<UpdateErrors, UpdateErrors._Fields>, java.io.Serializable, Cloneable, Comparable<UpdateErrors> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("UpdateErrors");
 
   private static final org.apache.thrift.protocol.TField FAILED_EXTENTS_FIELD_DESC = new org.apache.thrift.protocol.TField("failedExtents", org.apache.thrift.protocol.TType.MAP, (short)1);
@@ -431,7 +434,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_failedExtents = true && (isSetFailedExtents());
+    list.add(present_failedExtents);
+    if (present_failedExtents)
+      list.add(failedExtents);
+
+    boolean present_violationSummaries = true && (isSetViolationSummaries());
+    list.add(present_violationSummaries);
+    if (present_violationSummaries)
+      list.add(violationSummaries);
+
+    boolean present_authorizationFailures = true && (isSetAuthorizationFailures());
+    list.add(present_authorizationFailures);
+    if (present_authorizationFailures)
+      list.add(authorizationFailures);
+
+    return list.hashCode();
   }
 
   @Override
@@ -563,14 +583,14 @@
               {
                 org.apache.thrift.protocol.TMap _map58 = iprot.readMapBegin();
                 struct.failedExtents = new HashMap<TKeyExtent,Long>(2*_map58.size);
-                for (int _i59 = 0; _i59 < _map58.size; ++_i59)
+                TKeyExtent _key59;
+                long _val60;
+                for (int _i61 = 0; _i61 < _map58.size; ++_i61)
                 {
-                  TKeyExtent _key60;
-                  long _val61;
-                  _key60 = new TKeyExtent();
-                  _key60.read(iprot);
-                  _val61 = iprot.readI64();
-                  struct.failedExtents.put(_key60, _val61);
+                  _key59 = new TKeyExtent();
+                  _key59.read(iprot);
+                  _val60 = iprot.readI64();
+                  struct.failedExtents.put(_key59, _val60);
                 }
                 iprot.readMapEnd();
               }
@@ -584,12 +604,12 @@
               {
                 org.apache.thrift.protocol.TList _list62 = iprot.readListBegin();
                 struct.violationSummaries = new ArrayList<TConstraintViolationSummary>(_list62.size);
-                for (int _i63 = 0; _i63 < _list62.size; ++_i63)
+                TConstraintViolationSummary _elem63;
+                for (int _i64 = 0; _i64 < _list62.size; ++_i64)
                 {
-                  TConstraintViolationSummary _elem64;
-                  _elem64 = new TConstraintViolationSummary();
-                  _elem64.read(iprot);
-                  struct.violationSummaries.add(_elem64);
+                  _elem63 = new TConstraintViolationSummary();
+                  _elem63.read(iprot);
+                  struct.violationSummaries.add(_elem63);
                 }
                 iprot.readListEnd();
               }
@@ -603,14 +623,14 @@
               {
                 org.apache.thrift.protocol.TMap _map65 = iprot.readMapBegin();
                 struct.authorizationFailures = new HashMap<TKeyExtent,org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode>(2*_map65.size);
-                for (int _i66 = 0; _i66 < _map65.size; ++_i66)
+                TKeyExtent _key66;
+                org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode _val67;
+                for (int _i68 = 0; _i68 < _map65.size; ++_i68)
                 {
-                  TKeyExtent _key67;
-                  org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode _val68;
-                  _key67 = new TKeyExtent();
-                  _key67.read(iprot);
-                  _val68 = org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.findByValue(iprot.readI32());
-                  struct.authorizationFailures.put(_key67, _val68);
+                  _key66 = new TKeyExtent();
+                  _key66.read(iprot);
+                  _val67 = org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.findByValue(iprot.readI32());
+                  struct.authorizationFailures.put(_key66, _val67);
                 }
                 iprot.readMapEnd();
               }
@@ -739,14 +759,14 @@
         {
           org.apache.thrift.protocol.TMap _map75 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I64, iprot.readI32());
           struct.failedExtents = new HashMap<TKeyExtent,Long>(2*_map75.size);
-          for (int _i76 = 0; _i76 < _map75.size; ++_i76)
+          TKeyExtent _key76;
+          long _val77;
+          for (int _i78 = 0; _i78 < _map75.size; ++_i78)
           {
-            TKeyExtent _key77;
-            long _val78;
-            _key77 = new TKeyExtent();
-            _key77.read(iprot);
-            _val78 = iprot.readI64();
-            struct.failedExtents.put(_key77, _val78);
+            _key76 = new TKeyExtent();
+            _key76.read(iprot);
+            _val77 = iprot.readI64();
+            struct.failedExtents.put(_key76, _val77);
           }
         }
         struct.setFailedExtentsIsSet(true);
@@ -755,12 +775,12 @@
         {
           org.apache.thrift.protocol.TList _list79 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.violationSummaries = new ArrayList<TConstraintViolationSummary>(_list79.size);
-          for (int _i80 = 0; _i80 < _list79.size; ++_i80)
+          TConstraintViolationSummary _elem80;
+          for (int _i81 = 0; _i81 < _list79.size; ++_i81)
           {
-            TConstraintViolationSummary _elem81;
-            _elem81 = new TConstraintViolationSummary();
-            _elem81.read(iprot);
-            struct.violationSummaries.add(_elem81);
+            _elem80 = new TConstraintViolationSummary();
+            _elem80.read(iprot);
+            struct.violationSummaries.add(_elem80);
           }
         }
         struct.setViolationSummariesIsSet(true);
@@ -769,14 +789,14 @@
         {
           org.apache.thrift.protocol.TMap _map82 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.I32, iprot.readI32());
           struct.authorizationFailures = new HashMap<TKeyExtent,org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode>(2*_map82.size);
-          for (int _i83 = 0; _i83 < _map82.size; ++_i83)
+          TKeyExtent _key83;
+          org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode _val84;
+          for (int _i85 = 0; _i85 < _map82.size; ++_i85)
           {
-            TKeyExtent _key84;
-            org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode _val85;
-            _key84 = new TKeyExtent();
-            _key84.read(iprot);
-            _val85 = org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.findByValue(iprot.readI32());
-            struct.authorizationFailures.put(_key84, _val85);
+            _key83 = new TKeyExtent();
+            _key83.read(iprot);
+            _val84 = org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.findByValue(iprot.readI32());
+            struct.authorizationFailures.put(_key83, _val84);
           }
         }
         struct.setAuthorizationFailuresIsSet(true);
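The deserialization hunks above show another purely mechanical 0.9.3 change: the per-element temporaries (`_key59`, `_val60`, `_elem63`, ...) are declared once before the loop and reassigned each iteration, instead of being declared inside the body, which also shifts the generated temporary numbering by one. Behavior is unchanged; a stand-in sketch (not the actual Thrift protocol API):

```java
import java.util.ArrayList;
import java.util.List;

// Loop-variable hoisting as in the regenerated readers: declare the
// element variable before the loop, reassign it per iteration.
class ListReader {
  static List<String> readAll(String[] wire) {
    List<String> out = new ArrayList<String>(wire.length);
    String _elem; // hoisted, mirroring the generated code
    for (int _i = 0; _i < wire.length; ++_i) {
      _elem = wire[_i];
      out.add(_elem);
    }
    return out;
  }
}
```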
diff --git a/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java b/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java
index 765aa0c..6e5728a 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/BloomFilterLayer.java
@@ -50,6 +50,7 @@
 import org.apache.accumulo.core.file.rfile.RFile;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.NamingThreadFactory;
 import org.apache.accumulo.fate.util.LoggingRunnable;
@@ -79,7 +80,7 @@
     }
 
     if (maxLoadThreads > 0) {
-      BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>();
+      BlockingQueue<Runnable> q = new LinkedBlockingQueue<>();
       loadThreadPool = new ThreadPoolExecutor(0, maxLoadThreads, 60, TimeUnit.SECONDS, q, new NamingThreadFactory("bloom-loader"));
     }
 
@@ -94,6 +95,7 @@
     private FileSKVWriter writer;
     private KeyFunctor transformer = null;
     private boolean closed = false;
+    private long length = -1;
 
     Writer(FileSKVWriter writer, AccumuloConfiguration acuconf) {
       this.writer = writer;
@@ -154,6 +156,7 @@
       out.flush();
       out.close();
       writer.close();
+      length = writer.getLength();
       closed = true;
     }
 
@@ -177,6 +180,14 @@
     public boolean supportsLocalityGroups() {
       return writer.supportsLocalityGroups();
     }
+
+    @Override
+    public long getLength() throws IOException {
+      if (closed) {
+        return length;
+      }
+      return writer.getLength();
+    }
   }
 
   static class BloomFilterLoader {
@@ -414,6 +425,11 @@
       reader.setInterruptFlag(flag);
     }
 
+    @Override
+    public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+      return new BloomFilterLayer.Reader(reader.getSample(sampleConfig), bfl);
+    }
+
   }
 
   public static void main(String[] args) throws IOException {
@@ -421,13 +437,13 @@
 
     Random r = new Random();
 
-    HashSet<Integer> valsSet = new HashSet<Integer>();
+    HashSet<Integer> valsSet = new HashSet<>();
 
     for (int i = 0; i < 100000; i++) {
       valsSet.add(r.nextInt(Integer.MAX_VALUE));
     }
 
-    ArrayList<Integer> vals = new ArrayList<Integer>(valsSet);
+    ArrayList<Integer> vals = new ArrayList<>(valsSet);
     Collections.sort(vals);
 
     ConfigurationCopy acuconf = new ConfigurationCopy(AccumuloConfiguration.getDefaultConfiguration());
@@ -442,7 +458,7 @@
 
     String suffix = FileOperations.getNewFileExtension(acuconf);
     String fname = "/tmp/test." + suffix;
-    FileSKVWriter bmfw = FileOperations.getInstance().openWriter(fname, fs, conf, acuconf);
+    FileSKVWriter bmfw = FileOperations.getInstance().newWriterBuilder().forFile(fname, fs, conf).withTableConfiguration(acuconf).build();
 
     long t1 = System.currentTimeMillis();
 
@@ -461,7 +477,7 @@
     bmfw.close();
 
     t1 = System.currentTimeMillis();
-    FileSKVIterator bmfr = FileOperations.getInstance().openReader(fname, false, fs, conf, acuconf);
+    FileSKVIterator bmfr = FileOperations.getInstance().newReaderBuilder().forFile(fname, fs, conf).withTableConfiguration(acuconf).build();
     t2 = System.currentTimeMillis();
     out.println("Opened " + fname + " in " + (t2 - t1));
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/DispatchingFileFactory.java b/core/src/main/java/org/apache/accumulo/core/file/DispatchingFileFactory.java
index 1e7ecc9..c7d8248 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/DispatchingFileFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/DispatchingFileFactory.java
@@ -17,24 +17,18 @@
 package org.apache.accumulo.core.file;
 
 import java.io.IOException;
-import java.util.Set;
 
 import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.ByteSequence;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.file.blockfile.cache.BlockCache;
 import org.apache.accumulo.core.file.map.MapFileOperations;
 import org.apache.accumulo.core.file.rfile.RFile;
 import org.apache.accumulo.core.file.rfile.RFileOperations;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
 class DispatchingFileFactory extends FileOperations {
 
-  private FileOperations findFileFactory(String file) {
+  private FileOperations findFileFactory(FileAccessOperation<?> options) {
+    String file = options.getFilename();
 
     Path p = new Path(file);
     String name = p.getName();
@@ -59,78 +53,52 @@
     }
   }
 
-  @Override
-  public FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    return findFileFactory(file).openIndex(file, fs, conf, acuconf, null, null);
-  }
-
-  @Override
-  public FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    FileSKVIterator iter = findFileFactory(file).openReader(file, seekToBeginning, fs, conf, acuconf, null, null);
-    if (acuconf.getBoolean(Property.TABLE_BLOOM_ENABLED)) {
-      return new BloomFilterLayer.Reader(iter, acuconf);
+  /** If the table configuration disallows caching, rewrite the options object to not pass the caches. */
+  private static <T extends FileReaderOperation<T>> T selectivelyDisableCaches(T input) {
+    if (!input.getTableConfiguration().getBoolean(Property.TABLE_INDEXCACHE_ENABLED)) {
+      input = input.withIndexCache(null);
     }
-    return iter;
-  }
-
-  @Override
-  public FileSKVWriter openWriter(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    FileSKVWriter writer = findFileFactory(file).openWriter(file, fs, conf, acuconf);
-    if (acuconf.getBoolean(Property.TABLE_BLOOM_ENABLED)) {
-      return new BloomFilterLayer.Writer(writer, acuconf);
+    if (!input.getTableConfiguration().getBoolean(Property.TABLE_BLOCKCACHE_ENABLED)) {
+      input = input.withDataCache(null);
     }
-    return writer;
+    return input;
   }
 
   @Override
-  public long getFileSize(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    return findFileFactory(file).getFileSize(file, fs, conf, acuconf);
+  protected long getFileSize(GetFileSizeOperation options) throws IOException {
+    return findFileFactory(options).getFileSize(options);
   }
 
   @Override
-  public FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf) throws IOException {
-    return findFileFactory(file).openReader(file, range, columnFamilies, inclusive, fs, conf, tableConf, null, null);
-  }
-
-  @Override
-  public FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf, BlockCache dataCache, BlockCache indexCache) throws IOException {
-
-    if (!tableConf.getBoolean(Property.TABLE_INDEXCACHE_ENABLED))
-      indexCache = null;
-    if (!tableConf.getBoolean(Property.TABLE_BLOCKCACHE_ENABLED))
-      dataCache = null;
-
-    return findFileFactory(file).openReader(file, range, columnFamilies, inclusive, fs, conf, tableConf, dataCache, indexCache);
-  }
-
-  @Override
-  public FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf,
-      BlockCache dataCache, BlockCache indexCache) throws IOException {
-
-    if (!acuconf.getBoolean(Property.TABLE_INDEXCACHE_ENABLED))
-      indexCache = null;
-    if (!acuconf.getBoolean(Property.TABLE_BLOCKCACHE_ENABLED))
-      dataCache = null;
-
-    FileSKVIterator iter = findFileFactory(file).openReader(file, seekToBeginning, fs, conf, acuconf, dataCache, indexCache);
-    if (acuconf.getBoolean(Property.TABLE_BLOOM_ENABLED)) {
-      return new BloomFilterLayer.Reader(iter, acuconf);
+  protected FileSKVWriter openWriter(OpenWriterOperation options) throws IOException {
+    FileSKVWriter writer = findFileFactory(options).openWriter(options);
+    if (options.getTableConfiguration().getBoolean(Property.TABLE_BLOOM_ENABLED)) {
+      return new BloomFilterLayer.Writer(writer, options.getTableConfiguration());
+    } else {
+      return writer;
     }
-    return iter;
   }
 
   @Override
-  public FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf, BlockCache dCache, BlockCache iCache)
-      throws IOException {
-
-    if (!acuconf.getBoolean(Property.TABLE_INDEXCACHE_ENABLED))
-      iCache = null;
-    if (!acuconf.getBoolean(Property.TABLE_BLOCKCACHE_ENABLED))
-      dCache = null;
-
-    return findFileFactory(file).openIndex(file, fs, conf, acuconf, dCache, iCache);
+  protected FileSKVIterator openIndex(OpenIndexOperation options) throws IOException {
+    options = selectivelyDisableCaches(options);
+    return findFileFactory(options).openIndex(options);
   }
 
+  @Override
+  protected FileSKVIterator openReader(OpenReaderOperation options) throws IOException {
+    options = selectivelyDisableCaches(options);
+    FileSKVIterator iter = findFileFactory(options).openReader(options);
+    if (options.getTableConfiguration().getBoolean(Property.TABLE_BLOOM_ENABLED)) {
+      return new BloomFilterLayer.Reader(iter, options.getTableConfiguration());
+    } else {
+      return iter;
+    }
+  }
+
+  @Override
+  protected FileSKVIterator openScanReader(OpenScanReaderOperation options) throws IOException {
+    options = selectivelyDisableCaches(options);
+    return findFileFactory(options).openScanReader(options);
+  }
 }
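The refactor above narrows `findFileFactory` to take the whole options object, but the routing idea is unchanged: choose a `FileOperations` implementation based on the filename's extension. A minimal, self-contained sketch of that dispatch (`Dispatcher` and `handlerFor` are illustrative names, not the Accumulo implementation, which also special-cases map-file directories):

```java
import java.util.Map;

// Sketch of extension-based routing. The extension strings mirror
// RFile.EXTENSION ("rf") and Constants.MAPFILE_EXTENSION ("map").
class Dispatcher {
  private static final Map<String, String> HANDLERS =
      Map.of("rf", "RFileOperations", "map", "MapFileOperations");

  /** Return the handler name for a path, keyed on its filename extension. */
  static String handlerFor(String path) {
    String name = path.substring(path.lastIndexOf('/') + 1);
    int dot = name.lastIndexOf('.');
    String ext = dot < 0 ? "" : name.substring(dot + 1);
    String handler = HANDLERS.get(ext);
    if (handler == null) {
      throw new IllegalArgumentException("Unknown file type: " + path);
    }
    return handler;
  }

  public static void main(String[] args) {
    System.out.println(handlerFor("/tables/t1/default_tablet/F0000abc.rf")); // RFileOperations
  }
}
```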
diff --git a/core/src/main/java/org/apache/accumulo/core/file/FileOperations.java b/core/src/main/java/org/apache/accumulo/core/file/FileOperations.java
index 3798453..10bb784 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/FileOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/FileOperations.java
@@ -19,6 +19,7 @@
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.HashSet;
+import java.util.Objects;
 import java.util.Set;
 
 import org.apache.accumulo.core.Constants;
@@ -28,12 +29,14 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.file.blockfile.cache.BlockCache;
 import org.apache.accumulo.core.file.rfile.RFile;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 
 public abstract class FileOperations {
 
-  private static final HashSet<String> validExtensions = new HashSet<String>(Arrays.asList(Constants.MAPFILE_EXTENSION, RFile.EXTENSION));
+  private static final HashSet<String> validExtensions = new HashSet<>(Arrays.asList(Constants.MAPFILE_EXTENSION, RFile.EXTENSION));
 
   public static Set<String> getValidExtensions() {
     return validExtensions;
@@ -47,37 +50,498 @@
     return new DispatchingFileFactory();
   }
 
-  /**
-   * Open a reader that will not be seeked giving an initial seek location. This is useful for file operations that only need to scan data within a range and do
-   * not need to seek. Therefore file metadata such as indexes does not need to be kept in memory while the file is scanned. Also seek optimizations like bloom
-   * filters do not need to be loaded.
-   *
-   */
+  //
+  // Abstract methods (to be implemented by subclasses)
+  //
 
-  public abstract FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf) throws IOException;
+  protected abstract long getFileSize(GetFileSizeOperation options) throws IOException;
 
-  public abstract FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf, BlockCache dataCache, BlockCache indexCache) throws IOException;
+  protected abstract FileSKVWriter openWriter(OpenWriterOperation options) throws IOException;
+
+  protected abstract FileSKVIterator openIndex(OpenIndexOperation options) throws IOException;
+
+  protected abstract FileSKVIterator openScanReader(OpenScanReaderOperation options) throws IOException;
+
+  protected abstract FileSKVIterator openReader(OpenReaderOperation options) throws IOException;
+
+  //
+  // File operations
+  //
 
   /**
-   * Open a reader that fully support seeking and also enable any optimizations related to seeking, like bloom filters.
+   * Construct an operation object allowing one to query the size of a file. <br>
+   * Syntax:
    *
+   * <pre>
+   * long size = fileOperations.getFileSize().forFile(filename, fileSystem, fsConfiguration).withTableConfiguration(tableConf).execute();
+   * </pre>
    */
+  public NeedsFile<GetFileSizeOperationBuilder> getFileSize() {
+    return new GetFileSizeOperation();
+  }
 
-  public abstract FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf)
-      throws IOException;
+  /**
+   * Construct an operation object allowing one to create a writer for a file. <br>
+   * Syntax:
+   *
+   * <pre>
+   * FileSKVWriter writer = fileOperations.newWriterBuilder()
+   *     .forFile(...)
+   *     .withTableConfiguration(...)
+   *     .withRateLimiter(...) // optional
+   *     .withCompression(...) // optional
+   *     .build();
+   * </pre>
+   */
+  public NeedsFileOrOuputStream<OpenWriterOperationBuilder> newWriterBuilder() {
+    return new OpenWriterOperation();
+  }
 
-  public abstract FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf,
-      BlockCache dataCache, BlockCache indexCache) throws IOException;
+  /**
+   * Construct an operation object allowing one to create an index iterator for a file. <br>
+   * Syntax:
+   *
+   * <pre>
+   * FileSKVIterator iterator = fileOperations.newIndexReaderBuilder()
+   *     .forFile(...)
+   *     .withTableConfiguration(...)
+   *     .withRateLimiter(...) // optional
+   *     .withBlockCache(...) // optional
+   *     .build();
+   * </pre>
+   */
+  public NeedsFile<OpenIndexOperationBuilder> newIndexReaderBuilder() {
+    return new OpenIndexOperation();
+  }
 
-  public abstract FileSKVWriter openWriter(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException;
+  /**
+   * Construct an operation object allowing one to create a "scan" reader for a file. Scan readers do not have any optimizations for seeking beyond their
+   * initial position. This is useful for file operations that only need to scan data within a range and do not need to seek. Consequently, file metadata such
+   * as indexes need not be kept in memory while the file is scanned, and seek optimizations such as bloom filters need not be loaded. <br>
+   * Syntax:
+   *
+   * <pre>
+   * FileSKVIterator scanner = fileOperations.newScanReaderBuilder()
+   *     .forFile(...)
+   *     .withTableConfiguration(...)
+   *     .overRange(...)
+   *     .withRateLimiter(...) // optional
+   *     .withBlockCache(...) // optional
+   *     .build();
+   * </pre>
+   */
+  @SuppressWarnings("unchecked")
+  public NeedsFile<NeedsRange<OpenScanReaderOperationBuilder>> newScanReaderBuilder() {
+    return (NeedsFile<NeedsRange<OpenScanReaderOperationBuilder>>) (NeedsFile<?>) new OpenScanReaderOperation();
+  }
 
-  public abstract FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException;
+  /**
+   * Construct an operation object allowing one to create a reader for a file. A reader constructed in this manner fully supports seeking, and also enables any
+   * optimizations related to seeking (e.g. Bloom filters). <br>
+   * Syntax:
+   *
+   * <pre>
+   * FileSKVIterator reader = fileOperations.newReaderBuilder()
+   *     .forFile(...)
+   *     .withTableConfiguration(...)
+   *     .withRateLimiter(...) // optional
+   *     .withBlockCache(...) // optional
+   *     .seekToBeginning(...) // optional
+   *     .build();
+   * </pre>
+   */
+  public NeedsFile<OpenReaderOperationBuilder> newReaderBuilder() {
+    return new OpenReaderOperation();
+  }
 
-  public abstract FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf, BlockCache dCache, BlockCache iCache)
-      throws IOException;
+  //
+  // Domain-specific embedded language for execution of operations.
+  //
+  // Here, for each ...Operation class which is a POJO holding a group of parameters,
+  // we have a parallel ...OperationBuilder interface which only exposes the setters / execute methods.
+  // This allows us to expose only the setter/execute methods to upper layers, while
+  // allowing lower layers the freedom to both get and set.
+  //
 
-  public abstract long getFileSize(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException;
+  /**
+   * Options common to all FileOperations.
+   */
+  protected static class FileAccessOperation<SubclassType extends FileAccessOperation<SubclassType>> {
+    private AccumuloConfiguration tableConfiguration;
+
+    private String filename;
+    private FileSystem fs;
+    private Configuration fsConf;
+
+    /** Specify the table configuration defining access to this file. */
+    @SuppressWarnings("unchecked")
+    public SubclassType withTableConfiguration(AccumuloConfiguration tableConfiguration) {
+      this.tableConfiguration = tableConfiguration;
+      return (SubclassType) this;
+    }
+
+    /** Specify the file this operation should apply to. */
+    @SuppressWarnings("unchecked")
+    public SubclassType forFile(String filename, FileSystem fs, Configuration fsConf) {
+      this.filename = filename;
+      this.fs = fs;
+      this.fsConf = fsConf;
+      return (SubclassType) this;
+    }
+
+    /** Specify the file this operation should apply to. */
+    @SuppressWarnings("unchecked")
+    public SubclassType forFile(String filename) {
+      this.filename = filename;
+      return (SubclassType) this;
+    }
+
+    /** Specify the filesystem which this operation should apply to, along with its configuration. */
+    @SuppressWarnings("unchecked")
+    public SubclassType inFileSystem(FileSystem fs, Configuration fsConf) {
+      this.fs = fs;
+      this.fsConf = fsConf;
+      return (SubclassType) this;
+    }
+
+    protected void setFilename(String filename) {
+      this.filename = filename;
+    }
+
+    public String getFilename() {
+      return filename;
+    }
+
+    public FileSystem getFileSystem() {
+      return fs;
+    }
+
+    protected void setConfiguration(Configuration fsConf) {
+      this.fsConf = fsConf;
+    }
+
+    public Configuration getConfiguration() {
+      return fsConf;
+    }
+
+    public AccumuloConfiguration getTableConfiguration() {
+      return tableConfiguration;
+    }
+
+    /** Check for null parameters. */
+    protected void validate() {
+      Objects.requireNonNull(getFilename());
+      Objects.requireNonNull(getFileSystem());
+      Objects.requireNonNull(getConfiguration());
+      Objects.requireNonNull(getTableConfiguration());
+    }
+  }
+
+  /** Builder interface parallel to {@link FileAccessOperation}. */
+  protected static interface FileAccessOperationBuilder<SubbuilderType> extends NeedsFile<SubbuilderType>, NeedsFileSystem<SubbuilderType>,
+      NeedsTableConfiguration<SubbuilderType> {
+    // no optional/generic methods.
+  }
+
+  /**
+   * Operation object for performing {@code getFileSize()} operations.
+   */
+  protected class GetFileSizeOperation extends FileAccessOperation<GetFileSizeOperation> implements GetFileSizeOperationBuilder {
+    /** Return the size of the file. */
+    @Override
+    public long execute() throws IOException {
+      validate();
+      return getFileSize(this);
+    }
+  }
+
+  /** Builder interface for {@link GetFileSizeOperation}, allowing execution of {@code getFileSize()} operations. */
+  public static interface GetFileSizeOperationBuilder extends FileAccessOperationBuilder<GetFileSizeOperationBuilder> {
+    /** Return the size of the file. */
+    public long execute() throws IOException;
+  }
+
+  /**
+   * Options common to all {@code FileOperation}s which perform reading or writing.
+   */
+  protected static class FileIOOperation<SubclassType extends FileIOOperation<SubclassType>> extends FileAccessOperation<SubclassType> {
+    private RateLimiter rateLimiter;
+
+    /** Specify a rate limiter for this operation. */
+    @SuppressWarnings("unchecked")
+    public SubclassType withRateLimiter(RateLimiter rateLimiter) {
+      this.rateLimiter = rateLimiter;
+      return (SubclassType) this;
+    }
+
+    public RateLimiter getRateLimiter() {
+      return rateLimiter;
+    }
+  }
+
+  /** Builder interface parallel to {@link FileIOOperation}. */
+  protected static interface FileIOOperationBuilder<SubbuilderType> extends FileAccessOperationBuilder<SubbuilderType> {
+    /** Specify a rate limiter for this operation. */
+    public SubbuilderType withRateLimiter(RateLimiter rateLimiter);
+  }
+
+  /**
+   * Operation object for constructing a writer.
+   */
+  protected class OpenWriterOperation extends FileIOOperation<OpenWriterOperation> implements OpenWriterOperationBuilder,
+      NeedsFileOrOuputStream<OpenWriterOperationBuilder> {
+    private String compression;
+    private FSDataOutputStream outputStream;
+
+    @Override
+    public NeedsTableConfiguration<OpenWriterOperationBuilder> forOutputStream(String extension, FSDataOutputStream outputStream, Configuration fsConf) {
+      this.outputStream = outputStream;
+      setConfiguration(fsConf);
+      setFilename("foo" + extension); // placeholder name; only the extension matters for factory dispatch
+      return this;
+    }
+
+    @Override
+    public OpenWriterOperation withCompression(String compression) {
+      this.compression = compression;
+      return this;
+    }
+
+    public String getCompression() {
+      return compression;
+    }
+
+    public FSDataOutputStream getOutputStream() {
+      return outputStream;
+    }
+
+    @Override
+    protected void validate() {
+      if (outputStream == null) {
+        super.validate();
+      } else {
+        Objects.requireNonNull(getConfiguration());
+        Objects.requireNonNull(getTableConfiguration());
+      }
+    }
+
+    @Override
+    public FileSKVWriter build() throws IOException {
+      validate();
+      return openWriter(this);
+    }
+  }
+
+  /** Builder interface parallel to {@link OpenWriterOperation}. */
+  public static interface OpenWriterOperationBuilder extends FileIOOperationBuilder<OpenWriterOperationBuilder> {
+    /** Set the compression type. */
+    public OpenWriterOperationBuilder withCompression(String compression);
+
+    /** Construct the writer. */
+    public FileSKVWriter build() throws IOException;
+  }
+
+  /**
+   * Options common to all {@code FileOperations} which perform reads.
+   */
+  protected static class FileReaderOperation<SubclassType extends FileReaderOperation<SubclassType>> extends FileIOOperation<SubclassType> {
+    private BlockCache dataCache;
+    private BlockCache indexCache;
+
+    /** (Optional) Set the block cache pair to be used to optimize reads within the constructed reader. */
+    @SuppressWarnings("unchecked")
+    public SubclassType withBlockCache(BlockCache dataCache, BlockCache indexCache) {
+      this.dataCache = dataCache;
+      this.indexCache = indexCache;
+      return (SubclassType) this;
+    }
+
+    /** (Optional) Set the data cache to be used to optimize reads within the constructed reader. */
+    @SuppressWarnings("unchecked")
+    public SubclassType withDataCache(BlockCache dataCache) {
+      this.dataCache = dataCache;
+      return (SubclassType) this;
+    }
+
+    /** (Optional) Set the index cache to be used to optimize reads within the constructed reader. */
+    @SuppressWarnings("unchecked")
+    public SubclassType withIndexCache(BlockCache indexCache) {
+      this.indexCache = indexCache;
+      return (SubclassType) this;
+    }
+
+    public BlockCache getDataCache() {
+      return dataCache;
+    }
+
+    public BlockCache getIndexCache() {
+      return indexCache;
+    }
+  }
+
+  /** Builder interface parallel to {@link FileReaderOperation}. */
+  protected static interface FileReaderOperationBuilder<SubbuilderType> extends FileIOOperationBuilder<SubbuilderType> {
+    /** (Optional) Set the block cache pair to be used to optimize reads within the constructed reader. */
+    public SubbuilderType withBlockCache(BlockCache dataCache, BlockCache indexCache);
+
+    /** (Optional) Set the data cache to be used to optimize reads within the constructed reader. */
+    public SubbuilderType withDataCache(BlockCache dataCache);
+
+    /** (Optional) Set the index cache to be used to optimize reads within the constructed reader. */
+    public SubbuilderType withIndexCache(BlockCache indexCache);
+  }
+
+  /**
+   * Operation object for opening an index.
+   */
+  protected class OpenIndexOperation extends FileReaderOperation<OpenIndexOperation> implements OpenIndexOperationBuilder {
+    @Override
+    public FileSKVIterator build() throws IOException {
+      validate();
+      return openIndex(this);
+    }
+  }
+
+  /** Builder interface parallel to {@link OpenIndexOperation}. */
+  public static interface OpenIndexOperationBuilder extends FileReaderOperationBuilder<OpenIndexOperationBuilder> {
+    /** Construct the reader. */
+    public FileSKVIterator build() throws IOException;
+  }
+
+  /** Operation object for opening a scan reader. */
+  protected class OpenScanReaderOperation extends FileReaderOperation<OpenScanReaderOperation> implements OpenScanReaderOperationBuilder {
+    private Range range;
+    private Set<ByteSequence> columnFamilies;
+    private boolean inclusive;
+
+    /** Set the range over which the constructed iterator will search. */
+    @Override
+    public OpenScanReaderOperation overRange(Range range, Set<ByteSequence> columnFamilies, boolean inclusive) {
+      this.range = range;
+      this.columnFamilies = columnFamilies;
+      this.inclusive = inclusive;
+      return this;
+    }
+
+    /** The range over which this reader should scan. */
+    public Range getRange() {
+      return range;
+    }
+
+    /** The column families which this reader should scan. */
+    public Set<ByteSequence> getColumnFamilies() {
+      return columnFamilies;
+    }
+
+    public boolean isRangeInclusive() {
+      return inclusive;
+    }
+
+    @Override
+    protected void validate() {
+      super.validate();
+      Objects.requireNonNull(range);
+      Objects.requireNonNull(columnFamilies);
+    }
+
+    /** Execute the operation, constructing a scan iterator. */
+    @Override
+    public FileSKVIterator build() throws IOException {
+      validate();
+      return openScanReader(this);
+    }
+  }
+
+  /** Builder interface parallel to {@link OpenScanReaderOperation}. */
+  public static interface OpenScanReaderOperationBuilder extends FileReaderOperationBuilder<OpenScanReaderOperationBuilder>,
+      NeedsRange<OpenScanReaderOperationBuilder> {
+    /** Execute the operation, constructing a scan iterator. */
+    public FileSKVIterator build() throws IOException;
+  }
+
+  /** Operation object for opening a full reader. */
+  protected class OpenReaderOperation extends FileReaderOperation<OpenReaderOperation> implements OpenReaderOperationBuilder {
+    private boolean seekToBeginning = false;
+
+    /**
+     * Seek the constructed iterator to the beginning of its domain before returning. Equivalent to {@code seekToBeginning(true)}.
+     */
+    @Override
+    public OpenReaderOperation seekToBeginning() {
+      return seekToBeginning(true);
+    }
+
+    /** If true, seek the constructed iterator to the beginning of its domain before returning. */
+    @Override
+    public OpenReaderOperation seekToBeginning(boolean seekToBeginning) {
+      this.seekToBeginning = seekToBeginning;
+      return this;
+    }
+
+    public boolean isSeekToBeginning() {
+      return seekToBeginning;
+    }
+
+    /** Execute the operation, constructing the specified file reader. */
+    @Override
+    public FileSKVIterator build() throws IOException {
+      validate();
+      return openReader(this);
+    }
+  }
+
+  /** Builder parallel to {@link OpenReaderOperation}. */
+  public static interface OpenReaderOperationBuilder extends FileReaderOperationBuilder<OpenReaderOperationBuilder> {
+    /**
+     * Seek the constructed iterator to the beginning of its domain before returning. Equivalent to {@code seekToBeginning(true)}.
+     */
+    public OpenReaderOperationBuilder seekToBeginning();
+
+    /** If true, seek the constructed iterator to the beginning of its domain before returning. */
+    public OpenReaderOperationBuilder seekToBeginning(boolean seekToBeginning);
+
+    /** Execute the operation, constructing the specified file reader. */
+    public FileSKVIterator build() throws IOException;
+  }
+
+  /**
+   * Type wrapper to ensure that {@code forFile(...)} is called before other methods.
+   */
+  public static interface NeedsFile<ReturnType> {
+    /** Specify the file this operation should apply to. */
+    public NeedsTableConfiguration<ReturnType> forFile(String filename, FileSystem fs, Configuration fsConf);
+
+    /** Specify the file this operation should apply to. */
+    public NeedsFileSystem<ReturnType> forFile(String filename);
+  }
+
+  public static interface NeedsFileOrOuputStream<ReturnType> extends NeedsFile<ReturnType> {
+    /** Specify the output stream this operation should write to, along with the filename extension used to select the file type. */
+    public NeedsTableConfiguration<ReturnType> forOutputStream(String extension, FSDataOutputStream out, Configuration fsConf);
+  }
+
+  /**
+   * Type wrapper to ensure that {@code inFileSystem(...)} is called before other methods.
+   */
+  public static interface NeedsFileSystem<ReturnType> {
+    /** Specify the {@link FileSystem} that this operation operates on, along with its configuration. */
+    public NeedsTableConfiguration<ReturnType> inFileSystem(FileSystem fs, Configuration fsConf);
+  }
+
+  /**
+   * Type wrapper to ensure that {@code withTableConfiguration(...)} is called before other methods.
+   */
+  public static interface NeedsTableConfiguration<ReturnType> {
+    /** Specify the table configuration defining access to this file. */
+    public ReturnType withTableConfiguration(AccumuloConfiguration tableConfiguration);
+  }
+
+  /**
+   * Type wrapper to ensure that {@code overRange(...)} is called before other methods.
+   */
+  public static interface NeedsRange<ReturnType> {
+    /** Set the range over which the constructed iterator will search. */
+    public ReturnType overRange(Range range, Set<ByteSequence> columnFamilies, boolean inclusive);
+  }
 
 }
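The `NeedsFile`, `NeedsTableConfiguration`, and `NeedsRange` wrappers above implement a "type-state" builder: each required setter returns a narrower interface, so mandatory calls must happen in order and `build()` is only reachable once everything required is set. A self-contained sketch of the idiom (all names here are illustrative, not the Accumulo classes):

```java
// Each setter returns the next interface in the chain; one concrete class
// implements every stage, and the interfaces restrict what the caller sees.
class TypeStateDemo {
  interface NeedsName { NeedsSize withName(String name); }
  interface NeedsSize { Builder withSize(int size); }
  interface Builder { String build(); }

  static class Op implements NeedsName, NeedsSize, Builder {
    private String name;
    private int size;

    @Override public NeedsSize withName(String name) { this.name = name; return this; }
    @Override public Builder withSize(int size) { this.size = size; return this; }
    @Override public String build() { return name + ":" + size; }
  }

  static NeedsName newBuilder() { return new Op(); }

  public static void main(String[] args) {
    // Compiles only in this order; e.g. newBuilder().withSize(4) would not compile.
    System.out.println(newBuilder().withName("f1.rf").withSize(4).build()); // f1.rf:4
  }
}
```

This is also why the operation classes expose getters while the parallel `...OperationBuilder` interfaces expose only setters and `build()`/`execute()`: upper layers see a write-only view, lower layers can read the collected parameters back.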
diff --git a/core/src/main/java/org/apache/accumulo/core/file/FileSKVIterator.java b/core/src/main/java/org/apache/accumulo/core/file/FileSKVIterator.java
index 60970e2..364a44d 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/FileSKVIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/FileSKVIterator.java
@@ -21,15 +21,19 @@
 
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.iterators.system.InterruptibleIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 
-public interface FileSKVIterator extends InterruptibleIterator {
+public interface FileSKVIterator extends InterruptibleIterator, AutoCloseable {
   Key getFirstKey() throws IOException;
 
   Key getLastKey() throws IOException;
 
   DataInputStream getMetaStore(String name) throws IOException, NoSuchMetaStoreException;
 
+  FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig);
+
   void closeDeepCopies() throws IOException;
 
+  @Override
   void close() throws IOException;
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/file/FileSKVWriter.java b/core/src/main/java/org/apache/accumulo/core/file/FileSKVWriter.java
index f4aa888..eefdc6d 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/FileSKVWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/FileSKVWriter.java
@@ -24,7 +24,7 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 
-public interface FileSKVWriter {
+public interface FileSKVWriter extends AutoCloseable {
   boolean supportsLocalityGroups();
 
   void startNewLocalityGroup(String name, Set<ByteSequence> columnFamilies) throws IOException;
@@ -35,5 +35,8 @@
 
   DataOutputStream createMetaStore(String name) throws IOException;
 
+  @Override
   void close() throws IOException;
+
+  long getLength() throws IOException;
 }
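`FileSKVWriter` (and `FileSKVIterator` above) now extend `AutoCloseable`, so callers can manage them with try-with-resources and `close()` runs even when a write throws. A toy sketch of the benefit (`Writer` here is illustrative, not the Accumulo interface):

```java
// try-with-resources guarantees close() runs when the block exits,
// whether normally or via an exception.
class CloseableDemo {
  static class Writer implements AutoCloseable {
    boolean closed = false;
    void write(String s) { /* pretend to write a key/value */ }
    @Override public void close() { closed = true; }
  }

  static Writer last; // kept so we can observe the closed flag afterwards

  public static void main(String[] args) {
    try (Writer w = new Writer()) {
      last = w;
      w.write("key,value");
    } // w.close() invoked automatically here
    System.out.println(last.closed); // true
  }
}
```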
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/BlockFileWriter.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/BlockFileWriter.java
index 570a8a5..e9d97c5 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/BlockFileWriter.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/BlockFileWriter.java
@@ -33,4 +33,6 @@
   ABlockWriter prepareDataBlock() throws IOException;
 
   void close() throws IOException;
+
+  long getLength() throws IOException;
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlock.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlock.java
index 19612d0..c67b4c7 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlock.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlock.java
@@ -44,7 +44,7 @@
      * Block from in-memory store
      */
     MEMORY
-  };
+  }
 
   private final String blockName;
   private final byte buf[];
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
index 329ba71..248634d 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
@@ -48,7 +48,7 @@
     int initialSize = (int) Math.ceil(maxSize / (double) blockSize);
     if (initialSize == 0)
       initialSize++;
-    queue = new PriorityQueue<CachedBlock>(initialSize);
+    queue = new PriorityQueue<>(initialSize);
     heapSize = 0;
     this.maxSize = maxSize;
   }
@@ -88,7 +88,7 @@
    * @return list of cached elements in descending order
    */
   public CachedBlock[] get() {
-    LinkedList<CachedBlock> blocks = new LinkedList<CachedBlock>();
+    LinkedList<CachedBlock> blocks = new LinkedList<>();
     while (!queue.isEmpty()) {
       blocks.addFirst(queue.poll());
     }
@@ -101,7 +101,7 @@
    * @return list of cached elements in descending order
    */
   public LinkedList<CachedBlock> getList() {
-    LinkedList<CachedBlock> blocks = new LinkedList<CachedBlock>();
+    LinkedList<CachedBlock> blocks = new LinkedList<>();
     while (!queue.isEmpty()) {
       blocks.addFirst(queue.poll());
     }
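Both `get()` and `getList()` above use the same trick: `PriorityQueue.poll()` drains elements smallest-first, and `LinkedList.addFirst()` reverses that, yielding descending order without an explicit sort. A minimal sketch of just that drain step (simplified to integers; the real class orders `CachedBlock`s):

```java
import java.util.LinkedList;
import java.util.PriorityQueue;

class DrainDemo {
  /** Drain a min-ordered queue into a list in descending order. */
  static LinkedList<Integer> descending(PriorityQueue<Integer> queue) {
    LinkedList<Integer> out = new LinkedList<>();
    while (!queue.isEmpty()) {
      out.addFirst(queue.poll()); // poll() yields smallest; addFirst reverses
    }
    return out;
  }

  public static void main(String[] args) {
    PriorityQueue<Integer> q = new PriorityQueue<>();
    q.add(5); q.add(1); q.add(3);
    System.out.println(descending(q)); // [5, 3, 1]
  }
}
```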
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java
index 9879326..1e10fe7 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java
@@ -21,6 +21,7 @@
 import java.io.DataOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.io.OutputStream;
 import java.lang.ref.SoftReference;
 
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
@@ -33,11 +34,14 @@
 import org.apache.accumulo.core.file.rfile.bcfile.BCFile;
 import org.apache.accumulo.core.file.rfile.bcfile.BCFile.Reader.BlockReader;
 import org.apache.accumulo.core.file.rfile.bcfile.BCFile.Writer.BlockAppender;
+import org.apache.accumulo.core.file.streams.PositionedOutput;
+import org.apache.accumulo.core.file.streams.RateLimitedInputStream;
+import org.apache.accumulo.core.file.streams.RateLimitedOutputStream;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.Seekable;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -48,26 +52,29 @@
 
 public class CachableBlockFile {
 
-  private CachableBlockFile() {};
+  private CachableBlockFile() {}
 
   private static final Logger log = LoggerFactory.getLogger(CachableBlockFile.class);
 
   public static class Writer implements BlockFileWriter {
     private BCFile.Writer _bc;
     private BlockWrite _bw;
-    private FSDataOutputStream fsout = null;
+    private final PositionedOutput fsout;
+    private long length = 0;
 
-    public Writer(FileSystem fs, Path fName, String compressAlgor, Configuration conf, AccumuloConfiguration accumuloConfiguration) throws IOException {
-      this.fsout = fs.create(fName);
-      init(fsout, compressAlgor, conf, accumuloConfiguration);
+    public Writer(FileSystem fs, Path fName, String compressAlgor, RateLimiter writeLimiter, Configuration conf, AccumuloConfiguration accumuloConfiguration)
+        throws IOException {
+      this(new RateLimitedOutputStream(fs.create(fName), writeLimiter), compressAlgor, conf, accumuloConfiguration);
     }
 
-    public Writer(FSDataOutputStream fsout, String compressAlgor, Configuration conf, AccumuloConfiguration accumuloConfiguration) throws IOException {
+    public <OutputStreamType extends OutputStream & PositionedOutput> Writer(OutputStreamType fsout, String compressAlgor, Configuration conf,
+        AccumuloConfiguration accumuloConfiguration) throws IOException {
       this.fsout = fsout;
       init(fsout, compressAlgor, conf, accumuloConfiguration);
     }
 
-    private void init(FSDataOutputStream fsout, String compressAlgor, Configuration conf, AccumuloConfiguration accumuloConfiguration) throws IOException {
+    private <OutputStreamT extends OutputStream & PositionedOutput> void init(OutputStreamT fsout, String compressAlgor, Configuration conf,
+        AccumuloConfiguration accumuloConfiguration) throws IOException {
       _bc = new BCFile.Writer(fsout, compressAlgor, conf, false, accumuloConfiguration);
     }
 
@@ -89,10 +96,13 @@
       _bw.close();
       _bc.close();
 
-      if (this.fsout != null) {
-        this.fsout.close();
-      }
+      length = this.fsout.position();
+      ((OutputStream) this.fsout).close();
+    }
 
+    @Override
+    public long getLength() throws IOException {
+      return length;
     }
 
   }
@@ -135,11 +145,12 @@
    *
    */
   public static class Reader implements BlockFileReader {
+    private final RateLimiter readLimiter;
     private BCFile.Reader _bc;
     private String fileName = "not_available";
     private BlockCache _dCache = null;
     private BlockCache _iCache = null;
-    private FSDataInputStream fin = null;
+    private InputStream fin = null;
     private FileSystem fs;
     private Configuration conf;
     private boolean closed = false;
@@ -217,6 +228,11 @@
 
     public Reader(FileSystem fs, Path dataFile, Configuration conf, BlockCache data, BlockCache index, AccumuloConfiguration accumuloConfiguration)
         throws IOException {
+      this(fs, dataFile, conf, data, index, null, accumuloConfiguration);
+    }
+
+    public Reader(FileSystem fs, Path dataFile, Configuration conf, BlockCache data, BlockCache index, RateLimiter readLimiter,
+        AccumuloConfiguration accumuloConfiguration) throws IOException {
 
       /*
        * Grab path create input stream grab len create file
@@ -228,21 +244,25 @@
       this.fs = fs;
       this.conf = conf;
       this.accumuloConfiguration = accumuloConfiguration;
+      this.readLimiter = readLimiter;
     }
 
-    public Reader(FSDataInputStream fsin, long len, Configuration conf, BlockCache data, BlockCache index, AccumuloConfiguration accumuloConfiguration)
-        throws IOException {
+    public <InputStreamType extends InputStream & Seekable> Reader(InputStreamType fsin, long len, Configuration conf, BlockCache data, BlockCache index,
+        AccumuloConfiguration accumuloConfiguration) throws IOException {
       this._dCache = data;
       this._iCache = index;
+      this.readLimiter = null;
       init(fsin, len, conf, accumuloConfiguration);
     }
 
-    public Reader(FSDataInputStream fsin, long len, Configuration conf, AccumuloConfiguration accumuloConfiguration) throws IOException {
-      // this.fin = fsin;
+    public <InputStreamType extends InputStream & Seekable> Reader(InputStreamType fsin, long len, Configuration conf,
+        AccumuloConfiguration accumuloConfiguration) throws IOException {
+      this.readLimiter = null;
       init(fsin, len, conf, accumuloConfiguration);
     }
 
-    private void init(FSDataInputStream fsin, long len, Configuration conf, AccumuloConfiguration accumuloConfiguration) throws IOException {
+    private <InputStreamT extends InputStream & Seekable> void init(InputStreamT fsin, long len, Configuration conf, AccumuloConfiguration accumuloConfiguration)
+        throws IOException {
       this._bc = new BCFile.Reader(this, fsin, len, conf, accumuloConfiguration);
     }
 
@@ -253,8 +273,9 @@
       if (_bc == null) {
         // lazily open file if needed
         Path path = new Path(fileName);
-        fin = fs.open(path);
-        init(fin, fs.getFileStatus(path).getLen(), conf, accumuloConfiguration);
+        RateLimitedInputStream fsIn = new RateLimitedInputStream(fs.open(path), this.readLimiter);
+        fin = fsIn;
+        init(fsIn, fs.getFileStatus(path).getLen(), conf, accumuloConfiguration);
       }
 
       return _bc;
@@ -461,7 +482,7 @@
           } catch (Exception e) {
             throw new RuntimeException(e);
           }
-          cb.setIndex(new SoftReference<T>(bi));
+          cb.setIndex(new SoftReference<>(bi));
         }
       }
 
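The new `Writer` and `Reader` constructors above replace concrete `FSDataOutputStream`/`FSDataInputStream` parameters with generic intersection bounds such as `<OutputStreamType extends OutputStream & PositionedOutput>`. A minimal, self-contained sketch of that generics technique follows; `Positioned` and `CountingOutput` here are hypothetical stand-ins for illustration, not Accumulo's actual types:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Hypothetical analogue of Accumulo's PositionedOutput interface.
interface Positioned {
  long position() throws IOException;
}

// A stream satisfying both bounds: it is an OutputStream and is position-aware.
class CountingOutput extends ByteArrayOutputStream implements Positioned {
  @Override
  public long position() {
    return size();
  }
}

class IntersectionDemo {
  // The intersection bound lets one parameter be used as both types, with no
  // shared superclass (such as FSDataOutputStream) required.
  static <S extends OutputStream & Positioned> long writeAndPosition(S out, byte[] data) {
    try {
      out.write(data);       // used as an OutputStream
      return out.position(); // used as a Positioned
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

Because callers can pass any single object implementing both interfaces, the diff can substitute `RateLimitedOutputStream` for `FSDataOutputStream` without touching `init`.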
diff --git a/core/src/main/java/org/apache/accumulo/core/file/map/MapFileOperations.java b/core/src/main/java/org/apache/accumulo/core/file/map/MapFileOperations.java
index fb2762f..10ca253 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/map/MapFileOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/map/MapFileOperations.java
@@ -21,10 +21,8 @@
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Map;
-import java.util.Set;
 import java.util.concurrent.atomic.AtomicBoolean;
 
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
@@ -32,13 +30,11 @@
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.core.file.FileSKVIterator;
 import org.apache.accumulo.core.file.FileSKVWriter;
-import org.apache.accumulo.core.file.blockfile.cache.BlockCache;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.system.MapFileIterator;
 import org.apache.accumulo.core.iterators.system.SequenceFileIterator;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.MapFile;
 
@@ -132,65 +128,45 @@
     public void setInterruptFlag(AtomicBoolean flag) {
       ((FileSKVIterator) reader).setInterruptFlag(flag);
     }
+
+    @Override
+    public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+      return ((FileSKVIterator) reader).getSample(sampleConfig);
+    }
   }
 
   @Override
-  public FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    FileSKVIterator iter = new RangeIterator(new MapFileIterator(acuconf, fs, file, conf));
-
-    if (seekToBeginning)
+  protected FileSKVIterator openReader(OpenReaderOperation options) throws IOException {
+    FileSKVIterator iter = new RangeIterator(new MapFileIterator(options.getTableConfiguration(), options.getFileSystem(), options.getFilename(),
+        options.getConfiguration()));
+    if (options.isSeekToBeginning()) {
       iter.seek(new Range(new Key(), null), new ArrayList<ByteSequence>(), false);
-
+    }
     return iter;
   }
 
   @Override
-  public FileSKVWriter openWriter(final String file, final FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-
+  protected FileSKVWriter openWriter(OpenWriterOperation options) throws IOException {
     throw new UnsupportedOperationException();
-
   }
 
   @Override
-  public FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    return new SequenceFileIterator(MapFileUtil.openIndex(conf, fs, new Path(file)), false);
+  protected FileSKVIterator openIndex(OpenIndexOperation options) throws IOException {
+    return new SequenceFileIterator(MapFileUtil.openIndex(options.getConfiguration(), options.getFileSystem(), new Path(options.getFilename())), false);
   }
 
   @Override
-  public long getFileSize(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    return fs.getFileStatus(new Path(file + "/" + MapFile.DATA_FILE_NAME)).getLen();
+  protected long getFileSize(GetFileSizeOperation options) throws IOException {
+    return options.getFileSystem().getFileStatus(new Path(options.getFilename() + "/" + MapFile.DATA_FILE_NAME)).getLen();
   }
 
   @Override
-  public FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf) throws IOException {
-    MapFileIterator mfIter = new MapFileIterator(tableConf, fs, file, conf);
+  protected FileSKVIterator openScanReader(OpenScanReaderOperation options) throws IOException {
+    MapFileIterator mfIter = new MapFileIterator(options.getTableConfiguration(), options.getFileSystem(), options.getFilename(), options.getConfiguration());
 
     FileSKVIterator iter = new RangeIterator(mfIter);
-
-    iter.seek(range, columnFamilies, inclusive);
+    iter.seek(options.getRange(), options.getColumnFamilies(), options.isRangeInclusive());
 
     return iter;
   }
-
-  @Override
-  public FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf, BlockCache dataCache, BlockCache indexCache) throws IOException {
-
-    return openReader(file, range, columnFamilies, inclusive, fs, conf, tableConf);
-  }
-
-  @Override
-  public FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf,
-      BlockCache dataCache, BlockCache indexCache) throws IOException {
-
-    return openReader(file, seekToBeginning, fs, conf, acuconf);
-  }
-
-  @Override
-  public FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf, BlockCache dCache, BlockCache iCache)
-      throws IOException {
-
-    return openIndex(file, fs, conf, acuconf);
-  }
 }
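The `MapFileOperations` refactor above collapses long positional signatures (`openReader(String, boolean, FileSystem, Configuration, AccumuloConfiguration, ...)` plus cache-taking overloads) into per-operation options objects. A minimal sketch of the parameter-object pattern; `OpenReaderOptions` is an illustrative name, not the actual `OpenReaderOperation` API:

```java
import java.util.Objects;

// Hypothetical, simplified stand-in for one of the "operation" options objects.
class OpenReaderOptions {
  private String filename;
  private boolean seekToBeginning;

  OpenReaderOptions forFile(String filename) {
    this.filename = Objects.requireNonNull(filename);
    return this; // returning this enables fluent chaining
  }

  OpenReaderOptions seekToBeginning(boolean seek) {
    this.seekToBeginning = seek;
    return this;
  }

  String getFilename() {
    return filename;
  }

  boolean isSeekToBeginning() {
    return seekToBeginning;
  }
}
```

With this shape, adding a new option (a rate limiter, say) touches only the options class instead of every overload of every file operation.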
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/BlockIndex.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/BlockIndex.java
index 1ed9aca..652515e 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/BlockIndex.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/BlockIndex.java
@@ -162,7 +162,7 @@
 
     int count = 0;
 
-    ArrayList<BlockIndexEntry> index = new ArrayList<BlockIndexEntry>(indexEntries - 1);
+    ArrayList<BlockIndexEntry> index = new ArrayList<>(indexEntries - 1);
 
     while (count < (indexEntry.getNumEntries() - interval + 1)) {
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/CreateEmpty.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/CreateEmpty.java
index 75d5567..3a41e95 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/CreateEmpty.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/CreateEmpty.java
@@ -65,7 +65,7 @@
     @Parameter(description = " <path> { <path> ... } Each path given is a URL. "
         + "Relative paths are resolved according to the default filesystem defined in your Hadoop configuration, which is usually an HDFS instance.",
         required = true, validateWith = NamedLikeRFile.class)
-    List<String> files = new ArrayList<String>();
+    List<String> files = new ArrayList<>();
   }
 
   public static void main(String[] args) throws Exception {
@@ -77,8 +77,8 @@
     for (String arg : opts.files) {
       Path path = new Path(arg);
       log.info("Writing to file '" + path + "'");
-      FileSKVWriter writer = (new RFileOperations())
-          .openWriter(arg, path.getFileSystem(conf), conf, DefaultConfiguration.getDefaultConfiguration(), opts.codec);
+      FileSKVWriter writer = (new RFileOperations()).newWriterBuilder().forFile(arg, path.getFileSystem(conf), conf)
+          .withTableConfiguration(DefaultConfiguration.getDefaultConfiguration()).withCompression(opts.codec).build();
       writer.close();
     }
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/KeyShortener.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/KeyShortener.java
new file mode 100644
index 0000000..b039982
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/KeyShortener.java
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.file.rfile;
+
+import org.apache.accumulo.core.data.ArrayByteSequence;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.primitives.Bytes;
+
+/*
+ * Code to shorten keys that will be placed into RFile indexes. This code attempts to find a key that is between two keys and that is shorter.
+ */
+public class KeyShortener {
+
+  private static final byte[] EMPTY = new byte[0];
+  private static final byte[] B00 = new byte[] {(byte) 0x00};
+  private static final byte[] BFF = new byte[] {(byte) 0xff};
+
+  private static final Logger log = LoggerFactory.getLogger(KeyShortener.class);
+
+  private KeyShortener() {}
+
+  private static int findNonFF(ByteSequence bs, int start) {
+    for (int i = start; i < bs.length(); i++) {
+      if (bs.byteAt(i) != (byte) 0xff) {
+        return i;
+      }
+    }
+
+    return bs.length();
+  }
+
+  /*
+   * Return S such that prev < S < current, or null if no such sequence exists.
+   */
+  public static ByteSequence shorten(ByteSequence prev, ByteSequence current) {
+
+    int minLen = Math.min(prev.length(), current.length());
+
+    for (int i = 0; i < minLen; i++) {
+      int pb = 0xff & prev.byteAt(i);
+      int cb = 0xff & current.byteAt(i);
+
+      int diff = cb - pb;
+
+      if (diff == 1) {
+        int newLen = findNonFF(prev, i + 1);
+        byte[] successor;
+        if (newLen < prev.length()) {
+          successor = Bytes.concat(prev.subSequence(0, newLen).toArray(), BFF);
+        } else {
+          successor = Bytes.concat(prev.subSequence(0, newLen).toArray(), B00);
+        }
+        return new ArrayByteSequence(successor);
+      } else if (diff > 1) {
+        byte[] copy = new byte[i + 1];
+        System.arraycopy(prev.subSequence(0, i + 1).toArray(), 0, copy, 0, i + 1);
+        copy[i] = (byte) ((0xff & copy[i]) + 1);
+        return new ArrayByteSequence(copy);
+      }
+    }
+
+    ArrayByteSequence successor = new ArrayByteSequence(Bytes.concat(prev.toArray(), B00));
+    if (successor.equals(current)) {
+      return null;
+    }
+
+    return successor;
+  }
+
+  /*
+   * This entire class supports an optional optimization. This code does a sanity check to ensure the optimization did what was intended, falling back to a
+   * no-op if there is a bug.
+   */
+  @VisibleForTesting
+  static Key sanityCheck(Key prev, Key current, Key shortened) {
+    if (prev.compareTo(shortened) >= 0) {
+      log.warn("Bug in key shortening code, please open an issue " + prev + " >= " + shortened);
+      return prev;
+    }
+
+    if (current.compareTo(shortened) <= 0) {
+      log.warn("Bug in key shortening code, please open an issue " + current + " <= " + shortened);
+      return prev;
+    }
+
+    return shortened;
+  }
+
+  /*
+   * Find a key K where prev < K < current AND K is shorter. If no such K can be found, returns prev.
+   */
+  public static Key shorten(Key prev, Key current) {
+    Preconditions.checkArgument(prev.compareTo(current) <= 0, "Expected key less than or equal. " + prev + " > " + current);
+
+    if (prev.getRowData().compareTo(current.getRowData()) < 0) {
+      ByteSequence shortenedRow = shorten(prev.getRowData(), current.getRowData());
+      if (shortenedRow == null) {
+        return prev;
+      }
+      return sanityCheck(prev, current, new Key(shortenedRow.toArray(), EMPTY, EMPTY, EMPTY, 0));
+    } else if (prev.getColumnFamilyData().compareTo(current.getColumnFamilyData()) < 0) {
+      ByteSequence shortenedFam = shorten(prev.getColumnFamilyData(), current.getColumnFamilyData());
+      if (shortenedFam == null) {
+        return prev;
+      }
+      return sanityCheck(prev, current, new Key(prev.getRowData().toArray(), shortenedFam.toArray(), EMPTY, EMPTY, 0));
+    } else if (prev.getColumnQualifierData().compareTo(current.getColumnQualifierData()) < 0) {
+      ByteSequence shortenedQual = shorten(prev.getColumnQualifierData(), current.getColumnQualifierData());
+      if (shortenedQual == null) {
+        return prev;
+      }
+      return sanityCheck(prev, current, new Key(prev.getRowData().toArray(), prev.getColumnFamilyData().toArray(), shortenedQual.toArray(), EMPTY, 0));
+    } else {
+      return prev;
+    }
+  }
+}
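The byte-level logic of `KeyShortener.shorten(ByteSequence, ByteSequence)` can be sketched on raw arrays as follows. This is a simplified re-implementation for illustration only (unsigned bytes, lexicographic order), not the production code:

```java
import java.util.Arrays;

class ShortenSketch {
  // Returns s with prev < s < cur (unsigned lexicographic order), or null if
  // the only in-between candidate (prev + 0x00) is cur itself.
  static byte[] shorten(byte[] prev, byte[] cur) {
    int minLen = Math.min(prev.length, cur.length);
    for (int i = 0; i < minLen; i++) {
      int diff = (0xff & cur[i]) - (0xff & prev[i]);
      if (diff == 1) {
        // Skip the run of 0xff bytes after position i, then append one byte.
        int j = i + 1;
        while (j < prev.length && (prev[j] & 0xff) == 0xff) {
          j++;
        }
        byte[] s = Arrays.copyOf(prev, j + 1);
        s[j] = (j < prev.length) ? (byte) 0xff : (byte) 0x00;
        return s;
      } else if (diff > 1) {
        // Truncate after position i and bump the last byte.
        byte[] s = Arrays.copyOf(prev, i + 1);
        s[i] = (byte) ((0xff & s[i]) + 1);
        return s;
      }
    }
    byte[] s = Arrays.copyOf(prev, prev.length + 1); // prev + 0x00
    return Arrays.equals(s, cur) ? null : s;
  }
}
```

For example, between `{0x03}` and `{0x05}` the sketch picks `{0x04}`, while between `{0x01}` and `{0x01, 0x00}` no in-between sequence exists, so the caller falls back to the previous key.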
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiIndexIterator.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiIndexIterator.java
index f220a58..01af184 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiIndexIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiIndexIterator.java
@@ -33,6 +33,7 @@
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.system.HeapIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 
 class MultiIndexIterator extends HeapIterator implements FileSKVIterator {
 
@@ -93,4 +94,9 @@
     throw new UnsupportedOperationException();
   }
 
+  @Override
+  public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+    throw new UnsupportedOperationException();
+  }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java
index 75ad4c8..f99560e 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/MultiLevelIndex.java
@@ -132,7 +132,7 @@
     }
   }
 
-  private static abstract class SerializedIndexBase<T> extends AbstractList<T> implements List<T>, RandomAccess {
+  private static abstract class SerializedIndexBase<T> extends AbstractList<T> implements RandomAccess {
     protected int[] offsets;
     protected byte[] data;
 
@@ -276,7 +276,7 @@
 
       indexBytes = new ByteArrayOutputStream();
       indexOut = new DataOutputStream(indexBytes);
-      offsets = new ArrayList<Integer>();
+      offsets = new ArrayList<>();
     }
 
     public IndexBlock() {}
@@ -309,7 +309,7 @@
 
     public void readFields(DataInput in, int version) throws IOException {
 
-      if (version == RFile.RINDEX_VER_6 || version == RFile.RINDEX_VER_7) {
+      if (version == RFile.RINDEX_VER_6 || version == RFile.RINDEX_VER_7 || version == RFile.RINDEX_VER_8) {
         level = in.readInt();
         offset = in.readInt();
         hasNext = in.readBoolean();
@@ -352,7 +352,7 @@
 
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
         DataOutputStream dos = new DataOutputStream(baos);
-        ArrayList<Integer> oal = new ArrayList<Integer>();
+        ArrayList<Integer> oal = new ArrayList<>();
 
         for (int i = 0; i < size; i++) {
           IndexEntry ie = new IndexEntry(false);
@@ -501,7 +501,7 @@
     Writer(BlockFileWriter blockFileWriter, int maxBlockSize) {
       this.blockFileWriter = blockFileWriter;
       this.threshold = maxBlockSize;
-      levels = new ArrayList<IndexBlock>();
+      levels = new ArrayList<>();
     }
 
     private void add(int level, Key key, int data, long offset, long compressedSize, long rawSize) throws IOException {
@@ -810,7 +810,7 @@
 
       size = 0;
 
-      if (version == RFile.RINDEX_VER_6 || version == RFile.RINDEX_VER_7) {
+      if (version == RFile.RINDEX_VER_6 || version == RFile.RINDEX_VER_7 || version == RFile.RINDEX_VER_8) {
         size = in.readInt();
       }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java
index 5a3e911..cf0d046 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java
@@ -28,9 +28,11 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileSKVIterator;
 import org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile;
 import org.apache.accumulo.core.file.rfile.RFile.Reader;
 import org.apache.accumulo.start.spi.KeywordExecutable;
+import org.apache.commons.math3.stat.descriptive.SummaryStatistics;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -54,12 +56,54 @@
     boolean hash = false;
     @Parameter(names = {"--histogram"}, description = "print a histogram of the key-value sizes")
     boolean histogram = false;
+    @Parameter(names = {"--useSample"}, description = "Use sample data for --dump, --vis, --histogram options")
+    boolean useSample = false;
+    @Parameter(names = {"--keyStats"}, description = "print key length statistics for index and all data")
+    boolean keyStats = false;
     @Parameter(description = " <file> { <file> ... }")
-    List<String> files = new ArrayList<String>();
+    List<String> files = new ArrayList<>();
     @Parameter(names = {"-c", "--config"}, variableArity = true, description = "Comma-separated Hadoop configuration files")
     List<String> configFiles = new ArrayList<>();
   }
 
+  static class LogHistogram {
+    long countBuckets[] = new long[11];
+    long sizeBuckets[] = new long[countBuckets.length];
+    long totalSize = 0;
+
+    public void add(int size) {
+      int bucket = (int) Math.log10(size);
+      countBuckets[bucket]++;
+      sizeBuckets[bucket] += size;
+      totalSize += size;
+    }
+
+    public void print(String indent) {
+      System.out.println(indent + "Up to size      count      %-age");
+      for (int i = 1; i < countBuckets.length; i++) {
+        System.out.println(String.format("%s%11.0f : %10d %6.2f%%", indent, Math.pow(10, i), countBuckets[i], sizeBuckets[i] * 100. / totalSize));
+      }
+    }
+  }
+
+  static class KeyStats {
+    private SummaryStatistics stats = new SummaryStatistics();
+    private LogHistogram logHistogram = new LogHistogram();
+
+    public void add(Key k) {
+      int size = k.getSize();
+      stats.addValue(size);
+      logHistogram.add(size);
+    }
+
+    public void print(String indent) {
+      logHistogram.print(indent);
+      System.out.println();
+      System.out.printf("%smin:%,11.2f max:%,11.2f avg:%,11.2f stddev:%,11.2f\n", indent, stats.getMin(), stats.getMax(), stats.getMean(),
+          stats.getStandardDeviation());
+    }
+  }
+
   public static void main(String[] args) throws Exception {
     new PrintInfo().execute(args);
   }
@@ -87,9 +131,10 @@
     FileSystem hadoopFs = FileSystem.get(conf);
     FileSystem localFs = FileSystem.getLocal(conf);
 
-    long countBuckets[] = new long[11];
-    long sizeBuckets[] = new long[countBuckets.length];
-    long totalSize = 0;
+    LogHistogram kvHistogram = new LogHistogram();
+
+    KeyStats dataKeyStats = new KeyStats();
+    KeyStats indexKeyStats = new KeyStats();
 
     for (String arg : opts.files) {
       Path path = new Path(arg);
@@ -116,42 +161,71 @@
 
       Map<String,ArrayList<ByteSequence>> localityGroupCF = null;
 
-      if (opts.histogram || opts.dump || opts.vis || opts.hash) {
+      if (opts.histogram || opts.dump || opts.vis || opts.hash || opts.keyStats) {
         localityGroupCF = iter.getLocalityGroupCF();
 
+        FileSKVIterator dataIter;
+        if (opts.useSample) {
+          dataIter = iter.getSample();
+
+          if (dataIter == null) {
+            System.out.println("ERROR : This rfile has no sample data");
+            return;
+          }
+        } else {
+          dataIter = iter;
+        }
+
+        if (opts.keyStats) {
+          FileSKVIterator indexIter = iter.getIndex();
+          while (indexIter.hasTop()) {
+            indexKeyStats.add(indexIter.getTopKey());
+            indexIter.next();
+          }
+        }
+
         for (Entry<String,ArrayList<ByteSequence>> cf : localityGroupCF.entrySet()) {
 
-          iter.seek(new Range((Key) null, (Key) null), cf.getValue(), true);
-          while (iter.hasTop()) {
-            Key key = iter.getTopKey();
-            Value value = iter.getTopValue();
-            if (opts.dump)
+          dataIter.seek(new Range((Key) null, (Key) null), cf.getValue(), true);
+          while (dataIter.hasTop()) {
+            Key key = dataIter.getTopKey();
+            Value value = dataIter.getTopValue();
+            if (opts.dump) {
               System.out.println(key + " -> " + value);
-            if (opts.histogram) {
-              long size = key.getSize() + value.getSize();
-              int bucket = (int) Math.log10(size);
-              countBuckets[bucket]++;
-              sizeBuckets[bucket] += size;
-              totalSize += size;
+              if (System.out.checkError())
+                return;
             }
-            iter.next();
+            if (opts.histogram) {
+              kvHistogram.add(key.getSize() + value.getSize());
+            }
+            if (opts.keyStats) {
+              dataKeyStats.add(key);
+            }
+            dataIter.next();
           }
         }
       }
-      System.out.println();
 
       iter.close();
 
-      if (opts.vis || opts.hash)
+      if (opts.vis || opts.hash) {
+        System.out.println();
         vmg.printMetrics(opts.hash, "Visibility", System.out);
-
-      if (opts.histogram) {
-        System.out.println("Up to size      count      %-age");
-        for (int i = 1; i < countBuckets.length; i++) {
-          System.out.println(String.format("%11.0f : %10d %6.2f%%", Math.pow(10, i), countBuckets[i], sizeBuckets[i] * 100. / totalSize));
-        }
       }
 
+      if (opts.histogram) {
+        System.out.println();
+        kvHistogram.print("");
+      }
+
+      if (opts.keyStats) {
+        System.out.println();
+        System.out.println("Statistics for keys in data :");
+        dataKeyStats.print("\t");
+        System.out.println();
+        System.out.println("Statistics for keys in index :");
+        indexKeyStats.print("\t");
+      }
       // If the output stream has closed, there is no reason to keep going.
       if (System.out.checkError())
         return;
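The `LogHistogram` class extracted above buckets each key-value size by its base-10 order of magnitude. A standalone sketch of that bucketing (same arithmetic, minus the printing; note that `Math.log10` requires size >= 1, and the percentage column printed by `print` is by total bytes, not by count):

```java
class LogHistogramSketch {
  final long[] countBuckets = new long[11];
  final long[] sizeBuckets = new long[countBuckets.length];
  long totalSize = 0;

  // Bucket index is floor(log10(size)): sizes 1-9 land in bucket 0,
  // 10-99 in bucket 1, 100-999 in bucket 2, and so on.
  void add(int size) {
    int bucket = (int) Math.log10(size);
    countBuckets[bucket]++;
    sizeBuckets[bucket] += size;
    totalSize += size;
  }
}
```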
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java
index af9011d..b11cf1a 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.core.file.rfile;
 
+import static java.util.Objects.requireNonNull;
+
 import java.io.DataInput;
 import java.io.DataInputStream;
 import java.io.DataOutput;
@@ -36,6 +38,9 @@
 import java.util.TreeMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.sample.Sampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ArrayByteSequence;
@@ -62,13 +67,16 @@
 import org.apache.accumulo.core.iterators.system.InterruptibleIterator;
 import org.apache.accumulo.core.iterators.system.LocalityGroupIterator;
 import org.apache.accumulo.core.iterators.system.LocalityGroupIterator.LocalityGroup;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.util.MutableByteSequence;
 import org.apache.commons.lang.mutable.MutableLong;
-import org.apache.commons.math.stat.descriptive.SummaryStatistics;
+import org.apache.commons.math3.stat.descriptive.SummaryStatistics;
 import org.apache.hadoop.io.Writable;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.annotations.VisibleForTesting;
+
 public class RFile {
 
   public static final String EXTENSION = "rf";
@@ -78,15 +86,42 @@
   private RFile() {}
 
   private static final int RINDEX_MAGIC = 0x20637474;
-  static final int RINDEX_VER_7 = 7;
-  static final int RINDEX_VER_6 = 6;
+
+  static final int RINDEX_VER_8 = 8; // Added sample storage. There is a sample locality group for each locality group. Samples are built using a Sampler and
+                                     // sampler configuration. The Sampler and its configuration are stored in the RFile. Persisting the method of producing the
+                                     // sample allows a user of the RFile to determine if the sample is useful.
+                                     //
+                                     // Also selected smaller keys for the index in two ways. First, internal statistics were used to prefer keys that were below
+                                     // average size and to exclude statistically large keys from the index. Second, shorter keys (that may not exist in the
+                                     // data) were generated for the index.
+  static final int RINDEX_VER_7 = 7; // Added support for prefix encoding and encryption. Before this change only exact matches within a key field were deduped
+                                     // for consecutive keys. After this change, if consecutive key fields have the same prefix then the prefix is only stored
+                                     // once.
+  static final int RINDEX_VER_6 = 6; // Added support for multilevel indexes. Before this the index was one list with an entry for each data block. For large
+                                     // files, a large index needed to be read into memory before any seek could be done. After this change the index is a fat
+                                     // tree, and opening a large rfile is much faster. Like the previous version of RFile, each index node in the tree is kept
+                                     // in memory serialized and used in its serialized form.
   // static final int RINDEX_VER_5 = 5; // unreleased
-  static final int RINDEX_VER_4 = 4;
-  static final int RINDEX_VER_3 = 3;
+  static final int RINDEX_VER_4 = 4; // Added support for seeking using serialized indexes. After this change the index is no longer deserialized when an RFile
+                                     // is opened. The entire serialized index is read into memory as a single byte array. For seeks, the serialized index is
+                                     // used to find blocks (the binary search deserializes only the specific entries it needs). This resulted in less memory
+                                     // usage (no object overhead) and faster open times for RFiles.
+  static final int RINDEX_VER_3 = 3; // Initial released version of RFile. R is for relative encoding. A key is encoded relative to the previous key. The
+                                     // initial version deduped key fields that were the same for consecutive keys. For sorted data this is a common occurrence.
+                                     // This version supports locality groups. Each locality group has an index pointing to a set of data blocks. Each data
+                                     // block contains relatively encoded keys and values.
+
+  // Buffer sample data so that many sample data blocks are stored contiguously.
+  private static int sampleBufferSize = 10000000;
+
+  @VisibleForTesting
+  public static void setSampleBufferSize(int bufferSize) {
+    sampleBufferSize = bufferSize;
+  }
 
   private static class LocalityGroupMetadata implements Writable {
 
-    private int startBlock;
+    private int startBlock = -1;
     private Key firstKey;
     private Map<ByteSequence,MutableLong> columnFamilies;
 
@@ -96,26 +131,26 @@
 
     private MultiLevelIndex.BufferedWriter indexWriter;
     private MultiLevelIndex.Reader indexReader;
+    private int version;
 
     public LocalityGroupMetadata(int version, BlockFileReader br) {
-      columnFamilies = new HashMap<ByteSequence,MutableLong>();
+      columnFamilies = new HashMap<>();
       indexReader = new MultiLevelIndex.Reader(br, version);
+      this.version = version;
     }
 
-    public LocalityGroupMetadata(int nextBlock, Set<ByteSequence> pcf, int indexBlockSize, BlockFileWriter bfw) {
-      this.startBlock = nextBlock;
+    public LocalityGroupMetadata(Set<ByteSequence> pcf, int indexBlockSize, BlockFileWriter bfw) {
       isDefaultLG = true;
-      columnFamilies = new HashMap<ByteSequence,MutableLong>();
+      columnFamilies = new HashMap<>();
       previousColumnFamilies = pcf;
 
       indexWriter = new MultiLevelIndex.BufferedWriter(new MultiLevelIndex.Writer(bfw, indexBlockSize));
     }
 
-    public LocalityGroupMetadata(String name, Set<ByteSequence> cfset, int nextBlock, int indexBlockSize, BlockFileWriter bfw) {
-      this.startBlock = nextBlock;
+    public LocalityGroupMetadata(String name, Set<ByteSequence> cfset, int indexBlockSize, BlockFileWriter bfw) {
       this.name = name;
       isDefaultLG = false;
-      columnFamilies = new HashMap<ByteSequence,MutableLong>();
+      columnFamilies = new HashMap<>();
       for (ByteSequence cf : cfset) {
         columnFamilies.put(cf, new MutableLong(0));
       }
@@ -182,7 +217,9 @@
         name = in.readUTF();
       }
 
-      startBlock = in.readInt();
+      if (version == RINDEX_VER_3 || version == RINDEX_VER_4 || version == RINDEX_VER_6 || version == RINDEX_VER_7) {
+        startBlock = in.readInt();
+      }
 
       int size = in.readInt();
 
@@ -193,7 +230,7 @@
         columnFamilies = null;
       } else {
         if (columnFamilies == null)
-          columnFamilies = new HashMap<ByteSequence,MutableLong>();
+          columnFamilies = new HashMap<>();
         else
           columnFamilies.clear();
 
@@ -225,8 +262,6 @@
         out.writeUTF(name);
       }
 
-      out.writeInt(startBlock);
-
       if (isDefaultLG && columnFamilies == null) {
         // only expect null when default LG, otherwise let a NPE occur
         out.writeInt(-1);
@@ -247,26 +282,27 @@
       indexWriter.close(out);
     }
 
-    public void printInfo() throws IOException {
+    public void printInfo(boolean isSample) throws IOException {
       PrintStream out = System.out;
-      out.println("Locality group         : " + (isDefaultLG ? "<DEFAULT>" : name));
-      out.println("\tStart block          : " + startBlock);
-      out.println("\tNum   blocks         : " + String.format("%,d", indexReader.size()));
-      TreeMap<Integer,Long> sizesByLevel = new TreeMap<Integer,Long>();
-      TreeMap<Integer,Long> countsByLevel = new TreeMap<Integer,Long>();
+      out.printf("%-24s : %s\n", (isSample ? "Sample " : "") + "Locality group ", (isDefaultLG ? "<DEFAULT>" : name));
+      if (version == RINDEX_VER_3 || version == RINDEX_VER_4 || version == RINDEX_VER_6 || version == RINDEX_VER_7) {
+        out.printf("\t%-22s : %d\n", "Start block", startBlock);
+      }
+      out.printf("\t%-22s : %,d\n", "Num   blocks", indexReader.size());
+      TreeMap<Integer,Long> sizesByLevel = new TreeMap<>();
+      TreeMap<Integer,Long> countsByLevel = new TreeMap<>();
       indexReader.getIndexInfo(sizesByLevel, countsByLevel);
       for (Entry<Integer,Long> entry : sizesByLevel.descendingMap().entrySet()) {
-        out.println("\tIndex level " + entry.getKey() + "        : "
-            + String.format("%,d bytes  %,d blocks", entry.getValue(), countsByLevel.get(entry.getKey())));
+        out.printf("\t%-22s : %,d bytes  %,d blocks\n", "Index level " + entry.getKey(), entry.getValue(), countsByLevel.get(entry.getKey()));
       }
-      out.println("\tFirst key            : " + firstKey);
+      out.printf("\t%-22s : %s\n", "First key", firstKey);
 
       Key lastKey = null;
       if (indexReader.size() > 0) {
         lastKey = indexReader.getLastKey();
       }
 
-      out.println("\tLast key             : " + lastKey);
+      out.printf("\t%-22s : %s\n", "Last key", lastKey);
 
       long numKeys = 0;
       IndexIterator countIter = indexReader.lookup(new Key());
@@ -274,16 +310,71 @@
         numKeys += countIter.next().getNumEntries();
       }
 
-      out.println("\tNum entries          : " + String.format("%,d", numKeys));
-      out.println("\tColumn families      : " + (isDefaultLG && columnFamilies == null ? "<UNKNOWN>" : columnFamilies.keySet()));
+      out.printf("\t%-22s : %,d\n", "Num entries", numKeys);
+      out.printf("\t%-22s : %s\n", "Column families", (isDefaultLG && columnFamilies == null ? "<UNKNOWN>" : columnFamilies.keySet()));
     }
 
   }
 
-  public static class Writer implements FileSKVWriter {
+  private static class SampleEntry {
+    Key key;
+    Value val;
 
-    public static final int MAX_CF_IN_DLG = 1000;
-    private static final double MAX_BLOCK_MULTIPLIER = 1.1;
+    SampleEntry(Key key, Value val) {
+      this.key = new Key(key);
+      this.val = new Value(val);
+    }
+  }
+
+  private static class SampleLocalityGroupWriter {
+
+    private Sampler sampler;
+
+    private List<SampleEntry> entries = new ArrayList<>();
+    private long dataSize = 0;
+
+    private LocalityGroupWriter lgr;
+
+    public SampleLocalityGroupWriter(LocalityGroupWriter lgr, Sampler sampler) {
+      this.lgr = lgr;
+      this.sampler = sampler;
+    }
+
+    public void append(Key key, Value value) throws IOException {
+      if (sampler.accept(key)) {
+        entries.add(new SampleEntry(key, value));
+        dataSize += key.getSize() + value.getSize();
+      }
+    }
+
+    public void close() throws IOException {
+      for (SampleEntry se : entries) {
+        lgr.append(se.key, se.val);
+      }
+
+      lgr.close();
+    }
+
+    public void flushIfNeeded() throws IOException {
+      if (dataSize > sampleBufferSize) {
+        // Write out all but one key so that closeBlock() can always eventually be called with true.
+        List<SampleEntry> subList = entries.subList(0, entries.size() - 1);
+
+        if (subList.size() > 0) {
+          for (SampleEntry se : subList) {
+            lgr.append(se.key, se.val);
+          }
+
+          lgr.closeBlock(subList.get(subList.size() - 1).key, false);
+
+          subList.clear();
+          dataSize = 0;
+        }
+      }
+    }
+  }
+
+  private static class LocalityGroupWriter {
 
     private BlockFileWriter fileWriter;
     private ABlockWriter blockWriter;
@@ -291,79 +382,26 @@
     // private BlockAppender blockAppender;
     private final long blockSize;
     private final long maxBlockSize;
-    private final int indexBlockSize;
     private int entries = 0;
 
-    private ArrayList<LocalityGroupMetadata> localityGroups = new ArrayList<LocalityGroupMetadata>();
     private LocalityGroupMetadata currentLocalityGroup = null;
-    private int nextBlock = 0;
 
     private Key lastKeyInBlock = null;
 
-    private boolean dataClosed = false;
-    private boolean closed = false;
     private Key prevKey = new Key();
-    private boolean startedDefaultLocalityGroup = false;
 
-    private HashSet<ByteSequence> previousColumnFamilies;
+    private SampleLocalityGroupWriter sample;
 
     private SummaryStatistics keyLenStats = new SummaryStatistics();
     private double avergageKeySize = 0;
 
-    public Writer(BlockFileWriter bfw, int blockSize) throws IOException {
-      this(bfw, blockSize, (int) AccumuloConfiguration.getDefaultConfiguration().getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX));
-    }
-
-    public Writer(BlockFileWriter bfw, int blockSize, int indexBlockSize) throws IOException {
+    LocalityGroupWriter(BlockFileWriter fileWriter, long blockSize, long maxBlockSize, LocalityGroupMetadata currentLocalityGroup,
+        SampleLocalityGroupWriter sample) {
+      this.fileWriter = fileWriter;
       this.blockSize = blockSize;
-      this.maxBlockSize = (long) (blockSize * MAX_BLOCK_MULTIPLIER);
-      this.indexBlockSize = indexBlockSize;
-      this.fileWriter = bfw;
-      this.blockWriter = null;
-      previousColumnFamilies = new HashSet<ByteSequence>();
-    }
-
-    @Override
-    public synchronized void close() throws IOException {
-
-      if (closed) {
-        return;
-      }
-
-      closeData();
-
-      ABlockWriter mba = fileWriter.prepareMetaBlock("RFile.index");
-
-      mba.writeInt(RINDEX_MAGIC);
-      mba.writeInt(RINDEX_VER_7);
-
-      if (currentLocalityGroup != null)
-        localityGroups.add(currentLocalityGroup);
-
-      mba.writeInt(localityGroups.size());
-
-      for (LocalityGroupMetadata lc : localityGroups) {
-        lc.write(mba);
-      }
-
-      mba.close();
-
-      fileWriter.close();
-
-      closed = true;
-    }
-
-    private void closeData() throws IOException {
-
-      if (dataClosed) {
-        return;
-      }
-
-      dataClosed = true;
-
-      if (blockWriter != null) {
-        closeBlock(lastKeyInBlock, true);
-      }
+      this.maxBlockSize = maxBlockSize;
+      this.currentLocalityGroup = currentLocalityGroup;
+      this.sample = sample;
     }
 
     private boolean isGiantKey(Key k) {
@@ -371,15 +409,10 @@
       return k.getSize() > keyLenStats.getMean() + keyLenStats.getStandardDeviation() * 3;
     }
 
-    @Override
     public void append(Key key, Value value) throws IOException {
 
-      if (dataClosed) {
-        throw new IllegalStateException("Cannont append, data closed");
-      }
-
       if (key.compareTo(prevKey) < 0) {
-        throw new IllegalStateException("Keys appended out-of-order.  New key " + key + ", previous key " + prevKey);
+        throw new IllegalArgumentException("Keys appended out-of-order.  New key " + key + ", previous key " + prevKey);
       }
 
       currentLocalityGroup.updateColumnCount(key);
@@ -388,17 +421,25 @@
         currentLocalityGroup.setFirstKey(key);
       }
 
+      if (sample != null) {
+        sample.append(key, value);
+      }
+
       if (blockWriter == null) {
         blockWriter = fileWriter.prepareDataBlock();
       } else if (blockWriter.getRawSize() > blockSize) {
 
+        // Look for a key that's short to put in the index, defining short as average or below.
         if (avergageKeySize == 0) {
           // use the same average for the search for a below average key for a block
           avergageKeySize = keyLenStats.getMean();
         }
 
-        if ((prevKey.getSize() <= avergageKeySize || blockWriter.getRawSize() > maxBlockSize) && !isGiantKey(prevKey)) {
-          closeBlock(prevKey, false);
+        // Possibly produce a shorter key that does not exist in the data. Even if a key can be shortened, it may not be below average.
+        Key closeKey = KeyShortener.shorten(prevKey, key);
+
+        if ((closeKey.getSize() <= avergageKeySize || blockWriter.getRawSize() > maxBlockSize) && !isGiantKey(closeKey)) {
+          closeBlock(closeKey, false);
           blockWriter = fileWriter.prepareDataBlock();
           // set average to zero so its recomputed for the next block
           avergageKeySize = 0;
@@ -426,10 +467,133 @@
       else
         currentLocalityGroup.indexWriter.add(key, entries, blockWriter.getStartPos(), blockWriter.getCompressedSize(), blockWriter.getRawSize());
 
+      if (sample != null)
+        sample.flushIfNeeded();
+
       blockWriter = null;
       lastKeyInBlock = null;
       entries = 0;
-      nextBlock++;
+    }
+
+    public void close() throws IOException {
+      if (blockWriter != null) {
+        closeBlock(lastKeyInBlock, true);
+      }
+
+      if (sample != null) {
+        sample.close();
+      }
+    }
+  }
+
+  public static class Writer implements FileSKVWriter {
+
+    public static final int MAX_CF_IN_DLG = 1000;
+    private static final double MAX_BLOCK_MULTIPLIER = 1.1;
+
+    private BlockFileWriter fileWriter;
+
+    // private BlockAppender blockAppender;
+    private final long blockSize;
+    private final long maxBlockSize;
+    private final int indexBlockSize;
+
+    private ArrayList<LocalityGroupMetadata> localityGroups = new ArrayList<>();
+    private ArrayList<LocalityGroupMetadata> sampleGroups = new ArrayList<>();
+    private LocalityGroupMetadata currentLocalityGroup = null;
+    private LocalityGroupMetadata sampleLocalityGroup = null;
+
+    private boolean dataClosed = false;
+    private boolean closed = false;
+    private boolean startedDefaultLocalityGroup = false;
+
+    private HashSet<ByteSequence> previousColumnFamilies;
+    private long length = -1;
+
+    private LocalityGroupWriter lgWriter;
+
+    private SamplerConfigurationImpl samplerConfig;
+    private Sampler sampler;
+
+    public Writer(BlockFileWriter bfw, int blockSize) throws IOException {
+      this(bfw, blockSize, (int) AccumuloConfiguration.getDefaultConfiguration().getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX), null, null);
+    }
+
+    public Writer(BlockFileWriter bfw, int blockSize, int indexBlockSize, SamplerConfigurationImpl samplerConfig, Sampler sampler) throws IOException {
+      this.blockSize = blockSize;
+      this.maxBlockSize = (long) (blockSize * MAX_BLOCK_MULTIPLIER);
+      this.indexBlockSize = indexBlockSize;
+      this.fileWriter = bfw;
+      previousColumnFamilies = new HashSet<>();
+      this.samplerConfig = samplerConfig;
+      this.sampler = sampler;
+    }
+
+    @Override
+    public synchronized void close() throws IOException {
+
+      if (closed) {
+        return;
+      }
+
+      closeData();
+
+      ABlockWriter mba = fileWriter.prepareMetaBlock("RFile.index");
+
+      mba.writeInt(RINDEX_MAGIC);
+      mba.writeInt(RINDEX_VER_8);
+
+      if (currentLocalityGroup != null) {
+        localityGroups.add(currentLocalityGroup);
+        sampleGroups.add(sampleLocalityGroup);
+      }
+
+      mba.writeInt(localityGroups.size());
+
+      for (LocalityGroupMetadata lc : localityGroups) {
+        lc.write(mba);
+      }
+
+      if (samplerConfig == null) {
+        mba.writeBoolean(false);
+      } else {
+        mba.writeBoolean(true);
+
+        for (LocalityGroupMetadata lc : sampleGroups) {
+          lc.write(mba);
+        }
+
+        samplerConfig.write(mba);
+      }
+
+      mba.close();
+      fileWriter.close();
+      length = fileWriter.getLength();
+
+      closed = true;
+    }
+
+    private void closeData() throws IOException {
+
+      if (dataClosed) {
+        return;
+      }
+
+      dataClosed = true;
+
+      if (lgWriter != null) {
+        lgWriter.close();
+      }
+    }
+
+    @Override
+    public void append(Key key, Value value) throws IOException {
+
+      if (dataClosed) {
+        throw new IllegalStateException("Cannot append, data closed");
+      }
+
+      lgWriter.append(key, value);
     }
 
     @Override
@@ -448,28 +612,35 @@
         throw new IllegalStateException("Can not start anymore new locality groups after default locality group started");
       }
 
-      if (blockWriter != null) {
-        closeBlock(lastKeyInBlock, true);
+      if (lgWriter != null) {
+        lgWriter.close();
       }
 
       if (currentLocalityGroup != null) {
         localityGroups.add(currentLocalityGroup);
+        sampleGroups.add(sampleLocalityGroup);
       }
 
       if (columnFamilies == null) {
         startedDefaultLocalityGroup = true;
-        currentLocalityGroup = new LocalityGroupMetadata(nextBlock, previousColumnFamilies, indexBlockSize, fileWriter);
+        currentLocalityGroup = new LocalityGroupMetadata(previousColumnFamilies, indexBlockSize, fileWriter);
+        sampleLocalityGroup = new LocalityGroupMetadata(previousColumnFamilies, indexBlockSize, fileWriter);
       } else {
         if (!Collections.disjoint(columnFamilies, previousColumnFamilies)) {
-          HashSet<ByteSequence> overlap = new HashSet<ByteSequence>(columnFamilies);
+          HashSet<ByteSequence> overlap = new HashSet<>(columnFamilies);
           overlap.retainAll(previousColumnFamilies);
           throw new IllegalArgumentException("Column families over lap with previous locality group : " + overlap);
         }
-        currentLocalityGroup = new LocalityGroupMetadata(name, columnFamilies, nextBlock, indexBlockSize, fileWriter);
+        currentLocalityGroup = new LocalityGroupMetadata(name, columnFamilies, indexBlockSize, fileWriter);
+        sampleLocalityGroup = new LocalityGroupMetadata(name, columnFamilies, indexBlockSize, fileWriter);
         previousColumnFamilies.addAll(columnFamilies);
       }
 
-      prevKey = new Key();
+      SampleLocalityGroupWriter sampleWriter = null;
+      if (sampler != null) {
+        sampleWriter = new SampleLocalityGroupWriter(new LocalityGroupWriter(fileWriter, blockSize, maxBlockSize, sampleLocalityGroup, null), sampler);
+      }
+      lgWriter = new LocalityGroupWriter(fileWriter, blockSize, maxBlockSize, currentLocalityGroup, sampleWriter);
     }
 
     @Override
@@ -489,6 +660,14 @@
     public boolean supportsLocalityGroups() {
       return true;
     }
+
+    @Override
+    public long getLength() throws IOException {
+      if (!closed) {
+        return fileWriter.getLength();
+      }
+      return length;
+    }
   }
 
   private static class LocalityGroupReader extends LocalityGroup implements FileSKVIterator {
@@ -631,8 +810,9 @@
       if (columnFamilies.size() != 0 || inclusive)
         throw new IllegalArgumentException("I do not know how to filter column families");
 
-      if (interruptFlag != null && interruptFlag.get())
+      if (interruptFlag != null && interruptFlag.get()) {
         throw new IterationInterruptedException();
+      }
 
       try {
         _seek(range);
@@ -693,7 +873,7 @@
           reseek = false;
         }
 
-        if (startKey.compareTo(getTopKey()) >= 0 && startKey.compareTo(iiter.peekPrevious().getKey()) <= 0) {
+        if (entriesLeft > 0 && startKey.compareTo(getTopKey()) >= 0 && startKey.compareTo(iiter.peekPrevious().getKey()) <= 0) {
           // start key is within the unconsumed portion of the current block
 
           // this code intentionally does not use the index associated with a cached block
@@ -703,7 +883,7 @@
           // and speed up others.
 
           MutableByteSequence valbs = new MutableByteSequence(new byte[64], 0, 0);
-          SkippR skippr = RelativeKey.fastSkip(currBlock, startKey, valbs, prevKey, getTopKey());
+          SkippR skippr = RelativeKey.fastSkip(currBlock, startKey, valbs, prevKey, getTopKey(), entriesLeft);
           if (skippr.skipped > 0) {
             entriesLeft -= skippr.skipped;
             val = new Value(valbs.toArray());
@@ -714,6 +894,13 @@
           reseek = false;
         }
 
+        if (entriesLeft == 0 && startKey.compareTo(getTopKey()) > 0 && startKey.compareTo(iiter.peekPrevious().getKey()) <= 0) {
+          // In the empty space at the end of a block. This can occur when keys are shortened in the index creating index entries that do not exist in the
+          // block. These shortened index entries fall between the last key in a block and the first key in the next block, but may not exist in the data.
+          // Just proceed to the next block.
+          reseek = false;
+        }
+
         if (iiter.previousIndex() == 0 && getTopKey().equals(firstKey) && startKey.compareTo(firstKey) <= 0) {
           // seeking before the beginning of the file, and already positioned at the first key in the file
           // so there is nothing to do
@@ -776,7 +963,7 @@
             }
           }
 
-          SkippR skippr = RelativeKey.fastSkip(currBlock, startKey, valbs, prevKey, currKey);
+          SkippR skippr = RelativeKey.fastSkip(currBlock, startKey, valbs, prevKey, currKey, entriesLeft);
           prevKey = skippr.prevKey;
           entriesLeft -= skippr.skipped;
           val = new Value(valbs.toArray());
@@ -845,15 +1032,24 @@
     public void registerMetrics(MetricsGatherer<?> vmg) {
       metricsGatherer = vmg;
     }
+
+    @Override
+    public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+      throw new UnsupportedOperationException();
+    }
   }
 
   public static class Reader extends HeapIterator implements FileSKVIterator {
 
     private BlockFileReader reader;
 
-    private ArrayList<LocalityGroupMetadata> localityGroups = new ArrayList<LocalityGroupMetadata>();
+    private ArrayList<LocalityGroupMetadata> localityGroups = new ArrayList<>();
+    private ArrayList<LocalityGroupMetadata> sampleGroups = new ArrayList<>();
 
-    private LocalityGroupReader lgReaders[];
+    private LocalityGroupReader currentReaders[];
+    private LocalityGroupReader readers[];
+    private LocalityGroupReader sampleReaders[];
+
     private HashSet<ByteSequence> nonDefaultColumnFamilies;
 
     private List<Reader> deepCopies;
@@ -861,6 +1057,10 @@
 
     private AtomicBoolean interruptFlag;
 
+    private SamplerConfigurationImpl samplerConfig = null;
+
+    private int rfileVersion;
+
     public Reader(BlockFileReader rdr) throws IOException {
       this.reader = rdr;
 
@@ -868,52 +1068,102 @@
       try {
         int magic = mb.readInt();
         int ver = mb.readInt();
+        rfileVersion = ver;
 
         if (magic != RINDEX_MAGIC)
           throw new IOException("Did not see expected magic number, saw " + magic);
-        if (ver != RINDEX_VER_7 && ver != RINDEX_VER_6 && ver != RINDEX_VER_4 && ver != RINDEX_VER_3)
+        if (ver != RINDEX_VER_8 && ver != RINDEX_VER_7 && ver != RINDEX_VER_6 && ver != RINDEX_VER_4 && ver != RINDEX_VER_3)
           throw new IOException("Did not see expected version, saw " + ver);
 
         int size = mb.readInt();
-        lgReaders = new LocalityGroupReader[size];
+        currentReaders = new LocalityGroupReader[size];
 
-        deepCopies = new LinkedList<Reader>();
+        deepCopies = new LinkedList<>();
 
         for (int i = 0; i < size; i++) {
           LocalityGroupMetadata lgm = new LocalityGroupMetadata(ver, rdr);
           lgm.readFields(mb);
           localityGroups.add(lgm);
 
-          lgReaders[i] = new LocalityGroupReader(reader, lgm, ver);
+          currentReaders[i] = new LocalityGroupReader(reader, lgm, ver);
         }
+
+        readers = currentReaders;
+
+        if (ver == RINDEX_VER_8 && mb.readBoolean()) {
+          sampleReaders = new LocalityGroupReader[size];
+
+          for (int i = 0; i < size; i++) {
+            LocalityGroupMetadata lgm = new LocalityGroupMetadata(ver, rdr);
+            lgm.readFields(mb);
+            sampleGroups.add(lgm);
+
+            sampleReaders[i] = new LocalityGroupReader(reader, lgm, ver);
+          }
+
+          samplerConfig = new SamplerConfigurationImpl(mb);
+        } else {
+          sampleReaders = null;
+          samplerConfig = null;
+        }
+
       } finally {
         mb.close();
       }
 
-      nonDefaultColumnFamilies = new HashSet<ByteSequence>();
+      nonDefaultColumnFamilies = new HashSet<>();
       for (LocalityGroupMetadata lgm : localityGroups) {
         if (!lgm.isDefaultLG)
           nonDefaultColumnFamilies.addAll(lgm.columnFamilies.keySet());
       }
 
-      createHeap(lgReaders.length);
+      createHeap(currentReaders.length);
     }
 
-    private Reader(Reader r) {
-      super(r.lgReaders.length);
+    private Reader(Reader r, LocalityGroupReader sampleReaders[]) {
+      super(sampleReaders.length);
       this.reader = r.reader;
       this.nonDefaultColumnFamilies = r.nonDefaultColumnFamilies;
-      this.lgReaders = new LocalityGroupReader[r.lgReaders.length];
+      this.currentReaders = new LocalityGroupReader[sampleReaders.length];
       this.deepCopies = r.deepCopies;
-      this.deepCopy = true;
-      for (int i = 0; i < lgReaders.length; i++) {
-        this.lgReaders[i] = new LocalityGroupReader(r.lgReaders[i]);
-        this.lgReaders[i].setInterruptFlag(r.interruptFlag);
+      this.deepCopy = false;
+      this.readers = r.readers;
+      this.sampleReaders = r.sampleReaders;
+      this.samplerConfig = r.samplerConfig;
+      this.rfileVersion = r.rfileVersion;
+      for (int i = 0; i < sampleReaders.length; i++) {
+        this.currentReaders[i] = sampleReaders[i];
+        this.currentReaders[i].setInterruptFlag(r.interruptFlag);
       }
     }
 
+    private Reader(Reader r, boolean useSample) {
+      super(r.currentReaders.length);
+      this.reader = r.reader;
+      this.nonDefaultColumnFamilies = r.nonDefaultColumnFamilies;
+      this.currentReaders = new LocalityGroupReader[r.currentReaders.length];
+      this.deepCopies = r.deepCopies;
+      this.deepCopy = true;
+      this.samplerConfig = r.samplerConfig;
+      this.rfileVersion = r.rfileVersion;
+      this.readers = r.readers;
+      this.sampleReaders = r.sampleReaders;
+
+      for (int i = 0; i < r.readers.length; i++) {
+        if (useSample) {
+          this.currentReaders[i] = new LocalityGroupReader(r.sampleReaders[i]);
+          this.currentReaders[i].setInterruptFlag(r.interruptFlag);
+        } else {
+          this.currentReaders[i] = new LocalityGroupReader(r.readers[i]);
+          this.currentReaders[i].setInterruptFlag(r.interruptFlag);
+        }
+
+      }
+
+    }
+
     private void closeLocalityGroupReaders() {
-      for (LocalityGroupReader lgr : lgReaders) {
+      for (LocalityGroupReader lgr : currentReaders) {
         try {
           lgr.close();
         } catch (IOException e) {
@@ -941,6 +1191,16 @@
       closeDeepCopies();
       closeLocalityGroupReaders();
 
+      if (sampleReaders != null) {
+        for (LocalityGroupReader lgr : sampleReaders) {
+          try {
+            lgr.close();
+          } catch (IOException e) {
+            log.warn("Errored out attempting to close LocalityGroupReader.", e);
+          }
+        }
+      }
+
       try {
         reader.close();
       } finally {
@@ -952,17 +1212,17 @@
 
     @Override
     public Key getFirstKey() throws IOException {
-      if (lgReaders.length == 0) {
+      if (currentReaders.length == 0) {
         return null;
       }
 
       Key minKey = null;
 
-      for (int i = 0; i < lgReaders.length; i++) {
+      for (int i = 0; i < currentReaders.length; i++) {
         if (minKey == null) {
-          minKey = lgReaders[i].getFirstKey();
+          minKey = currentReaders[i].getFirstKey();
         } else {
-          Key firstKey = lgReaders[i].getFirstKey();
+          Key firstKey = currentReaders[i].getFirstKey();
           if (firstKey != null && firstKey.compareTo(minKey) < 0)
             minKey = firstKey;
         }
@@ -973,17 +1233,17 @@
 
     @Override
     public Key getLastKey() throws IOException {
-      if (lgReaders.length == 0) {
+      if (currentReaders.length == 0) {
         return null;
       }
 
       Key maxKey = null;
 
-      for (int i = 0; i < lgReaders.length; i++) {
+      for (int i = 0; i < currentReaders.length; i++) {
         if (maxKey == null) {
-          maxKey = lgReaders[i].getLastKey();
+          maxKey = currentReaders[i].getLastKey();
         } else {
-          Key lastKey = lgReaders[i].getLastKey();
+          Key lastKey = currentReaders[i].getLastKey();
           if (lastKey != null && lastKey.compareTo(maxKey) > 0)
             maxKey = lastKey;
         }
@@ -1003,10 +1263,26 @@
 
     @Override
     public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
-      Reader copy = new Reader(this);
-      copy.setInterruptFlagInternal(interruptFlag);
-      deepCopies.add(copy);
-      return copy;
+      if (env != null && env.isSamplingEnabled()) {
+        SamplerConfiguration sc = env.getSamplerConfiguration();
+        if (sc == null) {
+          throw new SampleNotPresentException();
+        }
+
+        if (this.samplerConfig != null && this.samplerConfig.equals(new SamplerConfigurationImpl(sc))) {
+          Reader copy = new Reader(this, true);
+          copy.setInterruptFlagInternal(interruptFlag);
+          deepCopies.add(copy);
+          return copy;
+        } else {
+          throw new SampleNotPresentException();
+        }
+      } else {
+        Reader copy = new Reader(this, false);
+        copy.setInterruptFlagInternal(interruptFlag);
+        deepCopies.add(copy);
+        return copy;
+      }
     }
 
     @Override
@@ -1019,7 +1295,7 @@
       Map<String,ArrayList<ByteSequence>> cf = new HashMap<>();
 
       for (LocalityGroupMetadata lcg : localityGroups) {
-        ArrayList<ByteSequence> setCF = new ArrayList<ByteSequence>();
+        ArrayList<ByteSequence> setCF = new ArrayList<>();
 
         for (Entry<ByteSequence,MutableLong> entry : lcg.columnFamilies.entrySet()) {
           setCF.add(entry.getKey());
@@ -1042,14 +1318,20 @@
      */
     public void registerMetrics(MetricsGatherer<?> vmg) {
       vmg.init(getLocalityGroupCF());
-      for (LocalityGroupReader lgr : lgReaders) {
+      for (LocalityGroupReader lgr : currentReaders) {
         lgr.registerMetrics(vmg);
       }
+
+      if (sampleReaders != null) {
+        for (LocalityGroupReader lgr : sampleReaders) {
+          lgr.registerMetrics(vmg);
+        }
+      }
     }
 
     @Override
     public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
-      numLGSeeked = LocalityGroupIterator.seek(this, lgReaders, nonDefaultColumnFamilies, range, columnFamilies, inclusive);
+      numLGSeeked = LocalityGroupIterator.seek(this, currentReaders, nonDefaultColumnFamilies, range, columnFamilies, inclusive);
     }
 
     int getNumLocalityGroupsSeeked() {
@@ -1058,18 +1340,55 @@
 
     public FileSKVIterator getIndex() throws IOException {
 
-      ArrayList<Iterator<IndexEntry>> indexes = new ArrayList<Iterator<IndexEntry>>();
+      ArrayList<Iterator<IndexEntry>> indexes = new ArrayList<>();
 
-      for (LocalityGroupReader lgr : lgReaders) {
+      for (LocalityGroupReader lgr : currentReaders) {
         indexes.add(lgr.getIndex());
       }
 
       return new MultiIndexIterator(this, indexes);
     }
 
+    @Override
+    public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+      requireNonNull(sampleConfig);
+
+      if (this.samplerConfig != null && this.samplerConfig.equals(sampleConfig)) {
+        Reader copy = new Reader(this, sampleReaders);
+        copy.setInterruptFlagInternal(interruptFlag);
+        return copy;
+      }
+
+      return null;
+    }
+
+    // only visible for printInfo
+    FileSKVIterator getSample() {
+      if (samplerConfig == null)
+        return null;
+      return getSample(this.samplerConfig);
+    }
+
     public void printInfo() throws IOException {
+
+      System.out.printf("%-24s : %d\n", "RFile Version", rfileVersion);
+      System.out.println();
+
       for (LocalityGroupMetadata lgm : localityGroups) {
-        lgm.printInfo();
+        lgm.printInfo(false);
+      }
+
+      if (sampleGroups.size() > 0) {
+
+        System.out.println();
+        System.out.printf("%-24s :\n", "Sample Configuration");
+        System.out.printf("\t%-22s : %s\n", "Sampler class ", samplerConfig.getClassName());
+        System.out.printf("\t%-22s : %s\n", "Sampler options ", samplerConfig.getOptions());
+        System.out.println();
+
+        for (LocalityGroupMetadata lgm : sampleGroups) {
+          lgm.printInfo(true);
+        }
       }
     }
 
@@ -1086,7 +1405,7 @@
 
     private void setInterruptFlagInternal(AtomicBoolean flag) {
       this.interruptFlag = flag;
-      for (LocalityGroupReader lgr : lgReaders) {
+      for (LocalityGroupReader lgr : currentReaders) {
         lgr.setInterruptFlag(interruptFlag);
       }
     }
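
The RFile.Reader changes above split the readers into `currentReaders` and `sampleReaders`, and the new `getSample` returns a reader only when the requested sampler configuration equals the one stored in the file, otherwise `null`. A minimal sketch of that lookup contract, using plain strings as hypothetical stand-ins for `SamplerConfigurationImpl` and the returned reader:

```java
import java.util.Objects;

// Sketch of the getSample contract: a sample reader exists only when the
// requested sampler configuration matches what the file was written with.
// SampleLookup and its string fields are stand-ins, not Accumulo classes.
class SampleLookup {
  private final String writtenConfig; // sampler config stored in the file; null if no sample data

  SampleLookup(String writtenConfig) {
    this.writtenConfig = writtenConfig;
  }

  String getSample(String requestedConfig) {
    Objects.requireNonNull(requestedConfig);
    if (writtenConfig != null && writtenConfig.equals(requestedConfig)) {
      return "sample-reader"; // in RFile this is a Reader over sampleReaders
    }
    return null; // caller must handle a missing or mismatched sample
  }
}
```

Callers that receive `null` fall back to throwing `SampleNotPresentException`, as the deep-copy path above shows.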
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/RFileOperations.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/RFileOperations.java
index 088abfe..96d31ce 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/RFileOperations.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/RFileOperations.java
@@ -19,8 +19,8 @@
 import java.io.IOException;
 import java.util.Collection;
 import java.util.Collections;
-import java.util.Set;
 
+import org.apache.accumulo.core.client.sample.Sampler;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ByteSequence;
@@ -29,11 +29,12 @@
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.core.file.FileSKVIterator;
 import org.apache.accumulo.core.file.FileSKVWriter;
-import org.apache.accumulo.core.file.blockfile.cache.BlockCache;
 import org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile;
-import org.apache.accumulo.core.file.rfile.RFile.Reader;
-import org.apache.accumulo.core.file.rfile.RFile.Writer;
+import org.apache.accumulo.core.file.streams.RateLimitedOutputStream;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.accumulo.core.sample.impl.SamplerFactory;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
@@ -41,90 +42,86 @@
 
   private static final Collection<ByteSequence> EMPTY_CF_SET = Collections.emptySet();
 
-  @Override
-  public long getFileSize(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    return fs.getFileStatus(new Path(file)).getLen();
+  private static RFile.Reader getReader(FileReaderOperation<?> options) throws IOException {
+    CachableBlockFile.Reader _cbr = new CachableBlockFile.Reader(options.getFileSystem(), new Path(options.getFilename()), options.getConfiguration(),
+        options.getDataCache(), options.getIndexCache(), options.getRateLimiter(), options.getTableConfiguration());
+    return new RFile.Reader(_cbr);
   }
 
   @Override
-  public FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-
-    return openIndex(file, fs, conf, acuconf, null, null);
+  protected long getFileSize(GetFileSizeOperation options) throws IOException {
+    return options.getFileSystem().getFileStatus(new Path(options.getFilename())).getLen();
   }
 
   @Override
-  public FileSKVIterator openIndex(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf, BlockCache dataCache, BlockCache indexCache)
-      throws IOException {
-    Path path = new Path(file);
-    // long len = fs.getFileStatus(path).getLen();
-    // FSDataInputStream in = fs.open(path);
-    // Reader reader = new RFile.Reader(in, len , conf);
-    CachableBlockFile.Reader _cbr = new CachableBlockFile.Reader(fs, path, conf, dataCache, indexCache, acuconf);
-    final Reader reader = new RFile.Reader(_cbr);
-
-    return reader.getIndex();
+  protected FileSKVIterator openIndex(OpenIndexOperation options) throws IOException {
+    return getReader(options).getIndex();
   }
 
   @Override
-  public FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    return openReader(file, seekToBeginning, fs, conf, acuconf, null, null);
-  }
+  protected FileSKVIterator openReader(OpenReaderOperation options) throws IOException {
+    RFile.Reader reader = getReader(options);
 
-  @Override
-  public FileSKVIterator openReader(String file, boolean seekToBeginning, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf,
-      BlockCache dataCache, BlockCache indexCache) throws IOException {
-    Path path = new Path(file);
-
-    CachableBlockFile.Reader _cbr = new CachableBlockFile.Reader(fs, path, conf, dataCache, indexCache, acuconf);
-    Reader iter = new RFile.Reader(_cbr);
-
-    if (seekToBeginning) {
-      iter.seek(new Range((Key) null, null), EMPTY_CF_SET, false);
+    if (options.isSeekToBeginning()) {
+      reader.seek(new Range((Key) null, null), EMPTY_CF_SET, false);
     }
 
-    return iter;
+    return reader;
   }
 
   @Override
-  public FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf) throws IOException {
-    FileSKVIterator iter = openReader(file, false, fs, conf, tableConf, null, null);
-    iter.seek(range, columnFamilies, inclusive);
-    return iter;
+  protected FileSKVIterator openScanReader(OpenScanReaderOperation options) throws IOException {
+    RFile.Reader reader = getReader(options);
+    reader.seek(options.getRange(), options.getColumnFamilies(), options.isRangeInclusive());
+    return reader;
   }
 
   @Override
-  public FileSKVIterator openReader(String file, Range range, Set<ByteSequence> columnFamilies, boolean inclusive, FileSystem fs, Configuration conf,
-      AccumuloConfiguration tableConf, BlockCache dataCache, BlockCache indexCache) throws IOException {
-    FileSKVIterator iter = openReader(file, false, fs, conf, tableConf, dataCache, indexCache);
-    iter.seek(range, columnFamilies, inclusive);
-    return iter;
-  }
+  protected FileSKVWriter openWriter(OpenWriterOperation options) throws IOException {
 
-  @Override
-  public FileSKVWriter openWriter(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf) throws IOException {
-    return openWriter(file, fs, conf, acuconf, acuconf.get(Property.TABLE_FILE_COMPRESSION_TYPE));
-  }
-
-  FileSKVWriter openWriter(String file, FileSystem fs, Configuration conf, AccumuloConfiguration acuconf, String compression) throws IOException {
-    int hrep = conf.getInt("dfs.replication", -1);
-    int trep = acuconf.getCount(Property.TABLE_FILE_REPLICATION);
-    int rep = hrep;
-    if (trep > 0 && trep != hrep) {
-      rep = trep;
-    }
-    long hblock = conf.getLong("dfs.block.size", 1 << 26);
-    long tblock = acuconf.getMemoryInBytes(Property.TABLE_FILE_BLOCK_SIZE);
-    long block = hblock;
-    if (tblock > 0)
-      block = tblock;
-    int bufferSize = conf.getInt("io.file.buffer.size", 4096);
+    AccumuloConfiguration acuconf = options.getTableConfiguration();
 
     long blockSize = acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE);
     long indexBlockSize = acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX);
 
-    CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(fs.create(new Path(file), false, bufferSize, (short) rep, block), compression, conf, acuconf);
-    Writer writer = new RFile.Writer(_cbw, (int) blockSize, (int) indexBlockSize);
+    SamplerConfigurationImpl samplerConfig = SamplerConfigurationImpl.newSamplerConfig(acuconf);
+    Sampler sampler = null;
+
+    if (samplerConfig != null) {
+      sampler = SamplerFactory.newSampler(samplerConfig, acuconf);
+    }
+
+    String compression = options.getCompression();
+    compression = compression == null ? options.getTableConfiguration().get(Property.TABLE_FILE_COMPRESSION_TYPE) : compression;
+
+    FSDataOutputStream outputStream = options.getOutputStream();
+
+    Configuration conf = options.getConfiguration();
+
+    if (outputStream == null) {
+      int hrep = conf.getInt("dfs.replication", -1);
+      int trep = acuconf.getCount(Property.TABLE_FILE_REPLICATION);
+      int rep = hrep;
+      if (trep > 0 && trep != hrep) {
+        rep = trep;
+      }
+      long hblock = conf.getLong("dfs.block.size", 1 << 26);
+      long tblock = acuconf.getMemoryInBytes(Property.TABLE_FILE_BLOCK_SIZE);
+      long block = hblock;
+      if (tblock > 0)
+        block = tblock;
+      int bufferSize = conf.getInt("io.file.buffer.size", 4096);
+
+      String file = options.getFilename();
+      FileSystem fs = options.getFileSystem();
+
+      outputStream = fs.create(new Path(file), false, bufferSize, (short) rep, block);
+    }
+
+    CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(new RateLimitedOutputStream(outputStream, options.getRateLimiter()), compression, conf,
+        acuconf);
+
+    RFile.Writer writer = new RFile.Writer(_cbw, (int) blockSize, (int) indexBlockSize, samplerConfig, sampler);
     return writer;
   }
 }
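
In the consolidated `openWriter` above, replication and block size come from the table configuration when set, otherwise from the Hadoop defaults. That fallback rule, extracted as a sketch (hypothetical helper, not an Accumulo API):

```java
// Sketch of the override rules openWriter applies when it creates its own
// output stream: a positive table setting wins over the HDFS default.
class WriterDefaults {
  static int resolveReplication(int dfsReplication, int tableReplication) {
    // the table value overrides only when set (> 0) and different from HDFS's
    if (tableReplication > 0 && tableReplication != dfsReplication) {
      return tableReplication;
    }
    return dfsReplication;
  }

  static long resolveBlockSize(long dfsBlockSize, long tableBlockSize) {
    // any positive table value overrides the HDFS default
    return tableBlockSize > 0 ? tableBlockSize : dfsBlockSize;
  }
}
```

When the caller supplies its own `FSDataOutputStream`, this resolution is skipped entirely, which is why the block above is guarded by `outputStream == null`.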
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/RelativeKey.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/RelativeKey.java
index aeba4e2..98163b1 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/RelativeKey.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/RelativeKey.java
@@ -230,11 +230,7 @@
     }
   }
 
-  public static SkippR fastSkip(DataInput in, Key seekKey, MutableByteSequence value, Key prevKey, Key currKey) throws IOException {
-    // this method assumes that fast skip is being called on a compressed block where the last key
-    // in the compressed block is >= seekKey... therefore this method shouldn't go past the end of the
-    // compressed block... if it does, there is probably an error in the caller's logic
-
+  public static SkippR fastSkip(DataInput in, Key seekKey, MutableByteSequence value, Key prevKey, Key currKey, int entriesLeft) throws IOException {
     // this method mostly avoids object allocation and only does compares when the row changes
 
     MutableByteSequence row, cf, cq, cv;
@@ -307,7 +303,7 @@
     int count = 0;
     Key newPrevKey = null;
 
-    while (true) {
+    while (count < entriesLeft) {
 
       pdel = (fieldsSame & DELETED) == DELETED;
 
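The new `entriesLeft` parameter bounds fastSkip's scan loop (`while (count < entriesLeft)`), replacing the removed comment's assumption that callers never seek past the end of a compressed block. A toy sketch of the bounded-skip idea over a sorted array (hypothetical names, not the RFile code):

```java
// Sketch: advance through sorted entries until the target is reached or the
// block is exhausted, never reading past entriesLeft entries.
class BoundedSkip {
  static int skipUntil(int[] keys, int target, int entriesLeft) {
    int count = 0;
    while (count < entriesLeft && keys[count] < target) {
      count++;
    }
    return count; // entries consumed; equals entriesLeft when target is absent
  }
}
```

The bound turns a potential over-read on a malformed seek into a clean "block exhausted" result.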
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/SplitLarge.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/SplitLarge.java
index 92a9f72..a3a4193 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/SplitLarge.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/SplitLarge.java
@@ -47,7 +47,7 @@
     @Parameter(names = "-m", description = "the maximum size of the key/value pair to shunt to the small file")
     long maxSize = 10 * 1024 * 1024;
     @Parameter(description = "<file.rf> { <file.rf> ... }")
-    List<String> files = new ArrayList<String>();
+    List<String> files = new ArrayList<>();
   }
 
   public static void main(String[] args) throws Exception {
@@ -60,35 +60,34 @@
       AccumuloConfiguration aconf = DefaultConfiguration.getDefaultConfiguration();
       Path path = new Path(file);
       CachableBlockFile.Reader rdr = new CachableBlockFile.Reader(fs, path, conf, null, null, aconf);
-      Reader iter = new RFile.Reader(rdr);
+      try (Reader iter = new RFile.Reader(rdr)) {
 
-      if (!file.endsWith(".rf")) {
-        throw new IllegalArgumentException("File must end with .rf");
-      }
-      String smallName = file.substring(0, file.length() - 3) + "_small.rf";
-      String largeName = file.substring(0, file.length() - 3) + "_large.rf";
-
-      int blockSize = (int) aconf.getMemoryInBytes(Property.TABLE_FILE_BLOCK_SIZE);
-      Writer small = new RFile.Writer(new CachableBlockFile.Writer(fs, new Path(smallName), "gz", conf, aconf), blockSize);
-      small.startDefaultLocalityGroup();
-      Writer large = new RFile.Writer(new CachableBlockFile.Writer(fs, new Path(largeName), "gz", conf, aconf), blockSize);
-      large.startDefaultLocalityGroup();
-
-      iter.seek(new Range(), new ArrayList<ByteSequence>(), false);
-      while (iter.hasTop()) {
-        Key key = iter.getTopKey();
-        Value value = iter.getTopValue();
-        if (key.getSize() + value.getSize() < opts.maxSize) {
-          small.append(key, value);
-        } else {
-          large.append(key, value);
+        if (!file.endsWith(".rf")) {
+          throw new IllegalArgumentException("File must end with .rf");
         }
-        iter.next();
-      }
+        String smallName = file.substring(0, file.length() - 3) + "_small.rf";
+        String largeName = file.substring(0, file.length() - 3) + "_large.rf";
 
-      iter.close();
-      large.close();
-      small.close();
+        int blockSize = (int) aconf.getMemoryInBytes(Property.TABLE_FILE_BLOCK_SIZE);
+        try (Writer small = new RFile.Writer(new CachableBlockFile.Writer(fs, new Path(smallName), "gz", null, conf, aconf), blockSize);
+            Writer large = new RFile.Writer(new CachableBlockFile.Writer(fs, new Path(largeName), "gz", null, conf, aconf), blockSize)) {
+          small.startDefaultLocalityGroup();
+          large.startDefaultLocalityGroup();
+
+          iter.seek(new Range(), new ArrayList<ByteSequence>(), false);
+          while (iter.hasTop()) {
+            Key key = iter.getTopKey();
+            Value value = iter.getTopValue();
+            if (key.getSize() + value.getSize() < opts.maxSize) {
+              small.append(key, value);
+            } else {
+              large.append(key, value);
+            }
+            iter.next();
+          }
+
+        }
+      }
     }
   }
 
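The SplitLarge rewrite above replaces manual `close()` calls with try-with-resources, so the reader and both writers are closed even when iteration throws. The closing behavior can be demonstrated with a toy resource (all names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates the try-with-resources pattern SplitLarge now uses: resources
// declared in one try header are closed automatically, in reverse
// declaration order, even if the body throws.
class ResourceDemo implements AutoCloseable {
  static final List<String> log = new ArrayList<>();
  private final String name;

  ResourceDemo(String name) {
    this.name = name;
  }

  @Override
  public void close() {
    log.add("closed " + name);
  }

  static List<String> run() {
    log.clear();
    try (ResourceDemo small = new ResourceDemo("small");
        ResourceDemo large = new ResourceDemo("large")) {
      log.add("appending");
    }
    return log;
  }
}
```

Closing in reverse order matters in the diff above: both writers close before the enclosing reader.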
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java
index 3764603..77de47e 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java
@@ -45,11 +45,14 @@
+import org.apache.accumulo.core.file.streams.BoundedRangeFileInputStream;
+import org.apache.accumulo.core.file.streams.PositionedDataOutputStream;
+import org.apache.accumulo.core.file.streams.PositionedOutput;
+import org.apache.accumulo.core.file.streams.SeekableDataInputStream;
 import org.apache.accumulo.core.security.crypto.CryptoModuleFactory;
 import org.apache.accumulo.core.security.crypto.CryptoModuleParameters;
 import org.apache.accumulo.core.security.crypto.SecretKeyEncryptionStrategy;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Seekable;
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.compress.Compressor;
 import org.apache.hadoop.io.compress.Decompressor;
@@ -87,7 +90,7 @@
    * BCFile writer, the entry point for creating a new BCFile.
    */
   static public class Writer implements Closeable {
-    private final FSDataOutputStream out;
+    private final PositionedDataOutputStream out;
     private final Configuration conf;
     private final CryptoModule cryptoModule;
     private BCFileCryptoModuleParameters cryptoParams;
@@ -127,7 +130,7 @@
       private final Algorithm compressAlgo;
       private Compressor compressor; // !null only if using native
       // Hadoop compression
-      private final FSDataOutputStream fsOut;
+      private final PositionedDataOutputStream fsOut;
       private final OutputStream cipherOut;
       private final long posStart;
       private final SimpleBufferedOutputStream fsBufferedOutput;
@@ -139,11 +142,11 @@
        * @param cryptoModule
        *          the module to use to obtain cryptographic streams
        */
-      public WBlockState(Algorithm compressionAlgo, FSDataOutputStream fsOut, BytesWritable fsOutputBuffer, Configuration conf, CryptoModule cryptoModule,
-          CryptoModuleParameters cryptoParams) throws IOException {
+      public WBlockState(Algorithm compressionAlgo, PositionedDataOutputStream fsOut, BytesWritable fsOutputBuffer, Configuration conf,
+          CryptoModule cryptoModule, CryptoModuleParameters cryptoParams) throws IOException {
         this.compressAlgo = compressionAlgo;
         this.fsOut = fsOut;
-        this.posStart = fsOut.getPos();
+        this.posStart = fsOut.position();
 
         fsOutputBuffer.setCapacity(getFSOutputBufferSize(conf));
 
@@ -211,7 +214,7 @@
        * @return The current byte offset in underlying file.
        */
       long getCurrentPos() throws IOException {
-        return fsOut.getPos() + fsBufferedOutput.size();
+        return fsOut.position() + fsBufferedOutput.size();
       }
 
       long getStartPos() {
@@ -338,18 +341,18 @@
      *          Name of the compression algorithm, which will be used for all data blocks.
      * @see Compression#getSupportedAlgorithms
      */
-    public Writer(FSDataOutputStream fout, String compressionName, Configuration conf, boolean trackDataBlocks, AccumuloConfiguration accumuloConfiguration)
-        throws IOException {
-      if (fout.getPos() != 0) {
+    public <OutputStreamType extends OutputStream & PositionedOutput> Writer(OutputStreamType fout, String compressionName, Configuration conf,
+        boolean trackDataBlocks, AccumuloConfiguration accumuloConfiguration) throws IOException {
+      if (fout.position() != 0) {
         throw new IOException("Output file not at zero offset.");
       }
 
-      this.out = fout;
+      this.out = new PositionedDataOutputStream(fout);
       this.conf = conf;
       dataIndex = new DataIndex(compressionName, trackDataBlocks);
       metaIndex = new MetaIndex();
       fsOutputBuffer = new BytesWritable();
-      Magic.write(fout);
+      Magic.write(this.out);
 
       // Set up crypto-related detail, including secret key generation and encryption
 
@@ -388,14 +391,14 @@
             appender.close();
           }
 
-          long offsetIndexMeta = out.getPos();
+          long offsetIndexMeta = out.position();
           metaIndex.write(out);
 
           if (cryptoParams.getAlgorithmName() == null || cryptoParams.getAlgorithmName().equals(Property.CRYPTO_CIPHER_SUITE.getDefaultValue())) {
             out.writeLong(offsetIndexMeta);
             API_VERSION_1.write(out);
           } else {
-            long offsetCryptoParameters = out.getPos();
+            long offsetCryptoParameters = out.position();
             cryptoParams.write(out);
 
             // Meta Index, crypto params offsets and the trailing section are written out directly.
@@ -566,7 +569,7 @@
 
     public void read(DataInput in) throws IOException {
 
-      Map<String,String> optionsFromFile = new HashMap<String,String>();
+      Map<String,String> optionsFromFile = new HashMap<>();
 
       int numContextEntries = in.readInt();
       for (int i = 0; i < numContextEntries; i++) {
@@ -594,7 +597,7 @@
   static public class Reader implements Closeable {
     private static final String META_NAME = "BCFile.metaindex";
     private static final String CRYPTO_BLOCK_NAME = "BCFile.cryptoparams";
-    private final FSDataInputStream in;
+    private final SeekableDataInputStream in;
     private final Configuration conf;
     final DataIndex dataIndex;
     // Index for meta blocks
@@ -613,8 +616,8 @@
       private final BlockRegion region;
       private final InputStream in;
 
-      public RBlockState(Algorithm compressionAlgo, FSDataInputStream fsin, BlockRegion region, Configuration conf, CryptoModule cryptoModule,
-          Version bcFileVersion, CryptoModuleParameters cryptoParams) throws IOException {
+      public <InputStreamType extends InputStream & Seekable> RBlockState(Algorithm compressionAlgo, InputStreamType fsin, BlockRegion region,
+          Configuration conf, CryptoModule cryptoModule, Version bcFileVersion, CryptoModuleParameters cryptoParams) throws IOException {
         this.compressAlgo = compressionAlgo;
         this.region = region;
         this.decompressor = compressionAlgo.getDecompressor();
@@ -752,15 +755,15 @@
      * @param fileLength
      *          Length of the corresponding file
      */
-    public Reader(FSDataInputStream fin, long fileLength, Configuration conf, AccumuloConfiguration accumuloConfiguration) throws IOException {
-
-      this.in = fin;
+    public <InputStreamType extends InputStream & Seekable> Reader(InputStreamType fin, long fileLength, Configuration conf,
+        AccumuloConfiguration accumuloConfiguration) throws IOException {
+      this.in = new SeekableDataInputStream(fin);
       this.conf = conf;
 
       // Move the cursor to grab the version and the magic first
-      fin.seek(fileLength - Magic.size() - Version.size());
-      version = new Version(fin);
-      Magic.readAndVerify(fin);
+      this.in.seek(fileLength - Magic.size() - Version.size());
+      version = new Version(this.in);
+      Magic.readAndVerify(this.in);
 
       // Do a version check
       if (!version.compatibleWith(BCFile.API_VERSION) && !version.equals(BCFile.API_VERSION_1)) {
@@ -772,26 +775,26 @@
       long offsetCryptoParameters = 0;
 
       if (version.equals(API_VERSION_1)) {
-        fin.seek(fileLength - Magic.size() - Version.size() - (Long.SIZE / Byte.SIZE));
-        offsetIndexMeta = fin.readLong();
+        this.in.seek(fileLength - Magic.size() - Version.size() - (Long.SIZE / Byte.SIZE));
+        offsetIndexMeta = this.in.readLong();
 
       } else {
-        fin.seek(fileLength - Magic.size() - Version.size() - (2 * (Long.SIZE / Byte.SIZE)));
-        offsetIndexMeta = fin.readLong();
-        offsetCryptoParameters = fin.readLong();
+        this.in.seek(fileLength - Magic.size() - Version.size() - (2 * (Long.SIZE / Byte.SIZE)));
+        offsetIndexMeta = this.in.readLong();
+        offsetCryptoParameters = this.in.readLong();
       }
 
       // read meta index
-      fin.seek(offsetIndexMeta);
-      metaIndex = new MetaIndex(fin);
+      this.in.seek(offsetIndexMeta);
+      metaIndex = new MetaIndex(this.in);
 
       // If they exist, read the crypto parameters
       if (!version.equals(BCFile.API_VERSION_1)) {
 
         // read crypto parameters
-        fin.seek(offsetCryptoParameters);
+        this.in.seek(offsetCryptoParameters);
         cryptoParams = new BCFileCryptoModuleParameters();
-        cryptoParams.read(fin);
+        cryptoParams.read(this.in);
 
         this.cryptoModule = CryptoModuleFactory.getCryptoModule(cryptoParams.getAllOptions().get(Property.CRYPTO_MODULE_CLASS.getKey()));
 
@@ -832,9 +835,9 @@
       }
     }
 
-    public Reader(CachableBlockFile.Reader cache, FSDataInputStream fin, long fileLength, Configuration conf, AccumuloConfiguration accumuloConfiguration)
-        throws IOException {
-      this.in = fin;
+    public <InputStreamType extends InputStream & Seekable> Reader(CachableBlockFile.Reader cache, InputStreamType fin, long fileLength, Configuration conf,
+        AccumuloConfiguration accumuloConfiguration) throws IOException {
+      this.in = new SeekableDataInputStream(fin);
       this.conf = conf;
 
       BlockRead cachedMetaIndex = cache.getCachedMetaBlock(META_NAME);
@@ -845,9 +848,9 @@
         // move the cursor to the beginning of the tail, containing: offset to the
         // meta block index, version and magic
         // Move the cursor to grab the version and the magic first
-        fin.seek(fileLength - Magic.size() - Version.size());
-        version = new Version(fin);
-        Magic.readAndVerify(fin);
+        this.in.seek(fileLength - Magic.size() - Version.size());
+        version = new Version(this.in);
+        Magic.readAndVerify(this.in);
 
         // Do a version check
         if (!version.compatibleWith(BCFile.API_VERSION) && !version.equals(BCFile.API_VERSION_1)) {
@@ -859,26 +862,26 @@
         long offsetCryptoParameters = 0;
 
         if (version.equals(API_VERSION_1)) {
-          fin.seek(fileLength - Magic.size() - Version.size() - (Long.SIZE / Byte.SIZE));
-          offsetIndexMeta = fin.readLong();
+          this.in.seek(fileLength - Magic.size() - Version.size() - (Long.SIZE / Byte.SIZE));
+          offsetIndexMeta = this.in.readLong();
 
         } else {
-          fin.seek(fileLength - Magic.size() - Version.size() - (2 * (Long.SIZE / Byte.SIZE)));
-          offsetIndexMeta = fin.readLong();
-          offsetCryptoParameters = fin.readLong();
+          this.in.seek(fileLength - Magic.size() - Version.size() - (2 * (Long.SIZE / Byte.SIZE)));
+          offsetIndexMeta = this.in.readLong();
+          offsetCryptoParameters = this.in.readLong();
         }
 
         // read meta index
-        fin.seek(offsetIndexMeta);
-        metaIndex = new MetaIndex(fin);
+        this.in.seek(offsetIndexMeta);
+        metaIndex = new MetaIndex(this.in);
 
         // If they exist, read the crypto parameters
         if (!version.equals(BCFile.API_VERSION_1) && cachedCryptoParams == null) {
 
           // read crypto parameters
-          fin.seek(offsetCryptoParameters);
+          this.in.seek(offsetCryptoParameters);
           cryptoParams = new BCFileCryptoModuleParameters();
-          cryptoParams.read(fin);
+          cryptoParams.read(this.in);
 
           if (accumuloConfiguration.getBoolean(Property.CRYPTO_OVERRIDE_KEY_STRATEGY_WITH_CONFIGURED_STRATEGY)) {
             Map<String,String> cryptoConfFromAccumuloConf = accumuloConfiguration.getAllPropertiesWithPrefix(Property.CRYPTO_PREFIX);
@@ -1074,13 +1077,13 @@
 
     // for write
     public MetaIndex() {
-      index = new TreeMap<String,MetaIndexEntry>();
+      index = new TreeMap<>();
     }
 
     // for read, construct the map from the file
     public MetaIndex(DataInput in) throws IOException {
       int count = Utils.readVInt(in);
-      index = new TreeMap<String,MetaIndexEntry>();
+      index = new TreeMap<>();
 
       for (int nx = 0; nx < count; nx++) {
         MetaIndexEntry indexEntry = new MetaIndexEntry(in);
@@ -1172,7 +1175,7 @@
       defaultCompressionAlgorithm = Compression.getCompressionAlgorithmByName(Utils.readString(in));
 
       int n = Utils.readVInt(in);
-      listRegions = new ArrayList<BlockRegion>(n);
+      listRegions = new ArrayList<>(n);
 
       for (int i = 0; i < n; i++) {
         BlockRegion region = new BlockRegion(in);
@@ -1184,7 +1187,7 @@
     public DataIndex(String defaultCompressionAlgorithmName, boolean trackBlocks) {
       this.trackBlocks = trackBlocks;
       this.defaultCompressionAlgorithm = Compression.getCompressionAlgorithmByName(defaultCompressionAlgorithmName);
-      listRegions = new ArrayList<BlockRegion>();
+      listRegions = new ArrayList<>();
     }
 
     public Algorithm getDefaultCompressionAlgorithm() {
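
Several BCFile signatures above replace concrete `FSDataInputStream`/`FSDataOutputStream` parameters with intersection-typed generics such as `<OutputStreamType extends OutputStream & PositionedOutput>`, so any stream that both writes bytes and reports its position is accepted. A minimal sketch of that bound, with a hypothetical `CountingStream` (the `PositionedOutput` here is a local stand-in, not the Accumulo interface):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

interface PositionedOutput {
  long position() throws IOException;
}

// A stream that tracks how many bytes were written.
class CountingStream extends OutputStream implements PositionedOutput {
  private long count;

  @Override
  public void write(int b) {
    count++;
  }

  @Override
  public long position() {
    return count;
  }
}

class IntersectionDemo {
  // Accepts anything that is both an OutputStream and a PositionedOutput,
  // mirroring the intersection bound on the BCFile.Writer constructor.
  static <T extends OutputStream & PositionedOutput> long writeAndReport(T out, byte[] data) {
    try {
      out.write(data);
      return out.position();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

The same trick appears on the read side with `<InputStreamType extends InputStream & Seekable>`, decoupling BCFile from Hadoop's concrete stream classes.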
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java
index 64a89a6..2b81541 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Compression.java
@@ -575,7 +575,7 @@
   public static String[] getSupportedAlgorithms() {
     Algorithm[] algos = Algorithm.class.getEnumConstants();
 
-    ArrayList<String> ret = new ArrayList<String>();
+    ArrayList<String> ret = new ArrayList<>();
     for (Algorithm a : algos) {
       if (a.isSupported()) {
         ret.add(a.getName());
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BoundedRangeFileInputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
similarity index 90%
rename from core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BoundedRangeFileInputStream.java
rename to core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
index f93bb84..1c01843 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BoundedRangeFileInputStream.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
@@ -14,8 +14,7 @@
  * License for the specific language governing permissions and limitations under
  * the License.
  */
-
-package org.apache.accumulo.core.file.rfile.bcfile;
+package org.apache.accumulo.core.file.streams;
 
 import java.io.IOException;
 import java.io.InputStream;
@@ -23,15 +22,14 @@
 import java.security.PrivilegedActionException;
 import java.security.PrivilegedExceptionAction;
 
-import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.Seekable;
 
 /**
 * BoundedRangeFileInputStream abstracts a contiguous region of a Hadoop FSDataInputStream as a regular input stream. One can create multiple
 * BoundedRangeFileInputStream instances on top of the same FSDataInputStream and they will not interfere with each other.
  */
-class BoundedRangeFileInputStream extends InputStream {
-
-  private FSDataInputStream in;
+public class BoundedRangeFileInputStream extends InputStream {
+  private InputStream in;
   private long pos;
   private long end;
   private long mark;
@@ -49,7 +47,7 @@
    *
    *          The actual length of the region may be smaller if (off_begin + length) goes beyond the end of FS input stream.
    */
-  public BoundedRangeFileInputStream(FSDataInputStream in, long offset, long length) {
+  public <StreamType extends InputStream & Seekable> BoundedRangeFileInputStream(StreamType in, long offset, long length) {
     if (offset < 0 || length < 0) {
       throw new IndexOutOfBoundsException("Invalid offset/length: " + offset + "/" + length);
     }
@@ -93,9 +91,9 @@
     if (n == 0)
       return -1;
     Integer ret = 0;
-    final FSDataInputStream inLocal = in;
+    final InputStream inLocal = in;
     synchronized (inLocal) {
-      inLocal.seek(pos);
+      ((Seekable) inLocal).seek(pos);
       try {
         ret = AccessController.doPrivileged(new PrivilegedExceptionAction<Integer>() {
           @Override
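
The relocated `BoundedRangeFileInputStream` keeps a `pos`/`end` window over a seekable stream and seeks before each read. The core idea can be sketched against a plain byte array standing in for the Hadoop stream (hypothetical `BoundedRangeStream`, not the Accumulo class):

```java
import java.io.InputStream;

// Minimal sketch of the bounded-range idea: expose [offset, offset + length)
// of an underlying random-access source as a plain InputStream.
class BoundedRangeStream extends InputStream {
  private final byte[] source; // stands in for the seekable FS stream
  private long pos;
  private final long end;

  BoundedRangeStream(byte[] source, long offset, long length) {
    if (offset < 0 || length < 0) {
      throw new IndexOutOfBoundsException("Invalid offset/length: " + offset + "/" + length);
    }
    this.source = source;
    this.pos = offset;
    // the region may be smaller if offset + length runs past the end of the source
    this.end = Math.min(offset + length, source.length);
  }

  @Override
  public int read() {
    return pos < end ? (source[(int) pos++] & 0xff) : -1;
  }
}
```

In the real class, the read additionally synchronizes on the shared underlying stream and seeks to `pos` first, which is what lets multiple ranges share one `FSDataInputStream`.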
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedDataOutputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedDataOutputStream.java
new file mode 100644
index 0000000..bd18426
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedDataOutputStream.java
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.file.streams;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+
+/**
+ * A filter converting a {@link PositionedOutput} {@code OutputStream} to a {@code PositionedOutput} {@code DataOutputStream}
+ */
+public class PositionedDataOutputStream extends DataOutputStream implements PositionedOutput {
+  public <StreamType extends OutputStream & PositionedOutput> PositionedDataOutputStream(StreamType type) {
+    super(type);
+  }
+
+  @Override
+  public long position() throws IOException {
+    return ((PositionedOutput) out).position();
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedOutput.java
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to core/src/main/java/org/apache/accumulo/core/file/streams/PositionedOutput.java
index 01f5fa8..e5dcba4 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedOutput.java
@@ -14,19 +14,13 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+package org.apache.accumulo.core.file.streams;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+import java.io.IOException;
 
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
-
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
-  }
+/**
+ * For any byte sink (especially an {@code OutputStream}), the ability to report how many bytes have been written.
+ */
+public interface PositionedOutput {
+  public long position() throws IOException;
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedOutputs.java b/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedOutputs.java
new file mode 100644
index 0000000..4769818
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/PositionedOutputs.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.file.streams;
+
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Objects;
+import org.apache.hadoop.fs.FSDataOutputStream;
+
+/**
+ * Utility functions for {@link PositionedOutput}.
+ */
+public class PositionedOutputs {
+  private PositionedOutputs() {}
+
+  /** Convert an {@code OutputStream} into an {@code OutputStream} implementing {@link PositionedOutput}. */
+  public static PositionedOutputStream wrap(final OutputStream fout) {
+    Objects.requireNonNull(fout);
+    if (fout instanceof FSDataOutputStream) {
+      return new PositionedOutputStream(fout) {
+        @Override
+        public long position() throws IOException {
+          return ((FSDataOutputStream) fout).getPos();
+        }
+      };
+    } else if (fout instanceof PositionedOutput) {
+      return new PositionedOutputStream(fout) {
+        @Override
+        public long position() throws IOException {
+          return ((PositionedOutput) fout).position();
+        }
+      };
+    } else {
+      return new PositionedOutputStream(fout) {
+        @Override
+        public long position() throws IOException {
+          throw new UnsupportedOperationException("Underlying stream does not support position()");
+        }
+      };
+    }
+  }
+
+  private static abstract class PositionedOutputStream extends FilterOutputStream implements PositionedOutput {
+    public PositionedOutputStream(OutputStream stream) {
+      super(stream);
+    }
+
+    @Override
+    public void write(byte[] data, int off, int len) throws IOException {
+      out.write(data, off, len);
+    }
+  }
+}
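`PositionedOutputs.wrap` dispatches on the runtime type of the stream: delegate to the stream's own position tracking when it has any, otherwise return an adapter whose `position()` fails loudly. A JDK-only sketch of the same dispatch, with a local `PositionedOutput` interface standing in for the Accumulo one (and the `FSDataOutputStream` branch omitted, since Hadoop is assumed absent here):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Objects;

public class WrapDemo {
  interface PositionedOutput { long position() throws IOException; }

  // A stream that already tracks its own position.
  static class CountingOutputStream extends OutputStream implements PositionedOutput {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
    @Override public void write(int b) { buf.write(b); }
    @Override public long position() { return buf.size(); }
  }

  static abstract class PositionedOutputStream extends FilterOutputStream implements PositionedOutput {
    PositionedOutputStream(OutputStream out) { super(out); }
    // Bypass FilterOutputStream's byte-at-a-time default for array writes.
    @Override public void write(byte[] d, int off, int len) throws IOException { out.write(d, off, len); }
  }

  // Mirrors PositionedOutputs.wrap: delegate when possible, fail loudly otherwise.
  static PositionedOutputStream wrap(final OutputStream fout) {
    Objects.requireNonNull(fout);
    if (fout instanceof PositionedOutput) {
      return new PositionedOutputStream(fout) {
        @Override public long position() throws IOException { return ((PositionedOutput) fout).position(); }
      };
    }
    return new PositionedOutputStream(fout) {
      @Override public long position() {
        throw new UnsupportedOperationException("Underlying stream does not support position()");
      }
    };
  }
}
```

The array-write override matters: `FilterOutputStream.write(byte[],int,int)` would otherwise degrade to one `write(int)` call per byte.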
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedInputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedInputStream.java
new file mode 100644
index 0000000..5254086
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedInputStream.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.file.streams;
+
+import java.io.FilterInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import org.apache.accumulo.core.util.ratelimit.NullRateLimiter;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
+import org.apache.hadoop.fs.Seekable;
+
+/**
+ * A decorator for an {@code InputStream} which limits the rate at which reads are performed.
+ */
+public class RateLimitedInputStream extends FilterInputStream implements Seekable {
+  private final RateLimiter rateLimiter;
+
+  public <StreamType extends InputStream & Seekable> RateLimitedInputStream(StreamType stream, RateLimiter rateLimiter) {
+    super(stream);
+    this.rateLimiter = rateLimiter == null ? NullRateLimiter.INSTANCE : rateLimiter;
+  }
+
+  @Override
+  public int read() throws IOException {
+    int val = in.read();
+    if (val >= 0) {
+      rateLimiter.acquire(1);
+    }
+    return val;
+  }
+
+  @Override
+  public int read(byte[] buffer, int offset, int length) throws IOException {
+    int count = in.read(buffer, offset, length);
+    if (count > 0) {
+      rateLimiter.acquire(count);
+    }
+    return count;
+  }
+
+  @Override
+  public void seek(long pos) throws IOException {
+    ((Seekable) in).seek(pos);
+  }
+
+  @Override
+  public long getPos() throws IOException {
+    return ((Seekable) in).getPos();
+  }
+
+  @Override
+  public boolean seekToNewSource(long targetPos) throws IOException {
+    return ((Seekable) in).seekToNewSource(targetPos);
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStream.java
new file mode 100644
index 0000000..b426a6b
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStream.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.accumulo.core.file.streams;
+
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.accumulo.core.util.ratelimit.NullRateLimiter;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
+
+/**
+ * A decorator for {@code OutputStream} which limits the rate at which data may be written.
+ */
+public class RateLimitedOutputStream extends FilterOutputStream implements PositionedOutput {
+  private final RateLimiter writeLimiter;
+
+  public RateLimitedOutputStream(OutputStream wrappedStream, RateLimiter writeLimiter) {
+    super(PositionedOutputs.wrap(wrappedStream));
+    this.writeLimiter = writeLimiter == null ? NullRateLimiter.INSTANCE : writeLimiter;
+  }
+
+  @Override
+  public void write(int i) throws IOException {
+    writeLimiter.acquire(1);
+    out.write(i);
+  }
+
+  @Override
+  public void write(byte[] buffer, int offset, int length) throws IOException {
+    writeLimiter.acquire(length);
+    out.write(buffer, offset, length);
+  }
+
+  @Override
+  public void close() throws IOException {
+    out.close();
+  }
+
+  @Override
+  public long position() throws IOException {
+    return ((PositionedOutput) out).position();
+  }
+}
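The write path above acquires permits from the limiter before delegating, so a saturated limiter blocks the writer rather than the underlying stream. A JDK-only sketch of that shape, using a hypothetical minimal `RateLimiter` interface and a counting implementation in place of Accumulo's (a real limiter would block inside `acquire`):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class RateLimitDemo {
  // Hypothetical stand-in for a minimal subset of Accumulo's RateLimiter.
  interface RateLimiter { void acquire(long permits); }

  // Records how many permits were requested; deterministic for testing.
  static class CountingLimiter implements RateLimiter {
    long acquired = 0;
    @Override public void acquire(long permits) { acquired += permits; }
  }

  // Mirrors RateLimitedOutputStream: acquire permits first, then delegate.
  static class RateLimitedOutputStream extends FilterOutputStream {
    private final RateLimiter limiter;
    RateLimitedOutputStream(OutputStream out, RateLimiter limiter) {
      super(out);
      this.limiter = limiter;
    }
    @Override public void write(int b) throws IOException {
      limiter.acquire(1);
      out.write(b);
    }
    @Override public void write(byte[] buf, int off, int len) throws IOException {
      limiter.acquire(len);
      out.write(buf, off, len);
    }
  }
}
```

Acquiring `len` permits in one call for array writes keeps the limiter's bookkeeping cheap compared to one acquire per byte.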
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/SeekableDataInputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/SeekableDataInputStream.java
new file mode 100644
index 0000000..09060f5
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/SeekableDataInputStream.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.file.streams;
+
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import org.apache.hadoop.fs.Seekable;
+
+/**
+ * A wrapper converting a {@link Seekable} {@code InputStream} into a {@code Seekable} {@link DataInputStream}
+ */
+public class SeekableDataInputStream extends DataInputStream implements Seekable {
+  public <StreamType extends InputStream & Seekable> SeekableDataInputStream(StreamType stream) {
+    super(stream);
+  }
+
+  @Override
+  public void seek(long pos) throws IOException {
+    ((Seekable) in).seek(pos);
+  }
+
+  @Override
+  public long getPos() throws IOException {
+    return ((Seekable) in).getPos();
+  }
+
+  @Override
+  public boolean seekToNewSource(long targetPos) throws IOException {
+    return ((Seekable) in).seekToNewSource(targetPos);
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCMonitorService.java b/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCMonitorService.java
index a5c47ff..949771e 100644
--- a/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCMonitorService.java
+++ b/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCMonitorService.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class GCMonitorService {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class GCMonitorService {
 
   public interface Iface {
 
@@ -533,7 +536,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -997,7 +1012,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCStatus.java b/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCStatus.java
index 9d6ade6..98bf9da 100644
--- a/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCStatus.java
+++ b/core/src/main/java/org/apache/accumulo/core/gc/thrift/GCStatus.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class GCStatus implements org.apache.thrift.TBase<GCStatus, GCStatus._Fields>, java.io.Serializable, Cloneable, Comparable<GCStatus> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class GCStatus implements org.apache.thrift.TBase<GCStatus, GCStatus._Fields>, java.io.Serializable, Cloneable, Comparable<GCStatus> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("GCStatus");
 
   private static final org.apache.thrift.protocol.TField LAST_FIELD_DESC = new org.apache.thrift.protocol.TField("last", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -420,7 +423,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_last = true && (isSetLast());
+    list.add(present_last);
+    if (present_last)
+      list.add(last);
+
+    boolean present_lastLog = true && (isSetLastLog());
+    list.add(present_lastLog);
+    if (present_lastLog)
+      list.add(lastLog);
+
+    boolean present_current = true && (isSetCurrent());
+    list.add(present_current);
+    if (present_current)
+      list.add(current);
+
+    boolean present_currentLog = true && (isSetCurrentLog());
+    list.add(present_currentLog);
+    if (present_currentLog)
+      list.add(currentLog);
+
+    return list.hashCode();
   }
 
   @Override
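The Thrift 0.9.3 regeneration replaces the stubbed `return 0` hashCodes with a list-based computation, restoring the contract that equal objects hash equally while unset fields no longer collide with set ones. The generated pattern boils down to: add a presence flag per field, then the value only if present, and take the list's hash. A small JDK-only sketch of that pattern for two fields:

```java
import java.util.ArrayList;
import java.util.List;

public class ThriftHashDemo {
  // Mirrors the generated pattern: presence flag first, then value if present.
  public static int hashOf(boolean isSetLast, Object last, boolean isSetCurrent, Object current) {
    List<Object> list = new ArrayList<Object>();

    boolean present_last = isSetLast;
    list.add(present_last);
    if (present_last)
      list.add(last);

    boolean present_current = isSetCurrent;
    list.add(present_current);
    if (present_current)
      list.add(current);

    return list.hashCode();
  }
}
```

Because the flags participate in the hash, `{last="a"}` and `{current="a"}` produce different values even though both contain a single `"a"`.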
diff --git a/core/src/main/java/org/apache/accumulo/core/gc/thrift/GcCycleStats.java b/core/src/main/java/org/apache/accumulo/core/gc/thrift/GcCycleStats.java
index 83dc826..339c2a1 100644
--- a/core/src/main/java/org/apache/accumulo/core/gc/thrift/GcCycleStats.java
+++ b/core/src/main/java/org/apache/accumulo/core/gc/thrift/GcCycleStats.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class GcCycleStats implements org.apache.thrift.TBase<GcCycleStats, GcCycleStats._Fields>, java.io.Serializable, Cloneable, Comparable<GcCycleStats> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class GcCycleStats implements org.apache.thrift.TBase<GcCycleStats, GcCycleStats._Fields>, java.io.Serializable, Cloneable, Comparable<GcCycleStats> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("GcCycleStats");
 
   private static final org.apache.thrift.protocol.TField STARTED_FIELD_DESC = new org.apache.thrift.protocol.TField("started", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -424,22 +427,22 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case STARTED:
-      return Long.valueOf(getStarted());
+      return getStarted();
 
     case FINISHED:
-      return Long.valueOf(getFinished());
+      return getFinished();
 
     case CANDIDATES:
-      return Long.valueOf(getCandidates());
+      return getCandidates();
 
     case IN_USE:
-      return Long.valueOf(getInUse());
+      return getInUse();
 
     case DELETED:
-      return Long.valueOf(getDeleted());
+      return getDeleted();
 
     case ERRORS:
-      return Long.valueOf(getErrors());
+      return getErrors();
 
     }
     throw new IllegalStateException();
@@ -540,7 +543,39 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_started = true;
+    list.add(present_started);
+    if (present_started)
+      list.add(started);
+
+    boolean present_finished = true;
+    list.add(present_finished);
+    if (present_finished)
+      list.add(finished);
+
+    boolean present_candidates = true;
+    list.add(present_candidates);
+    if (present_candidates)
+      list.add(candidates);
+
+    boolean present_inUse = true;
+    list.add(present_inUse);
+    if (present_inUse)
+      list.add(inUse);
+
+    boolean present_deleted = true;
+    list.add(present_deleted);
+    if (present_deleted)
+      list.add(deleted);
+
+    boolean present_errors = true;
+    list.add(present_errors);
+    if (present_errors)
+      list.add(errors);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/AggregatingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/AggregatingIterator.java
index 3292cc2..979eaeb 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/AggregatingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/AggregatingIterator.java
@@ -173,8 +173,7 @@
       String context = null;
       if (null != env)
         context = env.getConfig().get(Property.TABLE_CLASSPATH);
-      this.aggregators = new ColumnToClassMapping<org.apache.accumulo.core.iterators.aggregation.Aggregator>(options,
-          org.apache.accumulo.core.iterators.aggregation.Aggregator.class, context);
+      this.aggregators = new ColumnToClassMapping<>(options, org.apache.accumulo.core.iterators.aggregation.Aggregator.class, context);
     } catch (ClassNotFoundException e) {
       log.error(e.toString());
       throw new IllegalArgumentException(e);
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/FirstEntryInRowIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/FirstEntryInRowIterator.java
index fcca805..32e6464 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/FirstEntryInRowIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/FirstEntryInRowIterator.java
@@ -132,7 +132,7 @@
   public IteratorOptions describeOptions() {
     String name = "firstEntry";
     String desc = "Only allows iteration over the first entry per row";
-    HashMap<String,String> namedOptions = new HashMap<String,String>();
+    HashMap<String,String> namedOptions = new HashMap<>();
     namedOptions.put(NUM_SCANS_STRING_NAME, "Number of scans to try before seeking [10]");
     return new IteratorOptions(name, desc, namedOptions, null);
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorAdapter.java b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorAdapter.java
new file mode 100644
index 0000000..2d8af8f
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorAdapter.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map.Entry;
+import java.util.NoSuchElementException;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.KeyValue;
+import org.apache.accumulo.core.data.Value;
+
+public class IteratorAdapter implements Iterator<Entry<Key,Value>> {
+
+  SortedKeyValueIterator<Key,Value> inner;
+
+  public IteratorAdapter(SortedKeyValueIterator<Key,Value> inner) {
+    this.inner = inner;
+  }
+
+  @Override
+  public boolean hasNext() {
+    return inner.hasTop();
+  }
+
+  @Override
+  public Entry<Key,Value> next() {
+    try {
+      Entry<Key,Value> result = new KeyValue(new Key(inner.getTopKey()), new Value(inner.getTopValue()).get());
+      inner.next();
+      return result;
+    } catch (IOException ex) {
+      throw new NoSuchElementException();
+    }
+  }
+
+  @Override
+  public void remove() {
+    throw new UnsupportedOperationException();
+  }
+}
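`IteratorAdapter` bridges Accumulo's cursor-style `hasTop`/`getTopKey`/`next` contract onto `java.util.Iterator`: `hasNext` maps to `hasTop`, and `next` copies the current top before advancing. The shape can be sketched JDK-only with a hypothetical generic cursor interface in place of `SortedKeyValueIterator`:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

public class AdapterDemo {
  // Hypothetical cursor contract, standing in for SortedKeyValueIterator.
  interface Cursor<T> {
    boolean hasTop();
    T top();
    void advance();
  }

  // Same shape as IteratorAdapter: capture the top, then advance the cursor.
  static class CursorIterator<T> implements Iterator<T> {
    private final Cursor<T> inner;
    CursorIterator(Cursor<T> inner) { this.inner = inner; }
    @Override public boolean hasNext() { return inner.hasTop(); }
    @Override public T next() {
      if (!inner.hasTop()) throw new NoSuchElementException();
      T result = inner.top();
      inner.advance();
      return result;
    }
    @Override public void remove() { throw new UnsupportedOperationException(); }
  }

  // A cursor over an array, for demonstration.
  static <T> Cursor<T> over(final T[] items) {
    return new Cursor<T>() {
      int i = 0;
      @Override public boolean hasTop() { return i < items.length; }
      @Override public T top() { return items[i]; }
      @Override public void advance() { i++; }
    };
  }
}
```

Copying before advancing matters in the real adapter too: `Key` and `Value` are mutable and may be reused by the underlying iterator, which is why `IteratorAdapter.next` wraps them in fresh instances.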
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java
index 5a53e93..7ef27e5 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java
@@ -18,6 +18,8 @@
 
 import java.io.IOException;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
@@ -37,4 +39,50 @@
   void registerSideChannel(SortedKeyValueIterator<Key,Value> iter);
 
   Authorizations getAuthorizations();
+
+  /**
+   * Returns a new iterator environment object that can be used to create deep copies over sample data. The new object created will use the current sampling
+   * configuration for the table. The existing iterator environment object will not be modified.
+   *
+   * <p>
+   * Since sample data could be created in many different ways, a good practice for an iterator is to verify the sampling configuration is as expected.
+   *
+   * <pre>
+   * <code>
+   *   class MyIter implements SortedKeyValueIterator&lt;Key,Value&gt; {
+   *     SortedKeyValueIterator&lt;Key,Value&gt; source;
+   *     SortedKeyValueIterator&lt;Key,Value&gt; sampleIter;
+   *     &#64;Override
+   *     void init(SortedKeyValueIterator&lt;Key,Value&gt; source, Map&lt;String,String&gt; options, IteratorEnvironment env) {
+   *       IteratorEnvironment sampleEnv = env.cloneWithSamplingEnabled();
+   *       //do some sanity checks on sampling config
+   *       validateSamplingConfiguration(sampleEnv.getSamplerConfiguration());
+   *       sampleIter = source.deepCopy(sampleEnv);
+   *       this.source = source;
+   *     }
+   *   }
+   * </code>
+   * </pre>
+   *
+   * @throws SampleNotPresentException
+   *           when sampling is not configured for the table.
+   * @since 1.8.0
+   */
+  IteratorEnvironment cloneWithSamplingEnabled();
+
+  /**
+   * Sampling is enabled for an environment under at least two conditions: when sampling was enabled for the scan that started everything, or when a deep
+   * copy is made with an environment obtained from {@link #cloneWithSamplingEnabled()}.
+   *
+   * @return true if sampling is enabled for this environment.
+   * @since 1.8.0
+   */
+  boolean isSamplingEnabled();
+
+  /**
+   *
+   * @return the sampling configuration if sampling is enabled for this environment, otherwise null.
+   * @since 1.8.0
+   */
+  SamplerConfiguration getSamplerConfiguration();
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java
index 079bb70..981404c 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorUtil.java
@@ -35,12 +35,19 @@
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.constraints.DefaultKeySizeConstraint;
+import org.apache.accumulo.core.data.Column;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.data.thrift.IterInfo;
+import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
+import org.apache.accumulo.core.iterators.system.ColumnQualifierFilter;
+import org.apache.accumulo.core.iterators.system.DeletingIterator;
 import org.apache.accumulo.core.iterators.system.SynchronizedIterator;
+import org.apache.accumulo.core.iterators.system.VisibilityFilter;
 import org.apache.accumulo.core.iterators.user.VersioningIterator;
+import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.tabletserver.thrift.IteratorConfig;
 import org.apache.accumulo.core.tabletserver.thrift.TIteratorSetting;
 import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
@@ -111,7 +118,7 @@
    * @return A map of Table properties
    */
   public static Map<String,String> generateInitialTableProperties(boolean limitVersion) {
-    TreeMap<String,String> props = new TreeMap<String,String>();
+    TreeMap<String,String> props = new TreeMap<>();
 
     if (limitVersion) {
       for (IteratorScope iterScope : IteratorScope.values()) {
@@ -163,7 +170,7 @@
 
         Map<String,String> options = allOptions.get(iterName);
         if (options == null) {
-          options = new HashMap<String,String>();
+          options = new HashMap<>();
           allOptions.put(iterName, options);
         }
 
@@ -188,8 +195,8 @@
       SortedKeyValueIterator<K,V> source, KeyExtent extent, AccumuloConfiguration conf, List<IteratorSetting> iterators, IteratorEnvironment env)
       throws IOException {
 
-    List<IterInfo> ssiList = new ArrayList<IterInfo>();
-    Map<String,Map<String,String>> ssio = new HashMap<String,Map<String,String>>();
+    List<IterInfo> ssiList = new ArrayList<>();
+    Map<String,Map<String,String>> ssio = new HashMap<>();
 
     for (IteratorSetting is : iterators) {
       ssiList.add(new IterInfo(is.getPriority(), is.getIteratorClass(), is.getName()));
@@ -205,17 +212,11 @@
     return loadIterators(scope, source, extent, conf, ssiList, ssio, env, true);
   }
 
-  public static <K extends WritableComparable<?>,V extends Writable> SortedKeyValueIterator<K,V> loadIterators(IteratorScope scope,
-      SortedKeyValueIterator<K,V> source, KeyExtent extent, AccumuloConfiguration conf, List<IterInfo> ssiList, Map<String,Map<String,String>> ssio,
-      IteratorEnvironment env, boolean useAccumuloClassLoader) throws IOException {
-    List<IterInfo> iters = new ArrayList<IterInfo>(ssiList);
-    Map<String,Map<String,String>> allOptions = new HashMap<String,Map<String,String>>();
-
+  private static void parseIteratorConfiguration(IteratorScope scope, List<IterInfo> iters, Map<String,Map<String,String>> ssio,
+      Map<String,Map<String,String>> allOptions, AccumuloConfiguration conf) {
     parseIterConf(scope, iters, allOptions, conf);
 
     mergeOptions(ssio, allOptions);
-
-    return loadIterators(source, iters, allOptions, env, useAccumuloClassLoader, conf.get(Property.TABLE_CLASSPATH));
   }
 
   private static void mergeOptions(Map<String,Map<String,String>> ssio, Map<String,Map<String,String>> allOptions) {
@@ -231,6 +232,24 @@
     }
   }
 
+  public static <K extends WritableComparable<?>,V extends Writable> SortedKeyValueIterator<K,V> loadIterators(IteratorScope scope,
+      SortedKeyValueIterator<K,V> source, KeyExtent extent, AccumuloConfiguration conf, List<IterInfo> ssiList, Map<String,Map<String,String>> ssio,
+      IteratorEnvironment env, boolean useAccumuloClassLoader) throws IOException {
+    List<IterInfo> iters = new ArrayList<>(ssiList);
+    Map<String,Map<String,String>> allOptions = new HashMap<>();
+    parseIteratorConfiguration(scope, iters, ssio, allOptions, conf);
+    return loadIterators(source, iters, allOptions, env, useAccumuloClassLoader, conf.get(Property.TABLE_CLASSPATH));
+  }
+
+  public static <K extends WritableComparable<?>,V extends Writable> SortedKeyValueIterator<K,V> loadIterators(IteratorScope scope,
+      SortedKeyValueIterator<K,V> source, KeyExtent extent, AccumuloConfiguration conf, List<IterInfo> ssiList, Map<String,Map<String,String>> ssio,
+      IteratorEnvironment env, boolean useAccumuloClassLoader, String classLoaderContext) throws IOException {
+    List<IterInfo> iters = new ArrayList<>(ssiList);
+    Map<String,Map<String,String>> allOptions = new HashMap<>();
+    parseIteratorConfiguration(scope, iters, ssio, allOptions, conf);
+    return loadIterators(source, iters, allOptions, env, useAccumuloClassLoader, classLoaderContext);
+  }
+
   public static <K extends WritableComparable<?>,V extends Writable> SortedKeyValueIterator<K,V> loadIterators(SortedKeyValueIterator<K,V> source,
       Collection<IterInfo> iters, Map<String,Map<String,String>> iterOpts, IteratorEnvironment env, boolean useAccumuloClassLoader, String context)
       throws IOException {
@@ -241,7 +260,7 @@
       Collection<IterInfo> iters, Map<String,Map<String,String>> iterOpts, IteratorEnvironment env, boolean useAccumuloClassLoader, String context,
       Map<String,Class<? extends SortedKeyValueIterator<K,V>>> classCache) throws IOException {
     // wrap the source in a SynchronizedIterator in case any of the additional configured iterators want to use threading
-    SortedKeyValueIterator<K,V> prev = new SynchronizedIterator<K,V>(source);
+    SortedKeyValueIterator<K,V> prev = new SynchronizedIterator<>(source);
 
     try {
       for (IterInfo iterInfo : iters) {
@@ -270,13 +289,13 @@
       }
     } catch (ClassNotFoundException e) {
       log.error(e.toString());
-      throw new IOException(e);
+      throw new RuntimeException(e);
     } catch (InstantiationException e) {
       log.error(e.toString());
-      throw new IOException(e);
+      throw new RuntimeException(e);
     } catch (IllegalAccessException e) {
       log.error(e.toString());
-      throw new IOException(e);
+      throw new RuntimeException(e);
     }
     return prev;
   }
@@ -330,7 +349,7 @@
   }
 
   public static IteratorConfig toIteratorConfig(List<IteratorSetting> iterators) {
-    ArrayList<TIteratorSetting> tisList = new ArrayList<TIteratorSetting>();
+    ArrayList<TIteratorSetting> tisList = new ArrayList<>();
 
     for (IteratorSetting iteratorSetting : iterators) {
       tisList.add(toTIteratorSetting(iteratorSetting));
@@ -340,7 +359,7 @@
   }
 
   public static List<IteratorSetting> toIteratorSettings(IteratorConfig ic) {
-    List<IteratorSetting> ret = new ArrayList<IteratorSetting>();
+    List<IteratorSetting> ret = new ArrayList<>();
     for (TIteratorSetting tIteratorSetting : ic.getIterators()) {
       ret.add(toIteratorSetting(tIteratorSetting));
     }
@@ -372,4 +391,12 @@
     }
     return toIteratorSettings(ic);
   }
+
+  public static SortedKeyValueIterator<Key,Value> setupSystemScanIterators(SortedKeyValueIterator<Key,Value> source, Set<Column> cols, Authorizations auths,
+      byte[] defaultVisibility) throws IOException {
+    DeletingIterator delIter = new DeletingIterator(source, false);
+    ColumnFamilySkippingIterator cfsi = new ColumnFamilySkippingIterator(delIter);
+    ColumnQualifierFilter colFilter = new ColumnQualifierFilter(cfsi, cols);
+    return new VisibilityFilter(colFilter, auths, defaultVisibility);
+  }
 }
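
For context on the refactor above: `setupSystemScanIterators` pins the system-side scan stack (delete handling, column-family skipping, qualifier filtering, visibility) in one place, with each stage wrapping the one below it. A rough sketch of that wrapping pattern, using plain Strings and invented `DEL:`/`|auth` markers in place of Accumulo's Key/Value types, delete flags, and column visibilities:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Each stage wraps its source and drops entries before the next stage sees them.
abstract class StackedFilter implements Iterator<String> {
  private final Iterator<String> source;
  private String top;
  private boolean primed;

  StackedFilter(Iterator<String> source) {
    this.source = source;
  }

  private void prime() {
    if (primed)
      return;
    primed = true;
    advance();
  }

  // Pull from the wrapped source until an entry passes accept().
  private void advance() {
    top = null;
    while (source.hasNext()) {
      String candidate = source.next();
      if (accept(candidate)) {
        top = candidate;
        return;
      }
    }
  }

  protected abstract boolean accept(String entry);

  @Override
  public boolean hasNext() {
    prime();
    return top != null;
  }

  @Override
  public String next() {
    prime();
    String r = top;
    advance();
    return r;
  }
}

class ScanStack {
  // Compose the stack the way setupSystemScanIterators chains its filters:
  // deletes are suppressed first, then visibility is evaluated.
  static List<String> scan(List<String> entries, String auth) {
    Iterator<String> deletes = new StackedFilter(entries.iterator()) {
      @Override
      protected boolean accept(String e) {
        return !e.startsWith("DEL:");
      }
    };
    Iterator<String> visibility = new StackedFilter(deletes) {
      @Override
      protected boolean accept(String e) {
        return !e.contains("|") || e.endsWith("|" + auth);
      }
    };
    List<String> out = new ArrayList<>();
    visibility.forEachRemaining(out::add);
    return out;
  }
}
```

The stage order matters, which is exactly why the helper hard-codes it rather than leaving each caller to rebuild the chain.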
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/LongCombiner.java b/core/src/main/java/org/apache/accumulo/core/iterators/LongCombiner.java
index 35ceb6e..6cddea0 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/LongCombiner.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/LongCombiner.java
@@ -118,7 +118,7 @@
   /**
    * An Encoder that uses a variable-length encoding for Longs. It uses WritableUtils.writeVLong and WritableUtils.readVLong for encoding and decoding.
    */
-  public static class VarLenEncoder extends AbstractLexicoder<Long> implements Encoder<Long> {
+  public static class VarLenEncoder extends AbstractLexicoder<Long> {
     @Override
     public byte[] encode(Long v) {
       ByteArrayOutputStream baos = new ByteArrayOutputStream();
@@ -153,7 +153,7 @@
   /**
    * An Encoder that uses an 8-byte encoding for Longs.
    */
-  public static class FixedLenEncoder extends AbstractLexicoder<Long> implements Encoder<Long> {
+  public static class FixedLenEncoder extends AbstractLexicoder<Long> {
     @Override
     public byte[] encode(Long l) {
       byte[] b = new byte[8];
@@ -198,7 +198,7 @@
   /**
    * An Encoder that uses a String representation of Longs. It uses Long.toString and Long.parseLong for encoding and decoding.
    */
-  public static class StringEncoder extends AbstractLexicoder<Long> implements Encoder<Long> {
+  public static class StringEncoder extends AbstractLexicoder<Long> {
     @Override
     public byte[] encode(Long v) {
       return Long.toString(v).getBytes(UTF_8);
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/OptionDescriber.java b/core/src/main/java/org/apache/accumulo/core/iterators/OptionDescriber.java
index 756edd1..73dfd61 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/OptionDescriber.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/OptionDescriber.java
@@ -57,10 +57,10 @@
       this.name = name;
       this.namedOptions = null;
       if (namedOptions != null)
-        this.namedOptions = new LinkedHashMap<String,String>(namedOptions);
+        this.namedOptions = new LinkedHashMap<>(namedOptions);
       this.unnamedOptionDescriptions = null;
       if (unnamedOptionDescriptions != null)
-        this.unnamedOptionDescriptions = new ArrayList<String>(unnamedOptionDescriptions);
+        this.unnamedOptionDescriptions = new ArrayList<>(unnamedOptionDescriptions);
       this.description = description;
     }
 
@@ -81,11 +81,11 @@
     }
 
     public void setNamedOptions(Map<String,String> namedOptions) {
-      this.namedOptions = new LinkedHashMap<String,String>(namedOptions);
+      this.namedOptions = new LinkedHashMap<>(namedOptions);
     }
 
     public void setUnnamedOptionDescriptions(List<String> unnamedOptionDescriptions) {
-      this.unnamedOptionDescriptions = new ArrayList<String>(unnamedOptionDescriptions);
+      this.unnamedOptionDescriptions = new ArrayList<>(unnamedOptionDescriptions);
     }
 
     public void setName(String name) {
@@ -98,13 +98,13 @@
 
     public void addNamedOption(String name, String description) {
       if (namedOptions == null)
-        namedOptions = new LinkedHashMap<String,String>();
+        namedOptions = new LinkedHashMap<>();
       namedOptions.put(name, description);
     }
 
     public void addUnnamedOption(String description) {
       if (unnamedOptionDescriptions == null)
-        unnamedOptionDescriptions = new ArrayList<String>();
+        unnamedOptionDescriptions = new ArrayList<>();
       unnamedOptionDescriptions.add(description);
     }
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java
index 3769eae..43ed5ed 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/OrIterator.java
@@ -40,7 +40,7 @@
 
   private TermSource currentTerm;
   private ArrayList<TermSource> sources;
-  private PriorityQueue<TermSource> sorted = new PriorityQueue<TermSource>(5);
+  private PriorityQueue<TermSource> sorted = new PriorityQueue<>(5);
   private static final Text nullText = new Text();
   private static final Key nullKey = new Key();
 
@@ -83,11 +83,11 @@
   }
 
   public OrIterator() {
-    this.sources = new ArrayList<TermSource>();
+    this.sources = new ArrayList<>();
   }
 
   private OrIterator(OrIterator other, IteratorEnvironment env) {
-    this.sources = new ArrayList<TermSource>();
+    this.sources = new ArrayList<>();
 
     for (TermSource TS : other.sources)
       this.sources.add(new TermSource(TS.iter.deepCopy(env), TS.term));
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/SortedMapIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/SortedMapIterator.java
index 3999b6f..25c010d 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/SortedMapIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/SortedMapIterator.java
@@ -24,6 +24,7 @@
 import java.util.SortedMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
@@ -53,6 +54,9 @@
 
   @Override
   public SortedMapIterator deepCopy(IteratorEnvironment env) {
+    if (env != null && env.isSamplingEnabled()) {
+      throw new SampleNotPresentException();
+    }
     return new SortedMapIterator(map, interruptFlag);
   }
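
The added deepCopy guard makes SortedMapIterator fail fast when the environment requests a sampled view it cannot supply, instead of silently returning unsampled data. The shape of the guard, with `Env` and `SampleMissingException` as invented stand-ins for IteratorEnvironment and SampleNotPresentException:

```java
// Fail fast when a copy is requested under a capability the iterator lacks.
class SampleMissingException extends RuntimeException {}

class Env {
  private final boolean sampling;

  Env(boolean sampling) {
    this.sampling = sampling;
  }

  boolean isSamplingEnabled() {
    return sampling;
  }
}

class MapIterSketch {
  MapIterSketch deepCopy(Env env) {
    // A null env or a non-sampling env gets a normal copy; a sampling
    // request is refused because no sample data exists to back it.
    if (env != null && env.isSamplingEnabled()) {
      throw new SampleMissingException();
    }
    return new MapIterSketch();
  }
}
```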
 
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java b/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java
index 7e7fa64..03e4d88 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/TypedValueCombiner.java
@@ -179,7 +179,7 @@
 
   @Override
   public Value reduce(Key key, Iterator<Value> iter) {
-    return new Value(encoder.encode(typedReduce(key, new VIterator<V>(iter, encoder, lossy))));
+    return new Value(encoder.encode(typedReduce(key, new VIterator<>(iter, encoder, lossy))));
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/WrappingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/WrappingIterator.java
index 7723ef1..5b37b30 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/WrappingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/WrappingIterator.java
@@ -56,8 +56,6 @@
 
   @Override
   public Key getTopKey() {
-    if (source == null)
-      throw new IllegalStateException("no source set");
     if (seenSeek == false)
       throw new IllegalStateException("never been seeked");
     return getSource().getTopKey();
@@ -65,8 +63,6 @@
 
   @Override
   public Value getTopValue() {
-    if (source == null)
-      throw new IllegalStateException("no source set");
     if (seenSeek == false)
       throw new IllegalStateException("never been seeked");
     return getSource().getTopValue();
@@ -74,8 +70,6 @@
 
   @Override
   public boolean hasTop() {
-    if (source == null)
-      throw new IllegalStateException("no source set");
     if (seenSeek == false)
       throw new IllegalStateException("never been seeked");
     return getSource().hasTop();
@@ -89,8 +83,6 @@
 
   @Override
   public void next() throws IOException {
-    if (source == null)
-      throw new IllegalStateException("no source set");
     if (seenSeek == false)
       throw new IllegalStateException("never been seeked");
     getSource().next();
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnSet.java b/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnSet.java
index c1edf5d..7c74ad6 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnSet.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnSet.java
@@ -36,8 +36,8 @@
   private ColFamHashKey lookupCF = new ColFamHashKey();
 
   public ColumnSet() {
-    objectsCF = new HashSet<ColFamHashKey>();
-    objectsCol = new HashSet<ColHashKey>();
+    objectsCF = new HashSet<>();
+    objectsCol = new HashSet<>();
   }
 
   public ColumnSet(Collection<String> objectStrings) {
@@ -126,9 +126,9 @@
     String[] cols = columns.split(":");
 
     if (cols.length == 1) {
-      return new Pair<Text,Text>(decode(cols[0]), null);
+      return new Pair<>(decode(cols[0]), null);
     } else if (cols.length == 2) {
-      return new Pair<Text,Text>(decode(cols[0]), decode(cols[1]));
+      return new Pair<>(decode(cols[0]), decode(cols[1]));
     } else {
       throw new IllegalArgumentException(columns);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java b/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java
index 84a6996..11d5222 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/conf/ColumnToClassMapping.java
@@ -37,8 +37,8 @@
   private ColFamHashKey lookupCF = new ColFamHashKey();
 
   public ColumnToClassMapping() {
-    objectsCF = new HashMap<ColFamHashKey,K>();
-    objectsCol = new HashMap<ColHashKey,K>();
+    objectsCF = new HashMap<>();
+    objectsCol = new HashMap<>();
   }
 
   public ColumnToClassMapping(Map<String,String> objectStrings, Class<? extends K> c) throws InstantiationException, IllegalAccessException,
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIterator.java
index 350c4cd..53f3643 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIterator.java
@@ -108,12 +108,12 @@
     if (columnFamilies instanceof Set<?>) {
       colFamSet = (Set<ByteSequence>) columnFamilies;
     } else {
-      colFamSet = new HashSet<ByteSequence>();
+      colFamSet = new HashSet<>();
       colFamSet.addAll(columnFamilies);
     }
 
     if (inclusive) {
-      sortedColFams = new TreeSet<ByteSequence>(colFamSet);
+      sortedColFams = new TreeSet<>(colFamSet);
     } else {
       sortedColFams = null;
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnQualifierFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnQualifierFilter.java
index aa6426d..866f04f 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnQualifierFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/ColumnQualifierFilter.java
@@ -66,8 +66,8 @@
   }
 
   public void init(Set<Column> columns) {
-    this.columnFamilies = new HashSet<ByteSequence>();
-    this.columnsQualifiers = new HashMap<ByteSequence,HashSet<ByteSequence>>();
+    this.columnFamilies = new HashSet<>();
+    this.columnsQualifiers = new HashMap<>();
 
     for (Iterator<Column> iter = columns.iterator(); iter.hasNext();) {
       Column col = iter.next();
@@ -75,7 +75,7 @@
         ArrayByteSequence cq = new ArrayByteSequence(col.columnQualifier);
         HashSet<ByteSequence> cfset = this.columnsQualifiers.get(cq);
         if (cfset == null) {
-          cfset = new HashSet<ByteSequence>();
+          cfset = new HashSet<>();
           this.columnsQualifiers.put(cq, cfset);
         }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/EmptyIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/EmptyIterator.java
new file mode 100644
index 0000000..b791eb1
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/EmptyIterator.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.iterators.system;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+
+public class EmptyIterator implements InterruptibleIterator {
+
+  public static final EmptyIterator EMPTY_ITERATOR = new EmptyIterator();
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {}
+
+  @Override
+  public boolean hasTop() {
+    return false;
+  }
+
+  @Override
+  public void next() throws IOException {
+    // nothing should call this since hasTop always returns false
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {}
+
+  @Override
+  public Key getTopKey() {
+    // nothing should call this since hasTop always returns false
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public Value getTopValue() {
+    // nothing should call this since hasTop always returns false
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+    return EMPTY_ITERATOR;
+  }
+
+  @Override
+  public void setInterruptFlag(AtomicBoolean flag) {}
+}
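
The new EmptyIterator is a null object: it reports no data, every accessor that would require data throws, and deepCopy returns the shared singleton because there is no state to copy. A minimal sketch of that contract, with an invented `KVIterSketch` interface standing in for InterruptibleIterator:

```java
interface KVIterSketch {
  boolean hasTop();

  String getTopKey();

  void next();

  KVIterSketch deepCopy();
}

final class EmptyIterSketch implements KVIterSketch {
  static final EmptyIterSketch INSTANCE = new EmptyIterSketch();

  private EmptyIterSketch() {}

  @Override
  public boolean hasTop() {
    return false;
  }

  // hasTop() is always false, so callers honoring the contract never reach these.
  @Override
  public String getTopKey() {
    throw new UnsupportedOperationException();
  }

  @Override
  public void next() {
    throw new UnsupportedOperationException();
  }

  // Stateless, so one shared instance serves every deep copy.
  @Override
  public KVIterSketch deepCopy() {
    return INSTANCE;
  }
}
```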
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/HeapIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/HeapIterator.java
index 8f2f66c..6ed2d3e 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/HeapIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/HeapIterator.java
@@ -53,7 +53,7 @@
     if (heap != null)
       throw new IllegalStateException("heap already exist");
 
-    heap = new PriorityQueue<SortedKeyValueIterator<Key,Value>>(maxSize == 0 ? 1 : maxSize, new SKVIComparator());
+    heap = new PriorityQueue<>(maxSize == 0 ? 1 : maxSize, new SKVIComparator());
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java
index b2fae6d..ac8355b 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/LocalityGroupIterator.java
@@ -91,7 +91,7 @@
       if (columnFamilies instanceof Set<?>) {
         cfSet = (Set<ByteSequence>) columnFamilies;
       } else {
-        cfSet = new HashSet<ByteSequence>();
+        cfSet = new HashSet<>();
         cfSet.addAll(columnFamilies);
       }
     else
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/MapFileIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/MapFileIterator.java
index 9d59570..f9f0600 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/MapFileIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/MapFileIterator.java
@@ -33,6 +33,7 @@
 import org.apache.accumulo.core.iterators.IterationInterruptedException;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -154,4 +155,9 @@
   public void close() throws IOException {
     reader.close();
   }
+
+  @Override
+  public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+    return null;
+  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/MultiIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/MultiIterator.java
index aef6aeb..7ff07e6 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/MultiIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/MultiIterator.java
@@ -50,7 +50,7 @@
 
   private MultiIterator(MultiIterator other, IteratorEnvironment env) {
     super(other.iters.size());
-    this.iters = new ArrayList<SortedKeyValueIterator<Key,Value>>();
+    this.iters = new ArrayList<>();
     this.fence = other.fence;
     for (SortedKeyValueIterator<Key,Value> iter : other.iters) {
       iters.add(iter.deepCopy(env));
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/SampleIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/SampleIterator.java
new file mode 100644
index 0000000..8b488c8
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/SampleIterator.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.iterators.system;
+
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.Sampler;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.Filter;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+
+public class SampleIterator extends Filter {
+
+  private Sampler sampler = new RowSampler();
+
+  public SampleIterator(SortedKeyValueIterator<Key,Value> iter, Sampler sampler) {
+    setSource(iter);
+    this.sampler = sampler;
+  }
+
+  @Override
+  public boolean accept(Key k, Value v) {
+    return sampler.accept(k);
+  }
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+    return new SampleIterator(getSource().deepCopy(env), sampler);
+  }
+}
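
SampleIterator delegates the accept decision to a Sampler such as RowSampler. The property that makes this work is determinism: the same row must map to the same verdict on every scan and compaction so the sample stays consistent. A hedged sketch of the idea; the modulus-of-hashCode scheme here is illustrative, not RowSampler's actual configurable hashing:

```java
import java.util.ArrayList;
import java.util.List;

// Deterministic per-row sampling: hash the row id and keep rows whose hash
// lands in bucket 0.
class RowHashSampler {
  private final int modulus;

  RowHashSampler(int modulus) {
    this.modulus = modulus;
  }

  boolean accept(String row) {
    // The same row always gets the same verdict, so every scan and
    // compaction agrees on sample membership.
    return Math.floorMod(row.hashCode(), modulus) == 0;
  }

  // Filter a scan the way SampleIterator does: pass only accepted rows.
  List<String> sample(List<String> rows) {
    List<String> out = new ArrayList<>();
    for (String r : rows) {
      if (accept(r)) {
        out.add(r);
      }
    }
    return out;
  }
}
```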
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/SequenceFileIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/SequenceFileIterator.java
index 8710acd..8ea3800 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/SequenceFileIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/SequenceFileIterator.java
@@ -29,6 +29,7 @@
 import org.apache.accumulo.core.file.FileSKVIterator;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.hadoop.io.SequenceFile;
 import org.apache.hadoop.io.SequenceFile.Reader;
 
@@ -126,4 +127,9 @@
   public void setInterruptFlag(AtomicBoolean flag) {
     throw new UnsupportedOperationException();
   }
+
+  @Override
+  public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+    throw new UnsupportedOperationException();
+  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIterator.java
index 1f06a71..098aa63 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIterator.java
@@ -36,7 +36,7 @@
  * (InMemoryMap that was minor compacted to a file). Clients reading from a table that has data in memory should not see interruption in their scan when that
  * data is minor compacted. This iterator is designed to manage this behind the scene.
  */
-public class SourceSwitchingIterator implements SortedKeyValueIterator<Key,Value>, InterruptibleIterator {
+public class SourceSwitchingIterator implements InterruptibleIterator {
 
   public interface DataSource {
     boolean isCurrent();
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/system/SynchronizedIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/system/SynchronizedIterator.java
index a095106..43da54d 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/system/SynchronizedIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/system/SynchronizedIterator.java
@@ -68,7 +68,7 @@
 
   @Override
   public synchronized SortedKeyValueIterator<K,V> deepCopy(IteratorEnvironment env) {
-    return new SynchronizedIterator<K,V>(source.deepCopy(env));
+    return new SynchronizedIterator<>(source.deepCopy(env));
   }
 
   public SynchronizedIterator() {}
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/BigDecimalCombiner.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/BigDecimalCombiner.java
index d9e6cdd..77f33d9 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/BigDecimalCombiner.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/BigDecimalCombiner.java
@@ -101,8 +101,7 @@
    * Provides the ability to encode scientific notation.
    *
    */
-  public static class BigDecimalEncoder extends AbstractLexicoder<BigDecimal> implements
-      org.apache.accumulo.core.iterators.TypedValueCombiner.Encoder<BigDecimal> {
+  public static class BigDecimalEncoder extends AbstractLexicoder<BigDecimal> {
     @Override
     public byte[] encode(BigDecimal v) {
       return v.toString().getBytes(UTF_8);
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceFilter.java
new file mode 100644
index 0000000..d4630b0
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceFilter.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators.user;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.PartialKey;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.Filter;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+
+/**
+ * Filters key/value pairs for a range of column families and a range of column qualifiers. Only keys which fall in both ranges will be passed by the filter.
+ * Note that if you have a small, well-defined set of column families it will be much more efficient to configure locality groups to isolate that data instead
+ * of configuring this iterator to scan over it.
+ *
+ * @see org.apache.accumulo.core.iterators.user.CfCqSliceOpts for a description of this iterator's options.
+ */
+public class CfCqSliceFilter extends Filter {
+
+  private CfCqSliceOpts cso;
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+    super.init(source, options, env);
+    cso = new CfCqSliceOpts(options);
+  }
+
+  @Override
+  public boolean accept(Key k, Value v) {
+    PartialKey inSlice = isKeyInSlice(k);
+    return inSlice == PartialKey.ROW_COLFAM_COLQUAL;
+  }
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+    CfCqSliceFilter o = (CfCqSliceFilter) super.deepCopy(env);
+    o.cso = new CfCqSliceOpts(cso);
+    return o;
+  }
+
+  private PartialKey isKeyInSlice(Key k) {
+    if (cso.minCf.getLength() > 0) {
+      int minCfComp = k.compareColumnFamily(cso.minCf);
+      if (minCfComp < 0 || (minCfComp == 0 && !cso.minInclusive)) {
+        return PartialKey.ROW;
+      }
+    }
+    if (cso.maxCf.getLength() > 0) {
+      int maxCfComp = k.compareColumnFamily(cso.maxCf);
+      if (maxCfComp > 0 || (maxCfComp == 0 && !cso.maxInclusive)) {
+        return PartialKey.ROW;
+      }
+    }
+    // k.colfam is in the "slice".
+    if (cso.minCq.getLength() > 0) {
+      int minCqComp = k.compareColumnQualifier(cso.minCq);
+      if (minCqComp < 0 || (minCqComp == 0 && !cso.minInclusive)) {
+        return PartialKey.ROW_COLFAM;
+      }
+    }
+    if (cso.maxCq.getLength() > 0) {
+      int maxCqComp = k.compareColumnQualifier(cso.maxCq);
+      if (maxCqComp > 0 || (maxCqComp == 0 && !cso.maxInclusive)) {
+        return PartialKey.ROW_COLFAM;
+      }
+    }
+    // k.colqual is in the slice.
+    return PartialKey.ROW_COLFAM_COLQUAL;
+  }
+
+  @Override
+  public IteratorOptions describeOptions() {
+    return new CfCqSliceOpts.Describer().describeOptions();
+  }
+
+  @Override
+  public boolean validateOptions(Map<String,String> options) {
+    return new CfCqSliceOpts.Describer().validateOptions(options);
+  }
+}
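
`isKeyInSlice` above reduces to one bound check applied twice (families, then qualifiers): an empty bound is unbounded on that side, and equality with a bound passes only when the matching inclusive flag is set. The same logic over plain Strings (standing in for Text and Key) for illustration:

```java
// Range membership with optional, independently-inclusive endpoints,
// mirroring the comparisons CfCqSliceFilter makes per key component.
class SliceBounds {
  final String min, max;
  final boolean minInclusive, maxInclusive;

  SliceBounds(String min, String max, boolean minInclusive, boolean maxInclusive) {
    this.min = min;
    this.max = max;
    this.minInclusive = minInclusive;
    this.maxInclusive = maxInclusive;
  }

  boolean contains(String v) {
    // Empty bound means unbounded on that side.
    if (!min.isEmpty()) {
      int c = v.compareTo(min);
      if (c < 0 || (c == 0 && !minInclusive))
        return false;
    }
    if (!max.isEmpty()) {
      int c = v.compareTo(max);
      if (c > 0 || (c == 0 && !maxInclusive))
        return false;
    }
    return true;
  }
}
```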
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceOpts.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceOpts.java
new file mode 100644
index 0000000..53cbfb0
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceOpts.java
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators.user;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.accumulo.core.iterators.OptionDescriber;
+import org.apache.hadoop.io.Text;
+
+public class CfCqSliceOpts {
+  public static final String OPT_MIN_CF = "minCf";
+  public static final String OPT_MIN_CF_DESC = "UTF-8 encoded string representing minimum column family. "
+      + "Optional parameter. If minCf and minCq are undefined, the column slice will start at the first column "
+      + "of each row. If you want to do an exact match on column families, it's more efficient to leave minCf "
+      + "and maxCf undefined and use the scanner's fetchColumnFamily method.";
+
+  public static final String OPT_MIN_CQ = "minCq";
+  public static final String OPT_MIN_CQ_DESC = "UTF-8 encoded string representing minimum column qualifier. "
+      + "Optional parameter. If minCf and minCq are undefined, the column slice will start at the first column " + "of each row.";
+
+  public static final String OPT_MAX_CF = "maxCf";
+  public static final String OPT_MAX_CF_DESC = "UTF-8 encoded string representing maximum column family. "
+      + "Optional parameter. If maxCf and maxCq are undefined, the column slice will end at the last column "
+      + "of each row. If you want to do an exact match on column families, it's more efficient to leave minCf "
+      + "and maxCf undefined and use the scanner's fetchColumnFamily method.";
+
+  public static final String OPT_MAX_CQ = "maxCq";
+  public static final String OPT_MAX_CQ_DESC = "UTF-8 encoded string representing maximum column qualifier. "
+      + "Optional parameter. If maxCf and maxCq are undefined, the column slice will end at the last column of " + "each row.";
+
+  public static final String OPT_MIN_INCLUSIVE = "minInclusive";
+  public static final String OPT_MIN_INCLUSIVE_DESC = "UTF-8 encoded string indicating whether to include the "
+      + "minimum column in the slice range. Optional parameter, default is true.";
+
+  public static final String OPT_MAX_INCLUSIVE = "maxInclusive";
+  public static final String OPT_MAX_INCLUSIVE_DESC = "UTF-8 encoded string indicating whether to include the "
+      + "maximum column in the slice range. Optional parameter, default is true.";
+
+  Text minCf;
+  Text minCq;
+
+  Text maxCf;
+  Text maxCq;
+
+  boolean minInclusive;
+  boolean maxInclusive;
+
+  public CfCqSliceOpts(CfCqSliceOpts o) {
+    minCf = new Text(o.minCf);
+    minCq = new Text(o.minCq);
+    maxCf = new Text(o.maxCf);
+    maxCq = new Text(o.maxCq);
+    minInclusive = o.minInclusive;
+    maxInclusive = o.maxInclusive;
+  }
+
+  public CfCqSliceOpts(Map<String,String> options) {
+    String optStr = options.get(OPT_MIN_CF);
+    minCf = optStr == null ? new Text() : new Text(optStr.getBytes(UTF_8));
+
+    optStr = options.get(OPT_MIN_CQ);
+    minCq = optStr == null ? new Text() : new Text(optStr.getBytes(UTF_8));
+
+    optStr = options.get(OPT_MAX_CF);
+    maxCf = optStr == null ? new Text() : new Text(optStr.getBytes(UTF_8));
+
+    optStr = options.get(OPT_MAX_CQ);
+    maxCq = optStr == null ? new Text() : new Text(optStr.getBytes(UTF_8));
+
+    optStr = options.get(OPT_MIN_INCLUSIVE);
+    minInclusive = optStr == null || optStr.isEmpty() ? true : Boolean.valueOf(options.get(OPT_MIN_INCLUSIVE));
+
+    optStr = options.get(OPT_MAX_INCLUSIVE);
+    maxInclusive = optStr == null || optStr.isEmpty() ? true : Boolean.valueOf(options.get(OPT_MAX_INCLUSIVE));
+  }
+
+  static class Describer implements OptionDescriber {
+    @Override
+    public OptionDescriber.IteratorOptions describeOptions() {
+      Map<String,String> options = new HashMap<>();
+      options.put(OPT_MIN_CF, OPT_MIN_CF_DESC);
+      options.put(OPT_MIN_CQ, OPT_MIN_CQ_DESC);
+      options.put(OPT_MAX_CF, OPT_MAX_CF_DESC);
+      options.put(OPT_MAX_CQ, OPT_MAX_CQ_DESC);
+      options.put(OPT_MIN_INCLUSIVE, OPT_MIN_INCLUSIVE_DESC);
+      options.put(OPT_MAX_INCLUSIVE, OPT_MAX_INCLUSIVE_DESC);
+      return new OptionDescriber.IteratorOptions("ColumnSliceFilter", "Returns all key/value pairs where the column is between the specified values", options,
+          Collections.<String> emptyList());
+    }
+
+    @Override
+    public boolean validateOptions(Map<String,String> options) {
+      // if you don't specify a max CF and a max CQ, that means there's no upper bounds to the slice. In that case
+      // you must not set max inclusive to false.
+      CfCqSliceOpts o = new CfCqSliceOpts(options);
+      boolean boundsOk = true;
+      boolean upperBoundsExist = o.maxCf.getLength() > 0 && o.maxCq.getLength() > 0;
+      if (upperBoundsExist) {
+        boundsOk = o.maxInclusive;
+      }
+      boolean cqRangeOk = o.maxCq.getLength() == 0 || (o.minCq.compareTo(o.maxCq) < 1);
+      boolean cfRangeOk = o.maxCf.getLength() == 0 || (o.minCf.compareTo(o.maxCf) < 1);
+      return boundsOk && cqRangeOk && cfRangeOk;
+    }
+  }
+
+}
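Every option parsed by the `CfCqSliceOpts` constructor above is optional: an absent bound becomes an empty `Text`, and an absent or empty inclusivity flag defaults to true. A minimal standalone sketch of that defaulting logic (class and method names here are illustrative, not part of Accumulo; plain strings stand in for `Text`):

```java
import java.util.Map;

// Standalone sketch of the defaulting rules in the CfCqSliceOpts constructor:
// a missing or empty inclusivity option means "true"; a missing bound means
// "unbounded" (modeled here as an empty string instead of an empty Text).
public class SliceOptionDefaults {
  public static boolean parseInclusive(Map<String,String> options, String key) {
    String v = options.get(key);
    // null or empty -> default to inclusive, otherwise parse the boolean text
    return v == null || v.isEmpty() || Boolean.parseBoolean(v);
  }

  public static String parseBound(Map<String,String> options, String key) {
    String v = options.get(key);
    return v == null ? "" : v; // empty string plays the role of an empty Text
  }
}
```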
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceSeekingFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceSeekingFilter.java
new file mode 100644
index 0000000..e5c4969
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/CfCqSliceSeekingFilter.java
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators.user;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.PartialKey;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.OptionDescriber;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * Filters key/value pairs for a range of column families and a range of column qualifiers. Only keys which fall in both ranges will be passed by the filter.
+ * Note that if you have a small, well-defined set of column families it will be much more efficient to configure locality groups to isolate that data instead
+ * of configuring this iterator to seek over it.
+ *
+ * This filter may be more efficient than the CfCqSliceFilter or the ColumnSliceFilter for small slices of large rows, as it will seek to the next potential
+ * match once it determines that it has iterated past the end of a slice.
+ *
+ * @see org.apache.accumulo.core.iterators.user.CfCqSliceOpts for a description of this iterator's options.
+ */
+public class CfCqSliceSeekingFilter extends SeekingFilter implements OptionDescriber {
+
+  private static final FilterResult SKIP_TO_HINT = FilterResult.of(false, AdvanceResult.USE_HINT);
+  private static final FilterResult SKIP_TO_NEXT = FilterResult.of(false, AdvanceResult.NEXT);
+  private static final FilterResult SKIP_TO_NEXT_ROW = FilterResult.of(false, AdvanceResult.NEXT_ROW);
+  private static final FilterResult SKIP_TO_NEXT_CF = FilterResult.of(false, AdvanceResult.NEXT_CF);
+  private static final FilterResult INCLUDE_AND_NEXT = FilterResult.of(true, AdvanceResult.NEXT);
+  private static final FilterResult INCLUDE_AND_NEXT_CF = FilterResult.of(true, AdvanceResult.NEXT_CF);
+
+  private CfCqSliceOpts cso;
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+    super.init(source, options, env);
+    cso = new CfCqSliceOpts(options);
+  }
+
+  @Override
+  public FilterResult filter(Key k, Value v) {
+    if (cso.minCf.getLength() > 0) {
+      int minCfCmp = k.compareColumnFamily(cso.minCf);
+      if (minCfCmp < 0) {
+        return SKIP_TO_HINT; // hint will be the min CF in this row.
+      }
+      if (minCfCmp == 0 && !cso.minInclusive) {
+        return SKIP_TO_NEXT;
+      }
+    }
+    if (cso.maxCf.getLength() > 0) {
+      int maxCfCmp = k.compareColumnFamily(cso.maxCf);
+      if (maxCfCmp > 0 || (maxCfCmp == 0 && !cso.maxInclusive)) {
+        return SKIP_TO_NEXT_ROW;
+      }
+    }
+    // at this point we're in the correct CF range, now check the CQ.
+    if (cso.minCq.getLength() > 0) {
+      int minCqCmp = k.compareColumnQualifier(cso.minCq);
+      if (minCqCmp < 0) {
+        return SKIP_TO_HINT; // hint will be the min CQ in this CF in this row.
+      }
+      if (minCqCmp == 0 && !cso.minInclusive) {
+        return SKIP_TO_NEXT;
+      }
+    }
+    if (cso.maxCq.getLength() > 0) {
+      int maxCqCmp = k.compareColumnQualifier(cso.maxCq);
+      if (maxCqCmp > 0 || (maxCqCmp == 0 && !cso.maxInclusive)) {
+        return SKIP_TO_NEXT_CF;
+      }
+      if (maxCqCmp == 0) {
+        // special-case here: we know we're at the last CQ in the slice, so skip to the next CF in the row.
+        return INCLUDE_AND_NEXT_CF;
+      }
+    }
+    // at this point we're in the CQ slice.
+    return INCLUDE_AND_NEXT;
+  }
+
+  @Override
+  public Key getNextKeyHint(Key k, Value v) throws IllegalArgumentException {
+    if (cso.minCf.getLength() > 0) {
+      int minCfCmp = k.compareColumnFamily(cso.minCf);
+      if (minCfCmp < 0) {
+        Key hint = new Key(k.getRow(), cso.minCf);
+        return cso.minInclusive ? hint : hint.followingKey(PartialKey.ROW_COLFAM);
+      }
+    }
+    if (cso.minCq.getLength() > 0) {
+      int minCqCmp = k.compareColumnQualifier(cso.minCq);
+      if (minCqCmp < 0) {
+        Key hint = new Key(k.getRow(), k.getColumnFamily(), cso.minCq);
+        return cso.minInclusive ? hint : hint.followingKey(PartialKey.ROW_COLFAM_COLQUAL);
+      }
+    }
+    // If we get here it means that we were asked to provide a hint for a key that we
+    // didn't return USE_HINT for.
+    throw new IllegalArgumentException("Don't know how to provide hint for key " + k);
+  }
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+    CfCqSliceSeekingFilter o = (CfCqSliceSeekingFilter) super.deepCopy(env);
+    o.cso = new CfCqSliceOpts(cso);
+    return o;
+  }
+
+  @Override
+  public IteratorOptions describeOptions() {
+    return new CfCqSliceOpts.Describer().describeOptions();
+  }
+
+  @Override
+  public boolean validateOptions(Map<String,String> options) {
+    return new CfCqSliceOpts.Describer().validateOptions(options);
+  }
+}
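The ordering of checks in `filter()` above can be exercised without any Accumulo types. The sketch below mirrors the same decision table on plain strings, assuming inclusive bounds for simplicity (the real method also handles the exclusive cases, and compares `Text` column families and qualifiers; all names here are illustrative):

```java
// Standalone mirror of the CfCqSliceSeekingFilter.filter decision table,
// using plain strings and inclusive bounds. Empty-string bounds mean "unbounded".
public class SliceDecision {
  public enum Action { SKIP_TO_HINT, SKIP_NEXT_ROW, SKIP_NEXT_CF, INCLUDE_NEXT, INCLUDE_NEXT_CF }

  public static Action decide(String cf, String cq, String minCf, String maxCf, String minCq, String maxCq) {
    if (!minCf.isEmpty() && cf.compareTo(minCf) < 0) {
      return Action.SKIP_TO_HINT;        // before the CF slice: seek forward to minCf
    }
    if (!maxCf.isEmpty() && cf.compareTo(maxCf) > 0) {
      return Action.SKIP_NEXT_ROW;       // past the CF slice: nothing left in this row
    }
    if (!minCq.isEmpty() && cq.compareTo(minCq) < 0) {
      return Action.SKIP_TO_HINT;        // before the CQ slice within this CF
    }
    if (!maxCq.isEmpty()) {
      int c = cq.compareTo(maxCq);
      if (c > 0) {
        return Action.SKIP_NEXT_CF;      // past the CQ slice: try the next CF
      }
      if (c == 0) {
        return Action.INCLUDE_NEXT_CF;   // last CQ in the slice: include, then skip ahead
      }
    }
    return Action.INCLUDE_NEXT;          // inside both slices
  }
}
```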
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/ColumnAgeOffFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/ColumnAgeOffFilter.java
index c3da5c1..8093e92 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/ColumnAgeOffFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/ColumnAgeOffFilter.java
@@ -43,6 +43,9 @@
       for (Entry<String,String> entry : objectStrings.entrySet()) {
         String column = entry.getKey();
         String ttl = entry.getValue();
+        // skip the negate option, it will cause an exception to be thrown
+        if (column.equals(NEGATE) && (ttl.equalsIgnoreCase("true") || ttl.equalsIgnoreCase("false")))
+          continue;
         Long l = Long.parseLong(ttl);
 
         Pair<Text,Text> colPair = ColumnSet.decodeColumns(column);
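The guard added to `ColumnAgeOffFilter` above matters because every remaining option value is fed to `Long.parseLong`, and the boolean-valued negate option would make that throw `NumberFormatException`. A standalone sketch of the same skip-then-parse loop (the option map layout and class name here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Standalone sketch: parse per-column TTLs from an option map, skipping the
// boolean-valued "negate" entry that would otherwise break Long.parseLong.
public class TtlOptionParser {
  public static Map<String,Long> parseTtls(Map<String,String> options) {
    Map<String,Long> ttls = new LinkedHashMap<>();
    for (Map.Entry<String,String> e : options.entrySet()) {
      String column = e.getKey();
      String value = e.getValue();
      // skip the negate option; it holds "true"/"false", not a TTL
      if (column.equals("negate") && (value.equalsIgnoreCase("true") || value.equalsIgnoreCase("false"))) {
        continue;
      }
      ttls.put(column, Long.parseLong(value));
    }
    return ttls;
  }
}
```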
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/GrepIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/GrepIterator.java
index 043a729..30d27ae 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/GrepIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/GrepIterator.java
@@ -36,46 +36,35 @@
 public class GrepIterator extends Filter {
 
   private byte term[];
+  private int right[] = new int[256];
 
   @Override
   public boolean accept(Key k, Value v) {
     return match(v.get()) || match(k.getRowData()) || match(k.getColumnFamilyData()) || match(k.getColumnQualifierData());
   }
 
-  private boolean match(ByteSequence bs) {
-    return indexOf(bs.getBackingArray(), bs.offset(), bs.length(), term) >= 0;
+  protected boolean match(ByteSequence bs) {
+    return indexOf(bs.getBackingArray(), bs.offset(), bs.length()) >= 0;
   }
 
-  private boolean match(byte[] ba) {
-    return indexOf(ba, 0, ba.length, term) >= 0;
+  protected boolean match(byte[] ba) {
+    return indexOf(ba, 0, ba.length) >= 0;
   }
 
-  // copied code below from java string and modified
-
-  private static int indexOf(byte[] source, int sourceOffset, int sourceCount, byte[] target) {
-    byte first = target[0];
-    int targetCount = target.length;
-    int max = sourceOffset + (sourceCount - targetCount);
-
-    for (int i = sourceOffset; i <= max; i++) {
-      /* Look for first character. */
-      if (source[i] != first) {
-        while (++i <= max && source[i] != first)
-          continue;
-      }
-
-      /* Found first character, now look at the rest of v2 */
-      if (i <= max) {
-        int j = i + 1;
-        int end = j + targetCount - 1;
-        for (int k = 1; j < end && source[j] == target[k]; j++, k++)
-          continue;
-
-        if (j == end) {
-          /* Found whole string. */
-          return i - sourceOffset;
+  protected int indexOf(byte[] value, int offset, int length) {
+    final int M = term.length;
+    final int N = offset + length;
+    int skip;
+    for (int i = offset; i <= N - M; i += skip) {
+      skip = 0;
+      for (int j = M - 1; j >= 0; j--) {
+        if (term[j] != value[i + j]) {
+          skip = Math.max(1, j - right[value[i + j] & 0xff]);
         }
       }
+      if (skip == 0) {
+        return i;
+      }
     }
     return -1;
   }
@@ -91,6 +80,12 @@
   public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
     super.init(source, options, env);
     term = options.get("term").getBytes(UTF_8);
+    for (int i = 0; i < right.length; i++) {
+      right[i] = -1;
+    }
+    for (int i = 0; i < term.length; i++) {
+      right[term[i] & 0xff] = i;
+    }
   }
 
   /**
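The rewritten `indexOf` above is the bad-character variant of Boyer-Moore: `right[]` records the rightmost position of each byte in the search term, so a mismatch can skip several alignments at once instead of advancing one byte at a time. A self-contained sketch of the same algorithm (outside the iterator, so the class name is illustrative; this version breaks out of the inner loop on the first mismatch, which the diff's inline variant does not need to do):

```java
import java.util.Arrays;

// Standalone Boyer-Moore (bad-character rule) substring search, mirroring the
// rewritten GrepIterator.indexOf. right[b] is the rightmost index of byte b in
// the term, or -1 if the byte does not occur in the term at all.
public class BadCharSearch {
  public static int indexOf(byte[] value, byte[] term) {
    int[] right = new int[256];
    Arrays.fill(right, -1);
    for (int i = 0; i < term.length; i++) {
      right[term[i] & 0xff] = i;
    }
    final int m = term.length;
    int skip;
    for (int i = 0; i <= value.length - m; i += skip) {
      skip = 0;
      for (int j = m - 1; j >= 0; j--) {
        if (term[j] != value[i + j]) {
          // shift so the mismatched byte lines up with its rightmost
          // occurrence in the term (or past it entirely if absent)
          skip = Math.max(1, j - right[value[i + j] & 0xff]);
          break;
        }
      }
      if (skip == 0) {
        return i; // every byte of the term matched at offset i
      }
    }
    return -1;
  }
}
```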
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/LargeRowFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/LargeRowFilter.java
index 59a5dec..9fd40b6 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/LargeRowFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/LargeRowFilter.java
@@ -56,8 +56,8 @@
   private SortedKeyValueIterator<Key,Value> source;
 
   // a cache of keys
-  private ArrayList<Key> keys = new ArrayList<Key>();
-  private ArrayList<Value> values = new ArrayList<Value>();
+  private ArrayList<Key> keys = new ArrayList<>();
+  private ArrayList<Value> values = new ArrayList<>();
 
   private int currentPosition;
 
@@ -195,11 +195,11 @@
   public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
 
     if (inclusive && !columnFamilies.contains(EMPTY)) {
-      columnFamilies = new HashSet<ByteSequence>(columnFamilies);
+      columnFamilies = new HashSet<>(columnFamilies);
       columnFamilies.add(EMPTY);
       dropEmptyColFams = true;
     } else if (!inclusive && columnFamilies.contains(EMPTY)) {
-      columnFamilies = new HashSet<ByteSequence>(columnFamilies);
+      columnFamilies = new HashSet<>(columnFamilies);
       columnFamilies.remove(EMPTY);
       dropEmptyColFams = true;
     } else {
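The `seek()` pattern in `LargeRowFilter` above is worth noting: the caller's `columnFamilies` collection may be unmodifiable, so the iterator copies it before adding or removing the empty column family it needs to see, and records `dropEmptyColFams` so those entries can be filtered back out. A minimal standalone sketch of the copy-then-adjust step (names illustrative; strings stand in for `ByteSequence`):

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Standalone sketch of adjusting a caller-supplied (possibly unmodifiable)
// column-family set before delegating a seek: copy first, then add or remove
// the sentinel element so the caller's collection is never mutated.
public class SeekFamilies {
  public static Collection<String> ensurePresent(Collection<String> families, String sentinel, boolean inclusive) {
    if (inclusive && !families.contains(sentinel)) {
      Set<String> copy = new HashSet<>(families);
      copy.add(sentinel);               // inclusive set must also fetch the sentinel
      return copy;
    } else if (!inclusive && families.contains(sentinel)) {
      Set<String> copy = new HashSet<>(families);
      copy.remove(sentinel);            // exclusive set must not exclude the sentinel
      return copy;
    }
    return families;                    // already fine: no copy needed
  }
}
```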
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowDeletingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowDeletingIterator.java
index 60870d8..3f704db 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowDeletingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowDeletingIterator.java
@@ -144,11 +144,11 @@
   public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
 
     if (inclusive && !columnFamilies.contains(EMPTY)) {
-      columnFamilies = new HashSet<ByteSequence>(columnFamilies);
+      columnFamilies = new HashSet<>(columnFamilies);
       columnFamilies.add(EMPTY);
       dropEmptyColFams = true;
     } else if (!inclusive && columnFamilies.contains(EMPTY)) {
-      columnFamilies = new HashSet<ByteSequence>(columnFamilies);
+      columnFamilies = new HashSet<>(columnFamilies);
       columnFamilies.remove(EMPTY);
       dropEmptyColFams = true;
     } else {
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java
index 320301d..e0fd64e 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/RowEncodingIterator.java
@@ -91,8 +91,8 @@
     return newInstance;
   }
 
-  List<Key> keys = new ArrayList<Key>();
-  List<Value> values = new ArrayList<Value>();
+  List<Key> keys = new ArrayList<>();
+  List<Value> values = new ArrayList<>();
 
   private void prepKeys() throws IOException {
     long kvBufSize = 0;
@@ -163,7 +163,7 @@
   public IteratorOptions describeOptions() {
     String desc = "This iterator encapsulates an entire row of Key/Value pairs into a single Key/Value pair.";
     String bufferDesc = "Maximum buffer size (in accumulo memory spec) to use for buffering keys before throwing a BufferOverflowException.";
-    HashMap<String,String> namedOptions = new HashMap<String,String>();
+    HashMap<String,String> namedOptions = new HashMap<>();
     namedOptions.put(MAX_BUFFER_SIZE_OPT, bufferDesc);
     return new IteratorOptions(getClass().getSimpleName(), desc, namedOptions, null);
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/SeekingFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/SeekingFilter.java
new file mode 100644
index 0000000..bdc9b14
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/SeekingFilter.java
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators.user;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.PartialKey;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.EnumMap;
+import java.util.Map;
+
+/**
+ * Base class for filters that can skip over key-value pairs which do not match their filter predicate. In addition to returning true/false to accept or reject
+ * a kv pair, subclasses can return an extra field which indicates how far the source iterator should be advanced.
+ *
+ * Note that the behaviour of the negate option is different from the Filter class. If a KV pair fails the subclass' filter predicate and negate is true, then
+ * the KV pair will pass the filter. However, if the subclass advances the source past a group of KV pairs, all of those pairs will be implicitly rejected and
+ * negate will have no effect on them.
+ *
+ * @see org.apache.accumulo.core.iterators.Filter
+ */
+public abstract class SeekingFilter extends WrappingIterator {
+  private static final Logger log = LoggerFactory.getLogger(SeekingFilter.class);
+
+  protected static final String NEGATE = "negate";
+
+  public enum AdvanceResult {
+    NEXT, NEXT_CQ, NEXT_CF, NEXT_ROW, USE_HINT
+  }
+
+  public static class FilterResult {
+    private static final EnumMap<AdvanceResult,FilterResult> PASSES = new EnumMap<>(AdvanceResult.class);
+    private static final EnumMap<AdvanceResult,FilterResult> FAILS = new EnumMap<>(AdvanceResult.class);
+    static {
+      for (AdvanceResult ar : AdvanceResult.values()) {
+        PASSES.put(ar, new FilterResult(true, ar));
+        FAILS.put(ar, new FilterResult(false, ar));
+      }
+    }
+
+    final boolean accept;
+    final AdvanceResult advance;
+
+    public FilterResult(boolean accept, AdvanceResult advance) {
+      this.accept = accept;
+      this.advance = advance;
+    }
+
+    public static FilterResult of(boolean accept, AdvanceResult advance) {
+      return accept ? PASSES.get(advance) : FAILS.get(advance);
+    }
+
+    public String toString() {
+      return "Acc: " + accept + " Adv: " + advance;
+    }
+  }
+
+  /**
+   * Subclasses must provide an implementation which examines the given key and value and determines (1) whether to accept the KV pair and (2) how far to
+   * advance the source iterator past the key.
+   *
+   * @param k
+   *          a key
+   * @param v
+   *          a value
+   * @return a FilterResult indicating whether to pass or block the key, and how far the source iterator should be advanced.
+   */
+  public abstract FilterResult filter(Key k, Value v);
+
+  /**
+   * Whenever the subclass returns AdvanceResult.USE_HINT from its filter predicate, this method will be called to see how far to advance the source iterator.
+   * The return value must be a key which is greater than (sorts after) the input key. If the subclass never returns USE_HINT, this method will never be called
+   * and may safely return null.
+   *
+   * @param k
+   *          a key
+   * @param v
+   *          a value
+   * @return a key strictly greater than k for the source iterator to seek to
+   */
+  public abstract Key getNextKeyHint(Key k, Value v);
+
+  private Collection<ByteSequence> columnFamilies;
+  private boolean inclusive;
+  private Range seekRange;
+  private boolean negate;
+
+  private AdvanceResult advance;
+
+  private boolean advancedPastSeek = false;
+
+  @Override
+  public void next() throws IOException {
+    advanceSource(getSource(), advance);
+    findTop();
+  }
+
+  @Override
+  public boolean hasTop() {
+    return !advancedPastSeek && super.hasTop();
+  }
+
+  @Override
+  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
+    super.seek(range, columnFamilies, inclusive);
+    advance = null;
+    this.columnFamilies = columnFamilies;
+    this.inclusive = inclusive;
+    seekRange = range;
+    advancedPastSeek = false;
+    findTop();
+  }
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+    super.init(source, options, env);
+    negate = Boolean.parseBoolean(options.get(NEGATE));
+  }
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+    SeekingFilter newInstance;
+    try {
+      newInstance = this.getClass().newInstance();
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+    newInstance.setSource(getSource().deepCopy(env));
+    newInstance.negate = negate;
+    return newInstance;
+  }
+
+  protected void findTop() throws IOException {
+    SortedKeyValueIterator<Key,Value> src = getSource();
+    // advance could be null if we've just been seeked
+    advance = null;
+    while (src.hasTop() && !advancedPastSeek) {
+      if (src.getTopKey().isDeleted()) {
+        // as in o.a.a.core.iterators.Filter, deleted keys always pass through the filter.
+        advance = AdvanceResult.NEXT;
+        return;
+      }
+      FilterResult f = filter(src.getTopKey(), src.getTopValue());
+      if (log.isTraceEnabled()) {
+        log.trace("Filtered: {} result == {} hint == {}", src.getTopKey(), f,
+            f.advance == AdvanceResult.USE_HINT ? getNextKeyHint(src.getTopKey(), src.getTopValue()) : " (none)");
+      }
+      if (f.accept != negate) {
+        // advance will be processed when next is called
+        advance = f.advance;
+        break;
+      } else {
+        advanceSource(src, f.advance);
+      }
+    }
+  }
+
+  private void advanceSource(SortedKeyValueIterator<Key,Value> src, AdvanceResult adv) throws IOException {
+    Key topKey = src.getTopKey();
+    Range advRange = null;
+    switch (adv) {
+      case NEXT:
+        src.next();
+        return;
+      case NEXT_CQ:
+        advRange = new Range(topKey.followingKey(PartialKey.ROW_COLFAM_COLQUAL), null);
+        break;
+      case NEXT_CF:
+        advRange = new Range(topKey.followingKey(PartialKey.ROW_COLFAM), null);
+        break;
+      case NEXT_ROW:
+        advRange = new Range(topKey.followingKey(PartialKey.ROW), null);
+        break;
+      case USE_HINT:
+        Value topVal = src.getTopValue();
+        Key hintKey = getNextKeyHint(topKey, topVal);
+        if (hintKey != null && hintKey.compareTo(topKey) > 0) {
+          advRange = new Range(hintKey, null);
+        } else {
+          String msg = "Filter returned USE_HINT for " + topKey + " but invalid hint: " + hintKey;
+          throw new IOException(msg);
+        }
+        break;
+    }
+    if (advRange == null) {
+      // Should never get here. Just a safeguard in case somebody adds a new type of AdvanceResult and forgets to handle it here.
+      throw new IOException("Unable to determine range to advance to for AdvanceResult " + adv);
+    }
+    advRange = advRange.clip(seekRange, true);
+    if (advRange == null) {
+      // the advanced range is outside the seek range. the source is exhausted.
+      advancedPastSeek = true;
+    } else {
+      src.seek(advRange, columnFamilies, inclusive);
+    }
+  }
+}
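`advanceSource` above turns each `AdvanceResult` into a fresh seek `Range` starting at `key.followingKey(...)`. For byte-sequence keys, "following" a truncated key amounts to appending a single zero byte: the smallest sequence that sorts strictly after the prefix itself but before any longer key sharing it. A standalone sketch of that ordering property (assuming that 0x00-append behavior, outside any Accumulo types):

```java
import java.util.Arrays;

// Standalone sketch: compute the smallest byte sequence strictly greater than
// a given prefix by appending a 0x00 byte, mirroring (under that assumption)
// what followingKey does to the truncated portion of the key when
// SeekingFilter builds its advance Range.
public class FollowingPrefix {
  public static byte[] following(byte[] prefix) {
    // copyOf pads with zero bytes, so the appended byte is 0x00
    return Arrays.copyOf(prefix, prefix.length + 1);
  }

  // unsigned lexicographic comparison, the order byte-array keys sort in
  public static int compare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }
}
```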
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/SummingArrayCombiner.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/SummingArrayCombiner.java
index 260fa36..2e59b2c 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/SummingArrayCombiner.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/SummingArrayCombiner.java
@@ -69,7 +69,7 @@
 
   @Override
   public List<Long> typedReduce(Key key, Iterator<List<Long>> iter) {
-    List<Long> sum = new ArrayList<Long>();
+    List<Long> sum = new ArrayList<>();
     while (iter.hasNext()) {
       sum = arrayAdd(sum, iter.next());
     }
@@ -142,7 +142,7 @@
     return true;
   }
 
-  public abstract static class DOSArrayEncoder<V> extends AbstractLexicoder<List<V>> implements Encoder<List<V>> {
+  public abstract static class DOSArrayEncoder<V> extends AbstractLexicoder<List<V>> {
     public abstract void write(DataOutputStream dos, V v) throws IOException;
 
     public abstract V read(DataInputStream dis) throws IOException;
@@ -173,7 +173,7 @@
       DataInputStream dis = new DataInputStream(new ByteArrayInputStream(b, offset, origLen));
       try {
         int len = WritableUtils.readVInt(dis);
-        List<V> vl = new ArrayList<V>(len);
+        List<V> vl = new ArrayList<>(len);
         for (int i = 0; i < len; i++) {
           vl.add(read(dis));
         }
@@ -208,7 +208,7 @@
     }
   }
 
-  public static class StringArrayEncoder extends AbstractEncoder<List<Long>> implements Encoder<List<Long>> {
+  public static class StringArrayEncoder extends AbstractEncoder<List<Long>> {
     @Override
     public byte[] encode(List<Long> la) {
       if (la.size() == 0)
@@ -230,7 +230,7 @@
     @Override
     protected List<Long> decodeUnchecked(byte[] b, int offset, int len) {
       String[] longstrs = new String(b, offset, len, UTF_8).split(",");
-      List<Long> la = new ArrayList<Long>(longstrs.length);
+      List<Long> la = new ArrayList<>(longstrs.length);
       for (String s : longstrs) {
         if (s.length() == 0)
           la.add(0l);
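The `typedReduce` hunk above folds an iterator of `List<Long>` values through `arrayAdd` (not shown in this diff). A standalone sketch of that reduce step, assuming `arrayAdd` sums corresponding elements and treats the shorter list as zero-padded (names illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Standalone sketch of the typedReduce step: fold an iterator of Long lists
// into one list by element-wise addition, treating missing elements as zero.
public class ArraySum {
  public static List<Long> arrayAdd(List<Long> a, List<Long> b) {
    List<Long> longer = a.size() >= b.size() ? a : b;
    List<Long> shorter = longer == a ? b : a;
    List<Long> sum = new ArrayList<>(longer.size());
    for (int i = 0; i < longer.size(); i++) {
      // elements past the end of the shorter list count as zero
      sum.add(longer.get(i) + (i < shorter.size() ? shorter.get(i) : 0L));
    }
    return sum;
  }

  public static List<Long> reduce(Iterator<List<Long>> iter) {
    List<Long> sum = new ArrayList<>();
    while (iter.hasNext()) {
      sum = arrayAdd(sum, iter.next());
    }
    return sum;
  }
}
```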
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java
index 66de3d6..1c99f7a 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/TransformingIterator.java
@@ -86,7 +86,7 @@
 
   protected Logger log = LoggerFactory.getLogger(getClass());
 
-  protected ArrayList<Pair<Key,Value>> keys = new ArrayList<Pair<Key,Value>>();
+  protected ArrayList<Pair<Key,Value>> keys = new ArrayList<>();
   protected int keyPos = -1;
   protected boolean scanning;
   protected Range seekRange;
@@ -136,7 +136,7 @@
     String bufferDesc = "Maximum buffer size (in accumulo memory spec) to use for buffering keys before throwing a BufferOverflowException.  "
         + "Users should keep this limit in mind when deciding what to transform.  That is, if transforming the column family for example, then all "
         + "keys sharing the same row and column family must fit within this limit (along with their associated values)";
-    HashMap<String,String> namedOptions = new HashMap<String,String>();
+    HashMap<String,String> namedOptions = new HashMap<>();
     namedOptions.put(AUTH_OPT, authDesc);
     namedOptions.put(MAX_BUFFER_SIZE_OPT, bufferDesc);
     return new IteratorOptions(getClass().getSimpleName(), desc, namedOptions, null);
@@ -176,7 +176,7 @@
     copy.keyPos = keyPos;
     copy.keys.addAll(keys);
     copy.seekRange = (seekRange == null) ? null : new Range(seekRange);
-    copy.seekColumnFamilies = (seekColumnFamilies == null) ? null : new HashSet<ByteSequence>(seekColumnFamilies);
+    copy.seekColumnFamilies = (seekColumnFamilies == null) ? null : new HashSet<>(seekColumnFamilies);
     copy.seekColumnFamiliesInclusive = seekColumnFamiliesInclusive;
 
     copy.ve = ve;
@@ -334,7 +334,7 @@
 
           if (getSource().hasTop() && key == getSource().getTopKey())
             key = new Key(key);
-          keys.add(new Pair<Key,Value>(key, new Value(val)));
+          keys.add(new Pair<>(key, new Value(val)));
           appened += (key.getSize() + val.getSize() + 128);
         }
       }
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/VisibilityFilter.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/VisibilityFilter.java
index 6e55aec..f7007a1 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/VisibilityFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/VisibilityFilter.java
@@ -25,7 +25,6 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
-import org.apache.accumulo.core.iterators.OptionDescriber;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
@@ -37,7 +36,7 @@
 /**
  *
  */
-public class VisibilityFilter extends org.apache.accumulo.core.iterators.system.VisibilityFilter implements OptionDescriber {
+public class VisibilityFilter extends org.apache.accumulo.core.iterators.system.VisibilityFilter {
 
   private static final String AUTHS = "auths";
   private static final String FILTER_INVALID_ONLY = "filterInvalid";
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
index 982d0df..0a3f3e5 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
@@ -73,7 +73,7 @@
    *           Signals that an I/O exception has occurred.
    */
   public static final SortedMap<Key,Value> decodeColumnFamily(Key rowKey, Value rowValue) throws IOException {
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     ByteArrayInputStream in = new ByteArrayInputStream(rowValue.get());
     DataInputStream din = new DataInputStream(in);
     int numKeys = din.readInt();
@@ -148,8 +148,8 @@
     return new Value(out.toByteArray());
   }
 
-  List<Key> keys = new ArrayList<Key>();
-  List<Value> values = new ArrayList<Value>();
+  List<Key> keys = new ArrayList<>();
+  List<Value> values = new ArrayList<>();
 
   private void prepKeys() throws IOException {
     if (topKey != null)
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeRowIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeRowIterator.java
index 665cbfe..17bf315 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeRowIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeRowIterator.java
@@ -84,7 +84,7 @@
 
   // decode a bunch of key value pairs that have been encoded into a single value
   public static final SortedMap<Key,Value> decodeRow(Key rowKey, Value rowValue) throws IOException {
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     ByteArrayInputStream in = new ByteArrayInputStream(rowValue.get());
     DataInputStream din = new DataInputStream(in);
     int numKeys = din.readInt();
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/BulkImportState.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/BulkImportState.java
new file mode 100644
index 0000000..a4f2efe
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/BulkImportState.java
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.core.master.thrift;
+
+
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.thrift.TEnum;
+
+@SuppressWarnings({"unused"}) public enum BulkImportState implements org.apache.thrift.TEnum {
+  INITIAL(0),
+  MOVING(1),
+  PROCESSING(2),
+  ASSIGNING(3),
+  LOADING(4),
+  COPY_FILES(5),
+  CLEANUP(6);
+
+  private final int value;
+
+  private BulkImportState(int value) {
+    this.value = value;
+  }
+
+  /**
+   * Get the integer value of this enum value, as defined in the Thrift IDL.
+   */
+  public int getValue() {
+    return value;
+  }
+
+  /**
+   * Find the enum type by its integer value, as defined in the Thrift IDL.
+   * @return null if the value is not found.
+   */
+  public static BulkImportState findByValue(int value) { 
+    switch (value) {
+      case 0:
+        return INITIAL;
+      case 1:
+        return MOVING;
+      case 2:
+        return PROCESSING;
+      case 3:
+        return ASSIGNING;
+      case 4:
+        return LOADING;
+      case 5:
+        return COPY_FILES;
+      case 6:
+        return CLEANUP;
+      default:
+        return null;
+    }
+  }
+}
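The new file above follows the standard pattern Thrift emits for IDL enums: each constant carries a stable integer wire value, and `findByValue` maps an integer back to a constant, returning `null` for unknown values so a reader built against an older schema can skip values it does not recognize. A minimal standalone sketch of the same pattern (the `Phase` enum is hypothetical, not part of the Accumulo API):

```java
// Sketch of the Thrift-generated enum pattern: one stable wire value per
// constant, plus a reverse lookup that tolerates unknown values.
public enum Phase {
  INITIAL(0), MOVING(1), CLEANUP(2);

  private final int value;

  Phase(int value) {
    this.value = value;
  }

  public int getValue() {
    return value;
  }

  // Returns null (rather than throwing) so callers can skip enum values
  // introduced by a newer version of the IDL.
  public static Phase findByValue(int value) {
    for (Phase p : Phase.values()) {
      if (p.value == value)
        return p;
    }
    return null;
  }
}
```

The generated code uses an explicit `switch` instead of the loop above, but the observable behavior is the same.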
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/BulkImportStatus.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/BulkImportStatus.java
new file mode 100644
index 0000000..95a588e
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/BulkImportStatus.java
@@ -0,0 +1,638 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.core.master.thrift;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class BulkImportStatus implements org.apache.thrift.TBase<BulkImportStatus, BulkImportStatus._Fields>, java.io.Serializable, Cloneable, Comparable<BulkImportStatus> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("BulkImportStatus");
+
+  private static final org.apache.thrift.protocol.TField START_TIME_FIELD_DESC = new org.apache.thrift.protocol.TField("startTime", org.apache.thrift.protocol.TType.I64, (short)1);
+  private static final org.apache.thrift.protocol.TField FILENAME_FIELD_DESC = new org.apache.thrift.protocol.TField("filename", org.apache.thrift.protocol.TType.STRING, (short)2);
+  private static final org.apache.thrift.protocol.TField STATE_FIELD_DESC = new org.apache.thrift.protocol.TField("state", org.apache.thrift.protocol.TType.I32, (short)3);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+    schemes.put(StandardScheme.class, new BulkImportStatusStandardSchemeFactory());
+    schemes.put(TupleScheme.class, new BulkImportStatusTupleSchemeFactory());
+  }
+
+  public long startTime; // required
+  public String filename; // required
+  /**
+   * 
+   * @see BulkImportState
+   */
+  public BulkImportState state; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    START_TIME((short)1, "startTime"),
+    FILENAME((short)2, "filename"),
+    /**
+     * 
+     * @see BulkImportState
+     */
+    STATE((short)3, "state");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      switch(fieldId) {
+        case 1: // START_TIME
+          return START_TIME;
+        case 2: // FILENAME
+          return FILENAME;
+        case 3: // STATE
+          return STATE;
+        default:
+          return null;
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  private static final int __STARTTIME_ISSET_ID = 0;
+  private byte __isset_bitfield = 0;
+  public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+  static {
+    Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+    tmpMap.put(_Fields.START_TIME, new org.apache.thrift.meta_data.FieldMetaData("startTime", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
+    tmpMap.put(_Fields.FILENAME, new org.apache.thrift.meta_data.FieldMetaData("filename", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+    tmpMap.put(_Fields.STATE, new org.apache.thrift.meta_data.FieldMetaData("state", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, BulkImportState.class)));
+    metaDataMap = Collections.unmodifiableMap(tmpMap);
+    org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(BulkImportStatus.class, metaDataMap);
+  }
+
+  public BulkImportStatus() {
+  }
+
+  public BulkImportStatus(
+    long startTime,
+    String filename,
+    BulkImportState state)
+  {
+    this();
+    this.startTime = startTime;
+    setStartTimeIsSet(true);
+    this.filename = filename;
+    this.state = state;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public BulkImportStatus(BulkImportStatus other) {
+    __isset_bitfield = other.__isset_bitfield;
+    this.startTime = other.startTime;
+    if (other.isSetFilename()) {
+      this.filename = other.filename;
+    }
+    if (other.isSetState()) {
+      this.state = other.state;
+    }
+  }
+
+  public BulkImportStatus deepCopy() {
+    return new BulkImportStatus(this);
+  }
+
+  @Override
+  public void clear() {
+    setStartTimeIsSet(false);
+    this.startTime = 0;
+    this.filename = null;
+    this.state = null;
+  }
+
+  public long getStartTime() {
+    return this.startTime;
+  }
+
+  public BulkImportStatus setStartTime(long startTime) {
+    this.startTime = startTime;
+    setStartTimeIsSet(true);
+    return this;
+  }
+
+  public void unsetStartTime() {
+    __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __STARTTIME_ISSET_ID);
+  }
+
+  /** Returns true if field startTime is set (has been assigned a value) and false otherwise */
+  public boolean isSetStartTime() {
+    return EncodingUtils.testBit(__isset_bitfield, __STARTTIME_ISSET_ID);
+  }
+
+  public void setStartTimeIsSet(boolean value) {
+    __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __STARTTIME_ISSET_ID, value);
+  }
+
+  public String getFilename() {
+    return this.filename;
+  }
+
+  public BulkImportStatus setFilename(String filename) {
+    this.filename = filename;
+    return this;
+  }
+
+  public void unsetFilename() {
+    this.filename = null;
+  }
+
+  /** Returns true if field filename is set (has been assigned a value) and false otherwise */
+  public boolean isSetFilename() {
+    return this.filename != null;
+  }
+
+  public void setFilenameIsSet(boolean value) {
+    if (!value) {
+      this.filename = null;
+    }
+  }
+
+  /**
+   * 
+   * @see BulkImportState
+   */
+  public BulkImportState getState() {
+    return this.state;
+  }
+
+  /**
+   * 
+   * @see BulkImportState
+   */
+  public BulkImportStatus setState(BulkImportState state) {
+    this.state = state;
+    return this;
+  }
+
+  public void unsetState() {
+    this.state = null;
+  }
+
+  /** Returns true if field state is set (has been assigned a value) and false otherwise */
+  public boolean isSetState() {
+    return this.state != null;
+  }
+
+  public void setStateIsSet(boolean value) {
+    if (!value) {
+      this.state = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case START_TIME:
+      if (value == null) {
+        unsetStartTime();
+      } else {
+        setStartTime((Long)value);
+      }
+      break;
+
+    case FILENAME:
+      if (value == null) {
+        unsetFilename();
+      } else {
+        setFilename((String)value);
+      }
+      break;
+
+    case STATE:
+      if (value == null) {
+        unsetState();
+      } else {
+        setState((BulkImportState)value);
+      }
+      break;
+
+    }
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case START_TIME:
+      return getStartTime();
+
+    case FILENAME:
+      return getFilename();
+
+    case STATE:
+      return getState();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    if (field == null) {
+      throw new IllegalArgumentException();
+    }
+
+    switch (field) {
+    case START_TIME:
+      return isSetStartTime();
+    case FILENAME:
+      return isSetFilename();
+    case STATE:
+      return isSetState();
+    }
+    throw new IllegalStateException();
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof BulkImportStatus)
+      return this.equals((BulkImportStatus)that);
+    return false;
+  }
+
+  public boolean equals(BulkImportStatus that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_startTime = true;
+    boolean that_present_startTime = true;
+    if (this_present_startTime || that_present_startTime) {
+      if (!(this_present_startTime && that_present_startTime))
+        return false;
+      if (this.startTime != that.startTime)
+        return false;
+    }
+
+    boolean this_present_filename = true && this.isSetFilename();
+    boolean that_present_filename = true && that.isSetFilename();
+    if (this_present_filename || that_present_filename) {
+      if (!(this_present_filename && that_present_filename))
+        return false;
+      if (!this.filename.equals(that.filename))
+        return false;
+    }
+
+    boolean this_present_state = true && this.isSetState();
+    boolean that_present_state = true && that.isSetState();
+    if (this_present_state || that_present_state) {
+      if (!(this_present_state && that_present_state))
+        return false;
+      if (!this.state.equals(that.state))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_startTime = true;
+    list.add(present_startTime);
+    if (present_startTime)
+      list.add(startTime);
+
+    boolean present_filename = true && (isSetFilename());
+    list.add(present_filename);
+    if (present_filename)
+      list.add(filename);
+
+    boolean present_state = true && (isSetState());
+    list.add(present_state);
+    if (present_state)
+      list.add(state.getValue());
+
+    return list.hashCode();
+  }
+
+  @Override
+  public int compareTo(BulkImportStatus other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+
+    lastComparison = Boolean.valueOf(isSetStartTime()).compareTo(other.isSetStartTime());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetStartTime()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.startTime, other.startTime);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    lastComparison = Boolean.valueOf(isSetFilename()).compareTo(other.isSetFilename());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetFilename()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.filename, other.filename);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    lastComparison = Boolean.valueOf(isSetState()).compareTo(other.isSetState());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetState()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.state, other.state);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    return 0;
+  }
+
+  public _Fields fieldForId(int fieldId) {
+    return _Fields.findByThriftId(fieldId);
+  }
+
+  public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+    schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+  }
+
+  public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+    schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("BulkImportStatus(");
+    boolean first = true;
+
+    sb.append("startTime:");
+    sb.append(this.startTime);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("filename:");
+    if (this.filename == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.filename);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("state:");
+    if (this.state == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.state);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws org.apache.thrift.TException {
+    // check for required fields
+    // check for sub-struct validity
+  }
+
+  private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+    try {
+      write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+    try {
+      // it doesn't seem like you should have to do this, but java serialization is wacky, and doesn't call the default constructor.
+      __isset_bitfield = 0;
+      read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private static class BulkImportStatusStandardSchemeFactory implements SchemeFactory {
+    public BulkImportStatusStandardScheme getScheme() {
+      return new BulkImportStatusStandardScheme();
+    }
+  }
+
+  private static class BulkImportStatusStandardScheme extends StandardScheme<BulkImportStatus> {
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot, BulkImportStatus struct) throws org.apache.thrift.TException {
+      org.apache.thrift.protocol.TField schemeField;
+      iprot.readStructBegin();
+      while (true)
+      {
+        schemeField = iprot.readFieldBegin();
+        if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+          break;
+        }
+        switch (schemeField.id) {
+          case 1: // START_TIME
+            if (schemeField.type == org.apache.thrift.protocol.TType.I64) {
+              struct.startTime = iprot.readI64();
+              struct.setStartTimeIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          case 2: // FILENAME
+            if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+              struct.filename = iprot.readString();
+              struct.setFilenameIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          case 3: // STATE
+            if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+              struct.state = org.apache.accumulo.core.master.thrift.BulkImportState.findByValue(iprot.readI32());
+              struct.setStateIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          default:
+            org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+        }
+        iprot.readFieldEnd();
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      struct.validate();
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot, BulkImportStatus struct) throws org.apache.thrift.TException {
+      struct.validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      oprot.writeFieldBegin(START_TIME_FIELD_DESC);
+      oprot.writeI64(struct.startTime);
+      oprot.writeFieldEnd();
+      if (struct.filename != null) {
+        oprot.writeFieldBegin(FILENAME_FIELD_DESC);
+        oprot.writeString(struct.filename);
+        oprot.writeFieldEnd();
+      }
+      if (struct.state != null) {
+        oprot.writeFieldBegin(STATE_FIELD_DESC);
+        oprot.writeI32(struct.state.getValue());
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+  }
+
+  private static class BulkImportStatusTupleSchemeFactory implements SchemeFactory {
+    public BulkImportStatusTupleScheme getScheme() {
+      return new BulkImportStatusTupleScheme();
+    }
+  }
+
+  private static class BulkImportStatusTupleScheme extends TupleScheme<BulkImportStatus> {
+
+    @Override
+    public void write(org.apache.thrift.protocol.TProtocol prot, BulkImportStatus struct) throws org.apache.thrift.TException {
+      TTupleProtocol oprot = (TTupleProtocol) prot;
+      BitSet optionals = new BitSet();
+      if (struct.isSetStartTime()) {
+        optionals.set(0);
+      }
+      if (struct.isSetFilename()) {
+        optionals.set(1);
+      }
+      if (struct.isSetState()) {
+        optionals.set(2);
+      }
+      oprot.writeBitSet(optionals, 3);
+      if (struct.isSetStartTime()) {
+        oprot.writeI64(struct.startTime);
+      }
+      if (struct.isSetFilename()) {
+        oprot.writeString(struct.filename);
+      }
+      if (struct.isSetState()) {
+        oprot.writeI32(struct.state.getValue());
+      }
+    }
+
+    @Override
+    public void read(org.apache.thrift.protocol.TProtocol prot, BulkImportStatus struct) throws org.apache.thrift.TException {
+      TTupleProtocol iprot = (TTupleProtocol) prot;
+      BitSet incoming = iprot.readBitSet(3);
+      if (incoming.get(0)) {
+        struct.startTime = iprot.readI64();
+        struct.setStartTimeIsSet(true);
+      }
+      if (incoming.get(1)) {
+        struct.filename = iprot.readString();
+        struct.setFilenameIsSet(true);
+      }
+      if (incoming.get(2)) {
+        struct.state = org.apache.accumulo.core.master.thrift.BulkImportState.findByValue(iprot.readI32());
+        struct.setStateIsSet(true);
+      }
+    }
+  }
+
+}
+
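The `TupleScheme` at the end of the generated file above illustrates Thrift's compact tuple encoding: a leading bitset records which fields are present, then only those fields' payloads are written, in field-id order, and the reader consumes the same bitset to decide what to read back. A rough standalone sketch of the idea using `java.util.BitSet` (class and method names are illustrative, not the Thrift API):

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

// Sketch of tuple-scheme encoding: one presence bit per field, followed by
// only the present fields' payloads, in a fixed field order.
public class TupleSketch {
  public static List<Object> encode(Long startTime, String filename) {
    BitSet optionals = new BitSet();
    if (startTime != null) optionals.set(0);
    if (filename != null) optionals.set(1);

    List<Object> out = new ArrayList<>();
    out.add(optionals);                        // presence header
    if (startTime != null) out.add(startTime); // payloads, field-id order
    if (filename != null) out.add(filename);
    return out;
  }
}
```

Skipping absent fields entirely is what makes the tuple scheme smaller on the wire than the standard scheme, at the cost of requiring both sides to agree on the field order.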
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/Compacting.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/Compacting.java
index 5b1b9cc..242eae5 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/Compacting.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/Compacting.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class Compacting implements org.apache.thrift.TBase<Compacting, Compacting._Fields>, java.io.Serializable, Cloneable, Comparable<Compacting> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class Compacting implements org.apache.thrift.TBase<Compacting, Compacting._Fields>, java.io.Serializable, Cloneable, Comparable<Compacting> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("Compacting");
 
   private static final org.apache.thrift.protocol.TField RUNNING_FIELD_DESC = new org.apache.thrift.protocol.TField("running", org.apache.thrift.protocol.TType.I32, (short)1);
@@ -244,10 +247,10 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case RUNNING:
-      return Integer.valueOf(getRunning());
+      return getRunning();
 
     case QUEUED:
-      return Integer.valueOf(getQueued());
+      return getQueued();
 
     }
     throw new IllegalStateException();
@@ -304,7 +307,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_running = true;
+    list.add(present_running);
+    if (present_running)
+      list.add(running);
+
+    boolean present_queued = true;
+    list.add(present_queued);
+    if (present_queued)
+      list.add(queued);
+
+    return list.hashCode();
   }
 
   @Override
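The `hashCode` change in the hunk above is more than cosmetic: the old `return 0` never violates the equals/hashCode contract, but it collapses every instance into a single hash bucket, degrading `HashMap` and `HashSet` lookups from O(1) to O(n). Thrift 0.9.3 instead builds a list of the significant fields and delegates to `List.hashCode()`. A minimal sketch of the same idea (a hypothetical two-field struct, not the Accumulo class):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: derive hashCode from the same fields equals() compares,
// mirroring the list-based pattern Thrift 0.9.3 generates.
public class Stats {
  private final int running;
  private final int queued;

  public Stats(int running, int queued) {
    this.running = running;
    this.queued = queued;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof Stats)) return false;
    Stats that = (Stats) o;
    return running == that.running && queued == that.queued;
  }

  @Override
  public int hashCode() {
    List<Object> list = new ArrayList<>();
    list.add(running);
    list.add(queued);
    return list.hashCode();  // equal field lists => equal hash codes
  }
}
```

Because the list holds exactly the fields `equals` compares, equal objects are guaranteed to produce equal hash codes, which is the contract `return 0` satisfied only trivially.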
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/DeadServer.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/DeadServer.java
index 1cbe5f7..3af0518 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/DeadServer.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/DeadServer.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class DeadServer implements org.apache.thrift.TBase<DeadServer, DeadServer._Fields>, java.io.Serializable, Cloneable, Comparable<DeadServer> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class DeadServer implements org.apache.thrift.TBase<DeadServer, DeadServer._Fields>, java.io.Serializable, Cloneable, Comparable<DeadServer> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("DeadServer");
 
   private static final org.apache.thrift.protocol.TField SERVER_FIELD_DESC = new org.apache.thrift.protocol.TField("server", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -292,7 +295,7 @@
       return getServer();
 
     case LAST_STATUS:
-      return Long.valueOf(getLastStatus());
+      return getLastStatus();
 
     case STATUS:
       return getStatus();
@@ -363,7 +366,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_server = true && (isSetServer());
+    list.add(present_server);
+    if (present_server)
+      list.add(server);
+
+    boolean present_lastStatus = true;
+    list.add(present_lastStatus);
+    if (present_lastStatus)
+      list.add(lastStatus);
+
+    boolean present_status = true && (isSetStatus());
+    list.add(present_status);
+    if (present_status)
+      list.add(status);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java
index c726469..866d6f6 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/FateOperation.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/FateService.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/FateService.java
index 27a9d0d..7fe0974 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/FateService.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/FateService.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class FateService {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class FateService {
 
   public interface Iface {
 
@@ -1022,7 +1025,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -1428,7 +1443,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Long.valueOf(getSuccess());
+        return getSuccess();
 
       case SEC:
         return getSec();
@@ -1488,7 +1503,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -2162,7 +2189,7 @@
         return getCredentials();
 
       case OPID:
-        return Long.valueOf(getOpid());
+        return getOpid();
 
       case OP:
         return getOp();
@@ -2174,7 +2201,7 @@
         return getOptions();
 
       case AUTO_CLEAN:
-        return Boolean.valueOf(isAutoClean());
+        return isAutoClean();
 
       }
       throw new IllegalStateException();
@@ -2286,7 +2313,44 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_opid = true;
+      list.add(present_opid);
+      if (present_opid)
+        list.add(opid);
+
+      boolean present_op = true && (isSetOp());
+      list.add(present_op);
+      if (present_op)
+        list.add(op.getValue());
+
+      boolean present_arguments = true && (isSetArguments());
+      list.add(present_arguments);
+      if (present_arguments)
+        list.add(arguments);
+
+      boolean present_options = true && (isSetOptions());
+      list.add(present_options);
+      if (present_options)
+        list.add(options);
+
+      boolean present_autoClean = true;
+      list.add(present_autoClean);
+      if (present_autoClean)
+        list.add(autoClean);
+
+      return list.hashCode();
     }
 
     @Override
@@ -2419,7 +2483,7 @@
       if (this.arguments == null) {
         sb.append("null");
       } else {
-        sb.append(this.arguments);
+        org.apache.thrift.TBaseHelper.toString(this.arguments, sb);
       }
       first = false;
       if (!first) sb.append(", ");
@@ -2513,7 +2577,7 @@
               break;
             case 3: // OP
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.op = FateOperation.findByValue(iprot.readI32());
+                struct.op = org.apache.accumulo.core.master.thrift.FateOperation.findByValue(iprot.readI32());
                 struct.setOpIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -2522,13 +2586,13 @@
             case 4: // ARGUMENTS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list70 = iprot.readListBegin();
-                  struct.arguments = new ArrayList<ByteBuffer>(_list70.size);
-                  for (int _i71 = 0; _i71 < _list70.size; ++_i71)
+                  org.apache.thrift.protocol.TList _list86 = iprot.readListBegin();
+                  struct.arguments = new ArrayList<ByteBuffer>(_list86.size);
+                  ByteBuffer _elem87;
+                  for (int _i88 = 0; _i88 < _list86.size; ++_i88)
                   {
-                    ByteBuffer _elem72;
-                    _elem72 = iprot.readBinary();
-                    struct.arguments.add(_elem72);
+                    _elem87 = iprot.readBinary();
+                    struct.arguments.add(_elem87);
                   }
                   iprot.readListEnd();
                 }
@@ -2540,15 +2604,15 @@
             case 5: // OPTIONS
               if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
                 {
-                  org.apache.thrift.protocol.TMap _map73 = iprot.readMapBegin();
-                  struct.options = new HashMap<String,String>(2*_map73.size);
-                  for (int _i74 = 0; _i74 < _map73.size; ++_i74)
+                  org.apache.thrift.protocol.TMap _map89 = iprot.readMapBegin();
+                  struct.options = new HashMap<String,String>(2*_map89.size);
+                  String _key90;
+                  String _val91;
+                  for (int _i92 = 0; _i92 < _map89.size; ++_i92)
                   {
-                    String _key75;
-                    String _val76;
-                    _key75 = iprot.readString();
-                    _val76 = iprot.readString();
-                    struct.options.put(_key75, _val76);
+                    _key90 = iprot.readString();
+                    _val91 = iprot.readString();
+                    struct.options.put(_key90, _val91);
                   }
                   iprot.readMapEnd();
                 }
@@ -2597,9 +2661,9 @@
           oprot.writeFieldBegin(ARGUMENTS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.arguments.size()));
-            for (ByteBuffer _iter77 : struct.arguments)
+            for (ByteBuffer _iter93 : struct.arguments)
             {
-              oprot.writeBinary(_iter77);
+              oprot.writeBinary(_iter93);
             }
             oprot.writeListEnd();
           }
@@ -2609,10 +2673,10 @@
           oprot.writeFieldBegin(OPTIONS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, struct.options.size()));
-            for (Map.Entry<String, String> _iter78 : struct.options.entrySet())
+            for (Map.Entry<String, String> _iter94 : struct.options.entrySet())
             {
-              oprot.writeString(_iter78.getKey());
-              oprot.writeString(_iter78.getValue());
+              oprot.writeString(_iter94.getKey());
+              oprot.writeString(_iter94.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -2681,19 +2745,19 @@
         if (struct.isSetArguments()) {
           {
             oprot.writeI32(struct.arguments.size());
-            for (ByteBuffer _iter79 : struct.arguments)
+            for (ByteBuffer _iter95 : struct.arguments)
             {
-              oprot.writeBinary(_iter79);
+              oprot.writeBinary(_iter95);
             }
           }
         }
         if (struct.isSetOptions()) {
           {
             oprot.writeI32(struct.options.size());
-            for (Map.Entry<String, String> _iter80 : struct.options.entrySet())
+            for (Map.Entry<String, String> _iter96 : struct.options.entrySet())
             {
-              oprot.writeString(_iter80.getKey());
-              oprot.writeString(_iter80.getValue());
+              oprot.writeString(_iter96.getKey());
+              oprot.writeString(_iter96.getValue());
             }
           }
         }
@@ -2721,33 +2785,33 @@
           struct.setOpidIsSet(true);
         }
         if (incoming.get(3)) {
-          struct.op = FateOperation.findByValue(iprot.readI32());
+          struct.op = org.apache.accumulo.core.master.thrift.FateOperation.findByValue(iprot.readI32());
           struct.setOpIsSet(true);
         }
         if (incoming.get(4)) {
           {
-            org.apache.thrift.protocol.TList _list81 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.arguments = new ArrayList<ByteBuffer>(_list81.size);
-            for (int _i82 = 0; _i82 < _list81.size; ++_i82)
+            org.apache.thrift.protocol.TList _list97 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.arguments = new ArrayList<ByteBuffer>(_list97.size);
+            ByteBuffer _elem98;
+            for (int _i99 = 0; _i99 < _list97.size; ++_i99)
             {
-              ByteBuffer _elem83;
-              _elem83 = iprot.readBinary();
-              struct.arguments.add(_elem83);
+              _elem98 = iprot.readBinary();
+              struct.arguments.add(_elem98);
             }
           }
           struct.setArgumentsIsSet(true);
         }
         if (incoming.get(5)) {
           {
-            org.apache.thrift.protocol.TMap _map84 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.options = new HashMap<String,String>(2*_map84.size);
-            for (int _i85 = 0; _i85 < _map84.size; ++_i85)
+            org.apache.thrift.protocol.TMap _map100 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.options = new HashMap<String,String>(2*_map100.size);
+            String _key101;
+            String _val102;
+            for (int _i103 = 0; _i103 < _map100.size; ++_i103)
             {
-              String _key86;
-              String _val87;
-              _key86 = iprot.readString();
-              _val87 = iprot.readString();
-              struct.options.put(_key86, _val87);
+              _key101 = iprot.readString();
+              _val102 = iprot.readString();
+              struct.options.put(_key101, _val102);
             }
           }
           struct.setOptionsIsSet(true);
@@ -3015,7 +3079,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -3466,7 +3542,7 @@
         return getCredentials();
 
       case OPID:
-        return Long.valueOf(getOpid());
+        return getOpid();
 
       }
       throw new IllegalStateException();
@@ -3534,7 +3610,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_opid = true;
+      list.add(present_opid);
+      if (present_opid)
+        list.add(opid);
+
+      return list.hashCode();
     }
 
     @Override
@@ -4094,7 +4187,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -4586,7 +4696,7 @@
         return getCredentials();
 
       case OPID:
-        return Long.valueOf(getOpid());
+        return getOpid();
 
       }
       throw new IllegalStateException();
@@ -4654,7 +4764,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_opid = true;
+      list.add(present_opid);
+      if (present_opid)
+        list.add(opid);
+
+      return list.hashCode();
     }
 
     @Override
@@ -5096,7 +5223,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterClientService.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterClientService.java
index 63a2131..99bebe0 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterClientService.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterClientService.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class MasterClientService {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class MasterClientService {
 
   public interface Iface extends FateService.Iface {
 
@@ -517,7 +520,7 @@
       args.setCredentials(credentials);
       args.setServerName(serverName);
       args.setSplit(split);
-      sendBase("reportSplitExtent", args);
+      sendBaseOneway("reportSplitExtent", args);
     }
 
     public void reportTabletStatus(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String serverName, TabletLoadState status, org.apache.accumulo.core.data.thrift.TKeyExtent tablet) throws org.apache.thrift.TException
@@ -533,7 +536,7 @@
       args.setServerName(serverName);
       args.setStatus(status);
       args.setTablet(tablet);
-      sendBase("reportTabletStatus", args);
+      sendBaseOneway("reportTabletStatus", args);
     }
 
     public List<String> getActiveTservers(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
@@ -1177,7 +1180,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("reportSplitExtent", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("reportSplitExtent", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         reportSplitExtent_args args = new reportSplitExtent_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -1219,7 +1222,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("reportTabletStatus", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("reportTabletStatus", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         reportTabletStatus_args args = new reportTabletStatus_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -3141,7 +3144,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -3633,7 +3653,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Long.valueOf(getSuccess());
+        return getSuccess();
 
       case SEC:
         return getSec();
@@ -3707,7 +3727,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -4092,8 +4129,8 @@
       this.tinfo = tinfo;
       this.credentials = credentials;
       this.tableName = tableName;
-      this.startRow = startRow;
-      this.endRow = endRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       this.flushID = flushID;
       setFlushIDIsSet(true);
       this.maxLoops = maxLoops;
@@ -4116,11 +4153,9 @@
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
       this.flushID = other.flushID;
       this.maxLoops = other.maxLoops;
@@ -4221,16 +4256,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public waitForFlush_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public waitForFlush_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -4255,16 +4290,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public waitForFlush_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public waitForFlush_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -4408,10 +4443,10 @@
         return getEndRow();
 
       case FLUSH_ID:
-        return Long.valueOf(getFlushID());
+        return getFlushID();
 
       case MAX_LOOPS:
-        return Long.valueOf(getMaxLoops());
+        return getMaxLoops();
 
       }
       throw new IllegalStateException();
@@ -4523,7 +4558,44 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      boolean present_flushID = true;
+      list.add(present_flushID);
+      if (present_flushID)
+        list.add(flushID);
+
+      boolean present_maxLoops = true;
+      list.add(present_maxLoops);
+      if (present_maxLoops)
+        list.add(maxLoops);
+
+      return list.hashCode();
     }
 
     @Override
@@ -5182,7 +5254,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -5817,7 +5901,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      boolean present_value = true && (isSetValue());
+      list.add(present_value);
+      if (present_value)
+        list.add(value);
+
+      return list.hashCode();
     }
 
     @Override
@@ -6404,7 +6515,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -6980,7 +7103,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      return list.hashCode();
     }
 
     @Override
@@ -7526,7 +7671,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -8161,7 +8318,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_ns = true && (isSetNs());
+      list.add(present_ns);
+      if (present_ns)
+        list.add(ns);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      boolean present_value = true && (isSetValue());
+      list.add(present_value);
+      if (present_value)
+        list.add(value);
+
+      return list.hashCode();
     }
 
     @Override
@@ -8748,7 +8932,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -9324,7 +9520,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_ns = true && (isSetNs());
+      list.add(present_ns);
+      if (present_ns)
+        list.add(ns);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      return list.hashCode();
     }
 
     @Override
@@ -9870,7 +10088,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tope = true && (isSetTope());
+      list.add(present_tope);
+      if (present_tope)
+        list.add(tope);
+
+      return list.hashCode();
     }
 
     @Override
@@ -10403,7 +10633,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_state = true && (isSetState());
+      list.add(present_state);
+      if (present_state)
+        list.add(state.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -10556,7 +10803,7 @@
               break;
             case 2: // STATE
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.state = MasterGoalState.findByValue(iprot.readI32());
+                struct.state = org.apache.accumulo.core.master.thrift.MasterGoalState.findByValue(iprot.readI32());
                 struct.setStateIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -10646,7 +10893,7 @@
           struct.setCredentialsIsSet(true);
         }
         if (incoming.get(2)) {
-          struct.state = MasterGoalState.findByValue(iprot.readI32());
+          struct.state = org.apache.accumulo.core.master.thrift.MasterGoalState.findByValue(iprot.readI32());
           struct.setStateIsSet(true);
         }
       }
@@ -10849,7 +11096,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -11257,7 +11511,7 @@
         return getCredentials();
 
       case STOP_TABLET_SERVERS:
-        return Boolean.valueOf(isStopTabletServers());
+        return isStopTabletServers();
 
       }
       throw new IllegalStateException();
@@ -11325,7 +11579,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_stopTabletServers = true;
+      list.add(present_stopTabletServers);
+      if (present_stopTabletServers)
+        list.add(stopTabletServers);
+
+      return list.hashCode();
     }
 
     @Override
@@ -11767,7 +12038,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -12223,7 +12501,7 @@
         return getTabletServer();
 
       case FORCE:
-        return Boolean.valueOf(isForce());
+        return isForce();
 
       }
       throw new IllegalStateException();
@@ -12302,7 +12580,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tabletServer = true && (isSetTabletServer());
+      list.add(present_tabletServer);
+      if (present_tabletServer)
+        list.add(tabletServer);
+
+      boolean present_force = true;
+      list.add(present_force);
+      if (present_force)
+        list.add(force);
+
+      return list.hashCode();
     }
 
     @Override
@@ -12785,7 +13085,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13318,7 +13625,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      boolean present_value = true && (isSetValue());
+      list.add(present_value);
+      if (present_value)
+        list.add(value);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13805,7 +14134,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -14279,7 +14615,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      return list.hashCode();
     }
 
     @Override
@@ -14725,7 +15078,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -15140,7 +15500,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -15604,7 +15976,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16006,7 +16390,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16300,7 +16691,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -16788,7 +17181,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_serverName = true && (isSetServerName());
+      list.add(present_serverName);
+      if (present_serverName)
+        list.add(serverName);
+
+      boolean present_split = true && (isSetSplit());
+      list.add(present_split);
+      if (present_split)
+        list.add(split);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17532,7 +17947,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_serverName = true && (isSetServerName());
+      list.add(present_serverName);
+      if (present_serverName)
+        list.add(serverName);
+
+      boolean present_status = true && (isSetStatus());
+      list.add(present_status);
+      if (present_status)
+        list.add(status.getValue());
+
+      boolean present_tablet = true && (isSetTablet());
+      list.add(present_tablet);
+      if (present_tablet)
+        list.add(tablet);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17732,7 +18174,7 @@
               break;
             case 3: // STATUS
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.status = TabletLoadState.findByValue(iprot.readI32());
+                struct.status = org.apache.accumulo.core.master.thrift.TabletLoadState.findByValue(iprot.readI32());
                 struct.setStatusIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -17857,7 +18299,7 @@
           struct.setServerNameIsSet(true);
         }
         if (incoming.get(3)) {
-          struct.status = TabletLoadState.findByValue(iprot.readI32());
+          struct.status = org.apache.accumulo.core.master.thrift.TabletLoadState.findByValue(iprot.readI32());
           struct.setStatusIsSet(true);
         }
         if (incoming.get(4)) {
@@ -18124,7 +18566,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -18605,7 +19059,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -18717,13 +19183,13 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list88 = iprot.readListBegin();
-                  struct.success = new ArrayList<String>(_list88.size);
-                  for (int _i89 = 0; _i89 < _list88.size; ++_i89)
+                  org.apache.thrift.protocol.TList _list104 = iprot.readListBegin();
+                  struct.success = new ArrayList<String>(_list104.size);
+                  String _elem105;
+                  for (int _i106 = 0; _i106 < _list104.size; ++_i106)
                   {
-                    String _elem90;
-                    _elem90 = iprot.readString();
-                    struct.success.add(_elem90);
+                    _elem105 = iprot.readString();
+                    struct.success.add(_elem105);
                   }
                   iprot.readListEnd();
                 }
@@ -18760,9 +19226,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.success.size()));
-            for (String _iter91 : struct.success)
+            for (String _iter107 : struct.success)
             {
-              oprot.writeString(_iter91);
+              oprot.writeString(_iter107);
             }
             oprot.writeListEnd();
           }
@@ -18801,9 +19267,9 @@
         if (struct.isSetSuccess()) {
           {
             oprot.writeI32(struct.success.size());
-            for (String _iter92 : struct.success)
+            for (String _iter108 : struct.success)
             {
-              oprot.writeString(_iter92);
+              oprot.writeString(_iter108);
             }
           }
         }
@@ -18818,13 +19284,13 @@
         BitSet incoming = iprot.readBitSet(2);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list93 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.success = new ArrayList<String>(_list93.size);
-            for (int _i94 = 0; _i94 < _list93.size; ++_i94)
+            org.apache.thrift.protocol.TList _list109 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.success = new ArrayList<String>(_list109.size);
+            String _elem110;
+            for (int _i111 = 0; _i111 < _list109.size; ++_i111)
             {
-              String _elem95;
-              _elem95 = iprot.readString();
-              struct.success.add(_elem95);
+              _elem110 = iprot.readString();
+              struct.success.add(_elem110);
             }
           }
           struct.setSuccessIsSet(true);
@@ -19152,7 +19618,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_cfg = true && (isSetCfg());
+      list.add(present_cfg);
+      if (present_cfg)
+        list.add(cfg);
+
+      return list.hashCode();
     }
 
     @Override
@@ -19662,7 +20145,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -20258,7 +20753,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tfino = true && (isSetTfino());
+      list.add(present_tfino);
+      if (present_tfino)
+        list.add(tfino);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_logsToWatch = true && (isSetLogsToWatch());
+      list.add(present_logsToWatch);
+      if (present_logsToWatch)
+        list.add(logsToWatch);
+
+      return list.hashCode();
     }
 
     @Override
@@ -20438,13 +20955,13 @@
             case 4: // LOGS_TO_WATCH
               if (schemeField.type == org.apache.thrift.protocol.TType.SET) {
                 {
-                  org.apache.thrift.protocol.TSet _set96 = iprot.readSetBegin();
-                  struct.logsToWatch = new HashSet<String>(2*_set96.size);
-                  for (int _i97 = 0; _i97 < _set96.size; ++_i97)
+                  org.apache.thrift.protocol.TSet _set112 = iprot.readSetBegin();
+                  struct.logsToWatch = new HashSet<String>(2*_set112.size);
+                  String _elem113;
+                  for (int _i114 = 0; _i114 < _set112.size; ++_i114)
                   {
-                    String _elem98;
-                    _elem98 = iprot.readString();
-                    struct.logsToWatch.add(_elem98);
+                    _elem113 = iprot.readString();
+                    struct.logsToWatch.add(_elem113);
                   }
                   iprot.readSetEnd();
                 }
@@ -20487,9 +21004,9 @@
           oprot.writeFieldBegin(LOGS_TO_WATCH_FIELD_DESC);
           {
             oprot.writeSetBegin(new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, struct.logsToWatch.size()));
-            for (String _iter99 : struct.logsToWatch)
+            for (String _iter115 : struct.logsToWatch)
             {
-              oprot.writeString(_iter99);
+              oprot.writeString(_iter115);
             }
             oprot.writeSetEnd();
           }
@@ -20538,9 +21055,9 @@
         if (struct.isSetLogsToWatch()) {
           {
             oprot.writeI32(struct.logsToWatch.size());
-            for (String _iter100 : struct.logsToWatch)
+            for (String _iter116 : struct.logsToWatch)
             {
-              oprot.writeString(_iter100);
+              oprot.writeString(_iter116);
             }
           }
         }
@@ -20566,13 +21083,13 @@
         }
         if (incoming.get(3)) {
           {
-            org.apache.thrift.protocol.TSet _set101 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.logsToWatch = new HashSet<String>(2*_set101.size);
-            for (int _i102 = 0; _i102 < _set101.size; ++_i102)
+            org.apache.thrift.protocol.TSet _set117 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.logsToWatch = new HashSet<String>(2*_set117.size);
+            String _elem118;
+            for (int _i119 = 0; _i119 < _set117.size; ++_i119)
             {
-              String _elem103;
-              _elem103 = iprot.readString();
-              struct.logsToWatch.add(_elem103);
+              _elem118 = iprot.readString();
+              struct.logsToWatch.add(_elem118);
             }
           }
           struct.setLogsToWatchIsSet(true);
@@ -20733,7 +21250,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       }
       throw new IllegalStateException();
@@ -20779,7 +21296,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterGoalState.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterGoalState.java
index f436454..2891727 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterGoalState.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterGoalState.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterMonitorInfo.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterMonitorInfo.java
index 990fc89..9e7b8ea 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterMonitorInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterMonitorInfo.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class MasterMonitorInfo implements org.apache.thrift.TBase<MasterMonitorInfo, MasterMonitorInfo._Fields>, java.io.Serializable, Cloneable, Comparable<MasterMonitorInfo> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class MasterMonitorInfo implements org.apache.thrift.TBase<MasterMonitorInfo, MasterMonitorInfo._Fields>, java.io.Serializable, Cloneable, Comparable<MasterMonitorInfo> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("MasterMonitorInfo");
 
   private static final org.apache.thrift.protocol.TField TABLE_MAP_FIELD_DESC = new org.apache.thrift.protocol.TField("tableMap", org.apache.thrift.protocol.TType.MAP, (short)1);
@@ -59,6 +62,7 @@
   private static final org.apache.thrift.protocol.TField UNASSIGNED_TABLETS_FIELD_DESC = new org.apache.thrift.protocol.TField("unassignedTablets", org.apache.thrift.protocol.TType.I32, (short)7);
   private static final org.apache.thrift.protocol.TField SERVERS_SHUTTING_DOWN_FIELD_DESC = new org.apache.thrift.protocol.TField("serversShuttingDown", org.apache.thrift.protocol.TType.SET, (short)9);
   private static final org.apache.thrift.protocol.TField DEAD_TABLET_SERVERS_FIELD_DESC = new org.apache.thrift.protocol.TField("deadTabletServers", org.apache.thrift.protocol.TType.LIST, (short)10);
+  private static final org.apache.thrift.protocol.TField BULK_IMPORTS_FIELD_DESC = new org.apache.thrift.protocol.TField("bulkImports", org.apache.thrift.protocol.TType.LIST, (short)11);
 
   private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
   static {
@@ -82,6 +86,7 @@
   public int unassignedTablets; // required
   public Set<String> serversShuttingDown; // required
   public List<DeadServer> deadTabletServers; // required
+  public List<BulkImportStatus> bulkImports; // required
 
   /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
   public enum _Fields implements org.apache.thrift.TFieldIdEnum {
@@ -100,7 +105,8 @@
     GOAL_STATE((short)8, "goalState"),
     UNASSIGNED_TABLETS((short)7, "unassignedTablets"),
     SERVERS_SHUTTING_DOWN((short)9, "serversShuttingDown"),
-    DEAD_TABLET_SERVERS((short)10, "deadTabletServers");
+    DEAD_TABLET_SERVERS((short)10, "deadTabletServers"),
+    BULK_IMPORTS((short)11, "bulkImports");
 
     private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -131,6 +137,8 @@
           return SERVERS_SHUTTING_DOWN;
         case 10: // DEAD_TABLET_SERVERS
           return DEAD_TABLET_SERVERS;
+        case 11: // BULK_IMPORTS
+          return BULK_IMPORTS;
         default:
           return null;
       }
@@ -199,6 +207,9 @@
     tmpMap.put(_Fields.DEAD_TABLET_SERVERS, new org.apache.thrift.meta_data.FieldMetaData("deadTabletServers", org.apache.thrift.TFieldRequirementType.DEFAULT, 
         new org.apache.thrift.meta_data.ListMetaData(org.apache.thrift.protocol.TType.LIST, 
             new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, DeadServer.class))));
+    tmpMap.put(_Fields.BULK_IMPORTS, new org.apache.thrift.meta_data.FieldMetaData("bulkImports", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.ListMetaData(org.apache.thrift.protocol.TType.LIST, 
+            new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, BulkImportStatus.class))));
     metaDataMap = Collections.unmodifiableMap(tmpMap);
     org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(MasterMonitorInfo.class, metaDataMap);
   }
@@ -214,7 +225,8 @@
     MasterGoalState goalState,
     int unassignedTablets,
     Set<String> serversShuttingDown,
-    List<DeadServer> deadTabletServers)
+    List<DeadServer> deadTabletServers,
+    List<BulkImportStatus> bulkImports)
   {
     this();
     this.tableMap = tableMap;
@@ -226,6 +238,7 @@
     setUnassignedTabletsIsSet(true);
     this.serversShuttingDown = serversShuttingDown;
     this.deadTabletServers = deadTabletServers;
+    this.bulkImports = bulkImports;
   }
 
   /**
@@ -277,6 +290,13 @@
       }
       this.deadTabletServers = __this__deadTabletServers;
     }
+    if (other.isSetBulkImports()) {
+      List<BulkImportStatus> __this__bulkImports = new ArrayList<BulkImportStatus>(other.bulkImports.size());
+      for (BulkImportStatus other_element : other.bulkImports) {
+        __this__bulkImports.add(new BulkImportStatus(other_element));
+      }
+      this.bulkImports = __this__bulkImports;
+    }
   }
 
   public MasterMonitorInfo deepCopy() {
@@ -294,6 +314,7 @@
     this.unassignedTablets = 0;
     this.serversShuttingDown = null;
     this.deadTabletServers = null;
+    this.bulkImports = null;
   }
 
   public int getTableMapSize() {
@@ -570,6 +591,45 @@
     }
   }
 
+  public int getBulkImportsSize() {
+    return (this.bulkImports == null) ? 0 : this.bulkImports.size();
+  }
+
+  public java.util.Iterator<BulkImportStatus> getBulkImportsIterator() {
+    return (this.bulkImports == null) ? null : this.bulkImports.iterator();
+  }
+
+  public void addToBulkImports(BulkImportStatus elem) {
+    if (this.bulkImports == null) {
+      this.bulkImports = new ArrayList<BulkImportStatus>();
+    }
+    this.bulkImports.add(elem);
+  }
+
+  public List<BulkImportStatus> getBulkImports() {
+    return this.bulkImports;
+  }
+
+  public MasterMonitorInfo setBulkImports(List<BulkImportStatus> bulkImports) {
+    this.bulkImports = bulkImports;
+    return this;
+  }
+
+  public void unsetBulkImports() {
+    this.bulkImports = null;
+  }
+
+  /** Returns true if field bulkImports is set (has been assigned a value) and false otherwise */
+  public boolean isSetBulkImports() {
+    return this.bulkImports != null;
+  }
+
+  public void setBulkImportsIsSet(boolean value) {
+    if (!value) {
+      this.bulkImports = null;
+    }
+  }
+
   public void setFieldValue(_Fields field, Object value) {
     switch (field) {
     case TABLE_MAP:
@@ -636,6 +696,14 @@
       }
       break;
 
+    case BULK_IMPORTS:
+      if (value == null) {
+        unsetBulkImports();
+      } else {
+        setBulkImports((List<BulkImportStatus>)value);
+      }
+      break;
+
     }
   }
 
@@ -657,7 +725,7 @@
       return getGoalState();
 
     case UNASSIGNED_TABLETS:
-      return Integer.valueOf(getUnassignedTablets());
+      return getUnassignedTablets();
 
     case SERVERS_SHUTTING_DOWN:
       return getServersShuttingDown();
@@ -665,6 +733,9 @@
     case DEAD_TABLET_SERVERS:
       return getDeadTabletServers();
 
+    case BULK_IMPORTS:
+      return getBulkImports();
+
     }
     throw new IllegalStateException();
   }
@@ -692,6 +763,8 @@
       return isSetServersShuttingDown();
     case DEAD_TABLET_SERVERS:
       return isSetDeadTabletServers();
+    case BULK_IMPORTS:
+      return isSetBulkImports();
     }
     throw new IllegalStateException();
   }
@@ -781,12 +854,68 @@
         return false;
     }
 
+    boolean this_present_bulkImports = true && this.isSetBulkImports();
+    boolean that_present_bulkImports = true && that.isSetBulkImports();
+    if (this_present_bulkImports || that_present_bulkImports) {
+      if (!(this_present_bulkImports && that_present_bulkImports))
+        return false;
+      if (!this.bulkImports.equals(that.bulkImports))
+        return false;
+    }
+
     return true;
   }
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_tableMap = true && (isSetTableMap());
+    list.add(present_tableMap);
+    if (present_tableMap)
+      list.add(tableMap);
+
+    boolean present_tServerInfo = true && (isSetTServerInfo());
+    list.add(present_tServerInfo);
+    if (present_tServerInfo)
+      list.add(tServerInfo);
+
+    boolean present_badTServers = true && (isSetBadTServers());
+    list.add(present_badTServers);
+    if (present_badTServers)
+      list.add(badTServers);
+
+    boolean present_state = true && (isSetState());
+    list.add(present_state);
+    if (present_state)
+      list.add(state.getValue());
+
+    boolean present_goalState = true && (isSetGoalState());
+    list.add(present_goalState);
+    if (present_goalState)
+      list.add(goalState.getValue());
+
+    boolean present_unassignedTablets = true;
+    list.add(present_unassignedTablets);
+    if (present_unassignedTablets)
+      list.add(unassignedTablets);
+
+    boolean present_serversShuttingDown = true && (isSetServersShuttingDown());
+    list.add(present_serversShuttingDown);
+    if (present_serversShuttingDown)
+      list.add(serversShuttingDown);
+
+    boolean present_deadTabletServers = true && (isSetDeadTabletServers());
+    list.add(present_deadTabletServers);
+    if (present_deadTabletServers)
+      list.add(deadTabletServers);
+
+    boolean present_bulkImports = true && (isSetBulkImports());
+    list.add(present_bulkImports);
+    if (present_bulkImports)
+      list.add(bulkImports);
+
+    return list.hashCode();
   }
 
   @Override
@@ -877,6 +1006,16 @@
         return lastComparison;
       }
     }
+    lastComparison = Boolean.valueOf(isSetBulkImports()).compareTo(other.isSetBulkImports());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetBulkImports()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.bulkImports, other.bulkImports);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
     return 0;
   }
 
@@ -956,6 +1095,14 @@
       sb.append(this.deadTabletServers);
     }
     first = false;
+    if (!first) sb.append(", ");
+    sb.append("bulkImports:");
+    if (this.bulkImports == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.bulkImports);
+    }
+    first = false;
     sb.append(")");
     return sb.toString();
   }
@@ -1004,16 +1151,16 @@
           case 1: // TABLE_MAP
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map18 = iprot.readMapBegin();
-                struct.tableMap = new HashMap<String,TableInfo>(2*_map18.size);
-                for (int _i19 = 0; _i19 < _map18.size; ++_i19)
+                org.apache.thrift.protocol.TMap _map26 = iprot.readMapBegin();
+                struct.tableMap = new HashMap<String,TableInfo>(2*_map26.size);
+                String _key27;
+                TableInfo _val28;
+                for (int _i29 = 0; _i29 < _map26.size; ++_i29)
                 {
-                  String _key20;
-                  TableInfo _val21;
-                  _key20 = iprot.readString();
-                  _val21 = new TableInfo();
-                  _val21.read(iprot);
-                  struct.tableMap.put(_key20, _val21);
+                  _key27 = iprot.readString();
+                  _val28 = new TableInfo();
+                  _val28.read(iprot);
+                  struct.tableMap.put(_key27, _val28);
                 }
                 iprot.readMapEnd();
               }
@@ -1025,14 +1172,14 @@
           case 2: // T_SERVER_INFO
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list22 = iprot.readListBegin();
-                struct.tServerInfo = new ArrayList<TabletServerStatus>(_list22.size);
-                for (int _i23 = 0; _i23 < _list22.size; ++_i23)
+                org.apache.thrift.protocol.TList _list30 = iprot.readListBegin();
+                struct.tServerInfo = new ArrayList<TabletServerStatus>(_list30.size);
+                TabletServerStatus _elem31;
+                for (int _i32 = 0; _i32 < _list30.size; ++_i32)
                 {
-                  TabletServerStatus _elem24;
-                  _elem24 = new TabletServerStatus();
-                  _elem24.read(iprot);
-                  struct.tServerInfo.add(_elem24);
+                  _elem31 = new TabletServerStatus();
+                  _elem31.read(iprot);
+                  struct.tServerInfo.add(_elem31);
                 }
                 iprot.readListEnd();
               }
@@ -1044,15 +1191,15 @@
           case 3: // BAD_TSERVERS
             if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
               {
-                org.apache.thrift.protocol.TMap _map25 = iprot.readMapBegin();
-                struct.badTServers = new HashMap<String,Byte>(2*_map25.size);
-                for (int _i26 = 0; _i26 < _map25.size; ++_i26)
+                org.apache.thrift.protocol.TMap _map33 = iprot.readMapBegin();
+                struct.badTServers = new HashMap<String,Byte>(2*_map33.size);
+                String _key34;
+                byte _val35;
+                for (int _i36 = 0; _i36 < _map33.size; ++_i36)
                 {
-                  String _key27;
-                  byte _val28;
-                  _key27 = iprot.readString();
-                  _val28 = iprot.readByte();
-                  struct.badTServers.put(_key27, _val28);
+                  _key34 = iprot.readString();
+                  _val35 = iprot.readByte();
+                  struct.badTServers.put(_key34, _val35);
                 }
                 iprot.readMapEnd();
               }
@@ -1063,7 +1210,7 @@
             break;
           case 6: // STATE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.state = MasterState.findByValue(iprot.readI32());
+              struct.state = org.apache.accumulo.core.master.thrift.MasterState.findByValue(iprot.readI32());
               struct.setStateIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1071,7 +1218,7 @@
             break;
           case 8: // GOAL_STATE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.goalState = MasterGoalState.findByValue(iprot.readI32());
+              struct.goalState = org.apache.accumulo.core.master.thrift.MasterGoalState.findByValue(iprot.readI32());
               struct.setGoalStateIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1088,13 +1235,13 @@
           case 9: // SERVERS_SHUTTING_DOWN
             if (schemeField.type == org.apache.thrift.protocol.TType.SET) {
               {
-                org.apache.thrift.protocol.TSet _set29 = iprot.readSetBegin();
-                struct.serversShuttingDown = new HashSet<String>(2*_set29.size);
-                for (int _i30 = 0; _i30 < _set29.size; ++_i30)
+                org.apache.thrift.protocol.TSet _set37 = iprot.readSetBegin();
+                struct.serversShuttingDown = new HashSet<String>(2*_set37.size);
+                String _elem38;
+                for (int _i39 = 0; _i39 < _set37.size; ++_i39)
                 {
-                  String _elem31;
-                  _elem31 = iprot.readString();
-                  struct.serversShuttingDown.add(_elem31);
+                  _elem38 = iprot.readString();
+                  struct.serversShuttingDown.add(_elem38);
                 }
                 iprot.readSetEnd();
               }
@@ -1106,14 +1253,14 @@
           case 10: // DEAD_TABLET_SERVERS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list32 = iprot.readListBegin();
-                struct.deadTabletServers = new ArrayList<DeadServer>(_list32.size);
-                for (int _i33 = 0; _i33 < _list32.size; ++_i33)
+                org.apache.thrift.protocol.TList _list40 = iprot.readListBegin();
+                struct.deadTabletServers = new ArrayList<DeadServer>(_list40.size);
+                DeadServer _elem41;
+                for (int _i42 = 0; _i42 < _list40.size; ++_i42)
                 {
-                  DeadServer _elem34;
-                  _elem34 = new DeadServer();
-                  _elem34.read(iprot);
-                  struct.deadTabletServers.add(_elem34);
+                  _elem41 = new DeadServer();
+                  _elem41.read(iprot);
+                  struct.deadTabletServers.add(_elem41);
                 }
                 iprot.readListEnd();
               }
@@ -1122,6 +1269,25 @@
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
+          case 11: // BULK_IMPORTS
+            if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
+              {
+                org.apache.thrift.protocol.TList _list43 = iprot.readListBegin();
+                struct.bulkImports = new ArrayList<BulkImportStatus>(_list43.size);
+                BulkImportStatus _elem44;
+                for (int _i45 = 0; _i45 < _list43.size; ++_i45)
+                {
+                  _elem44 = new BulkImportStatus();
+                  _elem44.read(iprot);
+                  struct.bulkImports.add(_elem44);
+                }
+                iprot.readListEnd();
+              }
+              struct.setBulkImportsIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
           default:
             org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
         }
@@ -1141,10 +1307,10 @@
         oprot.writeFieldBegin(TABLE_MAP_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.tableMap.size()));
-          for (Map.Entry<String, TableInfo> _iter35 : struct.tableMap.entrySet())
+          for (Map.Entry<String, TableInfo> _iter46 : struct.tableMap.entrySet())
           {
-            oprot.writeString(_iter35.getKey());
-            _iter35.getValue().write(oprot);
+            oprot.writeString(_iter46.getKey());
+            _iter46.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -1154,9 +1320,9 @@
         oprot.writeFieldBegin(T_SERVER_INFO_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.tServerInfo.size()));
-          for (TabletServerStatus _iter36 : struct.tServerInfo)
+          for (TabletServerStatus _iter47 : struct.tServerInfo)
           {
-            _iter36.write(oprot);
+            _iter47.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -1166,10 +1332,10 @@
         oprot.writeFieldBegin(BAD_TSERVERS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.BYTE, struct.badTServers.size()));
-          for (Map.Entry<String, Byte> _iter37 : struct.badTServers.entrySet())
+          for (Map.Entry<String, Byte> _iter48 : struct.badTServers.entrySet())
           {
-            oprot.writeString(_iter37.getKey());
-            oprot.writeByte(_iter37.getValue());
+            oprot.writeString(_iter48.getKey());
+            oprot.writeByte(_iter48.getValue());
           }
           oprot.writeMapEnd();
         }
@@ -1192,9 +1358,9 @@
         oprot.writeFieldBegin(SERVERS_SHUTTING_DOWN_FIELD_DESC);
         {
           oprot.writeSetBegin(new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, struct.serversShuttingDown.size()));
-          for (String _iter38 : struct.serversShuttingDown)
+          for (String _iter49 : struct.serversShuttingDown)
           {
-            oprot.writeString(_iter38);
+            oprot.writeString(_iter49);
           }
           oprot.writeSetEnd();
         }
@@ -1204,9 +1370,21 @@
         oprot.writeFieldBegin(DEAD_TABLET_SERVERS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.deadTabletServers.size()));
-          for (DeadServer _iter39 : struct.deadTabletServers)
+          for (DeadServer _iter50 : struct.deadTabletServers)
           {
-            _iter39.write(oprot);
+            _iter50.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      if (struct.bulkImports != null) {
+        oprot.writeFieldBegin(BULK_IMPORTS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.bulkImports.size()));
+          for (BulkImportStatus _iter51 : struct.bulkImports)
+          {
+            _iter51.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -1254,33 +1432,36 @@
       if (struct.isSetDeadTabletServers()) {
         optionals.set(7);
       }
-      oprot.writeBitSet(optionals, 8);
+      if (struct.isSetBulkImports()) {
+        optionals.set(8);
+      }
+      oprot.writeBitSet(optionals, 9);
       if (struct.isSetTableMap()) {
         {
           oprot.writeI32(struct.tableMap.size());
-          for (Map.Entry<String, TableInfo> _iter40 : struct.tableMap.entrySet())
+          for (Map.Entry<String, TableInfo> _iter52 : struct.tableMap.entrySet())
           {
-            oprot.writeString(_iter40.getKey());
-            _iter40.getValue().write(oprot);
+            oprot.writeString(_iter52.getKey());
+            _iter52.getValue().write(oprot);
           }
         }
       }
       if (struct.isSetTServerInfo()) {
         {
           oprot.writeI32(struct.tServerInfo.size());
-          for (TabletServerStatus _iter41 : struct.tServerInfo)
+          for (TabletServerStatus _iter53 : struct.tServerInfo)
           {
-            _iter41.write(oprot);
+            _iter53.write(oprot);
           }
         }
       }
       if (struct.isSetBadTServers()) {
         {
           oprot.writeI32(struct.badTServers.size());
-          for (Map.Entry<String, Byte> _iter42 : struct.badTServers.entrySet())
+          for (Map.Entry<String, Byte> _iter54 : struct.badTServers.entrySet())
           {
-            oprot.writeString(_iter42.getKey());
-            oprot.writeByte(_iter42.getValue());
+            oprot.writeString(_iter54.getKey());
+            oprot.writeByte(_iter54.getValue());
           }
         }
       }
@@ -1296,18 +1477,27 @@
       if (struct.isSetServersShuttingDown()) {
         {
           oprot.writeI32(struct.serversShuttingDown.size());
-          for (String _iter43 : struct.serversShuttingDown)
+          for (String _iter55 : struct.serversShuttingDown)
           {
-            oprot.writeString(_iter43);
+            oprot.writeString(_iter55);
           }
         }
       }
       if (struct.isSetDeadTabletServers()) {
         {
           oprot.writeI32(struct.deadTabletServers.size());
-          for (DeadServer _iter44 : struct.deadTabletServers)
+          for (DeadServer _iter56 : struct.deadTabletServers)
           {
-            _iter44.write(oprot);
+            _iter56.write(oprot);
+          }
+        }
+      }
+      if (struct.isSetBulkImports()) {
+        {
+          oprot.writeI32(struct.bulkImports.size());
+          for (BulkImportStatus _iter57 : struct.bulkImports)
+          {
+            _iter57.write(oprot);
           }
         }
       }
@@ -1316,58 +1506,58 @@
     @Override
     public void read(org.apache.thrift.protocol.TProtocol prot, MasterMonitorInfo struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
-      BitSet incoming = iprot.readBitSet(8);
+      BitSet incoming = iprot.readBitSet(9);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TMap _map45 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.tableMap = new HashMap<String,TableInfo>(2*_map45.size);
-          for (int _i46 = 0; _i46 < _map45.size; ++_i46)
+          org.apache.thrift.protocol.TMap _map58 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.tableMap = new HashMap<String,TableInfo>(2*_map58.size);
+          String _key59;
+          TableInfo _val60;
+          for (int _i61 = 0; _i61 < _map58.size; ++_i61)
           {
-            String _key47;
-            TableInfo _val48;
-            _key47 = iprot.readString();
-            _val48 = new TableInfo();
-            _val48.read(iprot);
-            struct.tableMap.put(_key47, _val48);
+            _key59 = iprot.readString();
+            _val60 = new TableInfo();
+            _val60.read(iprot);
+            struct.tableMap.put(_key59, _val60);
           }
         }
         struct.setTableMapIsSet(true);
       }
       if (incoming.get(1)) {
         {
-          org.apache.thrift.protocol.TList _list49 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.tServerInfo = new ArrayList<TabletServerStatus>(_list49.size);
-          for (int _i50 = 0; _i50 < _list49.size; ++_i50)
+          org.apache.thrift.protocol.TList _list62 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.tServerInfo = new ArrayList<TabletServerStatus>(_list62.size);
+          TabletServerStatus _elem63;
+          for (int _i64 = 0; _i64 < _list62.size; ++_i64)
           {
-            TabletServerStatus _elem51;
-            _elem51 = new TabletServerStatus();
-            _elem51.read(iprot);
-            struct.tServerInfo.add(_elem51);
+            _elem63 = new TabletServerStatus();
+            _elem63.read(iprot);
+            struct.tServerInfo.add(_elem63);
           }
         }
         struct.setTServerInfoIsSet(true);
       }
       if (incoming.get(2)) {
         {
-          org.apache.thrift.protocol.TMap _map52 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.BYTE, iprot.readI32());
-          struct.badTServers = new HashMap<String,Byte>(2*_map52.size);
-          for (int _i53 = 0; _i53 < _map52.size; ++_i53)
+          org.apache.thrift.protocol.TMap _map65 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.BYTE, iprot.readI32());
+          struct.badTServers = new HashMap<String,Byte>(2*_map65.size);
+          String _key66;
+          byte _val67;
+          for (int _i68 = 0; _i68 < _map65.size; ++_i68)
           {
-            String _key54;
-            byte _val55;
-            _key54 = iprot.readString();
-            _val55 = iprot.readByte();
-            struct.badTServers.put(_key54, _val55);
+            _key66 = iprot.readString();
+            _val67 = iprot.readByte();
+            struct.badTServers.put(_key66, _val67);
           }
         }
         struct.setBadTServersIsSet(true);
       }
       if (incoming.get(3)) {
-        struct.state = MasterState.findByValue(iprot.readI32());
+        struct.state = org.apache.accumulo.core.master.thrift.MasterState.findByValue(iprot.readI32());
         struct.setStateIsSet(true);
       }
       if (incoming.get(4)) {
-        struct.goalState = MasterGoalState.findByValue(iprot.readI32());
+        struct.goalState = org.apache.accumulo.core.master.thrift.MasterGoalState.findByValue(iprot.readI32());
         struct.setGoalStateIsSet(true);
       }
       if (incoming.get(5)) {
@@ -1376,31 +1566,45 @@
       }
       if (incoming.get(6)) {
         {
-          org.apache.thrift.protocol.TSet _set56 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.serversShuttingDown = new HashSet<String>(2*_set56.size);
-          for (int _i57 = 0; _i57 < _set56.size; ++_i57)
+          org.apache.thrift.protocol.TSet _set69 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.serversShuttingDown = new HashSet<String>(2*_set69.size);
+          String _elem70;
+          for (int _i71 = 0; _i71 < _set69.size; ++_i71)
           {
-            String _elem58;
-            _elem58 = iprot.readString();
-            struct.serversShuttingDown.add(_elem58);
+            _elem70 = iprot.readString();
+            struct.serversShuttingDown.add(_elem70);
           }
         }
         struct.setServersShuttingDownIsSet(true);
       }
       if (incoming.get(7)) {
         {
-          org.apache.thrift.protocol.TList _list59 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.deadTabletServers = new ArrayList<DeadServer>(_list59.size);
-          for (int _i60 = 0; _i60 < _list59.size; ++_i60)
+          org.apache.thrift.protocol.TList _list72 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.deadTabletServers = new ArrayList<DeadServer>(_list72.size);
+          DeadServer _elem73;
+          for (int _i74 = 0; _i74 < _list72.size; ++_i74)
           {
-            DeadServer _elem61;
-            _elem61 = new DeadServer();
-            _elem61.read(iprot);
-            struct.deadTabletServers.add(_elem61);
+            _elem73 = new DeadServer();
+            _elem73.read(iprot);
+            struct.deadTabletServers.add(_elem73);
           }
         }
         struct.setDeadTabletServersIsSet(true);
       }
+      if (incoming.get(8)) {
+        {
+          org.apache.thrift.protocol.TList _list75 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.bulkImports = new ArrayList<BulkImportStatus>(_list75.size);
+          BulkImportStatus _elem76;
+          for (int _i77 = 0; _i77 < _list75.size; ++_i77)
+          {
+            _elem76 = new BulkImportStatus();
+            _elem76.read(iprot);
+            struct.bulkImports.add(_elem76);
+          }
+        }
+        struct.setBulkImportsIsSet(true);
+      }
     }
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterState.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterState.java
index 1d63305..29548d3 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterState.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/MasterState.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryException.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryException.java
index 025cced..641345c 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryException.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class RecoveryException extends TException implements org.apache.thrift.TBase<RecoveryException, RecoveryException._Fields>, java.io.Serializable, Cloneable, Comparable<RecoveryException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class RecoveryException extends TException implements org.apache.thrift.TBase<RecoveryException, RecoveryException._Fields>, java.io.Serializable, Cloneable, Comparable<RecoveryException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("RecoveryException");
 
   private static final org.apache.thrift.protocol.TField WHY_FIELD_DESC = new org.apache.thrift.protocol.TField("why", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_why = true && (isSetWhy());
+    list.add(present_why);
+    if (present_why)
+      list.add(why);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryStatus.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryStatus.java
index 3ffcb03..44a27df 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryStatus.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/RecoveryStatus.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class RecoveryStatus implements org.apache.thrift.TBase<RecoveryStatus, RecoveryStatus._Fields>, java.io.Serializable, Cloneable, Comparable<RecoveryStatus> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class RecoveryStatus implements org.apache.thrift.TBase<RecoveryStatus, RecoveryStatus._Fields>, java.io.Serializable, Cloneable, Comparable<RecoveryStatus> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("RecoveryStatus");
 
   private static final org.apache.thrift.protocol.TField NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("name", org.apache.thrift.protocol.TType.STRING, (short)2);
@@ -292,10 +295,10 @@
       return getName();
 
     case RUNTIME:
-      return Integer.valueOf(getRuntime());
+      return getRuntime();
 
     case PROGRESS:
-      return Double.valueOf(getProgress());
+      return getProgress();
 
     }
     throw new IllegalStateException();
@@ -363,7 +366,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_name = true && (isSetName());
+    list.add(present_name);
+    if (present_name)
+      list.add(name);
+
+    boolean present_runtime = true;
+    list.add(present_runtime);
+    if (present_runtime)
+      list.add(runtime);
+
+    boolean present_progress = true;
+    list.add(present_progress);
+    if (present_progress)
+      list.add(progress);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/TableInfo.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/TableInfo.java
index 3c919cd..5c385d7 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/TableInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/TableInfo.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TableInfo implements org.apache.thrift.TBase<TableInfo, TableInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TableInfo> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TableInfo implements org.apache.thrift.TBase<TableInfo, TableInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TableInfo> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TableInfo");
 
   private static final org.apache.thrift.protocol.TField RECS_FIELD_DESC = new org.apache.thrift.protocol.TField("recs", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -694,28 +697,28 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case RECS:
-      return Long.valueOf(getRecs());
+      return getRecs();
 
     case RECS_IN_MEMORY:
-      return Long.valueOf(getRecsInMemory());
+      return getRecsInMemory();
 
     case TABLETS:
-      return Integer.valueOf(getTablets());
+      return getTablets();
 
     case ONLINE_TABLETS:
-      return Integer.valueOf(getOnlineTablets());
+      return getOnlineTablets();
 
     case INGEST_RATE:
-      return Double.valueOf(getIngestRate());
+      return getIngestRate();
 
     case INGEST_BYTE_RATE:
-      return Double.valueOf(getIngestByteRate());
+      return getIngestByteRate();
 
     case QUERY_RATE:
-      return Double.valueOf(getQueryRate());
+      return getQueryRate();
 
     case QUERY_BYTE_RATE:
-      return Double.valueOf(getQueryByteRate());
+      return getQueryByteRate();
 
     case MINORS:
       return getMinors();
@@ -727,7 +730,7 @@
       return getScans();
 
     case SCAN_RATE:
-      return Double.valueOf(getScanRate());
+      return getScanRate();
 
     }
     throw new IllegalStateException();
@@ -894,7 +897,69 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_recs = true;
+    list.add(present_recs);
+    if (present_recs)
+      list.add(recs);
+
+    boolean present_recsInMemory = true;
+    list.add(present_recsInMemory);
+    if (present_recsInMemory)
+      list.add(recsInMemory);
+
+    boolean present_tablets = true;
+    list.add(present_tablets);
+    if (present_tablets)
+      list.add(tablets);
+
+    boolean present_onlineTablets = true;
+    list.add(present_onlineTablets);
+    if (present_onlineTablets)
+      list.add(onlineTablets);
+
+    boolean present_ingestRate = true;
+    list.add(present_ingestRate);
+    if (present_ingestRate)
+      list.add(ingestRate);
+
+    boolean present_ingestByteRate = true;
+    list.add(present_ingestByteRate);
+    if (present_ingestByteRate)
+      list.add(ingestByteRate);
+
+    boolean present_queryRate = true;
+    list.add(present_queryRate);
+    if (present_queryRate)
+      list.add(queryRate);
+
+    boolean present_queryByteRate = true;
+    list.add(present_queryByteRate);
+    if (present_queryByteRate)
+      list.add(queryByteRate);
+
+    boolean present_minors = true && (isSetMinors());
+    list.add(present_minors);
+    if (present_minors)
+      list.add(minors);
+
+    boolean present_majors = true && (isSetMajors());
+    list.add(present_majors);
+    if (present_majors)
+      list.add(majors);
+
+    boolean present_scans = true && (isSetScans());
+    list.add(present_scans);
+    if (present_scans)
+      list.add(scans);
+
+    boolean present_scanRate = true;
+    list.add(present_scanRate);
+    if (present_scanRate)
+      list.add(scanRate);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletLoadState.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletLoadState.java
index 97338cc..1a3a80c 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletLoadState.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletLoadState.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletServerStatus.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletServerStatus.java
index 1967a70..07444b5 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletServerStatus.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletServerStatus.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TabletServerStatus implements org.apache.thrift.TBase<TabletServerStatus, TabletServerStatus._Fields>, java.io.Serializable, Cloneable, Comparable<TabletServerStatus> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TabletServerStatus implements org.apache.thrift.TBase<TabletServerStatus, TabletServerStatus._Fields>, java.io.Serializable, Cloneable, Comparable<TabletServerStatus> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TabletServerStatus");
 
   private static final org.apache.thrift.protocol.TField TABLE_MAP_FIELD_DESC = new org.apache.thrift.protocol.TField("tableMap", org.apache.thrift.protocol.TType.MAP, (short)1);
@@ -64,6 +67,7 @@
   private static final org.apache.thrift.protocol.TField LOG_SORTS_FIELD_DESC = new org.apache.thrift.protocol.TField("logSorts", org.apache.thrift.protocol.TType.LIST, (short)14);
   private static final org.apache.thrift.protocol.TField FLUSHS_FIELD_DESC = new org.apache.thrift.protocol.TField("flushs", org.apache.thrift.protocol.TType.I64, (short)15);
   private static final org.apache.thrift.protocol.TField SYNCS_FIELD_DESC = new org.apache.thrift.protocol.TField("syncs", org.apache.thrift.protocol.TType.I64, (short)16);
+  private static final org.apache.thrift.protocol.TField BULK_IMPORTS_FIELD_DESC = new org.apache.thrift.protocol.TField("bulkImports", org.apache.thrift.protocol.TType.LIST, (short)17);
 
   private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
   static {
@@ -84,6 +88,7 @@
   public List<RecoveryStatus> logSorts; // required
   public long flushs; // required
   public long syncs; // required
+  public List<BulkImportStatus> bulkImports; // required
 
   /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
   public enum _Fields implements org.apache.thrift.TFieldIdEnum {
@@ -99,7 +104,8 @@
     DATA_CACHE_REQUEST((short)13, "dataCacheRequest"),
     LOG_SORTS((short)14, "logSorts"),
     FLUSHS((short)15, "flushs"),
-    SYNCS((short)16, "syncs");
+    SYNCS((short)16, "syncs"),
+    BULK_IMPORTS((short)17, "bulkImports");
 
     private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -140,6 +146,8 @@
           return FLUSHS;
         case 16: // SYNCS
           return SYNCS;
+        case 17: // BULK_IMPORTS
+          return BULK_IMPORTS;
         default:
           return null;
       }
@@ -223,6 +231,9 @@
         new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
     tmpMap.put(_Fields.SYNCS, new org.apache.thrift.meta_data.FieldMetaData("syncs", org.apache.thrift.TFieldRequirementType.DEFAULT, 
         new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
+    tmpMap.put(_Fields.BULK_IMPORTS, new org.apache.thrift.meta_data.FieldMetaData("bulkImports", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.ListMetaData(org.apache.thrift.protocol.TType.LIST, 
+            new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, BulkImportStatus.class))));
     metaDataMap = Collections.unmodifiableMap(tmpMap);
     org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(TabletServerStatus.class, metaDataMap);
   }
@@ -243,7 +254,8 @@
     long dataCacheRequest,
     List<RecoveryStatus> logSorts,
     long flushs,
-    long syncs)
+    long syncs,
+    List<BulkImportStatus> bulkImports)
   {
     this();
     this.tableMap = tableMap;
@@ -269,6 +281,7 @@
     setFlushsIsSet(true);
     this.syncs = syncs;
     setSyncsIsSet(true);
+    this.bulkImports = bulkImports;
   }
 
   /**
@@ -311,6 +324,13 @@
     }
     this.flushs = other.flushs;
     this.syncs = other.syncs;
+    if (other.isSetBulkImports()) {
+      List<BulkImportStatus> __this__bulkImports = new ArrayList<BulkImportStatus>(other.bulkImports.size());
+      for (BulkImportStatus other_element : other.bulkImports) {
+        __this__bulkImports.add(new BulkImportStatus(other_element));
+      }
+      this.bulkImports = __this__bulkImports;
+    }
   }
 
   public TabletServerStatus deepCopy() {
@@ -342,6 +362,7 @@
     this.flushs = 0;
     setSyncsIsSet(false);
     this.syncs = 0;
+    this.bulkImports = null;
   }
 
   public int getTableMapSize() {
@@ -672,6 +693,45 @@
     __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __SYNCS_ISSET_ID, value);
   }
 
+  public int getBulkImportsSize() {
+    return (this.bulkImports == null) ? 0 : this.bulkImports.size();
+  }
+
+  public java.util.Iterator<BulkImportStatus> getBulkImportsIterator() {
+    return (this.bulkImports == null) ? null : this.bulkImports.iterator();
+  }
+
+  public void addToBulkImports(BulkImportStatus elem) {
+    if (this.bulkImports == null) {
+      this.bulkImports = new ArrayList<BulkImportStatus>();
+    }
+    this.bulkImports.add(elem);
+  }
+
+  public List<BulkImportStatus> getBulkImports() {
+    return this.bulkImports;
+  }
+
+  public TabletServerStatus setBulkImports(List<BulkImportStatus> bulkImports) {
+    this.bulkImports = bulkImports;
+    return this;
+  }
+
+  public void unsetBulkImports() {
+    this.bulkImports = null;
+  }
+
+  /** Returns true if field bulkImports is set (has been assigned a value) and false otherwise */
+  public boolean isSetBulkImports() {
+    return this.bulkImports != null;
+  }
+
+  public void setBulkImportsIsSet(boolean value) {
+    if (!value) {
+      this.bulkImports = null;
+    }
+  }
+
   public void setFieldValue(_Fields field, Object value) {
     switch (field) {
     case TABLE_MAP:
@@ -778,6 +838,14 @@
       }
       break;
 
+    case BULK_IMPORTS:
+      if (value == null) {
+        unsetBulkImports();
+      } else {
+        setBulkImports((List<BulkImportStatus>)value);
+      }
+      break;
+
     }
   }
 
@@ -787,40 +855,43 @@
       return getTableMap();
 
     case LAST_CONTACT:
-      return Long.valueOf(getLastContact());
+      return getLastContact();
 
     case NAME:
       return getName();
 
     case OS_LOAD:
-      return Double.valueOf(getOsLoad());
+      return getOsLoad();
 
     case HOLD_TIME:
-      return Long.valueOf(getHoldTime());
+      return getHoldTime();
 
     case LOOKUPS:
-      return Long.valueOf(getLookups());
+      return getLookups();
 
     case INDEX_CACHE_HITS:
-      return Long.valueOf(getIndexCacheHits());
+      return getIndexCacheHits();
 
     case INDEX_CACHE_REQUEST:
-      return Long.valueOf(getIndexCacheRequest());
+      return getIndexCacheRequest();
 
     case DATA_CACHE_HITS:
-      return Long.valueOf(getDataCacheHits());
+      return getDataCacheHits();
 
     case DATA_CACHE_REQUEST:
-      return Long.valueOf(getDataCacheRequest());
+      return getDataCacheRequest();
 
     case LOG_SORTS:
       return getLogSorts();
 
     case FLUSHS:
-      return Long.valueOf(getFlushs());
+      return getFlushs();
 
     case SYNCS:
-      return Long.valueOf(getSyncs());
+      return getSyncs();
+
+    case BULK_IMPORTS:
+      return getBulkImports();
 
     }
     throw new IllegalStateException();
@@ -859,6 +930,8 @@
       return isSetFlushs();
     case SYNCS:
       return isSetSyncs();
+    case BULK_IMPORTS:
+      return isSetBulkImports();
     }
     throw new IllegalStateException();
   }
@@ -993,12 +1066,93 @@
         return false;
     }
 
+    boolean this_present_bulkImports = true && this.isSetBulkImports();
+    boolean that_present_bulkImports = true && that.isSetBulkImports();
+    if (this_present_bulkImports || that_present_bulkImports) {
+      if (!(this_present_bulkImports && that_present_bulkImports))
+        return false;
+      if (!this.bulkImports.equals(that.bulkImports))
+        return false;
+    }
+
     return true;
   }
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_tableMap = true && (isSetTableMap());
+    list.add(present_tableMap);
+    if (present_tableMap)
+      list.add(tableMap);
+
+    boolean present_lastContact = true;
+    list.add(present_lastContact);
+    if (present_lastContact)
+      list.add(lastContact);
+
+    boolean present_name = true && (isSetName());
+    list.add(present_name);
+    if (present_name)
+      list.add(name);
+
+    boolean present_osLoad = true;
+    list.add(present_osLoad);
+    if (present_osLoad)
+      list.add(osLoad);
+
+    boolean present_holdTime = true;
+    list.add(present_holdTime);
+    if (present_holdTime)
+      list.add(holdTime);
+
+    boolean present_lookups = true;
+    list.add(present_lookups);
+    if (present_lookups)
+      list.add(lookups);
+
+    boolean present_indexCacheHits = true;
+    list.add(present_indexCacheHits);
+    if (present_indexCacheHits)
+      list.add(indexCacheHits);
+
+    boolean present_indexCacheRequest = true;
+    list.add(present_indexCacheRequest);
+    if (present_indexCacheRequest)
+      list.add(indexCacheRequest);
+
+    boolean present_dataCacheHits = true;
+    list.add(present_dataCacheHits);
+    if (present_dataCacheHits)
+      list.add(dataCacheHits);
+
+    boolean present_dataCacheRequest = true;
+    list.add(present_dataCacheRequest);
+    if (present_dataCacheRequest)
+      list.add(dataCacheRequest);
+
+    boolean present_logSorts = true && (isSetLogSorts());
+    list.add(present_logSorts);
+    if (present_logSorts)
+      list.add(logSorts);
+
+    boolean present_flushs = true;
+    list.add(present_flushs);
+    if (present_flushs)
+      list.add(flushs);
+
+    boolean present_syncs = true;
+    list.add(present_syncs);
+    if (present_syncs)
+      list.add(syncs);
+
+    boolean present_bulkImports = true && (isSetBulkImports());
+    list.add(present_bulkImports);
+    if (present_bulkImports)
+      list.add(bulkImports);
+
+    return list.hashCode();
   }
 
   @Override
@@ -1139,6 +1293,16 @@
         return lastComparison;
       }
     }
+    lastComparison = Boolean.valueOf(isSetBulkImports()).compareTo(other.isSetBulkImports());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetBulkImports()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.bulkImports, other.bulkImports);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
     return 0;
   }
 
@@ -1222,6 +1386,14 @@
     sb.append("syncs:");
     sb.append(this.syncs);
     first = false;
+    if (!first) sb.append(", ");
+    sb.append("bulkImports:");
+    if (this.bulkImports == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.bulkImports);
+    }
+    first = false;
     sb.append(")");
     return sb.toString();
   }
@@ -1272,14 +1444,14 @@
               {
                 org.apache.thrift.protocol.TMap _map0 = iprot.readMapBegin();
                 struct.tableMap = new HashMap<String,TableInfo>(2*_map0.size);
-                for (int _i1 = 0; _i1 < _map0.size; ++_i1)
+                String _key1;
+                TableInfo _val2;
+                for (int _i3 = 0; _i3 < _map0.size; ++_i3)
                 {
-                  String _key2;
-                  TableInfo _val3;
-                  _key2 = iprot.readString();
-                  _val3 = new TableInfo();
-                  _val3.read(iprot);
-                  struct.tableMap.put(_key2, _val3);
+                  _key1 = iprot.readString();
+                  _val2 = new TableInfo();
+                  _val2.read(iprot);
+                  struct.tableMap.put(_key1, _val2);
                 }
                 iprot.readMapEnd();
               }
@@ -1365,12 +1537,12 @@
               {
                 org.apache.thrift.protocol.TList _list4 = iprot.readListBegin();
                 struct.logSorts = new ArrayList<RecoveryStatus>(_list4.size);
-                for (int _i5 = 0; _i5 < _list4.size; ++_i5)
+                RecoveryStatus _elem5;
+                for (int _i6 = 0; _i6 < _list4.size; ++_i6)
                 {
-                  RecoveryStatus _elem6;
-                  _elem6 = new RecoveryStatus();
-                  _elem6.read(iprot);
-                  struct.logSorts.add(_elem6);
+                  _elem5 = new RecoveryStatus();
+                  _elem5.read(iprot);
+                  struct.logSorts.add(_elem5);
                 }
                 iprot.readListEnd();
               }
@@ -1395,6 +1567,25 @@
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
+          case 17: // BULK_IMPORTS
+            if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
+              {
+                org.apache.thrift.protocol.TList _list7 = iprot.readListBegin();
+                struct.bulkImports = new ArrayList<BulkImportStatus>(_list7.size);
+                BulkImportStatus _elem8;
+                for (int _i9 = 0; _i9 < _list7.size; ++_i9)
+                {
+                  _elem8 = new BulkImportStatus();
+                  _elem8.read(iprot);
+                  struct.bulkImports.add(_elem8);
+                }
+                iprot.readListEnd();
+              }
+              struct.setBulkImportsIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
           default:
             org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
         }
@@ -1414,10 +1605,10 @@
         oprot.writeFieldBegin(TABLE_MAP_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, struct.tableMap.size()));
-          for (Map.Entry<String, TableInfo> _iter7 : struct.tableMap.entrySet())
+          for (Map.Entry<String, TableInfo> _iter10 : struct.tableMap.entrySet())
           {
-            oprot.writeString(_iter7.getKey());
-            _iter7.getValue().write(oprot);
+            oprot.writeString(_iter10.getKey());
+            _iter10.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -1456,9 +1647,9 @@
         oprot.writeFieldBegin(LOG_SORTS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.logSorts.size()));
-          for (RecoveryStatus _iter8 : struct.logSorts)
+          for (RecoveryStatus _iter11 : struct.logSorts)
           {
-            _iter8.write(oprot);
+            _iter11.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -1470,6 +1661,18 @@
       oprot.writeFieldBegin(SYNCS_FIELD_DESC);
       oprot.writeI64(struct.syncs);
       oprot.writeFieldEnd();
+      if (struct.bulkImports != null) {
+        oprot.writeFieldBegin(BULK_IMPORTS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.bulkImports.size()));
+          for (BulkImportStatus _iter12 : struct.bulkImports)
+          {
+            _iter12.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
       oprot.writeFieldStop();
       oprot.writeStructEnd();
     }
@@ -1527,14 +1730,17 @@
       if (struct.isSetSyncs()) {
         optionals.set(12);
       }
-      oprot.writeBitSet(optionals, 13);
+      if (struct.isSetBulkImports()) {
+        optionals.set(13);
+      }
+      oprot.writeBitSet(optionals, 14);
       if (struct.isSetTableMap()) {
         {
           oprot.writeI32(struct.tableMap.size());
-          for (Map.Entry<String, TableInfo> _iter9 : struct.tableMap.entrySet())
+          for (Map.Entry<String, TableInfo> _iter13 : struct.tableMap.entrySet())
           {
-            oprot.writeString(_iter9.getKey());
-            _iter9.getValue().write(oprot);
+            oprot.writeString(_iter13.getKey());
+            _iter13.getValue().write(oprot);
           }
         }
       }
@@ -1568,9 +1774,9 @@
       if (struct.isSetLogSorts()) {
         {
           oprot.writeI32(struct.logSorts.size());
-          for (RecoveryStatus _iter10 : struct.logSorts)
+          for (RecoveryStatus _iter14 : struct.logSorts)
           {
-            _iter10.write(oprot);
+            _iter14.write(oprot);
           }
         }
       }
@@ -1580,24 +1786,33 @@
       if (struct.isSetSyncs()) {
         oprot.writeI64(struct.syncs);
       }
+      if (struct.isSetBulkImports()) {
+        {
+          oprot.writeI32(struct.bulkImports.size());
+          for (BulkImportStatus _iter15 : struct.bulkImports)
+          {
+            _iter15.write(oprot);
+          }
+        }
+      }
     }
 
     @Override
     public void read(org.apache.thrift.protocol.TProtocol prot, TabletServerStatus struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
-      BitSet incoming = iprot.readBitSet(13);
+      BitSet incoming = iprot.readBitSet(14);
       if (incoming.get(0)) {
         {
-          org.apache.thrift.protocol.TMap _map11 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.tableMap = new HashMap<String,TableInfo>(2*_map11.size);
-          for (int _i12 = 0; _i12 < _map11.size; ++_i12)
+          org.apache.thrift.protocol.TMap _map16 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.tableMap = new HashMap<String,TableInfo>(2*_map16.size);
+          String _key17;
+          TableInfo _val18;
+          for (int _i19 = 0; _i19 < _map16.size; ++_i19)
           {
-            String _key13;
-            TableInfo _val14;
-            _key13 = iprot.readString();
-            _val14 = new TableInfo();
-            _val14.read(iprot);
-            struct.tableMap.put(_key13, _val14);
+            _key17 = iprot.readString();
+            _val18 = new TableInfo();
+            _val18.read(iprot);
+            struct.tableMap.put(_key17, _val18);
           }
         }
         struct.setTableMapIsSet(true);
@@ -1640,14 +1855,14 @@
       }
       if (incoming.get(10)) {
         {
-          org.apache.thrift.protocol.TList _list15 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.logSorts = new ArrayList<RecoveryStatus>(_list15.size);
-          for (int _i16 = 0; _i16 < _list15.size; ++_i16)
+          org.apache.thrift.protocol.TList _list20 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.logSorts = new ArrayList<RecoveryStatus>(_list20.size);
+          RecoveryStatus _elem21;
+          for (int _i22 = 0; _i22 < _list20.size; ++_i22)
           {
-            RecoveryStatus _elem17;
-            _elem17 = new RecoveryStatus();
-            _elem17.read(iprot);
-            struct.logSorts.add(_elem17);
+            _elem21 = new RecoveryStatus();
+            _elem21.read(iprot);
+            struct.logSorts.add(_elem21);
           }
         }
         struct.setLogSortsIsSet(true);
@@ -1660,6 +1875,20 @@
         struct.syncs = iprot.readI64();
         struct.setSyncsIsSet(true);
       }
+      if (incoming.get(13)) {
+        {
+          org.apache.thrift.protocol.TList _list23 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.bulkImports = new ArrayList<BulkImportStatus>(_list23.size);
+          BulkImportStatus _elem24;
+          for (int _i25 = 0; _i25 < _list23.size; ++_i25)
+          {
+            _elem24 = new BulkImportStatus();
+            _elem24.read(iprot);
+            struct.bulkImports.add(_elem24);
+          }
+        }
+        struct.setBulkImportsIsSet(true);
+      }
     }
   }
 
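The TupleScheme changes above widen the optionals `BitSet` from 13 to 14 entries so the new `bulkImports` field is only serialized when present: the writer sets one bit per populated optional field, and the reader checks the same bit positions before deserializing. A minimal, self-contained sketch of that idea using plain `java.util.BitSet` (the class and method names here are illustrative, not the actual Thrift-generated code):

```java
import java.util.BitSet;

public class OptionalBitsetDemo {
    // Mirrors the TupleScheme pattern: set one bit per optional field that
    // is present, so the reader knows which fields follow on the wire.
    static BitSet writeOptionals(boolean syncsSet, boolean bulkImportsSet) {
        BitSet optionals = new BitSet();
        if (syncsSet) {
            optionals.set(12); // SYNCS was previously the last optional, at bit 12
        }
        if (bulkImportsSet) {
            optionals.set(13); // BULK_IMPORTS appends bit 13, widening the set to 14
        }
        return optionals;
    }

    public static void main(String[] args) {
        BitSet incoming = writeOptionals(true, true);
        // The reader consults the same positions before reading each field.
        System.out.println(incoming.get(12)); // syncs present
        System.out.println(incoming.get(13)); // bulkImports present
    }
}
```

Because new optional fields are appended at the end of the bitset, an old reader simply ignores bits it does not know about, which is what keeps this change wire-compatible.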
diff --git a/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletSplit.java b/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletSplit.java
index bd529b9..48d4465 100644
--- a/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletSplit.java
+++ b/core/src/main/java/org/apache/accumulo/core/master/thrift/TabletSplit.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TabletSplit implements org.apache.thrift.TBase<TabletSplit, TabletSplit._Fields>, java.io.Serializable, Cloneable, Comparable<TabletSplit> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TabletSplit implements org.apache.thrift.TBase<TabletSplit, TabletSplit._Fields>, java.io.Serializable, Cloneable, Comparable<TabletSplit> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TabletSplit");
 
   private static final org.apache.thrift.protocol.TField OLD_TABLET_FIELD_DESC = new org.apache.thrift.protocol.TField("oldTablet", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -322,7 +325,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_oldTablet = true && (isSetOldTablet());
+    list.add(present_oldTablet);
+    if (present_oldTablet)
+      list.add(oldTablet);
+
+    boolean present_newTablets = true && (isSetNewTablets());
+    list.add(present_newTablets);
+    if (present_newTablets)
+      list.add(newTablets);
+
+    return list.hashCode();
   }
 
   @Override
@@ -446,14 +461,14 @@
           case 2: // NEW_TABLETS
             if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
               {
-                org.apache.thrift.protocol.TList _list62 = iprot.readListBegin();
-                struct.newTablets = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list62.size);
-                for (int _i63 = 0; _i63 < _list62.size; ++_i63)
+                org.apache.thrift.protocol.TList _list78 = iprot.readListBegin();
+                struct.newTablets = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list78.size);
+                org.apache.accumulo.core.data.thrift.TKeyExtent _elem79;
+                for (int _i80 = 0; _i80 < _list78.size; ++_i80)
                 {
-                  org.apache.accumulo.core.data.thrift.TKeyExtent _elem64;
-                  _elem64 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-                  _elem64.read(iprot);
-                  struct.newTablets.add(_elem64);
+                  _elem79 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+                  _elem79.read(iprot);
+                  struct.newTablets.add(_elem79);
                 }
                 iprot.readListEnd();
               }
@@ -486,9 +501,9 @@
         oprot.writeFieldBegin(NEW_TABLETS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.newTablets.size()));
-          for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter65 : struct.newTablets)
+          for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter81 : struct.newTablets)
           {
-            _iter65.write(oprot);
+            _iter81.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -525,9 +540,9 @@
       if (struct.isSetNewTablets()) {
         {
           oprot.writeI32(struct.newTablets.size());
-          for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter66 : struct.newTablets)
+          for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter82 : struct.newTablets)
           {
-            _iter66.write(oprot);
+            _iter82.write(oprot);
           }
         }
       }
@@ -544,14 +559,14 @@
       }
       if (incoming.get(1)) {
         {
-          org.apache.thrift.protocol.TList _list67 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.newTablets = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list67.size);
-          for (int _i68 = 0; _i68 < _list67.size; ++_i68)
+          org.apache.thrift.protocol.TList _list83 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.newTablets = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list83.size);
+          org.apache.accumulo.core.data.thrift.TKeyExtent _elem84;
+          for (int _i85 = 0; _i85 < _list83.size; ++_i85)
           {
-            org.apache.accumulo.core.data.thrift.TKeyExtent _elem69;
-            _elem69 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-            _elem69.read(iprot);
-            struct.newTablets.add(_elem69);
+            _elem84 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+            _elem84.read(iprot);
+            struct.newTablets.add(_elem84);
           }
         }
         struct.setNewTabletsIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/MetadataLocationObtainer.java b/core/src/main/java/org/apache/accumulo/core/metadata/MetadataLocationObtainer.java
index c8c61aa..6336d12 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/MetadataLocationObtainer.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/MetadataLocationObtainer.java
@@ -27,6 +27,7 @@
 import java.util.SortedSet;
 import java.util.TreeMap;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -56,20 +57,21 @@
 import org.apache.accumulo.core.util.OpTimer;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class MetadataLocationObtainer implements TabletLocationObtainer {
-  private static final Logger log = Logger.getLogger(MetadataLocationObtainer.class);
+  private static final Logger log = LoggerFactory.getLogger(MetadataLocationObtainer.class);
+
   private SortedSet<Column> locCols;
   private ArrayList<Column> columns;
 
   public MetadataLocationObtainer() {
 
-    locCols = new TreeSet<Column>();
+    locCols = new TreeSet<>();
     locCols.add(new Column(TextUtil.getBytes(TabletsSection.CurrentLocationColumnFamily.NAME), null, null));
     locCols.add(TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.toColumn());
-    columns = new ArrayList<Column>(locCols);
+    columns = new ArrayList<>(locCols);
   }
 
   @Override
@@ -77,23 +79,27 @@
       throws AccumuloSecurityException, AccumuloException {
 
     try {
-      OpTimer opTimer = null;
-      if (log.isTraceEnabled())
-        opTimer = new OpTimer(log, Level.TRACE).start("Looking up in " + src.tablet_extent.getTableId() + " row=" + TextUtil.truncate(row) + "  extent="
-            + src.tablet_extent + " tserver=" + src.tablet_location);
+
+      OpTimer timer = null;
+
+      if (log.isTraceEnabled()) {
+        log.trace("tid={} Looking up in {} row={} extent={} tserver={}", Thread.currentThread().getId(), src.tablet_extent.getTableId(),
+            TextUtil.truncate(row), src.tablet_extent, src.tablet_location);
+        timer = new OpTimer().start();
+      }
 
       Range range = new Range(row, true, stopRow, true);
 
-      TreeMap<Key,Value> encodedResults = new TreeMap<Key,Value>();
-      TreeMap<Key,Value> results = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> encodedResults = new TreeMap<>();
+      TreeMap<Key,Value> results = new TreeMap<>();
 
       // Use the whole row iterator so that a partial mutations is not read. The code that extracts locations for tablets does a sanity check to ensure there is
       // only one location. Reading a partial mutation could make it appear there are multiple locations when there are not.
-      List<IterInfo> serverSideIteratorList = new ArrayList<IterInfo>();
+      List<IterInfo> serverSideIteratorList = new ArrayList<>();
       serverSideIteratorList.add(new IterInfo(10000, WholeRowIterator.class.getName(), "WRI"));
       Map<String,Map<String,String>> serverSideIteratorOptions = Collections.emptyMap();
       boolean more = ThriftScanner.getBatchFromServer(context, range, src.tablet_extent, src.tablet_location, encodedResults, locCols, serverSideIteratorList,
-          serverSideIteratorOptions, Constants.SCAN_BATCH_SIZE, Authorizations.EMPTY, false);
+          serverSideIteratorOptions, Constants.SCAN_BATCH_SIZE, Authorizations.EMPTY, false, 0L, null);
 
       decodeRows(encodedResults, results);
 
@@ -101,13 +107,16 @@
         range = new Range(results.lastKey().followingKey(PartialKey.ROW_COLFAM_COLQUAL_COLVIS_TIME), true, new Key(stopRow).followingKey(PartialKey.ROW), false);
         encodedResults.clear();
         more = ThriftScanner.getBatchFromServer(context, range, src.tablet_extent, src.tablet_location, encodedResults, locCols, serverSideIteratorList,
-            serverSideIteratorOptions, Constants.SCAN_BATCH_SIZE, Authorizations.EMPTY, false);
+            serverSideIteratorOptions, Constants.SCAN_BATCH_SIZE, Authorizations.EMPTY, false, 0L, null);
 
         decodeRows(encodedResults, results);
       }
 
-      if (opTimer != null)
-        opTimer.stop("Got " + results.size() + " results  from " + src.tablet_extent + " in %DURATION%");
+      if (timer != null) {
+        timer.stop();
+        log.trace("tid={} Got {} results from {} in {}", Thread.currentThread().getId(), results.size(), src.tablet_extent,
+            String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+      }
 
       // if (log.isTraceEnabled()) log.trace("results "+results);
 
@@ -115,15 +124,15 @@
 
     } catch (AccumuloServerException ase) {
       if (log.isTraceEnabled())
-        log.trace(src.tablet_extent.getTableId() + " lookup failed, " + src.tablet_location + " server side exception");
+        log.trace("{} lookup failed, {} server side exception", src.tablet_extent.getTableId(), src.tablet_location);
       throw ase;
     } catch (NotServingTabletException e) {
       if (log.isTraceEnabled())
-        log.trace(src.tablet_extent.getTableId() + " lookup failed, " + src.tablet_location + " not serving " + src.tablet_extent);
+        log.trace("{} lookup failed, {} not serving {}", src.tablet_extent.getTableId(), src.tablet_location, src.tablet_extent);
       parent.invalidateCache(src.tablet_extent);
     } catch (AccumuloException e) {
       if (log.isTraceEnabled())
-        log.trace(src.tablet_extent.getTableId() + " lookup failed", e);
+        log.trace("{} lookup failed", src.tablet_extent.getTableId(), e);
       parent.invalidateCache(context.getInstance(), src.tablet_location);
     }
 
@@ -140,11 +149,20 @@
     }
   }
 
+  private static class SettableScannerOptions extends ScannerOptions {
+    public ScannerOptions setColumns(SortedSet<Column> locCols) {
+      this.fetchedColumns = locCols;
+      // see comment in lookupTablet about why iterator is used
+      addScanIterator(new IteratorSetting(10000, "WRI", WholeRowIterator.class.getName()));
+      return this;
+    }
+  }
+
   @Override
   public List<TabletLocation> lookupTablets(ClientContext context, String tserver, Map<KeyExtent,List<Range>> tabletsRanges, TabletLocator parent)
       throws AccumuloSecurityException, AccumuloException {
 
-    final TreeMap<Key,Value> results = new TreeMap<Key,Value>();
+    final TreeMap<Key,Value> results = new TreeMap<>();
 
     ResultReceiver rr = new ResultReceiver() {
 
@@ -160,30 +178,26 @@
       }
     };
 
-    ScannerOptions opts = new ScannerOptions() {
-      ScannerOptions setOpts() {
-        this.fetchedColumns = locCols;
-        // see comment in lookupTablet about why iterator is used
-        addScanIterator(new IteratorSetting(10000, "WRI", WholeRowIterator.class.getName()));
-        return this;
-      }
-    }.setOpts();
+    ScannerOptions opts = null;
+    try (SettableScannerOptions unsetOpts = new SettableScannerOptions()) {
+      opts = unsetOpts.setColumns(locCols);
+    }
 
-    Map<KeyExtent,List<Range>> unscanned = new HashMap<KeyExtent,List<Range>>();
-    Map<KeyExtent,List<Range>> failures = new HashMap<KeyExtent,List<Range>>();
+    Map<KeyExtent,List<Range>> unscanned = new HashMap<>();
+    Map<KeyExtent,List<Range>> failures = new HashMap<>();
     try {
       TabletServerBatchReaderIterator.doLookup(context, tserver, tabletsRanges, failures, unscanned, rr, columns, opts, Authorizations.EMPTY);
       if (failures.size() > 0) {
         // invalidate extents in parents cache
         if (log.isTraceEnabled())
-          log.trace("lookupTablets failed for " + failures.size() + " extents");
+          log.trace("lookupTablets failed for {} extents", failures.size());
         parent.invalidateCache(failures.keySet());
       }
     } catch (IOException e) {
-      log.trace("lookupTablets failed server=" + tserver, e);
+      log.trace("lookupTablets failed server={}", tserver, e);
       parent.invalidateCache(context.getInstance(), tserver);
     } catch (AccumuloServerException e) {
-      log.trace("lookupTablets failed server=" + tserver, e);
+      log.trace("lookupTablets failed server={}", tserver, e);
       throw e;
     }
 
@@ -198,8 +212,8 @@
     Value prevRow = null;
     KeyExtent ke;
 
-    List<TabletLocation> results = new ArrayList<TabletLocation>();
-    ArrayList<KeyExtent> locationless = new ArrayList<KeyExtent>();
+    List<TabletLocation> results = new ArrayList<>();
+    ArrayList<KeyExtent> locationless = new ArrayList<>();
 
     Text lastRowFromKey = new Text();
 
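The MetadataLocationObtainer hunks above replace the log4j `OpTimer(log, Level.TRACE)` idiom with a guarded timer plus slf4j-style parameterized logging, so both the timing and the message formatting cost nothing unless trace is enabled. A self-contained approximation of that pattern (class and method names are hypothetical; `System.nanoTime`/`printf` stand in for `OpTimer` and the slf4j logger):

```java
public class GuardedTimerDemo {
    static boolean traceEnabled = true;

    // Time the work only when tracing, and format the message only after
    // the guard passes, mirroring the if (log.isTraceEnabled()) pattern.
    public static double timedLookup() {
        long start = 0;
        if (traceEnabled) {
            start = System.nanoTime(); // start the timer only under trace
        }
        // ... the actual metadata lookup would happen here ...
        if (traceEnabled) {
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.printf("tid=%d Got results in %.3f secs%n",
                Thread.currentThread().getId(), secs);
            return secs;
        }
        return -1;
    }

    public static void main(String[] args) {
        timedLookup();
    }
}
```

The same motivation drives the `{}` placeholders in the new `log.trace(...)` calls: string concatenation in the old code ran on every call, whereas parameterized messages are only rendered when the level is enabled.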
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java
index 292ba3b..2052563 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/RootTable.java
@@ -18,7 +18,6 @@
 
 import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.data.impl.KeyExtent;
-import org.apache.hadoop.io.Text;
 
 /**
  *
@@ -41,9 +40,10 @@
   public static final String ZROOT_TABLET_FUTURE_LOCATION = ZROOT_TABLET + "/future_location";
   public static final String ZROOT_TABLET_LAST_LOCATION = ZROOT_TABLET + "/lastlocation";
   public static final String ZROOT_TABLET_WALOGS = ZROOT_TABLET + "/walogs";
+  public static final String ZROOT_TABLET_CURRENT_LOGS = ZROOT_TABLET + "/current_logs";
   public static final String ZROOT_TABLET_PATH = ZROOT_TABLET + "/dir";
 
-  public static final KeyExtent EXTENT = new KeyExtent(new Text(ID), null, null);
-  public static final KeyExtent OLD_EXTENT = new KeyExtent(new Text(MetadataTable.ID), KeyExtent.getMetadataEntry(new Text(MetadataTable.ID), null), null);
+  public static final KeyExtent EXTENT = new KeyExtent(ID, null, null);
+  public static final KeyExtent OLD_EXTENT = new KeyExtent(MetadataTable.ID, KeyExtent.getMetadataEntry(MetadataTable.ID, null), null);
 
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
index 236698c..c93987d 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
@@ -49,7 +49,7 @@
       return new Range(new Key(tableId + ';'), true, new Key(tableId + '<').followingKey(PartialKey.ROW), false);
     }
 
-    public static Text getRow(Text tableId, Text endRow) {
+    public static Text getRow(String tableId, Text endRow) {
       Text entry = new Text(tableId);
 
       if (endRow == null) {
@@ -137,6 +137,13 @@
     }
 
     /**
+     * Column for storing the location where a tablet was suspended, as a demand for reassignment.
+     */
+    public static class SuspendLocationColumn {
+      public static final ColumnFQ SUSPEND_COLUMN = new ColumnFQ(new Text("suspend"), new Text("loc"));
+    }
+
+    /**
      * Temporary markers that indicate a tablet loaded a bulk file
      */
     public static class BulkFileColumnFamily {
@@ -247,18 +254,14 @@
     }
 
     /**
-     * Extract the table ID from the colfam into the given {@link Text}
+     * Extract the table ID from the colfam
      *
      * @param k
      *          Key to extract from
-     * @param buff
-     *          Text to place table ID into
      */
-    public static void getTableId(Key k, Text buff) {
+    public static String getTableId(Key k) {
       requireNonNull(k);
-      requireNonNull(buff);
-
-      k.getColumnQualifier(buff);
+      return k.getColumnQualifier().toString();
     }
 
     /**
@@ -279,4 +282,5 @@
       buff.set(buff.getBytes(), section.getRowPrefix().length(), buff.getLength() - section.getRowPrefix().length());
     }
   }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
index 7e7ae38..94d19b5 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
@@ -95,27 +95,10 @@
      * @param k
      *          Key to extract from
      * @return The table ID
-     * @see #getTableId(Key,Text)
      */
     public static String getTableId(Key k) {
-      Text buff = new Text();
-      getTableId(k, buff);
-      return buff.toString();
-    }
-
-    /**
-     * Extract the table ID from the key into the given {@link Text}
-     *
-     * @param k
-     *          Key to extract from
-     * @param buff
-     *          Text to place table ID into
-     */
-    public static void getTableId(Key k, Text buff) {
       requireNonNull(k);
-      requireNonNull(buff);
-
-      k.getColumnQualifier(buff);
+      return k.getColumnQualifier().toString();
     }
 
     /**
@@ -141,8 +124,8 @@
       scanner.fetchColumnFamily(NAME);
     }
 
-    public static Mutation add(Mutation m, Text tableId, Value v) {
-      m.put(NAME, tableId, v);
+    public static Mutation add(Mutation m, String tableId, Value v) {
+      m.put(NAME, new Text(tableId), v);
       return m;
     }
   }
@@ -234,8 +217,8 @@
      *          Serialized Status msg
      * @return The original Mutation
      */
-    public static Mutation add(Mutation m, Text tableId, Value v) {
-      m.put(NAME, tableId, v);
+    public static Mutation add(Mutation m, String tableId, Value v) {
+      m.put(NAME, new Text(tableId), v);
       return m;
     }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
index 4b61b53..7076757 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationTable.java
@@ -31,11 +31,11 @@
 import org.apache.accumulo.core.client.TableOfflineException;
 import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.client.impl.Tables;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -87,7 +87,7 @@
   }
 
   public static boolean isOnline(Connector conn) {
-    return conn.getInstance() instanceof MockInstance || TableState.ONLINE == Tables.getTableState(conn.getInstance(), ID);
+    return DeprecationUtil.isMockInstance(conn.getInstance()) || TableState.ONLINE == Tables.getTableState(conn.getInstance(), ID);
   }
 
   public static void setOnline(Connector conn) throws AccumuloSecurityException, AccumuloException {
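Routing the check through `DeprecationUtil.isMockInstance` removes the direct import of the deprecated `MockInstance` class. One common way to implement such an indirection is a class-name comparison, which avoids any compile-time reference to the deprecated type; the sketch below is an assumption about the approach, not `DeprecationUtil`'s actual code:

```java
public class DeprecationCheck {
  // Hypothetical sketch: comparing by class name means the deprecated mock
  // class is never referenced at compile time, so no deprecation warning.
  static boolean isMockInstance(Object instance) {
    return instance != null
        && "org.apache.accumulo.core.client.mock.MockInstance"
            .equals(instance.getClass().getName());
  }

  public static void main(String[] args) {
    System.out.println(isMockInstance("not a mock")); // false
    System.out.println(isMockInstance(null));         // false
  }
}
```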
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/KeyValues.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/KeyValues.java
index 3c6f6ed..dd70af1 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/KeyValues.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/KeyValues.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class KeyValues implements org.apache.thrift.TBase<KeyValues, KeyValues._Fields>, java.io.Serializable, Cloneable, Comparable<KeyValues> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class KeyValues implements org.apache.thrift.TBase<KeyValues, KeyValues._Fields>, java.io.Serializable, Cloneable, Comparable<KeyValues> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("KeyValues");
 
   private static final org.apache.thrift.protocol.TField KEY_VALUES_FIELD_DESC = new org.apache.thrift.protocol.TField("keyValues", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -263,7 +266,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_keyValues = true && (isSetKeyValues());
+    list.add(present_keyValues);
+    if (present_keyValues)
+      list.add(keyValues);
+
+    return list.hashCode();
   }
 
   @Override
@@ -359,12 +369,12 @@
               {
                 org.apache.thrift.protocol.TList _list8 = iprot.readListBegin();
                 struct.keyValues = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyValue>(_list8.size);
-                for (int _i9 = 0; _i9 < _list8.size; ++_i9)
+                org.apache.accumulo.core.data.thrift.TKeyValue _elem9;
+                for (int _i10 = 0; _i10 < _list8.size; ++_i10)
                 {
-                  org.apache.accumulo.core.data.thrift.TKeyValue _elem10;
-                  _elem10 = new org.apache.accumulo.core.data.thrift.TKeyValue();
-                  _elem10.read(iprot);
-                  struct.keyValues.add(_elem10);
+                  _elem9 = new org.apache.accumulo.core.data.thrift.TKeyValue();
+                  _elem9.read(iprot);
+                  struct.keyValues.add(_elem9);
                 }
                 iprot.readListEnd();
               }
@@ -441,12 +451,12 @@
         {
           org.apache.thrift.protocol.TList _list13 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.keyValues = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyValue>(_list13.size);
-          for (int _i14 = 0; _i14 < _list13.size; ++_i14)
+          org.apache.accumulo.core.data.thrift.TKeyValue _elem14;
+          for (int _i15 = 0; _i15 < _list13.size; ++_i15)
           {
-            org.apache.accumulo.core.data.thrift.TKeyValue _elem15;
-            _elem15 = new org.apache.accumulo.core.data.thrift.TKeyValue();
-            _elem15.read(iprot);
-            struct.keyValues.add(_elem15);
+            _elem14 = new org.apache.accumulo.core.data.thrift.TKeyValue();
+            _elem14.read(iprot);
+            struct.keyValues.add(_elem14);
           }
         }
         struct.setKeyValuesIsSet(true);
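Thrift 0.9.1 emitted `return 0;` for `hashCode()`, which is legal but collapses every hash-based collection holding these structs into a single bucket. Thrift 0.9.3 instead builds a list of presence flags and present values and delegates to `List.hashCode()`, so equal structs still hash equal while distinct ones usually spread out. A self-contained sketch of the generated pattern for a hypothetical one-field struct:

```java
import java.util.ArrayList;
import java.util.List;

public class ThriftHashSketch {
  // Hypothetical one-field struct mirroring the generated pattern above.
  static class OneField {
    List<String> keyValues; // optional field; null means unset

    boolean isSetKeyValues() {
      return keyValues != null;
    }

    @Override
    public int hashCode() {
      List<Object> list = new ArrayList<Object>();
      // Record whether the field is set, then (if set) the field itself.
      boolean present_keyValues = isSetKeyValues();
      list.add(present_keyValues);
      if (present_keyValues)
        list.add(keyValues);
      return list.hashCode();
    }
  }

  public static void main(String[] args) {
    OneField a = new OneField();
    OneField b = new OneField();
    System.out.println(a.hashCode() == b.hashCode()); // both unset: true
    a.keyValues = new ArrayList<String>();
    a.keyValues.add("k");
    System.out.println(a.hashCode() == b.hashCode()); // set vs unset: false
  }
}
```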
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationErrorCode.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationErrorCode.java
index 75dd28c..2ec6a15 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationErrorCode.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationErrorCode.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationException.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationException.java
index 331d767..5b4a9d1 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationException.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/RemoteReplicationException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class RemoteReplicationException extends TException implements org.apache.thrift.TBase<RemoteReplicationException, RemoteReplicationException._Fields>, java.io.Serializable, Cloneable, Comparable<RemoteReplicationException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class RemoteReplicationException extends TException implements org.apache.thrift.TBase<RemoteReplicationException, RemoteReplicationException._Fields>, java.io.Serializable, Cloneable, Comparable<RemoteReplicationException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("RemoteReplicationException");
 
   private static final org.apache.thrift.protocol.TField CODE_FIELD_DESC = new org.apache.thrift.protocol.TField("code", org.apache.thrift.protocol.TType.I32, (short)1);
@@ -318,7 +321,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_code = true && (isSetCode());
+    list.add(present_code);
+    if (present_code)
+      list.add(code.getValue());
+
+    boolean present_reason = true && (isSetReason());
+    list.add(present_reason);
+    if (present_reason)
+      list.add(reason);
+
+    return list.hashCode();
   }
 
   @Override
@@ -429,7 +444,7 @@
         switch (schemeField.id) {
           case 1: // CODE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.code = RemoteReplicationErrorCode.findByValue(iprot.readI32());
+              struct.code = org.apache.accumulo.core.replication.thrift.RemoteReplicationErrorCode.findByValue(iprot.readI32());
               struct.setCodeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -506,7 +521,7 @@
       TTupleProtocol iprot = (TTupleProtocol) prot;
       BitSet incoming = iprot.readBitSet(2);
       if (incoming.get(0)) {
-        struct.code = RemoteReplicationErrorCode.findByValue(iprot.readI32());
+        struct.code = org.apache.accumulo.core.replication.thrift.RemoteReplicationErrorCode.findByValue(iprot.readI32());
         struct.setCodeIsSet(true);
       }
       if (incoming.get(1)) {
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinator.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinator.java
index 1314802..0ceabcf 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinator.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinator.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ReplicationCoordinator {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ReplicationCoordinator {
 
   public interface Iface {
 
@@ -533,7 +536,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_remoteTableId = true && (isSetRemoteTableId());
+      list.add(present_remoteTableId);
+      if (present_remoteTableId)
+        list.add(remoteTableId);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -992,7 +1007,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_e = true && (isSetE());
+      list.add(present_e);
+      if (present_e)
+        list.add(e);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorErrorCode.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorErrorCode.java
index 8c56d61..545656b 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorErrorCode.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorErrorCode.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorException.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorException.java
index 5e1feae..5e3b99d 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorException.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationCoordinatorException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ReplicationCoordinatorException extends TException implements org.apache.thrift.TBase<ReplicationCoordinatorException, ReplicationCoordinatorException._Fields>, java.io.Serializable, Cloneable, Comparable<ReplicationCoordinatorException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ReplicationCoordinatorException extends TException implements org.apache.thrift.TBase<ReplicationCoordinatorException, ReplicationCoordinatorException._Fields>, java.io.Serializable, Cloneable, Comparable<ReplicationCoordinatorException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ReplicationCoordinatorException");
 
   private static final org.apache.thrift.protocol.TField CODE_FIELD_DESC = new org.apache.thrift.protocol.TField("code", org.apache.thrift.protocol.TType.I32, (short)1);
@@ -318,7 +321,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_code = true && (isSetCode());
+    list.add(present_code);
+    if (present_code)
+      list.add(code.getValue());
+
+    boolean present_reason = true && (isSetReason());
+    list.add(present_reason);
+    if (present_reason)
+      list.add(reason);
+
+    return list.hashCode();
   }
 
   @Override
@@ -429,7 +444,7 @@
         switch (schemeField.id) {
           case 1: // CODE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.code = ReplicationCoordinatorErrorCode.findByValue(iprot.readI32());
+              struct.code = org.apache.accumulo.core.replication.thrift.ReplicationCoordinatorErrorCode.findByValue(iprot.readI32());
               struct.setCodeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -506,7 +521,7 @@
       TTupleProtocol iprot = (TTupleProtocol) prot;
       BitSet incoming = iprot.readBitSet(2);
       if (incoming.get(0)) {
-        struct.code = ReplicationCoordinatorErrorCode.findByValue(iprot.readI32());
+        struct.code = org.apache.accumulo.core.replication.thrift.ReplicationCoordinatorErrorCode.findByValue(iprot.readI32());
         struct.setCodeIsSet(true);
       }
       if (incoming.get(1)) {
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationServicer.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationServicer.java
index d2ff11b..8403b56 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationServicer.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/ReplicationServicer.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ReplicationServicer {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ReplicationServicer {
 
   public interface Iface {
 
@@ -753,7 +756,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_remoteTableId = true && (isSetRemoteTableId());
+      list.add(present_remoteTableId);
+      if (present_remoteTableId)
+        list.add(remoteTableId);
+
+      boolean present_data = true && (isSetData());
+      list.add(present_data);
+      if (present_data)
+        list.add(data);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -1200,7 +1220,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Long.valueOf(getSuccess());
+        return getSuccess();
 
       case E:
         return getE();
@@ -1260,7 +1280,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_e = true && (isSetE());
+      list.add(present_e);
+      if (present_e)
+        list.add(e);
+
+      return list.hashCode();
     }
 
     @Override
@@ -1773,7 +1805,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_remoteTableId = true && (isSetRemoteTableId());
+      list.add(present_remoteTableId);
+      if (present_remoteTableId)
+        list.add(remoteTableId);
+
+      boolean present_data = true && (isSetData());
+      list.add(present_data);
+      if (present_data)
+        list.add(data);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -2220,7 +2269,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Long.valueOf(getSuccess());
+        return getSuccess();
 
       case E:
         return getE();
@@ -2280,7 +2329,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_e = true && (isSetE());
+      list.add(present_e);
+      if (present_e)
+        list.add(e);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/replication/thrift/WalEdits.java b/core/src/main/java/org/apache/accumulo/core/replication/thrift/WalEdits.java
index 4459dcd..9b62fa1 100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/thrift/WalEdits.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/thrift/WalEdits.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class WalEdits implements org.apache.thrift.TBase<WalEdits, WalEdits._Fields>, java.io.Serializable, Cloneable, Comparable<WalEdits> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class WalEdits implements org.apache.thrift.TBase<WalEdits, WalEdits._Fields>, java.io.Serializable, Cloneable, Comparable<WalEdits> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("WalEdits");
 
   private static final org.apache.thrift.protocol.TField EDITS_FIELD_DESC = new org.apache.thrift.protocol.TField("edits", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -260,7 +263,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_edits = true && (isSetEdits());
+    list.add(present_edits);
+    if (present_edits)
+      list.add(edits);
+
+    return list.hashCode();
   }
 
   @Override
@@ -305,7 +315,7 @@
     if (this.edits == null) {
       sb.append("null");
     } else {
-      sb.append(this.edits);
+      org.apache.thrift.TBaseHelper.toString(this.edits, sb);
     }
     first = false;
     sb.append(")");
@@ -356,11 +366,11 @@
               {
                 org.apache.thrift.protocol.TList _list0 = iprot.readListBegin();
                 struct.edits = new ArrayList<ByteBuffer>(_list0.size);
-                for (int _i1 = 0; _i1 < _list0.size; ++_i1)
+                ByteBuffer _elem1;
+                for (int _i2 = 0; _i2 < _list0.size; ++_i2)
                 {
-                  ByteBuffer _elem2;
-                  _elem2 = iprot.readBinary();
-                  struct.edits.add(_elem2);
+                  _elem1 = iprot.readBinary();
+                  struct.edits.add(_elem1);
                 }
                 iprot.readListEnd();
               }
@@ -437,11 +447,11 @@
         {
           org.apache.thrift.protocol.TList _list5 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.edits = new ArrayList<ByteBuffer>(_list5.size);
-          for (int _i6 = 0; _i6 < _list5.size; ++_i6)
+          ByteBuffer _elem6;
+          for (int _i7 = 0; _i7 < _list5.size; ++_i7)
           {
-            ByteBuffer _elem7;
-            _elem7 = iprot.readBinary();
-            struct.edits.add(_elem7);
+            _elem6 = iprot.readBinary();
+            struct.edits.add(_elem6);
           }
         }
         struct.setEditsIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java b/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java
index f85ea10..017134e 100644
--- a/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java
+++ b/core/src/main/java/org/apache/accumulo/core/rpc/SslConnectionParams.java
@@ -209,12 +209,24 @@
     return trustStoreType;
   }
 
+  // Work around THRIFT-3450 ... fixed with b9641e094...
+  static class TSSLTransportParametersHack extends TSSLTransportParameters {
+    TSSLTransportParametersHack(String clientProtocol) {
+      super(clientProtocol, new String[] {});
+      this.cipherSuites = null;
+    }
+  }
+
   public TSSLTransportParameters getTTransportParams() {
     if (useJsse)
       throw new IllegalStateException("Cannot get TTransportParams for JSSE configuration.");
 
-    // Null cipherSuites is implicitly handled
-    TSSLTransportParameters params = new TSSLTransportParameters(clientProtocol, cipherSuites);
+    TSSLTransportParameters params;
+    if (cipherSuites != null) {
+      params = new TSSLTransportParameters(clientProtocol, cipherSuites);
+    } else {
+      params = new TSSLTransportParametersHack(clientProtocol);
+    }
 
     params.requireClientAuth(clientAuth);
     if (keyStoreSet) {
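The `TSSLTransportParametersHack` above works around THRIFT-3450, where the base constructor cannot accept a null cipher list: the subclass satisfies the constructor with an empty array, then resets the protected field to null. A self-contained illustration of the pattern, with a hypothetical `BaseParams` standing in for Thrift's `TSSLTransportParameters`:

```java
public class ProtectedFieldWorkaround {
  // Hypothetical stand-in for a library class whose constructor rejects null.
  static class BaseParams {
    protected String[] cipherSuites;

    BaseParams(String protocol, String[] suites) {
      if (suites == null)
        throw new NullPointerException("cipher suites");
      this.cipherSuites = suites.clone();
    }
  }

  // The workaround: pass an empty array to get past the constructor,
  // then restore the null the caller actually wanted.
  static class ParamsHack extends BaseParams {
    ParamsHack(String protocol) {
      super(protocol, new String[] {});
      this.cipherSuites = null;
    }
  }

  public static void main(String[] args) {
    System.out.println(new ParamsHack("TLS").cipherSuites == null); // true
  }
}
```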
diff --git a/core/src/main/java/org/apache/accumulo/core/rpc/ThriftUtil.java b/core/src/main/java/org/apache/accumulo/core/rpc/ThriftUtil.java
index be4238e..49e4349 100644
--- a/core/src/main/java/org/apache/accumulo/core/rpc/ThriftUtil.java
+++ b/core/src/main/java/org/apache/accumulo/core/rpc/ThriftUtil.java
@@ -59,7 +59,7 @@
 
   private static final TraceProtocolFactory protocolFactory = new TraceProtocolFactory();
   private static final TFramedTransport.Factory transportFactory = new TFramedTransport.Factory(Integer.MAX_VALUE);
-  private static final Map<Integer,TTransportFactory> factoryCache = new HashMap<Integer,TTransportFactory>();
+  private static final Map<Integer,TTransportFactory> factoryCache = new HashMap<>();
 
   public static final String GSSAPI = "GSSAPI", DIGEST_MD5 = "DIGEST-MD5";
 
diff --git a/core/src/main/java/org/apache/accumulo/core/sample/impl/DataoutputHasher.java b/core/src/main/java/org/apache/accumulo/core/sample/impl/DataoutputHasher.java
new file mode 100644
index 0000000..d243dfe
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/sample/impl/DataoutputHasher.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.sample.impl;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+
+import com.google.common.hash.Hasher;
+
+public class DataoutputHasher implements DataOutput {
+
+  private Hasher hasher;
+
+  public DataoutputHasher(Hasher hasher) {
+    this.hasher = hasher;
+  }
+
+  @Override
+  public void write(int b) throws IOException {
+    hasher.putByte((byte) (0xff & b));
+  }
+
+  @Override
+  public void write(byte[] b) throws IOException {
+    hasher.putBytes(b);
+  }
+
+  @Override
+  public void write(byte[] b, int off, int len) throws IOException {
+    hasher.putBytes(b, off, len);
+  }
+
+  @Override
+  public void writeBoolean(boolean v) throws IOException {
+    hasher.putBoolean(v);
+  }
+
+  @Override
+  public void writeByte(int v) throws IOException {
+    hasher.putByte((byte) (0xff & v));
+
+  }
+
+  @Override
+  public void writeShort(int v) throws IOException {
+    hasher.putShort((short) (0xffff & v));
+  }
+
+  @Override
+  public void writeChar(int v) throws IOException {
+    hasher.putChar((char) v);
+  }
+
+  @Override
+  public void writeInt(int v) throws IOException {
+    hasher.putInt(v);
+  }
+
+  @Override
+  public void writeLong(long v) throws IOException {
+    hasher.putLong(v);
+  }
+
+  @Override
+  public void writeFloat(float v) throws IOException {
+    hasher.putFloat(v);
+  }
+
+  @Override
+  public void writeDouble(double v) throws IOException {
+    hasher.putDouble(v);
+  }
+
+  @Override
+  public void writeBytes(String s) throws IOException {
+    for (int i = 0; i < s.length(); i++) {
+      hasher.putByte((byte) (0xff & s.charAt(i)));
+    }
+  }
+
+  @Override
+  public void writeChars(String s) throws IOException {
+    hasher.putString(s);
+
+  }
+
+  @Override
+  public void writeUTF(String s) throws IOException {
+    hasher.putInt(s.length());
+    hasher.putBytes(s.getBytes(StandardCharsets.UTF_8));
+  }
+}
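`DataoutputHasher` adapts a Guava `Hasher` to `DataOutput`, so Hadoop `Writable`s can stream their serialized form straight into a hash function without buffering. The same shape is available with only the JDK by pointing a `DataOutputStream` at a `DigestOutputStream`; this is an illustrative equivalent (assuming Java 11+ for `OutputStream.nullOutputStream()`), not Accumulo code:

```java
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.security.DigestOutputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class DigestDataOutputDemo {
  static byte[] hashOf(int x, long y) throws IOException, NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    // Every byte written through the DataOutput feeds the digest; the
    // null sink discards the bytes themselves.
    DataOutput out = new DataOutputStream(
        new DigestOutputStream(OutputStream.nullOutputStream(), md));
    out.writeInt(x);
    out.writeLong(y);
    return md.digest();
  }

  public static void main(String[] args) throws Exception {
    // Identical write sequences produce identical hashes.
    System.out.println(Arrays.equals(hashOf(1, 2L), hashOf(1, 2L))); // true
    System.out.println(Arrays.equals(hashOf(1, 2L), hashOf(1, 3L))); // false
  }
}
```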
diff --git a/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerConfigurationImpl.java b/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerConfigurationImpl.java
new file mode 100644
index 0000000..d3e2fe7
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerConfigurationImpl.java
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.sample.impl;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.tabletserver.thrift.TSamplerConfiguration;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.hadoop.io.Writable;
+
+public class SamplerConfigurationImpl implements Writable {
+  private String className;
+  private Map<String,String> options;
+
+  public SamplerConfigurationImpl(DataInput in) throws IOException {
+    readFields(in);
+  }
+
+  public SamplerConfigurationImpl(SamplerConfiguration sc) {
+    this.className = sc.getSamplerClassName();
+    this.options = new HashMap<>(sc.getOptions());
+  }
+
+  public SamplerConfigurationImpl(String className, Map<String,String> options) {
+    this.className = className;
+    this.options = options;
+  }
+
+  public SamplerConfigurationImpl() {}
+
+  public String getClassName() {
+    return className;
+  }
+
+  public Map<String,String> getOptions() {
+    return Collections.unmodifiableMap(options);
+  }
+
+  @Override
+  public int hashCode() {
+    return 31 * className.hashCode() + options.hashCode();
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (o instanceof SamplerConfigurationImpl) {
+      SamplerConfigurationImpl osc = (SamplerConfigurationImpl) o;
+
+      return className.equals(osc.className) && options.equals(osc.options);
+    }
+
+    return false;
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    // The Writable serialization methods for this class are called by RFile and therefore must be very stable. An alternative way to serialize this class would
+    // be to use Thrift. That was not done here in order to avoid making RFile depend on Thrift.
+
+    // versioning info
+    out.write(1);
+
+    out.writeUTF(className);
+
+    out.writeInt(options.size());
+
+    for (Entry<String,String> entry : options.entrySet()) {
+      out.writeUTF(entry.getKey());
+      out.writeUTF(entry.getValue());
+    }
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    int version = in.readByte();
+
+    if (version != 1) {
+      throw new IllegalArgumentException("Unexpected version " + version);
+    }
+
+    className = in.readUTF();
+
+    options = new HashMap<>();
+
+    int num = in.readInt();
+
+    for (int i = 0; i < num; i++) {
+      String key = in.readUTF();
+      String val = in.readUTF();
+      options.put(key, val);
+    }
+  }
+
+  public SamplerConfiguration toSamplerConfiguration() {
+    SamplerConfiguration sc = new SamplerConfiguration(className);
+    sc.setOptions(options);
+    return sc;
+  }
+
+  public List<Pair<String,String>> toTableProperties() {
+    ArrayList<Pair<String,String>> props = new ArrayList<>();
+
+    for (Entry<String,String> entry : options.entrySet()) {
+      props.add(new Pair<>(Property.TABLE_SAMPLER_OPTS.getKey() + entry.getKey(), entry.getValue()));
+    }
+
+    // intentionally added last, so it's set last
+    props.add(new Pair<>(Property.TABLE_SAMPLER.getKey(), className));
+
+    return props;
+  }
+
+  public Map<String,String> toTablePropertiesMap() {
+    LinkedHashMap<String,String> propsMap = new LinkedHashMap<>();
+    for (Pair<String,String> pair : toTableProperties()) {
+      propsMap.put(pair.getFirst(), pair.getSecond());
+    }
+
+    return propsMap;
+  }
+
+  public static SamplerConfigurationImpl newSamplerConfig(AccumuloConfiguration acuconf) {
+    String className = acuconf.get(Property.TABLE_SAMPLER);
+
+    if (className == null || className.equals("")) {
+      return null;
+    }
+
+    Map<String,String> rawOptions = acuconf.getAllPropertiesWithPrefix(Property.TABLE_SAMPLER_OPTS);
+    Map<String,String> options = new HashMap<>();
+
+    for (Entry<String,String> entry : rawOptions.entrySet()) {
+      String key = entry.getKey().substring(Property.TABLE_SAMPLER_OPTS.getKey().length());
+      options.put(key, entry.getValue());
+    }
+
+    return new SamplerConfigurationImpl(className, options);
+  }
+
+  @Override
+  public String toString() {
+    return className + " " + options;
+  }
+
+  public static void checkDisjoint(Map<String,String> props, SamplerConfiguration samplerConfiguration) {
+    if (props.isEmpty() || samplerConfiguration == null) {
+      return;
+    }
+
+    Map<String,String> sampleProps = new SamplerConfigurationImpl(samplerConfiguration).toTablePropertiesMap();
+
+    checkArgument(Collections.disjoint(props.keySet(), sampleProps.keySet()), "Properties and derived sampler properties are not disjoint");
+  }
+
+  public static TSamplerConfiguration toThrift(SamplerConfiguration samplerConfig) {
+    if (samplerConfig == null)
+      return null;
+    return new TSamplerConfiguration(samplerConfig.getSamplerClassName(), samplerConfig.getOptions());
+  }
+
+  public static SamplerConfiguration fromThrift(TSamplerConfiguration tsc) {
+    if (tsc == null)
+      return null;
+    return new SamplerConfiguration(tsc.getClassName()).setOptions(tsc.getOptions());
+  }
+
+}
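The wire format used by `write()`/`readFields()` above is a leading version byte, a UTF string for the class name, an entry count, and then alternating key/value UTF strings. A minimal round-trip sketch of that format using plain JDK streams (the class name and option values here are made-up illustrations, not Accumulo defaults):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class VersionedConfigDemo {
  static byte[] serialize(String className, Map<String,String> options) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(baos);
    out.write(1);                        // version byte, checked on read
    out.writeUTF(className);
    out.writeInt(options.size());
    for (Map.Entry<String,String> e : options.entrySet()) {
      out.writeUTF(e.getKey());
      out.writeUTF(e.getValue());
    }
    return baos.toByteArray();
  }

  static void deserialize(byte[] data) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
    int version = in.readByte();
    if (version != 1)
      throw new IllegalArgumentException("Unexpected version " + version);
    String className = in.readUTF();
    Map<String,String> options = new LinkedHashMap<>();
    int num = in.readInt();
    for (int i = 0; i < num; i++)
      options.put(in.readUTF(), in.readUTF());
    System.out.println(className + " " + options);
  }

  public static void main(String[] args) throws IOException {
    Map<String,String> opts = new LinkedHashMap<>();
    opts.put("hasher", "murmur3_32");
    deserialize(serialize("com.example.RowSampler", opts));
  }
}
```

Pinning the version byte first is what lets `readFields` fail fast on data written by a future, incompatible format rather than silently misparsing it.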
diff --git a/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerFactory.java b/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerFactory.java
new file mode 100644
index 0000000..d70f3af
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/sample/impl/SamplerFactory.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.sample.impl;
+
+import java.io.IOException;
+
+import org.apache.accumulo.core.client.sample.Sampler;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
+
+public class SamplerFactory {
+  public static Sampler newSampler(SamplerConfigurationImpl config, AccumuloConfiguration acuconf) throws IOException {
+    String context = acuconf.get(Property.TABLE_CLASSPATH);
+
+    Class<? extends Sampler> clazz;
+    try {
+      if (context != null && !context.equals(""))
+        clazz = AccumuloVFSClassLoader.getContextManager().loadClass(context, config.getClassName(), Sampler.class);
+      else
+        clazz = AccumuloVFSClassLoader.loadClass(config.getClassName(), Sampler.class);
+
+      Sampler sampler = clazz.newInstance();
+
+      sampler.init(config.toSamplerConfiguration());
+
+      return sampler;
+
+    } catch (ClassNotFoundException | InstantiationException | IllegalAccessException e) {
+      throw new RuntimeException(e);
+    }
+  }
+}
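`SamplerFactory.newSampler` resolves a class by name, instantiates it reflectively, and initializes it with its configuration. The same pattern can be sketched with the JDK's `Class.forName` standing in for Accumulo's VFS classloader (the `Sampler` interface and class names below are simplified stand-ins, not the real API):

```java
public class ReflectiveLoadDemo {
  interface Sampler {
    void init(String config);
  }

  public static class DemoSampler implements Sampler {
    @Override
    public void init(String config) {
      System.out.println("initialized with " + config);
    }
  }

  public static void main(String[] args) throws Exception {
    // Resolve the implementation by its fully qualified name, as the factory does
    String className = ReflectiveLoadDemo.class.getName() + "$DemoSampler";
    Class<? extends Sampler> clazz = Class.forName(className).asSubclass(Sampler.class);
    // getDeclaredConstructor().newInstance() is the non-deprecated form of newInstance()
    Sampler sampler = clazz.getDeclaredConstructor().newInstance();
    sampler.init("demo-config");
  }
}
```

`asSubclass` gives a checked cast, so a configuration pointing at a class that does not implement `Sampler` fails with a clear `ClassCastException` instead of a later runtime surprise.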
diff --git a/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java b/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java
index 001fed2..c725d9b 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/Authorizations.java
@@ -42,8 +42,8 @@
 
   private static final long serialVersionUID = 1L;
 
-  private Set<ByteSequence> auths = new HashSet<ByteSequence>();
-  private List<byte[]> authsList = new ArrayList<byte[]>(); // sorted order
+  private Set<ByteSequence> auths = new HashSet<>();
+  private List<byte[]> authsList = new ArrayList<>(); // sorted order
 
   /**
    * An empty set of authorizations.
@@ -88,7 +88,7 @@
   }
 
   private void checkAuths() {
-    Set<ByteSequence> sortedAuths = new TreeSet<ByteSequence>(auths);
+    Set<ByteSequence> sortedAuths = new TreeSet<>(auths);
 
     for (ByteSequence bs : sortedAuths) {
       if (bs.length() == 0) {
@@ -212,7 +212,7 @@
    * @see #Authorizations(Collection)
    */
   public List<byte[]> getAuthorizations() {
-    ArrayList<byte[]> copy = new ArrayList<byte[]>(authsList.size());
+    ArrayList<byte[]> copy = new ArrayList<>(authsList.size());
     for (byte[] auth : authsList) {
       byte[] bytes = new byte[auth.length];
       System.arraycopy(auth, 0, bytes, 0, auth.length);
@@ -227,7 +227,7 @@
    * @return authorizations, each as a string encoded in UTF-8 and within a buffer
    */
   public List<ByteBuffer> getAuthorizationsBB() {
-    ArrayList<ByteBuffer> copy = new ArrayList<ByteBuffer>(authsList.size());
+    ArrayList<ByteBuffer> copy = new ArrayList<>(authsList.size());
     for (byte[] auth : authsList) {
       byte[] bytes = new byte[auth.length];
       System.arraycopy(auth, 0, bytes, 0, auth.length);
diff --git a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
index 4e0d597..c51e688 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
@@ -123,7 +123,7 @@
 
     public void add(Node child) {
       if (children == EMPTY)
-        children = new ArrayList<Node>();
+        children = new ArrayList<>();
 
       children.add(child);
     }
@@ -220,7 +220,7 @@
   // @formatter:on
   public static Node normalize(Node root, byte[] expression, NodeComparator comparator) {
     if (root.type != NodeType.TERM) {
-      TreeSet<Node> rolledUp = new TreeSet<Node>(comparator);
+      TreeSet<Node> rolledUp = new TreeSet<>(comparator);
       java.util.Iterator<Node> itr = root.children.iterator();
       while (itr.hasNext()) {
         Node c = normalize(itr.next(), expression, comparator);
diff --git a/core/src/main/java/org/apache/accumulo/core/security/NamespacePermission.java b/core/src/main/java/org/apache/accumulo/core/security/NamespacePermission.java
index 638f630..55a7b75 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/NamespacePermission.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/NamespacePermission.java
@@ -63,7 +63,7 @@
   public static List<String> printableValues() {
     NamespacePermission[] a = NamespacePermission.values();
 
-    List<String> list = new ArrayList<String>(a.length);
+    List<String> list = new ArrayList<>(a.length);
 
     for (NamespacePermission p : a)
       list.add("Namespace." + p);
diff --git a/core/src/main/java/org/apache/accumulo/core/security/SystemPermission.java b/core/src/main/java/org/apache/accumulo/core/security/SystemPermission.java
index a1df5dc..cf70ec2 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/SystemPermission.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/SystemPermission.java
@@ -44,7 +44,7 @@
 
   private static HashMap<Byte,SystemPermission> mapping;
   static {
-    mapping = new HashMap<Byte,SystemPermission>(SystemPermission.values().length);
+    mapping = new HashMap<>(SystemPermission.values().length);
     for (SystemPermission perm : SystemPermission.values())
       mapping.put(perm.permID, perm);
   }
@@ -70,7 +70,7 @@
   public static List<String> printableValues() {
     SystemPermission[] a = SystemPermission.values();
 
-    List<String> list = new ArrayList<String>(a.length);
+    List<String> list = new ArrayList<>(a.length);
 
     for (SystemPermission p : a)
       list.add("System." + p);
diff --git a/core/src/main/java/org/apache/accumulo/core/security/TablePermission.java b/core/src/main/java/org/apache/accumulo/core/security/TablePermission.java
index b6d122f..a80be9a 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/TablePermission.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/TablePermission.java
@@ -64,7 +64,7 @@
   public static List<String> printableValues() {
     TablePermission[] a = TablePermission.values();
 
-    List<String> list = new ArrayList<String>(a.length);
+    List<String> list = new ArrayList<>(a.length);
 
     for (TablePermission p : a)
       list.add("Table." + p);
diff --git a/core/src/main/java/org/apache/accumulo/core/security/VisibilityEvaluator.java b/core/src/main/java/org/apache/accumulo/core/security/VisibilityEvaluator.java
index 03b336b..efd7ea3 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/VisibilityEvaluator.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/VisibilityEvaluator.java
@@ -93,7 +93,7 @@
    * @see #escape(byte[], boolean)
    */
   static Authorizations escape(Authorizations auths) {
-    ArrayList<byte[]> retAuths = new ArrayList<byte[]>(auths.getAuthorizations().size());
+    ArrayList<byte[]> retAuths = new ArrayList<>(auths.getAuthorizations().size());
 
     for (byte[] auth : auths.getAuthorizations())
       retAuths.add(escape(auth, false));
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java
index 4ee27ef..7b79d99 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/CachingHDFSSecretKeyEncryptionStrategy.java
@@ -112,7 +112,7 @@
     private byte[] keyEncryptionKey;
     private String pathToKeyName;
 
-    public SecretKeyCache() {};
+    public SecretKeyCache() {}
 
     public synchronized void ensureSecretKeyCacheInitialized(CryptoModuleParameters context) throws IOException {
 
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleFactory.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleFactory.java
index e8e2326..79db306 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleFactory.java
@@ -34,8 +34,8 @@
 public class CryptoModuleFactory {
 
   private static final Logger log = LoggerFactory.getLogger(CryptoModuleFactory.class);
-  private static final Map<String,CryptoModule> cryptoModulesCache = new HashMap<String,CryptoModule>();
-  private static final Map<String,SecretKeyEncryptionStrategy> secretKeyEncryptionStrategyCache = new HashMap<String,SecretKeyEncryptionStrategy>();
+  private static final Map<String,CryptoModule> cryptoModulesCache = new HashMap<>();
+  private static final Map<String,SecretKeyEncryptionStrategy> secretKeyEncryptionStrategyCache = new HashMap<>();
 
   /**
    * This method returns a crypto module based on settings in the given configuration parameter.
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/DefaultCryptoModule.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/DefaultCryptoModule.java
index b7e089f..13104b2 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/DefaultCryptoModule.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/DefaultCryptoModule.java
@@ -300,7 +300,7 @@
       String marker = dataIn.readUTF();
       if (marker.equals(ENCRYPTION_HEADER_MARKER_V1) || marker.equals(ENCRYPTION_HEADER_MARKER_V2)) {
 
-        Map<String,String> paramsFromFile = new HashMap<String,String>();
+        Map<String,String> paramsFromFile = new HashMap<>();
 
         // Read in the bulk of parameters
         int paramsCount = dataIn.readInt();
diff --git a/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationKey.java b/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationKey.java
index 4da2bb2..574a673 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationKey.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationKey.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TAuthenticationKey implements org.apache.thrift.TBase<TAuthenticationKey, TAuthenticationKey._Fields>, java.io.Serializable, Cloneable, Comparable<TAuthenticationKey> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TAuthenticationKey implements org.apache.thrift.TBase<TAuthenticationKey, TAuthenticationKey._Fields>, java.io.Serializable, Cloneable, Comparable<TAuthenticationKey> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TAuthenticationKey");
 
   private static final org.apache.thrift.protocol.TField SECRET_FIELD_DESC = new org.apache.thrift.protocol.TField("secret", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -139,7 +142,7 @@
   private static final int __EXPIRATIONDATE_ISSET_ID = 1;
   private static final int __CREATIONDATE_ISSET_ID = 2;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.KEY_ID,_Fields.EXPIRATION_DATE,_Fields.CREATION_DATE};
+  private static final _Fields optionals[] = {_Fields.KEY_ID,_Fields.EXPIRATION_DATE,_Fields.CREATION_DATE};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -162,7 +165,7 @@
     ByteBuffer secret)
   {
     this();
-    this.secret = secret;
+    this.secret = org.apache.thrift.TBaseHelper.copyBinary(secret);
   }
 
   /**
@@ -172,7 +175,6 @@
     __isset_bitfield = other.__isset_bitfield;
     if (other.isSetSecret()) {
       this.secret = org.apache.thrift.TBaseHelper.copyBinary(other.secret);
-;
     }
     this.keyId = other.keyId;
     this.expirationDate = other.expirationDate;
@@ -200,16 +202,16 @@
   }
 
   public ByteBuffer bufferForSecret() {
-    return secret;
+    return org.apache.thrift.TBaseHelper.copyBinary(secret);
   }
 
   public TAuthenticationKey setSecret(byte[] secret) {
-    setSecret(secret == null ? (ByteBuffer)null : ByteBuffer.wrap(secret));
+    this.secret = secret == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(secret, secret.length));
     return this;
   }
 
   public TAuthenticationKey setSecret(ByteBuffer secret) {
-    this.secret = secret;
+    this.secret = org.apache.thrift.TBaseHelper.copyBinary(secret);
     return this;
   }
 
@@ -340,13 +342,13 @@
       return getSecret();
 
     case KEY_ID:
-      return Integer.valueOf(getKeyId());
+      return getKeyId();
 
     case EXPIRATION_DATE:
-      return Long.valueOf(getExpirationDate());
+      return getExpirationDate();
 
     case CREATION_DATE:
-      return Long.valueOf(getCreationDate());
+      return getCreationDate();
 
     }
     throw new IllegalStateException();
@@ -425,7 +427,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_secret = true && (isSetSecret());
+    list.add(present_secret);
+    if (present_secret)
+      list.add(secret);
+
+    boolean present_keyId = true && (isSetKeyId());
+    list.add(present_keyId);
+    if (present_keyId)
+      list.add(keyId);
+
+    boolean present_expirationDate = true && (isSetExpirationDate());
+    list.add(present_expirationDate);
+    if (present_expirationDate)
+      list.add(expirationDate);
+
+    boolean present_creationDate = true && (isSetCreationDate());
+    list.add(present_creationDate);
+    if (present_creationDate)
+      list.add(creationDate);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationTokenIdentifier.java b/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationTokenIdentifier.java
index d4e75f0..cf3f515 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationTokenIdentifier.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/thrift/TAuthenticationTokenIdentifier.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TAuthenticationTokenIdentifier implements org.apache.thrift.TBase<TAuthenticationTokenIdentifier, TAuthenticationTokenIdentifier._Fields>, java.io.Serializable, Cloneable, Comparable<TAuthenticationTokenIdentifier> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TAuthenticationTokenIdentifier implements org.apache.thrift.TBase<TAuthenticationTokenIdentifier, TAuthenticationTokenIdentifier._Fields>, java.io.Serializable, Cloneable, Comparable<TAuthenticationTokenIdentifier> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TAuthenticationTokenIdentifier");
 
   private static final org.apache.thrift.protocol.TField PRINCIPAL_FIELD_DESC = new org.apache.thrift.protocol.TField("principal", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -144,7 +147,7 @@
   private static final int __ISSUEDATE_ISSET_ID = 1;
   private static final int __EXPIRATIONDATE_ISSET_ID = 2;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.KEY_ID,_Fields.ISSUE_DATE,_Fields.EXPIRATION_DATE,_Fields.INSTANCE_ID};
+  private static final _Fields optionals[] = {_Fields.KEY_ID,_Fields.ISSUE_DATE,_Fields.EXPIRATION_DATE,_Fields.INSTANCE_ID};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -372,13 +375,13 @@
       return getPrincipal();
 
     case KEY_ID:
-      return Integer.valueOf(getKeyId());
+      return getKeyId();
 
     case ISSUE_DATE:
-      return Long.valueOf(getIssueDate());
+      return getIssueDate();
 
     case EXPIRATION_DATE:
-      return Long.valueOf(getExpirationDate());
+      return getExpirationDate();
 
     case INSTANCE_ID:
       return getInstanceId();
@@ -471,7 +474,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_principal = true && (isSetPrincipal());
+    list.add(present_principal);
+    if (present_principal)
+      list.add(principal);
+
+    boolean present_keyId = true && (isSetKeyId());
+    list.add(present_keyId);
+    if (present_keyId)
+      list.add(keyId);
+
+    boolean present_issueDate = true && (isSetIssueDate());
+    list.add(present_issueDate);
+    if (present_issueDate)
+      list.add(issueDate);
+
+    boolean present_expirationDate = true && (isSetExpirationDate());
+    list.add(present_expirationDate);
+    if (present_expirationDate)
+      list.add(expirationDate);
+
+    boolean present_instanceId = true && (isSetInstanceId());
+    list.add(present_instanceId);
+    if (present_instanceId)
+      list.add(instanceId);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/security/thrift/TCredentials.java b/core/src/main/java/org/apache/accumulo/core/security/thrift/TCredentials.java
index 0bbd241..f9beda9 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/thrift/TCredentials.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/thrift/TCredentials.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TCredentials implements org.apache.thrift.TBase<TCredentials, TCredentials._Fields>, java.io.Serializable, Cloneable, Comparable<TCredentials> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TCredentials implements org.apache.thrift.TBase<TCredentials, TCredentials._Fields>, java.io.Serializable, Cloneable, Comparable<TCredentials> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TCredentials");
 
   private static final org.apache.thrift.protocol.TField PRINCIPAL_FIELD_DESC = new org.apache.thrift.protocol.TField("principal", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -162,7 +165,7 @@
     this();
     this.principal = principal;
     this.tokenClassName = tokenClassName;
-    this.token = token;
+    this.token = org.apache.thrift.TBaseHelper.copyBinary(token);
     this.instanceId = instanceId;
   }
 
@@ -178,7 +181,6 @@
     }
     if (other.isSetToken()) {
       this.token = org.apache.thrift.TBaseHelper.copyBinary(other.token);
-;
     }
     if (other.isSetInstanceId()) {
       this.instanceId = other.instanceId;
@@ -251,16 +253,16 @@
   }
 
   public ByteBuffer bufferForToken() {
-    return token;
+    return org.apache.thrift.TBaseHelper.copyBinary(token);
   }
 
   public TCredentials setToken(byte[] token) {
-    setToken(token == null ? (ByteBuffer)null : ByteBuffer.wrap(token));
+    this.token = token == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(token, token.length));
     return this;
   }
 
   public TCredentials setToken(ByteBuffer token) {
-    this.token = token;
+    this.token = org.apache.thrift.TBaseHelper.copyBinary(token);
     return this;
   }
 
@@ -431,7 +433,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_principal = true && (isSetPrincipal());
+    list.add(present_principal);
+    if (present_principal)
+      list.add(principal);
+
+    boolean present_tokenClassName = true && (isSetTokenClassName());
+    list.add(present_tokenClassName);
+    if (present_tokenClassName)
+      list.add(tokenClassName);
+
+    boolean present_token = true && (isSetToken());
+    list.add(present_token);
+    if (present_token)
+      list.add(token);
+
+    boolean present_instanceId = true && (isSetInstanceId());
+    list.add(present_instanceId);
+    if (present_instanceId)
+      list.add(instanceId);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationToken.java b/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationToken.java
index 904d195..9e2b3a0 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationToken.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TDelegationToken implements org.apache.thrift.TBase<TDelegationToken, TDelegationToken._Fields>, java.io.Serializable, Cloneable, Comparable<TDelegationToken> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TDelegationToken implements org.apache.thrift.TBase<TDelegationToken, TDelegationToken._Fields>, java.io.Serializable, Cloneable, Comparable<TDelegationToken> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TDelegationToken");
 
   private static final org.apache.thrift.protocol.TField PASSWORD_FIELD_DESC = new org.apache.thrift.protocol.TField("password", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -144,7 +147,7 @@
     TAuthenticationTokenIdentifier identifier)
   {
     this();
-    this.password = password;
+    this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
     this.identifier = identifier;
   }
 
@@ -154,7 +157,6 @@
   public TDelegationToken(TDelegationToken other) {
     if (other.isSetPassword()) {
       this.password = org.apache.thrift.TBaseHelper.copyBinary(other.password);
-;
     }
     if (other.isSetIdentifier()) {
       this.identifier = new TAuthenticationTokenIdentifier(other.identifier);
@@ -177,16 +179,16 @@
   }
 
   public ByteBuffer bufferForPassword() {
-    return password;
+    return org.apache.thrift.TBaseHelper.copyBinary(password);
   }
 
   public TDelegationToken setPassword(byte[] password) {
-    setPassword(password == null ? (ByteBuffer)null : ByteBuffer.wrap(password));
+    this.password = password == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(password, password.length));
     return this;
   }
 
   public TDelegationToken setPassword(ByteBuffer password) {
-    this.password = password;
+    this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
     return this;
   }
 
@@ -313,7 +315,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_password = true && (isSetPassword());
+    list.add(present_password);
+    if (present_password)
+      list.add(password);
+
+    boolean present_identifier = true && (isSetIdentifier());
+    list.add(present_identifier);
+    if (present_identifier)
+      list.add(identifier);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationTokenConfig.java b/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationTokenConfig.java
index cdde83e..21e5013 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationTokenConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/thrift/TDelegationTokenConfig.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TDelegationTokenConfig implements org.apache.thrift.TBase<TDelegationTokenConfig, TDelegationTokenConfig._Fields>, java.io.Serializable, Cloneable, Comparable<TDelegationTokenConfig> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TDelegationTokenConfig implements org.apache.thrift.TBase<TDelegationTokenConfig, TDelegationTokenConfig._Fields>, java.io.Serializable, Cloneable, Comparable<TDelegationTokenConfig> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TDelegationTokenConfig");
 
   private static final org.apache.thrift.protocol.TField LIFETIME_FIELD_DESC = new org.apache.thrift.protocol.TField("lifetime", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -122,7 +125,7 @@
   // isset id assignments
   private static final int __LIFETIME_ISSET_ID = 0;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.LIFETIME};
+  private static final _Fields optionals[] = {_Fields.LIFETIME};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -192,7 +195,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case LIFETIME:
-      return Long.valueOf(getLifetime());
+      return getLifetime();
 
     }
     throw new IllegalStateException();
@@ -238,7 +241,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_lifetime = true && (isSetLifetime());
+    list.add(present_lifetime);
+    if (present_lifetime)
+      list.add(lifetime);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/log/LogEntry.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/log/LogEntry.java
index 7fe61d1..ab70bb0 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/log/LogEntry.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/log/LogEntry.java
@@ -16,10 +16,10 @@
  */
 package org.apache.accumulo.core.tabletserver.log;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
+
 import java.io.IOException;
-import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
@@ -29,30 +29,29 @@
 import org.apache.hadoop.io.DataOutputBuffer;
 import org.apache.hadoop.io.Text;
 
-import com.google.common.base.Joiner;
-
 public class LogEntry {
-  public KeyExtent extent;
-  public long timestamp;
-  public String server;
-  public String filename;
-  public int tabletId;
-  public Collection<String> logSet;
-
-  public LogEntry() {}
+  public final KeyExtent extent;
+  public final long timestamp;
+  public final String server;
+  public final String filename;
 
   public LogEntry(LogEntry le) {
     this.extent = le.extent;
     this.timestamp = le.timestamp;
     this.server = le.server;
     this.filename = le.filename;
-    this.tabletId = le.tabletId;
-    this.logSet = new ArrayList<String>(le.logSet);
+  }
+
+  public LogEntry(KeyExtent extent, long timestamp, String server, String filename) {
+    this.extent = extent;
+    this.timestamp = timestamp;
+    this.server = server;
+    this.filename = filename;
   }
 
   @Override
   public String toString() {
-    return extent.toString() + " " + filename + " (" + tabletId + ")";
+    return extent.toString() + " " + filename;
   }
 
   public String getName() {
@@ -65,43 +64,35 @@
     out.writeLong(timestamp);
     out.writeUTF(server);
     out.writeUTF(filename);
-    out.write(tabletId);
-    out.write(logSet.size());
-    for (String s : logSet) {
-      out.writeUTF(s);
-    }
     return Arrays.copyOf(out.getData(), out.getLength());
   }
 
-  public void fromBytes(byte bytes[]) throws IOException {
+  public static LogEntry fromBytes(byte[] bytes) throws IOException {
     DataInputBuffer inp = new DataInputBuffer();
     inp.reset(bytes, bytes.length);
-    extent = new KeyExtent();
+    KeyExtent extent = new KeyExtent();
     extent.readFields(inp);
-    timestamp = inp.readLong();
-    server = inp.readUTF();
-    filename = inp.readUTF();
-    tabletId = inp.read();
-    int count = inp.read();
-    ArrayList<String> logSet = new ArrayList<String>(count);
-    for (int i = 0; i < count; i++)
-      logSet.add(inp.readUTF());
-    this.logSet = logSet;
+    long timestamp = inp.readLong();
+    String server = inp.readUTF();
+    String filename = inp.readUTF();
+    return new LogEntry(extent, timestamp, server, filename);
   }
 
   static private final Text EMPTY_TEXT = new Text();
 
   public static LogEntry fromKeyValue(Key key, Value value) {
-    LogEntry result = new LogEntry();
-    result.extent = new KeyExtent(key.getRow(), EMPTY_TEXT);
+    String qualifier = key.getColumnQualifier().toString();
+    if (qualifier.indexOf('/') < 1) {
+      throw new IllegalArgumentException("Bad key for log entry: " + key);
+    }
+    KeyExtent extent = new KeyExtent(key.getRow(), EMPTY_TEXT);
     String[] parts = key.getColumnQualifier().toString().split("/", 2);
-    result.server = parts[0];
-    result.filename = parts[1];
-    parts = value.toString().split("\\|");
-    result.tabletId = Integer.parseInt(parts[1]);
-    result.logSet = Arrays.asList(parts[0].split(";"));
-    result.timestamp = key.getTimestamp();
-    return result;
+    String server = parts[0];
+    // handle old-style log entries that specify log sets
+    parts = value.toString().split("\\|")[0].split(";");
+    String filename = parts[parts.length - 1];
+    long timestamp = key.getTimestamp();
+    return new LogEntry(extent, timestamp, server, filename);
   }
 
   public Text getRow() {
@@ -112,11 +103,16 @@
     return MetadataSchema.TabletsSection.LogColumnFamily.NAME;
   }
 
+  public String getUniqueID() {
+    String[] parts = filename.split("/");
+    return parts[parts.length - 1];
+  }
+
   public Text getColumnQualifier() {
     return new Text(server + "/" + filename);
   }
 
   public Value getValue() {
-    return new Value((Joiner.on(";").join(logSet) + "|" + tabletId).getBytes());
+    return new Value(filename.getBytes(UTF_8));
   }
 }
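The `LogEntry` refactor above turns a mutable bean (public fields, no-arg constructor, instance-level `fromBytes`) into an immutable value object: all fields `final`, a full constructor, and deserialization through a static factory. A standalone JDK-only sketch of that serialization round-trip pattern, with illustrative names (the real class additionally serializes a Hadoop `KeyExtent` via `DataOutputBuffer`, omitted here):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Immutable value object mirroring the refactored LogEntry: final fields,
// construction in one step, and a static fromBytes factory instead of a
// mutating fromBytes() called on a default-constructed instance.
final class LogEntrySketch {
    final long timestamp;
    final String server;
    final String filename;

    LogEntrySketch(long timestamp, String server, String filename) {
        this.timestamp = timestamp;
        this.server = server;
        this.filename = filename;
    }

    // Serialize the fields in a fixed order, like LogEntry.toBytes().
    byte[] toBytes() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeLong(timestamp);
            out.writeUTF(server);
            out.writeUTF(filename);
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for in-memory streams
        }
    }

    // Static factory: read the fields back and build a fresh immutable instance.
    static LogEntrySketch fromBytes(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            long timestamp = in.readLong();
            String server = in.readUTF();
            String filename = in.readUTF();
            return new LogEntrySketch(timestamp, server, filename);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Making the factory static is what allowed the diff to drop the no-arg constructor and mark every field `final`, which in turn removed the `tabletId`/`logSet` mutable state entirely.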
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActionStats.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActionStats.java
index 86a502b..4997cf8 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActionStats.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActionStats.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ActionStats implements org.apache.thrift.TBase<ActionStats, ActionStats._Fields>, java.io.Serializable, Cloneable, Comparable<ActionStats> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ActionStats implements org.apache.thrift.TBase<ActionStats, ActionStats._Fields>, java.io.Serializable, Cloneable, Comparable<ActionStats> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ActionStats");
 
   private static final org.apache.thrift.protocol.TField STATUS_FIELD_DESC = new org.apache.thrift.protocol.TField("status", org.apache.thrift.protocol.TType.I32, (short)1);
@@ -514,28 +517,28 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case STATUS:
-      return Integer.valueOf(getStatus());
+      return getStatus();
 
     case ELAPSED:
-      return Double.valueOf(getElapsed());
+      return getElapsed();
 
     case NUM:
-      return Integer.valueOf(getNum());
+      return getNum();
 
     case COUNT:
-      return Long.valueOf(getCount());
+      return getCount();
 
     case SUM_DEV:
-      return Double.valueOf(getSumDev());
+      return getSumDev();
 
     case FAIL:
-      return Integer.valueOf(getFail());
+      return getFail();
 
     case QUEUE_TIME:
-      return Double.valueOf(getQueueTime());
+      return getQueueTime();
 
     case QUEUE_SUM_DEV:
-      return Double.valueOf(getQueueSumDev());
+      return getQueueSumDev();
 
     }
     throw new IllegalStateException();
@@ -658,7 +661,49 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_status = true;
+    list.add(present_status);
+    if (present_status)
+      list.add(status);
+
+    boolean present_elapsed = true;
+    list.add(present_elapsed);
+    if (present_elapsed)
+      list.add(elapsed);
+
+    boolean present_num = true;
+    list.add(present_num);
+    if (present_num)
+      list.add(num);
+
+    boolean present_count = true;
+    list.add(present_count);
+    if (present_count)
+      list.add(count);
+
+    boolean present_sumDev = true;
+    list.add(present_sumDev);
+    if (present_sumDev)
+      list.add(sumDev);
+
+    boolean present_fail = true;
+    list.add(present_fail);
+    if (present_fail)
+      list.add(fail);
+
+    boolean present_queueTime = true;
+    list.add(present_queueTime);
+    if (present_queueTime)
+      list.add(queueTime);
+
+    boolean present_queueSumDev = true;
+    list.add(present_queueSumDev);
+    if (present_queueSumDev)
+      list.add(queueSumDev);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveCompaction.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveCompaction.java
index 9c1977d..4c0decc 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveCompaction.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveCompaction.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ActiveCompaction implements org.apache.thrift.TBase<ActiveCompaction, ActiveCompaction._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveCompaction> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ActiveCompaction implements org.apache.thrift.TBase<ActiveCompaction, ActiveCompaction._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveCompaction> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ActiveCompaction");
 
   private static final org.apache.thrift.protocol.TField EXTENT_FIELD_DESC = new org.apache.thrift.protocol.TField("extent", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -748,7 +751,7 @@
       return getExtent();
 
     case AGE:
-      return Long.valueOf(getAge());
+      return getAge();
 
     case INPUT_FILES:
       return getInputFiles();
@@ -766,10 +769,10 @@
       return getLocalityGroup();
 
     case ENTRIES_READ:
-      return Long.valueOf(getEntriesRead());
+      return getEntriesRead();
 
     case ENTRIES_WRITTEN:
-      return Long.valueOf(getEntriesWritten());
+      return getEntriesWritten();
 
     case SSI_LIST:
       return getSsiList();
@@ -931,7 +934,64 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    boolean present_age = true;
+    list.add(present_age);
+    if (present_age)
+      list.add(age);
+
+    boolean present_inputFiles = true && (isSetInputFiles());
+    list.add(present_inputFiles);
+    if (present_inputFiles)
+      list.add(inputFiles);
+
+    boolean present_outputFile = true && (isSetOutputFile());
+    list.add(present_outputFile);
+    if (present_outputFile)
+      list.add(outputFile);
+
+    boolean present_type = true && (isSetType());
+    list.add(present_type);
+    if (present_type)
+      list.add(type.getValue());
+
+    boolean present_reason = true && (isSetReason());
+    list.add(present_reason);
+    if (present_reason)
+      list.add(reason.getValue());
+
+    boolean present_localityGroup = true && (isSetLocalityGroup());
+    list.add(present_localityGroup);
+    if (present_localityGroup)
+      list.add(localityGroup);
+
+    boolean present_entriesRead = true;
+    list.add(present_entriesRead);
+    if (present_entriesRead)
+      list.add(entriesRead);
+
+    boolean present_entriesWritten = true;
+    list.add(present_entriesWritten);
+    if (present_entriesWritten)
+      list.add(entriesWritten);
+
+    boolean present_ssiList = true && (isSetSsiList());
+    list.add(present_ssiList);
+    if (present_ssiList)
+      list.add(ssiList);
+
+    boolean present_ssio = true && (isSetSsio());
+    list.add(present_ssio);
+    if (present_ssio)
+      list.add(ssio);
+
+    return list.hashCode();
   }
 
   @Override
@@ -1217,11 +1277,11 @@
               {
                 org.apache.thrift.protocol.TList _list52 = iprot.readListBegin();
                 struct.inputFiles = new ArrayList<String>(_list52.size);
-                for (int _i53 = 0; _i53 < _list52.size; ++_i53)
+                String _elem53;
+                for (int _i54 = 0; _i54 < _list52.size; ++_i54)
                 {
-                  String _elem54;
-                  _elem54 = iprot.readString();
-                  struct.inputFiles.add(_elem54);
+                  _elem53 = iprot.readString();
+                  struct.inputFiles.add(_elem53);
                 }
                 iprot.readListEnd();
               }
@@ -1240,7 +1300,7 @@
             break;
           case 5: // TYPE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.type = CompactionType.findByValue(iprot.readI32());
+              struct.type = org.apache.accumulo.core.tabletserver.thrift.CompactionType.findByValue(iprot.readI32());
               struct.setTypeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1248,7 +1308,7 @@
             break;
           case 6: // REASON
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.reason = CompactionReason.findByValue(iprot.readI32());
+              struct.reason = org.apache.accumulo.core.tabletserver.thrift.CompactionReason.findByValue(iprot.readI32());
               struct.setReasonIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1283,12 +1343,12 @@
               {
                 org.apache.thrift.protocol.TList _list55 = iprot.readListBegin();
                 struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list55.size);
-                for (int _i56 = 0; _i56 < _list55.size; ++_i56)
+                org.apache.accumulo.core.data.thrift.IterInfo _elem56;
+                for (int _i57 = 0; _i57 < _list55.size; ++_i57)
                 {
-                  org.apache.accumulo.core.data.thrift.IterInfo _elem57;
-                  _elem57 = new org.apache.accumulo.core.data.thrift.IterInfo();
-                  _elem57.read(iprot);
-                  struct.ssiList.add(_elem57);
+                  _elem56 = new org.apache.accumulo.core.data.thrift.IterInfo();
+                  _elem56.read(iprot);
+                  struct.ssiList.add(_elem56);
                 }
                 iprot.readListEnd();
               }
@@ -1302,25 +1362,25 @@
               {
                 org.apache.thrift.protocol.TMap _map58 = iprot.readMapBegin();
                 struct.ssio = new HashMap<String,Map<String,String>>(2*_map58.size);
-                for (int _i59 = 0; _i59 < _map58.size; ++_i59)
+                String _key59;
+                Map<String,String> _val60;
+                for (int _i61 = 0; _i61 < _map58.size; ++_i61)
                 {
-                  String _key60;
-                  Map<String,String> _val61;
-                  _key60 = iprot.readString();
+                  _key59 = iprot.readString();
                   {
                     org.apache.thrift.protocol.TMap _map62 = iprot.readMapBegin();
-                    _val61 = new HashMap<String,String>(2*_map62.size);
-                    for (int _i63 = 0; _i63 < _map62.size; ++_i63)
+                    _val60 = new HashMap<String,String>(2*_map62.size);
+                    String _key63;
+                    String _val64;
+                    for (int _i65 = 0; _i65 < _map62.size; ++_i65)
                     {
-                      String _key64;
-                      String _val65;
-                      _key64 = iprot.readString();
-                      _val65 = iprot.readString();
-                      _val61.put(_key64, _val65);
+                      _key63 = iprot.readString();
+                      _val64 = iprot.readString();
+                      _val60.put(_key63, _val64);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.ssio.put(_key60, _val61);
+                  struct.ssio.put(_key59, _val60);
                 }
                 iprot.readMapEnd();
               }
@@ -1553,11 +1613,11 @@
         {
           org.apache.thrift.protocol.TList _list74 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.inputFiles = new ArrayList<String>(_list74.size);
-          for (int _i75 = 0; _i75 < _list74.size; ++_i75)
+          String _elem75;
+          for (int _i76 = 0; _i76 < _list74.size; ++_i76)
           {
-            String _elem76;
-            _elem76 = iprot.readString();
-            struct.inputFiles.add(_elem76);
+            _elem75 = iprot.readString();
+            struct.inputFiles.add(_elem75);
           }
         }
         struct.setInputFilesIsSet(true);
@@ -1567,11 +1627,11 @@
         struct.setOutputFileIsSet(true);
       }
       if (incoming.get(4)) {
-        struct.type = CompactionType.findByValue(iprot.readI32());
+        struct.type = org.apache.accumulo.core.tabletserver.thrift.CompactionType.findByValue(iprot.readI32());
         struct.setTypeIsSet(true);
       }
       if (incoming.get(5)) {
-        struct.reason = CompactionReason.findByValue(iprot.readI32());
+        struct.reason = org.apache.accumulo.core.tabletserver.thrift.CompactionReason.findByValue(iprot.readI32());
         struct.setReasonIsSet(true);
       }
       if (incoming.get(6)) {
@@ -1590,12 +1650,12 @@
         {
           org.apache.thrift.protocol.TList _list77 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list77.size);
-          for (int _i78 = 0; _i78 < _list77.size; ++_i78)
+          org.apache.accumulo.core.data.thrift.IterInfo _elem78;
+          for (int _i79 = 0; _i79 < _list77.size; ++_i79)
           {
-            org.apache.accumulo.core.data.thrift.IterInfo _elem79;
-            _elem79 = new org.apache.accumulo.core.data.thrift.IterInfo();
-            _elem79.read(iprot);
-            struct.ssiList.add(_elem79);
+            _elem78 = new org.apache.accumulo.core.data.thrift.IterInfo();
+            _elem78.read(iprot);
+            struct.ssiList.add(_elem78);
           }
         }
         struct.setSsiListIsSet(true);
@@ -1604,24 +1664,24 @@
         {
           org.apache.thrift.protocol.TMap _map80 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
           struct.ssio = new HashMap<String,Map<String,String>>(2*_map80.size);
-          for (int _i81 = 0; _i81 < _map80.size; ++_i81)
+          String _key81;
+          Map<String,String> _val82;
+          for (int _i83 = 0; _i83 < _map80.size; ++_i83)
           {
-            String _key82;
-            Map<String,String> _val83;
-            _key82 = iprot.readString();
+            _key81 = iprot.readString();
             {
               org.apache.thrift.protocol.TMap _map84 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-              _val83 = new HashMap<String,String>(2*_map84.size);
-              for (int _i85 = 0; _i85 < _map84.size; ++_i85)
+              _val82 = new HashMap<String,String>(2*_map84.size);
+              String _key85;
+              String _val86;
+              for (int _i87 = 0; _i87 < _map84.size; ++_i87)
               {
-                String _key86;
-                String _val87;
-                _key86 = iprot.readString();
-                _val87 = iprot.readString();
-                _val83.put(_key86, _val87);
+                _key85 = iprot.readString();
+                _val86 = iprot.readString();
+                _val82.put(_key85, _val86);
               }
             }
-            struct.ssio.put(_key82, _val83);
+            struct.ssio.put(_key81, _val82);
           }
         }
         struct.setSsioIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveScan.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveScan.java
index fe389e4..3ee05a2 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveScan.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ActiveScan.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ActiveScan implements org.apache.thrift.TBase<ActiveScan, ActiveScan._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveScan> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ActiveScan implements org.apache.thrift.TBase<ActiveScan, ActiveScan._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveScan> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ActiveScan");
 
   private static final org.apache.thrift.protocol.TField CLIENT_FIELD_DESC = new org.apache.thrift.protocol.TField("client", org.apache.thrift.protocol.TType.STRING, (short)2);
@@ -64,6 +67,7 @@
   private static final org.apache.thrift.protocol.TField SSIO_FIELD_DESC = new org.apache.thrift.protocol.TField("ssio", org.apache.thrift.protocol.TType.MAP, (short)12);
   private static final org.apache.thrift.protocol.TField AUTHORIZATIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("authorizations", org.apache.thrift.protocol.TType.LIST, (short)13);
   private static final org.apache.thrift.protocol.TField SCAN_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("scanId", org.apache.thrift.protocol.TType.I64, (short)14);
+  private static final org.apache.thrift.protocol.TField CLASS_LOADER_CONTEXT_FIELD_DESC = new org.apache.thrift.protocol.TField("classLoaderContext", org.apache.thrift.protocol.TType.STRING, (short)15);
 
   private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
   static {
@@ -92,6 +96,7 @@
   public Map<String,Map<String,String>> ssio; // required
   public List<ByteBuffer> authorizations; // required
   public long scanId; // optional
+  public String classLoaderContext; // required
 
   /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
   public enum _Fields implements org.apache.thrift.TFieldIdEnum {
@@ -115,7 +120,8 @@
     SSI_LIST((short)11, "ssiList"),
     SSIO((short)12, "ssio"),
     AUTHORIZATIONS((short)13, "authorizations"),
-    SCAN_ID((short)14, "scanId");
+    SCAN_ID((short)14, "scanId"),
+    CLASS_LOADER_CONTEXT((short)15, "classLoaderContext");
 
     private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -156,6 +162,8 @@
           return AUTHORIZATIONS;
         case 14: // SCAN_ID
           return SCAN_ID;
+        case 15: // CLASS_LOADER_CONTEXT
+          return CLASS_LOADER_CONTEXT;
         default:
           return null;
       }
@@ -200,7 +208,7 @@
   private static final int __IDLETIME_ISSET_ID = 1;
   private static final int __SCANID_ISSET_ID = 2;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.SCAN_ID};
+  private static final _Fields optionals[] = {_Fields.SCAN_ID};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -237,6 +245,8 @@
             new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING            , true))));
     tmpMap.put(_Fields.SCAN_ID, new org.apache.thrift.meta_data.FieldMetaData("scanId", org.apache.thrift.TFieldRequirementType.OPTIONAL, 
         new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
+    tmpMap.put(_Fields.CLASS_LOADER_CONTEXT, new org.apache.thrift.meta_data.FieldMetaData("classLoaderContext", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
     metaDataMap = Collections.unmodifiableMap(tmpMap);
     org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(ActiveScan.class, metaDataMap);
   }
@@ -256,7 +266,8 @@
     List<org.apache.accumulo.core.data.thrift.TColumn> columns,
     List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList,
     Map<String,Map<String,String>> ssio,
-    List<ByteBuffer> authorizations)
+    List<ByteBuffer> authorizations,
+    String classLoaderContext)
   {
     this();
     this.client = client;
@@ -273,6 +284,7 @@
     this.ssiList = ssiList;
     this.ssio = ssio;
     this.authorizations = authorizations;
+    this.classLoaderContext = classLoaderContext;
   }
 
   /**
@@ -334,6 +346,9 @@
       this.authorizations = __this__authorizations;
     }
     this.scanId = other.scanId;
+    if (other.isSetClassLoaderContext()) {
+      this.classLoaderContext = other.classLoaderContext;
+    }
   }
 
   public ActiveScan deepCopy() {
@@ -358,6 +373,7 @@
     this.authorizations = null;
     setScanIdIsSet(false);
     this.scanId = 0;
+    this.classLoaderContext = null;
   }
 
   public String getClient() {
@@ -741,6 +757,30 @@
     __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __SCANID_ISSET_ID, value);
   }
 
+  public String getClassLoaderContext() {
+    return this.classLoaderContext;
+  }
+
+  public ActiveScan setClassLoaderContext(String classLoaderContext) {
+    this.classLoaderContext = classLoaderContext;
+    return this;
+  }
+
+  public void unsetClassLoaderContext() {
+    this.classLoaderContext = null;
+  }
+
+  /** Returns true if field classLoaderContext is set (has been assigned a value) and false otherwise */
+  public boolean isSetClassLoaderContext() {
+    return this.classLoaderContext != null;
+  }
+
+  public void setClassLoaderContextIsSet(boolean value) {
+    if (!value) {
+      this.classLoaderContext = null;
+    }
+  }
+
   public void setFieldValue(_Fields field, Object value) {
     switch (field) {
     case CLIENT:
@@ -847,6 +887,14 @@
       }
       break;
 
+    case CLASS_LOADER_CONTEXT:
+      if (value == null) {
+        unsetClassLoaderContext();
+      } else {
+        setClassLoaderContext((String)value);
+      }
+      break;
+
     }
   }
 
@@ -862,10 +910,10 @@
       return getTableId();
 
     case AGE:
-      return Long.valueOf(getAge());
+      return getAge();
 
     case IDLE_TIME:
-      return Long.valueOf(getIdleTime());
+      return getIdleTime();
 
     case TYPE:
       return getType();
@@ -889,7 +937,10 @@
       return getAuthorizations();
 
     case SCAN_ID:
-      return Long.valueOf(getScanId());
+      return getScanId();
+
+    case CLASS_LOADER_CONTEXT:
+      return getClassLoaderContext();
 
     }
     throw new IllegalStateException();
@@ -928,6 +979,8 @@
       return isSetAuthorizations();
     case SCAN_ID:
       return isSetScanId();
+    case CLASS_LOADER_CONTEXT:
+      return isSetClassLoaderContext();
     }
     throw new IllegalStateException();
   }
@@ -1062,12 +1115,93 @@
         return false;
     }
 
+    boolean this_present_classLoaderContext = true && this.isSetClassLoaderContext();
+    boolean that_present_classLoaderContext = true && that.isSetClassLoaderContext();
+    if (this_present_classLoaderContext || that_present_classLoaderContext) {
+      if (!(this_present_classLoaderContext && that_present_classLoaderContext))
+        return false;
+      if (!this.classLoaderContext.equals(that.classLoaderContext))
+        return false;
+    }
+
     return true;
   }
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_client = true && (isSetClient());
+    list.add(present_client);
+    if (present_client)
+      list.add(client);
+
+    boolean present_user = true && (isSetUser());
+    list.add(present_user);
+    if (present_user)
+      list.add(user);
+
+    boolean present_tableId = true && (isSetTableId());
+    list.add(present_tableId);
+    if (present_tableId)
+      list.add(tableId);
+
+    boolean present_age = true;
+    list.add(present_age);
+    if (present_age)
+      list.add(age);
+
+    boolean present_idleTime = true;
+    list.add(present_idleTime);
+    if (present_idleTime)
+      list.add(idleTime);
+
+    boolean present_type = true && (isSetType());
+    list.add(present_type);
+    if (present_type)
+      list.add(type.getValue());
+
+    boolean present_state = true && (isSetState());
+    list.add(present_state);
+    if (present_state)
+      list.add(state.getValue());
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    boolean present_columns = true && (isSetColumns());
+    list.add(present_columns);
+    if (present_columns)
+      list.add(columns);
+
+    boolean present_ssiList = true && (isSetSsiList());
+    list.add(present_ssiList);
+    if (present_ssiList)
+      list.add(ssiList);
+
+    boolean present_ssio = true && (isSetSsio());
+    list.add(present_ssio);
+    if (present_ssio)
+      list.add(ssio);
+
+    boolean present_authorizations = true && (isSetAuthorizations());
+    list.add(present_authorizations);
+    if (present_authorizations)
+      list.add(authorizations);
+
+    boolean present_scanId = true && (isSetScanId());
+    list.add(present_scanId);
+    if (present_scanId)
+      list.add(scanId);
+
+    boolean present_classLoaderContext = true && (isSetClassLoaderContext());
+    list.add(present_classLoaderContext);
+    if (present_classLoaderContext)
+      list.add(classLoaderContext);
+
+    return list.hashCode();
   }
 
   @Override
@@ -1208,6 +1342,16 @@
         return lastComparison;
       }
     }
+    lastComparison = Boolean.valueOf(isSetClassLoaderContext()).compareTo(other.isSetClassLoaderContext());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetClassLoaderContext()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.classLoaderContext, other.classLoaderContext);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
     return 0;
   }
 
@@ -1312,7 +1456,7 @@
     if (this.authorizations == null) {
       sb.append("null");
     } else {
-      sb.append(this.authorizations);
+      org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
     }
     first = false;
     if (isSetScanId()) {
@@ -1321,6 +1465,14 @@
       sb.append(this.scanId);
       first = false;
     }
+    if (!first) sb.append(", ");
+    sb.append("classLoaderContext:");
+    if (this.classLoaderContext == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.classLoaderContext);
+    }
+    first = false;
     sb.append(")");
     return sb.toString();
   }
@@ -1411,7 +1563,7 @@
             break;
           case 7: // TYPE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.type = ScanType.findByValue(iprot.readI32());
+              struct.type = org.apache.accumulo.core.tabletserver.thrift.ScanType.findByValue(iprot.readI32());
               struct.setTypeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1419,7 +1571,7 @@
             break;
           case 8: // STATE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.state = ScanState.findByValue(iprot.readI32());
+              struct.state = org.apache.accumulo.core.tabletserver.thrift.ScanState.findByValue(iprot.readI32());
               struct.setStateIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1439,12 +1591,12 @@
               {
                 org.apache.thrift.protocol.TList _list8 = iprot.readListBegin();
                 struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list8.size);
-                for (int _i9 = 0; _i9 < _list8.size; ++_i9)
+                org.apache.accumulo.core.data.thrift.TColumn _elem9;
+                for (int _i10 = 0; _i10 < _list8.size; ++_i10)
                 {
-                  org.apache.accumulo.core.data.thrift.TColumn _elem10;
-                  _elem10 = new org.apache.accumulo.core.data.thrift.TColumn();
-                  _elem10.read(iprot);
-                  struct.columns.add(_elem10);
+                  _elem9 = new org.apache.accumulo.core.data.thrift.TColumn();
+                  _elem9.read(iprot);
+                  struct.columns.add(_elem9);
                 }
                 iprot.readListEnd();
               }
@@ -1458,12 +1610,12 @@
               {
                 org.apache.thrift.protocol.TList _list11 = iprot.readListBegin();
                 struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list11.size);
-                for (int _i12 = 0; _i12 < _list11.size; ++_i12)
+                org.apache.accumulo.core.data.thrift.IterInfo _elem12;
+                for (int _i13 = 0; _i13 < _list11.size; ++_i13)
                 {
-                  org.apache.accumulo.core.data.thrift.IterInfo _elem13;
-                  _elem13 = new org.apache.accumulo.core.data.thrift.IterInfo();
-                  _elem13.read(iprot);
-                  struct.ssiList.add(_elem13);
+                  _elem12 = new org.apache.accumulo.core.data.thrift.IterInfo();
+                  _elem12.read(iprot);
+                  struct.ssiList.add(_elem12);
                 }
                 iprot.readListEnd();
               }
@@ -1477,25 +1629,25 @@
               {
                 org.apache.thrift.protocol.TMap _map14 = iprot.readMapBegin();
                 struct.ssio = new HashMap<String,Map<String,String>>(2*_map14.size);
-                for (int _i15 = 0; _i15 < _map14.size; ++_i15)
+                String _key15;
+                Map<String,String> _val16;
+                for (int _i17 = 0; _i17 < _map14.size; ++_i17)
                 {
-                  String _key16;
-                  Map<String,String> _val17;
-                  _key16 = iprot.readString();
+                  _key15 = iprot.readString();
                   {
                     org.apache.thrift.protocol.TMap _map18 = iprot.readMapBegin();
-                    _val17 = new HashMap<String,String>(2*_map18.size);
-                    for (int _i19 = 0; _i19 < _map18.size; ++_i19)
+                    _val16 = new HashMap<String,String>(2*_map18.size);
+                    String _key19;
+                    String _val20;
+                    for (int _i21 = 0; _i21 < _map18.size; ++_i21)
                     {
-                      String _key20;
-                      String _val21;
-                      _key20 = iprot.readString();
-                      _val21 = iprot.readString();
-                      _val17.put(_key20, _val21);
+                      _key19 = iprot.readString();
+                      _val20 = iprot.readString();
+                      _val16.put(_key19, _val20);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.ssio.put(_key16, _val17);
+                  struct.ssio.put(_key15, _val16);
                 }
                 iprot.readMapEnd();
               }
@@ -1509,11 +1661,11 @@
               {
                 org.apache.thrift.protocol.TList _list22 = iprot.readListBegin();
                 struct.authorizations = new ArrayList<ByteBuffer>(_list22.size);
-                for (int _i23 = 0; _i23 < _list22.size; ++_i23)
+                ByteBuffer _elem23;
+                for (int _i24 = 0; _i24 < _list22.size; ++_i24)
                 {
-                  ByteBuffer _elem24;
-                  _elem24 = iprot.readBinary();
-                  struct.authorizations.add(_elem24);
+                  _elem23 = iprot.readBinary();
+                  struct.authorizations.add(_elem23);
                 }
                 iprot.readListEnd();
               }
@@ -1530,6 +1682,14 @@
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
+          case 15: // CLASS_LOADER_CONTEXT
+            if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+              struct.classLoaderContext = iprot.readString();
+              struct.setClassLoaderContextIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
           default:
             org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
         }
@@ -1643,6 +1803,11 @@
         oprot.writeI64(struct.scanId);
         oprot.writeFieldEnd();
       }
+      if (struct.classLoaderContext != null) {
+        oprot.writeFieldBegin(CLASS_LOADER_CONTEXT_FIELD_DESC);
+        oprot.writeString(struct.classLoaderContext);
+        oprot.writeFieldEnd();
+      }
       oprot.writeFieldStop();
       oprot.writeStructEnd();
     }
@@ -1700,7 +1865,10 @@
       if (struct.isSetScanId()) {
         optionals.set(12);
       }
-      oprot.writeBitSet(optionals, 13);
+      if (struct.isSetClassLoaderContext()) {
+        optionals.set(13);
+      }
+      oprot.writeBitSet(optionals, 14);
       if (struct.isSetClient()) {
         oprot.writeString(struct.client);
       }
@@ -1772,12 +1940,15 @@
       if (struct.isSetScanId()) {
         oprot.writeI64(struct.scanId);
       }
+      if (struct.isSetClassLoaderContext()) {
+        oprot.writeString(struct.classLoaderContext);
+      }
     }
 
     @Override
     public void read(org.apache.thrift.protocol.TProtocol prot, ActiveScan struct) throws org.apache.thrift.TException {
       TTupleProtocol iprot = (TTupleProtocol) prot;
-      BitSet incoming = iprot.readBitSet(13);
+      BitSet incoming = iprot.readBitSet(14);
       if (incoming.get(0)) {
         struct.client = iprot.readString();
         struct.setClientIsSet(true);
@@ -1799,11 +1970,11 @@
         struct.setIdleTimeIsSet(true);
       }
       if (incoming.get(5)) {
-        struct.type = ScanType.findByValue(iprot.readI32());
+        struct.type = org.apache.accumulo.core.tabletserver.thrift.ScanType.findByValue(iprot.readI32());
         struct.setTypeIsSet(true);
       }
       if (incoming.get(6)) {
-        struct.state = ScanState.findByValue(iprot.readI32());
+        struct.state = org.apache.accumulo.core.tabletserver.thrift.ScanState.findByValue(iprot.readI32());
         struct.setStateIsSet(true);
       }
       if (incoming.get(7)) {
@@ -1815,12 +1986,12 @@
         {
           org.apache.thrift.protocol.TList _list35 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list35.size);
-          for (int _i36 = 0; _i36 < _list35.size; ++_i36)
+          org.apache.accumulo.core.data.thrift.TColumn _elem36;
+          for (int _i37 = 0; _i37 < _list35.size; ++_i37)
           {
-            org.apache.accumulo.core.data.thrift.TColumn _elem37;
-            _elem37 = new org.apache.accumulo.core.data.thrift.TColumn();
-            _elem37.read(iprot);
-            struct.columns.add(_elem37);
+            _elem36 = new org.apache.accumulo.core.data.thrift.TColumn();
+            _elem36.read(iprot);
+            struct.columns.add(_elem36);
           }
         }
         struct.setColumnsIsSet(true);
@@ -1829,12 +2000,12 @@
         {
           org.apache.thrift.protocol.TList _list38 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list38.size);
-          for (int _i39 = 0; _i39 < _list38.size; ++_i39)
+          org.apache.accumulo.core.data.thrift.IterInfo _elem39;
+          for (int _i40 = 0; _i40 < _list38.size; ++_i40)
           {
-            org.apache.accumulo.core.data.thrift.IterInfo _elem40;
-            _elem40 = new org.apache.accumulo.core.data.thrift.IterInfo();
-            _elem40.read(iprot);
-            struct.ssiList.add(_elem40);
+            _elem39 = new org.apache.accumulo.core.data.thrift.IterInfo();
+            _elem39.read(iprot);
+            struct.ssiList.add(_elem39);
           }
         }
         struct.setSsiListIsSet(true);
@@ -1843,24 +2014,24 @@
         {
           org.apache.thrift.protocol.TMap _map41 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
           struct.ssio = new HashMap<String,Map<String,String>>(2*_map41.size);
-          for (int _i42 = 0; _i42 < _map41.size; ++_i42)
+          String _key42;
+          Map<String,String> _val43;
+          for (int _i44 = 0; _i44 < _map41.size; ++_i44)
           {
-            String _key43;
-            Map<String,String> _val44;
-            _key43 = iprot.readString();
+            _key42 = iprot.readString();
             {
               org.apache.thrift.protocol.TMap _map45 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-              _val44 = new HashMap<String,String>(2*_map45.size);
-              for (int _i46 = 0; _i46 < _map45.size; ++_i46)
+              _val43 = new HashMap<String,String>(2*_map45.size);
+              String _key46;
+              String _val47;
+              for (int _i48 = 0; _i48 < _map45.size; ++_i48)
               {
-                String _key47;
-                String _val48;
-                _key47 = iprot.readString();
-                _val48 = iprot.readString();
-                _val44.put(_key47, _val48);
+                _key46 = iprot.readString();
+                _val47 = iprot.readString();
+                _val43.put(_key46, _val47);
               }
             }
-            struct.ssio.put(_key43, _val44);
+            struct.ssio.put(_key42, _val43);
           }
         }
         struct.setSsioIsSet(true);
@@ -1869,11 +2040,11 @@
         {
           org.apache.thrift.protocol.TList _list49 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.authorizations = new ArrayList<ByteBuffer>(_list49.size);
-          for (int _i50 = 0; _i50 < _list49.size; ++_i50)
+          ByteBuffer _elem50;
+          for (int _i51 = 0; _i51 < _list49.size; ++_i51)
           {
-            ByteBuffer _elem51;
-            _elem51 = iprot.readBinary();
-            struct.authorizations.add(_elem51);
+            _elem50 = iprot.readBinary();
+            struct.authorizations.add(_elem50);
           }
         }
         struct.setAuthorizationsIsSet(true);
@@ -1882,6 +2053,10 @@
         struct.scanId = iprot.readI64();
         struct.setScanIdIsSet(true);
       }
+      if (incoming.get(13)) {
+        struct.classLoaderContext = iprot.readString();
+        struct.setClassLoaderContextIsSet(true);
+      }
     }
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionReason.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionReason.java
index 591dffd..387908b 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionReason.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionReason.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionType.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionType.java
index f2f2cd6..9252e3d 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionType.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/CompactionType.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ConstraintViolationException.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ConstraintViolationException.java
index 7a94159..f35e187 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ConstraintViolationException.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ConstraintViolationException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ConstraintViolationException extends TException implements org.apache.thrift.TBase<ConstraintViolationException, ConstraintViolationException._Fields>, java.io.Serializable, Cloneable, Comparable<ConstraintViolationException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ConstraintViolationException extends TException implements org.apache.thrift.TBase<ConstraintViolationException, ConstraintViolationException._Fields>, java.io.Serializable, Cloneable, Comparable<ConstraintViolationException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ConstraintViolationException");
 
   private static final org.apache.thrift.protocol.TField VIOLATION_SUMMARIES_FIELD_DESC = new org.apache.thrift.protocol.TField("violationSummaries", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -263,7 +266,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_violationSummaries = true && (isSetViolationSummaries());
+    list.add(present_violationSummaries);
+    if (present_violationSummaries)
+      list.add(violationSummaries);
+
+    return list.hashCode();
   }
 
   @Override
@@ -359,12 +369,12 @@
               {
                 org.apache.thrift.protocol.TList _list0 = iprot.readListBegin();
                 struct.violationSummaries = new ArrayList<org.apache.accumulo.core.data.thrift.TConstraintViolationSummary>(_list0.size);
-                for (int _i1 = 0; _i1 < _list0.size; ++_i1)
+                org.apache.accumulo.core.data.thrift.TConstraintViolationSummary _elem1;
+                for (int _i2 = 0; _i2 < _list0.size; ++_i2)
                 {
-                  org.apache.accumulo.core.data.thrift.TConstraintViolationSummary _elem2;
-                  _elem2 = new org.apache.accumulo.core.data.thrift.TConstraintViolationSummary();
-                  _elem2.read(iprot);
-                  struct.violationSummaries.add(_elem2);
+                  _elem1 = new org.apache.accumulo.core.data.thrift.TConstraintViolationSummary();
+                  _elem1.read(iprot);
+                  struct.violationSummaries.add(_elem1);
                 }
                 iprot.readListEnd();
               }
@@ -441,12 +451,12 @@
         {
           org.apache.thrift.protocol.TList _list5 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.violationSummaries = new ArrayList<org.apache.accumulo.core.data.thrift.TConstraintViolationSummary>(_list5.size);
-          for (int _i6 = 0; _i6 < _list5.size; ++_i6)
+          org.apache.accumulo.core.data.thrift.TConstraintViolationSummary _elem6;
+          for (int _i7 = 0; _i7 < _list5.size; ++_i7)
           {
-            org.apache.accumulo.core.data.thrift.TConstraintViolationSummary _elem7;
-            _elem7 = new org.apache.accumulo.core.data.thrift.TConstraintViolationSummary();
-            _elem7.read(iprot);
-            struct.violationSummaries.add(_elem7);
+            _elem6 = new org.apache.accumulo.core.data.thrift.TConstraintViolationSummary();
+            _elem6.read(iprot);
+            struct.violationSummaries.add(_elem6);
           }
         }
         struct.setViolationSummariesIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/IteratorConfig.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/IteratorConfig.java
index 9db0608..21fd63c 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/IteratorConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/IteratorConfig.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class IteratorConfig implements org.apache.thrift.TBase<IteratorConfig, IteratorConfig._Fields>, java.io.Serializable, Cloneable, Comparable<IteratorConfig> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class IteratorConfig implements org.apache.thrift.TBase<IteratorConfig, IteratorConfig._Fields>, java.io.Serializable, Cloneable, Comparable<IteratorConfig> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("IteratorConfig");
 
   private static final org.apache.thrift.protocol.TField ITERATORS_FIELD_DESC = new org.apache.thrift.protocol.TField("iterators", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -263,7 +266,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_iterators = true && (isSetIterators());
+    list.add(present_iterators);
+    if (present_iterators)
+      list.add(iterators);
+
+    return list.hashCode();
   }
 
   @Override
@@ -359,12 +369,12 @@
               {
                 org.apache.thrift.protocol.TList _list98 = iprot.readListBegin();
                 struct.iterators = new ArrayList<TIteratorSetting>(_list98.size);
-                for (int _i99 = 0; _i99 < _list98.size; ++_i99)
+                TIteratorSetting _elem99;
+                for (int _i100 = 0; _i100 < _list98.size; ++_i100)
                 {
-                  TIteratorSetting _elem100;
-                  _elem100 = new TIteratorSetting();
-                  _elem100.read(iprot);
-                  struct.iterators.add(_elem100);
+                  _elem99 = new TIteratorSetting();
+                  _elem99.read(iprot);
+                  struct.iterators.add(_elem99);
                 }
                 iprot.readListEnd();
               }
@@ -441,12 +451,12 @@
         {
           org.apache.thrift.protocol.TList _list103 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.iterators = new ArrayList<TIteratorSetting>(_list103.size);
-          for (int _i104 = 0; _i104 < _list103.size; ++_i104)
+          TIteratorSetting _elem104;
+          for (int _i105 = 0; _i105 < _list103.size; ++_i105)
           {
-            TIteratorSetting _elem105;
-            _elem105 = new TIteratorSetting();
-            _elem105.read(iprot);
-            struct.iterators.add(_elem105);
+            _elem104 = new TIteratorSetting();
+            _elem104.read(iprot);
+            struct.iterators.add(_elem104);
           }
         }
         struct.setIteratorsIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NoSuchScanIDException.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NoSuchScanIDException.java
index c0dc02b..bdbdd18 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NoSuchScanIDException.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NoSuchScanIDException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class NoSuchScanIDException extends TException implements org.apache.thrift.TBase<NoSuchScanIDException, NoSuchScanIDException._Fields>, java.io.Serializable, Cloneable, Comparable<NoSuchScanIDException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class NoSuchScanIDException extends TException implements org.apache.thrift.TBase<NoSuchScanIDException, NoSuchScanIDException._Fields>, java.io.Serializable, Cloneable, Comparable<NoSuchScanIDException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("NoSuchScanIDException");
 
 
@@ -178,7 +181,9 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NotServingTabletException.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NotServingTabletException.java
index ef87937..20903b3 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NotServingTabletException.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/NotServingTabletException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class NotServingTabletException extends TException implements org.apache.thrift.TBase<NotServingTabletException, NotServingTabletException._Fields>, java.io.Serializable, Cloneable, Comparable<NotServingTabletException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class NotServingTabletException extends TException implements org.apache.thrift.TBase<NotServingTabletException, NotServingTabletException._Fields>, java.io.Serializable, Cloneable, Comparable<NotServingTabletException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("NotServingTabletException");
 
   private static final org.apache.thrift.protocol.TField EXTENT_FIELD_DESC = new org.apache.thrift.protocol.TField("extent", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanState.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanState.java
index 5bb6857..b117890 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanState.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanState.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanType.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanType.java
index 34d7ac5..25d741d 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanType.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/ScanType.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TDurability.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TDurability.java
index 629b770..b8c7cff 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TDurability.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TDurability.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TIteratorSetting.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TIteratorSetting.java
index 8eddaba..a541206 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TIteratorSetting.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TIteratorSetting.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TIteratorSetting implements org.apache.thrift.TBase<TIteratorSetting, TIteratorSetting._Fields>, java.io.Serializable, Cloneable, Comparable<TIteratorSetting> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TIteratorSetting implements org.apache.thrift.TBase<TIteratorSetting, TIteratorSetting._Fields>, java.io.Serializable, Cloneable, Comparable<TIteratorSetting> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TIteratorSetting");
 
   private static final org.apache.thrift.protocol.TField PRIORITY_FIELD_DESC = new org.apache.thrift.protocol.TField("priority", org.apache.thrift.protocol.TType.I32, (short)1);
@@ -348,7 +351,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case PRIORITY:
-      return Integer.valueOf(getPriority());
+      return getPriority();
 
     case NAME:
       return getName();
@@ -436,7 +439,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_priority = true;
+    list.add(present_priority);
+    if (present_priority)
+      list.add(priority);
+
+    boolean present_name = true && (isSetName());
+    list.add(present_name);
+    if (present_name)
+      list.add(name);
+
+    boolean present_iteratorClass = true && (isSetIteratorClass());
+    list.add(present_iteratorClass);
+    if (present_iteratorClass)
+      list.add(iteratorClass);
+
+    boolean present_properties = true && (isSetProperties());
+    list.add(present_properties);
+    if (present_properties)
+      list.add(properties);
+
+    return list.hashCode();
   }
 
   @Override
@@ -608,13 +633,13 @@
               {
                 org.apache.thrift.protocol.TMap _map88 = iprot.readMapBegin();
                 struct.properties = new HashMap<String,String>(2*_map88.size);
-                for (int _i89 = 0; _i89 < _map88.size; ++_i89)
+                String _key89;
+                String _val90;
+                for (int _i91 = 0; _i91 < _map88.size; ++_i91)
                 {
-                  String _key90;
-                  String _val91;
-                  _key90 = iprot.readString();
-                  _val91 = iprot.readString();
-                  struct.properties.put(_key90, _val91);
+                  _key89 = iprot.readString();
+                  _val90 = iprot.readString();
+                  struct.properties.put(_key89, _val90);
                 }
                 iprot.readMapEnd();
               }
@@ -736,13 +761,13 @@
         {
           org.apache.thrift.protocol.TMap _map94 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.properties = new HashMap<String,String>(2*_map94.size);
-          for (int _i95 = 0; _i95 < _map94.size; ++_i95)
+          String _key95;
+          String _val96;
+          for (int _i97 = 0; _i97 < _map94.size; ++_i97)
           {
-            String _key96;
-            String _val97;
-            _key96 = iprot.readString();
-            _val97 = iprot.readString();
-            struct.properties.put(_key96, _val97);
+            _key95 = iprot.readString();
+            _val96 = iprot.readString();
+            struct.properties.put(_key95, _val96);
           }
         }
         struct.setPropertiesIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TSampleNotPresentException.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TSampleNotPresentException.java
new file mode 100644
index 0000000..dadea82
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TSampleNotPresentException.java
@@ -0,0 +1,419 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.core.tabletserver.thrift;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TSampleNotPresentException extends TException implements org.apache.thrift.TBase<TSampleNotPresentException, TSampleNotPresentException._Fields>, java.io.Serializable, Cloneable, Comparable<TSampleNotPresentException> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TSampleNotPresentException");
+
+  private static final org.apache.thrift.protocol.TField EXTENT_FIELD_DESC = new org.apache.thrift.protocol.TField("extent", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+    schemes.put(StandardScheme.class, new TSampleNotPresentExceptionStandardSchemeFactory());
+    schemes.put(TupleScheme.class, new TSampleNotPresentExceptionTupleSchemeFactory());
+  }
+
+  public org.apache.accumulo.core.data.thrift.TKeyExtent extent; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    EXTENT((short)1, "extent");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if its not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      switch(fieldId) {
+        case 1: // EXTENT
+          return EXTENT;
+        default:
+          return null;
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if its not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+  static {
+    Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+    tmpMap.put(_Fields.EXTENT, new org.apache.thrift.meta_data.FieldMetaData("extent", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, org.apache.accumulo.core.data.thrift.TKeyExtent.class)));
+    metaDataMap = Collections.unmodifiableMap(tmpMap);
+    org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(TSampleNotPresentException.class, metaDataMap);
+  }
+
+  public TSampleNotPresentException() {
+  }
+
+  public TSampleNotPresentException(
+    org.apache.accumulo.core.data.thrift.TKeyExtent extent)
+  {
+    this();
+    this.extent = extent;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public TSampleNotPresentException(TSampleNotPresentException other) {
+    if (other.isSetExtent()) {
+      this.extent = new org.apache.accumulo.core.data.thrift.TKeyExtent(other.extent);
+    }
+  }
+
+  public TSampleNotPresentException deepCopy() {
+    return new TSampleNotPresentException(this);
+  }
+
+  @Override
+  public void clear() {
+    this.extent = null;
+  }
+
+  public org.apache.accumulo.core.data.thrift.TKeyExtent getExtent() {
+    return this.extent;
+  }
+
+  public TSampleNotPresentException setExtent(org.apache.accumulo.core.data.thrift.TKeyExtent extent) {
+    this.extent = extent;
+    return this;
+  }
+
+  public void unsetExtent() {
+    this.extent = null;
+  }
+
+  /** Returns true if field extent is set (has been assigned a value) and false otherwise */
+  public boolean isSetExtent() {
+    return this.extent != null;
+  }
+
+  public void setExtentIsSet(boolean value) {
+    if (!value) {
+      this.extent = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case EXTENT:
+      if (value == null) {
+        unsetExtent();
+      } else {
+        setExtent((org.apache.accumulo.core.data.thrift.TKeyExtent)value);
+      }
+      break;
+
+    }
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case EXTENT:
+      return getExtent();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    if (field == null) {
+      throw new IllegalArgumentException();
+    }
+
+    switch (field) {
+    case EXTENT:
+      return isSetExtent();
+    }
+    throw new IllegalStateException();
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof TSampleNotPresentException)
+      return this.equals((TSampleNotPresentException)that);
+    return false;
+  }
+
+  public boolean equals(TSampleNotPresentException that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_extent = true && this.isSetExtent();
+    boolean that_present_extent = true && that.isSetExtent();
+    if (this_present_extent || that_present_extent) {
+      if (!(this_present_extent && that_present_extent))
+        return false;
+      if (!this.extent.equals(that.extent))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    return list.hashCode();
+  }
+
+  @Override
+  public int compareTo(TSampleNotPresentException other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+
+    lastComparison = Boolean.valueOf(isSetExtent()).compareTo(other.isSetExtent());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetExtent()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.extent, other.extent);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    return 0;
+  }
+
+  public _Fields fieldForId(int fieldId) {
+    return _Fields.findByThriftId(fieldId);
+  }
+
+  public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+    schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+  }
+
+  public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+    schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("TSampleNotPresentException(");
+    boolean first = true;
+
+    sb.append("extent:");
+    if (this.extent == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.extent);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws org.apache.thrift.TException {
+    // check for required fields
+    // check for sub-struct validity
+    if (extent != null) {
+      extent.validate();
+    }
+  }
+
+  private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+    try {
+      write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+    try {
+      read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private static class TSampleNotPresentExceptionStandardSchemeFactory implements SchemeFactory {
+    public TSampleNotPresentExceptionStandardScheme getScheme() {
+      return new TSampleNotPresentExceptionStandardScheme();
+    }
+  }
+
+  private static class TSampleNotPresentExceptionStandardScheme extends StandardScheme<TSampleNotPresentException> {
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot, TSampleNotPresentException struct) throws org.apache.thrift.TException {
+      org.apache.thrift.protocol.TField schemeField;
+      iprot.readStructBegin();
+      while (true)
+      {
+        schemeField = iprot.readFieldBegin();
+        if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+          break;
+        }
+        switch (schemeField.id) {
+          case 1: // EXTENT
+            if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+              struct.extent = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+              struct.extent.read(iprot);
+              struct.setExtentIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          default:
+            org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+        }
+        iprot.readFieldEnd();
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      struct.validate();
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot, TSampleNotPresentException struct) throws org.apache.thrift.TException {
+      struct.validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (struct.extent != null) {
+        oprot.writeFieldBegin(EXTENT_FIELD_DESC);
+        struct.extent.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+  }
+
+  private static class TSampleNotPresentExceptionTupleSchemeFactory implements SchemeFactory {
+    public TSampleNotPresentExceptionTupleScheme getScheme() {
+      return new TSampleNotPresentExceptionTupleScheme();
+    }
+  }
+
+  private static class TSampleNotPresentExceptionTupleScheme extends TupleScheme<TSampleNotPresentException> {
+
+    @Override
+    public void write(org.apache.thrift.protocol.TProtocol prot, TSampleNotPresentException struct) throws org.apache.thrift.TException {
+      TTupleProtocol oprot = (TTupleProtocol) prot;
+      BitSet optionals = new BitSet();
+      if (struct.isSetExtent()) {
+        optionals.set(0);
+      }
+      oprot.writeBitSet(optionals, 1);
+      if (struct.isSetExtent()) {
+        struct.extent.write(oprot);
+      }
+    }
+
+    @Override
+    public void read(org.apache.thrift.protocol.TProtocol prot, TSampleNotPresentException struct) throws org.apache.thrift.TException {
+      TTupleProtocol iprot = (TTupleProtocol) prot;
+      BitSet incoming = iprot.readBitSet(1);
+      if (incoming.get(0)) {
+        struct.extent = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+        struct.extent.read(iprot);
+        struct.setExtentIsSet(true);
+      }
+    }
+  }
+
+}
+
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TSamplerConfiguration.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TSamplerConfiguration.java
new file mode 100644
index 0000000..ee49a77
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TSamplerConfiguration.java
@@ -0,0 +1,571 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.core.tabletserver.thrift;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TSamplerConfiguration implements org.apache.thrift.TBase<TSamplerConfiguration, TSamplerConfiguration._Fields>, java.io.Serializable, Cloneable, Comparable<TSamplerConfiguration> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TSamplerConfiguration");
+
+  private static final org.apache.thrift.protocol.TField CLASS_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("className", org.apache.thrift.protocol.TType.STRING, (short)1);
+  private static final org.apache.thrift.protocol.TField OPTIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("options", org.apache.thrift.protocol.TType.MAP, (short)2);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+    schemes.put(StandardScheme.class, new TSamplerConfigurationStandardSchemeFactory());
+    schemes.put(TupleScheme.class, new TSamplerConfigurationTupleSchemeFactory());
+  }
+
+  public String className; // required
+  public Map<String,String> options; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    CLASS_NAME((short)1, "className"),
+    OPTIONS((short)2, "options");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if its not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      switch(fieldId) {
+        case 1: // CLASS_NAME
+          return CLASS_NAME;
+        case 2: // OPTIONS
+          return OPTIONS;
+        default:
+          return null;
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if its not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+  static {
+    Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+    tmpMap.put(_Fields.CLASS_NAME, new org.apache.thrift.meta_data.FieldMetaData("className", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+    tmpMap.put(_Fields.OPTIONS, new org.apache.thrift.meta_data.FieldMetaData("options", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.MapMetaData(org.apache.thrift.protocol.TType.MAP, 
+            new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING), 
+            new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING))));
+    metaDataMap = Collections.unmodifiableMap(tmpMap);
+    org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(TSamplerConfiguration.class, metaDataMap);
+  }
+
+  public TSamplerConfiguration() {
+  }
+
+  public TSamplerConfiguration(
+    String className,
+    Map<String,String> options)
+  {
+    this();
+    this.className = className;
+    this.options = options;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public TSamplerConfiguration(TSamplerConfiguration other) {
+    if (other.isSetClassName()) {
+      this.className = other.className;
+    }
+    if (other.isSetOptions()) {
+      Map<String,String> __this__options = new HashMap<String,String>(other.options);
+      this.options = __this__options;
+    }
+  }
+
+  public TSamplerConfiguration deepCopy() {
+    return new TSamplerConfiguration(this);
+  }
+
+  @Override
+  public void clear() {
+    this.className = null;
+    this.options = null;
+  }
+
+  public String getClassName() {
+    return this.className;
+  }
+
+  public TSamplerConfiguration setClassName(String className) {
+    this.className = className;
+    return this;
+  }
+
+  public void unsetClassName() {
+    this.className = null;
+  }
+
+  /** Returns true if field className is set (has been assigned a value) and false otherwise */
+  public boolean isSetClassName() {
+    return this.className != null;
+  }
+
+  public void setClassNameIsSet(boolean value) {
+    if (!value) {
+      this.className = null;
+    }
+  }
+
+  public int getOptionsSize() {
+    return (this.options == null) ? 0 : this.options.size();
+  }
+
+  public void putToOptions(String key, String val) {
+    if (this.options == null) {
+      this.options = new HashMap<String,String>();
+    }
+    this.options.put(key, val);
+  }
+
+  public Map<String,String> getOptions() {
+    return this.options;
+  }
+
+  public TSamplerConfiguration setOptions(Map<String,String> options) {
+    this.options = options;
+    return this;
+  }
+
+  public void unsetOptions() {
+    this.options = null;
+  }
+
+  /** Returns true if field options is set (has been assigned a value) and false otherwise */
+  public boolean isSetOptions() {
+    return this.options != null;
+  }
+
+  public void setOptionsIsSet(boolean value) {
+    if (!value) {
+      this.options = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case CLASS_NAME:
+      if (value == null) {
+        unsetClassName();
+      } else {
+        setClassName((String)value);
+      }
+      break;
+
+    case OPTIONS:
+      if (value == null) {
+        unsetOptions();
+      } else {
+        setOptions((Map<String,String>)value);
+      }
+      break;
+
+    }
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case CLASS_NAME:
+      return getClassName();
+
+    case OPTIONS:
+      return getOptions();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    if (field == null) {
+      throw new IllegalArgumentException();
+    }
+
+    switch (field) {
+    case CLASS_NAME:
+      return isSetClassName();
+    case OPTIONS:
+      return isSetOptions();
+    }
+    throw new IllegalStateException();
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof TSamplerConfiguration)
+      return this.equals((TSamplerConfiguration)that);
+    return false;
+  }
+
+  public boolean equals(TSamplerConfiguration that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_className = true && this.isSetClassName();
+    boolean that_present_className = true && that.isSetClassName();
+    if (this_present_className || that_present_className) {
+      if (!(this_present_className && that_present_className))
+        return false;
+      if (!this.className.equals(that.className))
+        return false;
+    }
+
+    boolean this_present_options = true && this.isSetOptions();
+    boolean that_present_options = true && that.isSetOptions();
+    if (this_present_options || that_present_options) {
+      if (!(this_present_options && that_present_options))
+        return false;
+      if (!this.options.equals(that.options))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_className = true && (isSetClassName());
+    list.add(present_className);
+    if (present_className)
+      list.add(className);
+
+    boolean present_options = true && (isSetOptions());
+    list.add(present_options);
+    if (present_options)
+      list.add(options);
+
+    return list.hashCode();
+  }
+
+  @Override
+  public int compareTo(TSamplerConfiguration other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+
+    lastComparison = Boolean.valueOf(isSetClassName()).compareTo(other.isSetClassName());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetClassName()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.className, other.className);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    lastComparison = Boolean.valueOf(isSetOptions()).compareTo(other.isSetOptions());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetOptions()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.options, other.options);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    return 0;
+  }
+
+  public _Fields fieldForId(int fieldId) {
+    return _Fields.findByThriftId(fieldId);
+  }
+
+  public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+    schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+  }
+
+  public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+    schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("TSamplerConfiguration(");
+    boolean first = true;
+
+    sb.append("className:");
+    if (this.className == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.className);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("options:");
+    if (this.options == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.options);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws org.apache.thrift.TException {
+    // check for required fields
+    // check for sub-struct validity
+  }
+
+  private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+    try {
+      write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+    try {
+      read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private static class TSamplerConfigurationStandardSchemeFactory implements SchemeFactory {
+    public TSamplerConfigurationStandardScheme getScheme() {
+      return new TSamplerConfigurationStandardScheme();
+    }
+  }
+
+  private static class TSamplerConfigurationStandardScheme extends StandardScheme<TSamplerConfiguration> {
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot, TSamplerConfiguration struct) throws org.apache.thrift.TException {
+      org.apache.thrift.protocol.TField schemeField;
+      iprot.readStructBegin();
+      while (true)
+      {
+        schemeField = iprot.readFieldBegin();
+        if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+          break;
+        }
+        switch (schemeField.id) {
+          case 1: // CLASS_NAME
+            if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+              struct.className = iprot.readString();
+              struct.setClassNameIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          case 2: // OPTIONS
+            if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
+              {
+                org.apache.thrift.protocol.TMap _map106 = iprot.readMapBegin();
+                struct.options = new HashMap<String,String>(2*_map106.size);
+                String _key107;
+                String _val108;
+                for (int _i109 = 0; _i109 < _map106.size; ++_i109)
+                {
+                  _key107 = iprot.readString();
+                  _val108 = iprot.readString();
+                  struct.options.put(_key107, _val108);
+                }
+                iprot.readMapEnd();
+              }
+              struct.setOptionsIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          default:
+            org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+        }
+        iprot.readFieldEnd();
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      struct.validate();
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot, TSamplerConfiguration struct) throws org.apache.thrift.TException {
+      struct.validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (struct.className != null) {
+        oprot.writeFieldBegin(CLASS_NAME_FIELD_DESC);
+        oprot.writeString(struct.className);
+        oprot.writeFieldEnd();
+      }
+      if (struct.options != null) {
+        oprot.writeFieldBegin(OPTIONS_FIELD_DESC);
+        {
+          oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, struct.options.size()));
+          for (Map.Entry<String, String> _iter110 : struct.options.entrySet())
+          {
+            oprot.writeString(_iter110.getKey());
+            oprot.writeString(_iter110.getValue());
+          }
+          oprot.writeMapEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+  }
+
+  private static class TSamplerConfigurationTupleSchemeFactory implements SchemeFactory {
+    public TSamplerConfigurationTupleScheme getScheme() {
+      return new TSamplerConfigurationTupleScheme();
+    }
+  }
+
+  private static class TSamplerConfigurationTupleScheme extends TupleScheme<TSamplerConfiguration> {
+
+    @Override
+    public void write(org.apache.thrift.protocol.TProtocol prot, TSamplerConfiguration struct) throws org.apache.thrift.TException {
+      TTupleProtocol oprot = (TTupleProtocol) prot;
+      BitSet optionals = new BitSet();
+      if (struct.isSetClassName()) {
+        optionals.set(0);
+      }
+      if (struct.isSetOptions()) {
+        optionals.set(1);
+      }
+      oprot.writeBitSet(optionals, 2);
+      if (struct.isSetClassName()) {
+        oprot.writeString(struct.className);
+      }
+      if (struct.isSetOptions()) {
+        {
+          oprot.writeI32(struct.options.size());
+          for (Map.Entry<String, String> _iter111 : struct.options.entrySet())
+          {
+            oprot.writeString(_iter111.getKey());
+            oprot.writeString(_iter111.getValue());
+          }
+        }
+      }
+    }
+
+    @Override
+    public void read(org.apache.thrift.protocol.TProtocol prot, TSamplerConfiguration struct) throws org.apache.thrift.TException {
+      TTupleProtocol iprot = (TTupleProtocol) prot;
+      BitSet incoming = iprot.readBitSet(2);
+      if (incoming.get(0)) {
+        struct.className = iprot.readString();
+        struct.setClassNameIsSet(true);
+      }
+      if (incoming.get(1)) {
+        {
+          org.apache.thrift.protocol.TMap _map112 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.options = new HashMap<String,String>(2*_map112.size);
+          String _key113;
+          String _val114;
+          for (int _i115 = 0; _i115 < _map112.size; ++_i115)
+          {
+            _key113 = iprot.readString();
+            _val114 = iprot.readString();
+            struct.options.put(_key113, _val114);
+          }
+        }
+        struct.setOptionsIsSet(true);
+      }
+    }
+  }
+
+}
+
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TUnloadTabletGoal.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TUnloadTabletGoal.java
new file mode 100644
index 0000000..3ce0b31
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TUnloadTabletGoal.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.core.tabletserver.thrift;
+
+
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.thrift.TEnum;
+
+@SuppressWarnings({"unused"}) public enum TUnloadTabletGoal implements org.apache.thrift.TEnum {
+  UNKNOWN(0),
+  UNASSIGNED(1),
+  SUSPENDED(2),
+  DELETED(3);
+
+  private final int value;
+
+  private TUnloadTabletGoal(int value) {
+    this.value = value;
+  }
+
+  /**
+   * Get the integer value of this enum value, as defined in the Thrift IDL.
+   */
+  public int getValue() {
+    return value;
+  }
+
+  /**
+   * Find the enum type by its integer value, as defined in the Thrift IDL.
+   * @return null if the value is not found.
+   */
+  public static TUnloadTabletGoal findByValue(int value) { 
+    switch (value) {
+      case 0:
+        return UNKNOWN;
+      case 1:
+        return UNASSIGNED;
+      case 2:
+        return SUSPENDED;
+      case 3:
+        return DELETED;
+      default:
+        return null;
+    }
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
index 02bd4e1..4ce9927 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletClientService.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,22 +45,25 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TabletClientService {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TabletClientService {
 
   public interface Iface extends org.apache.accumulo.core.client.impl.thrift.ClientService.Iface {
 
-    public org.apache.accumulo.core.data.thrift.InitialScan startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException;
+    public org.apache.accumulo.core.data.thrift.InitialScan startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException;
 
-    public org.apache.accumulo.core.data.thrift.ScanResult continueScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException;
+    public org.apache.accumulo.core.data.thrift.ScanResult continueScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException;
 
     public void closeScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws org.apache.thrift.TException;
 
-    public org.apache.accumulo.core.data.thrift.InitialMultiScan startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException;
+    public org.apache.accumulo.core.data.thrift.InitialMultiScan startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, TSampleNotPresentException, org.apache.thrift.TException;
 
-    public org.apache.accumulo.core.data.thrift.MultiScanResult continueMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, org.apache.thrift.TException;
+    public org.apache.accumulo.core.data.thrift.MultiScanResult continueMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, TSampleNotPresentException, org.apache.thrift.TException;
 
     public void closeMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, org.apache.thrift.TException;
 
@@ -72,7 +75,7 @@
 
     public void update(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent keyExtent, org.apache.accumulo.core.data.thrift.TMutation mutation, TDurability durability) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, ConstraintViolationException, org.apache.thrift.TException;
 
-    public org.apache.accumulo.core.data.thrift.TConditionalSession startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException;
+    public org.apache.accumulo.core.data.thrift.TConditionalSession startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, String classLoaderContext) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException;
 
     public List<org.apache.accumulo.core.data.thrift.TCMResult> conditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long sessID, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TConditionalMutation>> mutations, List<String> symbols) throws NoSuchScanIDException, org.apache.thrift.TException;
 
@@ -86,7 +89,7 @@
 
     public void loadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent) throws org.apache.thrift.TException;
 
-    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, boolean save) throws org.apache.thrift.TException;
+    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, TUnloadTabletGoal goal, long requestTime) throws org.apache.thrift.TException;
 
     public void flush(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, String tableId, ByteBuffer startRow, ByteBuffer endRow) throws org.apache.thrift.TException;
 
@@ -118,13 +121,13 @@
 
   public interface AsyncIface extends org.apache.accumulo.core.client.impl.thrift.ClientService .AsyncIface {
 
-    public void startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+    public void startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
     public void continueScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
     public void closeScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
-    public void startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+    public void startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
     public void continueMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
@@ -138,7 +141,7 @@
 
     public void update(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent keyExtent, org.apache.accumulo.core.data.thrift.TMutation mutation, TDurability durability, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
-    public void startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+    public void startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
     public void conditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long sessID, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TConditionalMutation>> mutations, List<String> symbols, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
@@ -152,7 +155,7 @@
 
     public void loadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
-    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, boolean save, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, TUnloadTabletGoal goal, long requestTime, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
     public void flush(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, String tableId, ByteBuffer startRow, ByteBuffer endRow, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
@@ -202,13 +205,13 @@
       super(iprot, oprot);
     }
 
-    public org.apache.accumulo.core.data.thrift.InitialScan startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.InitialScan startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException
     {
-      send_startScan(tinfo, credentials, extent, range, columns, batchSize, ssiList, ssio, authorizations, waitForWrites, isolated, readaheadThreshold);
+      send_startScan(tinfo, credentials, extent, range, columns, batchSize, ssiList, ssio, authorizations, waitForWrites, isolated, readaheadThreshold, samplerConfig, batchTimeOut, classLoaderContext);
       return recv_startScan();
     }
 
-    public void send_startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold) throws org.apache.thrift.TException
+    public void send_startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) throws org.apache.thrift.TException
     {
       startScan_args args = new startScan_args();
       args.setTinfo(tinfo);
@@ -223,10 +226,13 @@
       args.setWaitForWrites(waitForWrites);
       args.setIsolated(isolated);
       args.setReadaheadThreshold(readaheadThreshold);
+      args.setSamplerConfig(samplerConfig);
+      args.setBatchTimeOut(batchTimeOut);
+      args.setClassLoaderContext(classLoaderContext);
       sendBase("startScan", args);
     }
 
-    public org.apache.accumulo.core.data.thrift.InitialScan recv_startScan() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.InitialScan recv_startScan() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException
     {
       startScan_result result = new startScan_result();
       receiveBase(result, "startScan");
@@ -242,10 +248,13 @@
       if (result.tmfe != null) {
         throw result.tmfe;
       }
+      if (result.tsnpe != null) {
+        throw result.tsnpe;
+      }
       throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "startScan failed: unknown result");
     }
 
-    public org.apache.accumulo.core.data.thrift.ScanResult continueScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.ScanResult continueScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException
     {
       send_continueScan(tinfo, scanID);
       return recv_continueScan();
@@ -259,7 +268,7 @@
       sendBase("continueScan", args);
     }
 
-    public org.apache.accumulo.core.data.thrift.ScanResult recv_continueScan() throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.ScanResult recv_continueScan() throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException
     {
       continueScan_result result = new continueScan_result();
       receiveBase(result, "continueScan");
@@ -275,6 +284,9 @@
       if (result.tmfe != null) {
         throw result.tmfe;
       }
+      if (result.tsnpe != null) {
+        throw result.tsnpe;
+      }
       throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "continueScan failed: unknown result");
     }
 
@@ -288,16 +300,16 @@
       closeScan_args args = new closeScan_args();
       args.setTinfo(tinfo);
       args.setScanID(scanID);
-      sendBase("closeScan", args);
+      sendBaseOneway("closeScan", args);
     }
 
-    public org.apache.accumulo.core.data.thrift.InitialMultiScan startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.InitialMultiScan startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, TSampleNotPresentException, org.apache.thrift.TException
     {
-      send_startMultiScan(tinfo, credentials, batch, columns, ssiList, ssio, authorizations, waitForWrites);
+      send_startMultiScan(tinfo, credentials, batch, columns, ssiList, ssio, authorizations, waitForWrites, samplerConfig, batchTimeOut, classLoaderContext);
       return recv_startMultiScan();
     }
 
-    public void send_startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites) throws org.apache.thrift.TException
+    public void send_startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) throws org.apache.thrift.TException
     {
       startMultiScan_args args = new startMultiScan_args();
       args.setTinfo(tinfo);
@@ -308,10 +320,13 @@
       args.setSsio(ssio);
       args.setAuthorizations(authorizations);
       args.setWaitForWrites(waitForWrites);
+      args.setSamplerConfig(samplerConfig);
+      args.setBatchTimeOut(batchTimeOut);
+      args.setClassLoaderContext(classLoaderContext);
       sendBase("startMultiScan", args);
     }
 
-    public org.apache.accumulo.core.data.thrift.InitialMultiScan recv_startMultiScan() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.InitialMultiScan recv_startMultiScan() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, TSampleNotPresentException, org.apache.thrift.TException
     {
       startMultiScan_result result = new startMultiScan_result();
       receiveBase(result, "startMultiScan");
@@ -321,10 +336,13 @@
       if (result.sec != null) {
         throw result.sec;
       }
+      if (result.tsnpe != null) {
+        throw result.tsnpe;
+      }
       throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "startMultiScan failed: unknown result");
     }
 
-    public org.apache.accumulo.core.data.thrift.MultiScanResult continueMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.MultiScanResult continueMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long scanID) throws NoSuchScanIDException, TSampleNotPresentException, org.apache.thrift.TException
     {
       send_continueMultiScan(tinfo, scanID);
       return recv_continueMultiScan();
@@ -338,7 +356,7 @@
       sendBase("continueMultiScan", args);
     }
 
-    public org.apache.accumulo.core.data.thrift.MultiScanResult recv_continueMultiScan() throws NoSuchScanIDException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.MultiScanResult recv_continueMultiScan() throws NoSuchScanIDException, TSampleNotPresentException, org.apache.thrift.TException
     {
       continueMultiScan_result result = new continueMultiScan_result();
       receiveBase(result, "continueMultiScan");
@@ -348,6 +366,9 @@
       if (result.nssi != null) {
         throw result.nssi;
       }
+      if (result.tsnpe != null) {
+        throw result.tsnpe;
+      }
       throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "continueMultiScan failed: unknown result");
     }
 
@@ -415,7 +436,7 @@
       args.setUpdateID(updateID);
       args.setKeyExtent(keyExtent);
       args.setMutations(mutations);
-      sendBase("applyUpdates", args);
+      sendBaseOneway("applyUpdates", args);
     }
 
     public org.apache.accumulo.core.data.thrift.UpdateErrors closeUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, long updateID) throws NoSuchScanIDException, org.apache.thrift.TException
@@ -478,13 +499,13 @@
       return;
     }
 
-    public org.apache.accumulo.core.data.thrift.TConditionalSession startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
+    public org.apache.accumulo.core.data.thrift.TConditionalSession startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, String classLoaderContext) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
     {
-      send_startConditionalUpdate(tinfo, credentials, authorizations, tableID, durability);
+      send_startConditionalUpdate(tinfo, credentials, authorizations, tableID, durability, classLoaderContext);
       return recv_startConditionalUpdate();
     }
 
-    public void send_startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability) throws org.apache.thrift.TException
+    public void send_startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, String classLoaderContext) throws org.apache.thrift.TException
     {
       startConditionalUpdate_args args = new startConditionalUpdate_args();
       args.setTinfo(tinfo);
@@ -492,6 +513,7 @@
       args.setAuthorizations(authorizations);
       args.setTableID(tableID);
       args.setDurability(durability);
+      args.setClassLoaderContext(classLoaderContext);
       sendBase("startConditionalUpdate", args);
     }
 
@@ -568,7 +590,7 @@
       closeConditionalUpdate_args args = new closeConditionalUpdate_args();
       args.setTinfo(tinfo);
       args.setSessID(sessID);
-      sendBase("closeConditionalUpdate", args);
+      sendBaseOneway("closeConditionalUpdate", args);
     }
 
     public List<org.apache.accumulo.core.data.thrift.TKeyExtent> bulkImport(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, long tid, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>> files, boolean setTime) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
@@ -642,23 +664,24 @@
       args.setCredentials(credentials);
       args.setLock(lock);
       args.setExtent(extent);
-      sendBase("loadTablet", args);
+      sendBaseOneway("loadTablet", args);
     }
 
-    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, boolean save) throws org.apache.thrift.TException
+    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, TUnloadTabletGoal goal, long requestTime) throws org.apache.thrift.TException
     {
-      send_unloadTablet(tinfo, credentials, lock, extent, save);
+      send_unloadTablet(tinfo, credentials, lock, extent, goal, requestTime);
     }
 
-    public void send_unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, boolean save) throws org.apache.thrift.TException
+    public void send_unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, TUnloadTabletGoal goal, long requestTime) throws org.apache.thrift.TException
     {
       unloadTablet_args args = new unloadTablet_args();
       args.setTinfo(tinfo);
       args.setCredentials(credentials);
       args.setLock(lock);
       args.setExtent(extent);
-      args.setSave(save);
-      sendBase("unloadTablet", args);
+      args.setGoal(goal);
+      args.setRequestTime(requestTime);
+      sendBaseOneway("unloadTablet", args);
     }
 
     public void flush(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, String tableId, ByteBuffer startRow, ByteBuffer endRow) throws org.apache.thrift.TException
@@ -675,7 +698,7 @@
       args.setTableId(tableId);
       args.setStartRow(startRow);
       args.setEndRow(endRow);
-      sendBase("flush", args);
+      sendBaseOneway("flush", args);
     }
 
     public void flushTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent) throws org.apache.thrift.TException
@@ -690,7 +713,7 @@
       args.setCredentials(credentials);
       args.setLock(lock);
       args.setExtent(extent);
-      sendBase("flushTablet", args);
+      sendBaseOneway("flushTablet", args);
     }
 
     public void chop(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent) throws org.apache.thrift.TException
@@ -705,7 +728,7 @@
       args.setCredentials(credentials);
       args.setLock(lock);
       args.setExtent(extent);
-      sendBase("chop", args);
+      sendBaseOneway("chop", args);
     }
 
     public void compact(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, String tableId, ByteBuffer startRow, ByteBuffer endRow) throws org.apache.thrift.TException
@@ -722,7 +745,7 @@
       args.setTableId(tableId);
       args.setStartRow(startRow);
       args.setEndRow(endRow);
-      sendBase("compact", args);
+      sendBaseOneway("compact", args);
     }
 
     public org.apache.accumulo.core.master.thrift.TabletServerStatus getTabletServerStatus(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
@@ -843,7 +866,7 @@
       args.setTinfo(tinfo);
       args.setCredentials(credentials);
       args.setLock(lock);
-      sendBase("fastHalt", args);
+      sendBaseOneway("fastHalt", args);
     }
 
     public List<ActiveScan> getActiveScans(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials) throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException
@@ -911,7 +934,7 @@
       args.setTinfo(tinfo);
       args.setCredentials(credentials);
       args.setFilenames(filenames);
-      sendBase("removeLogs", args);
+      sendBaseOneway("removeLogs", args);
     }
 
     public List<String> getActiveLogs(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials) throws org.apache.thrift.TException
@@ -956,9 +979,9 @@
       super(protocolFactory, clientManager, transport);
     }
 
-    public void startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+    public void startScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
       checkReady();
-      startScan_call method_call = new startScan_call(tinfo, credentials, extent, range, columns, batchSize, ssiList, ssio, authorizations, waitForWrites, isolated, readaheadThreshold, resultHandler, this, ___protocolFactory, ___transport);
+      startScan_call method_call = new startScan_call(tinfo, credentials, extent, range, columns, batchSize, ssiList, ssio, authorizations, waitForWrites, isolated, readaheadThreshold, samplerConfig, batchTimeOut, classLoaderContext, resultHandler, this, ___protocolFactory, ___transport);
       this.___currentMethod = method_call;
       ___manager.call(method_call);
     }
@@ -976,7 +999,10 @@
       private boolean waitForWrites;
       private boolean isolated;
       private long readaheadThreshold;
-      public startScan_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+      private TSamplerConfiguration samplerConfig;
+      private long batchTimeOut;
+      private String classLoaderContext;
+      public startScan_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, org.apache.accumulo.core.data.thrift.TKeyExtent extent, org.apache.accumulo.core.data.thrift.TRange range, List<org.apache.accumulo.core.data.thrift.TColumn> columns, int batchSize, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated, long readaheadThreshold, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
         super(client, protocolFactory, transport, resultHandler, false);
         this.tinfo = tinfo;
         this.credentials = credentials;
@@ -990,6 +1016,9 @@
         this.waitForWrites = waitForWrites;
         this.isolated = isolated;
         this.readaheadThreshold = readaheadThreshold;
+        this.samplerConfig = samplerConfig;
+        this.batchTimeOut = batchTimeOut;
+        this.classLoaderContext = classLoaderContext;
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
@@ -1007,11 +1036,14 @@
         args.setWaitForWrites(waitForWrites);
         args.setIsolated(isolated);
         args.setReadaheadThreshold(readaheadThreshold);
+        args.setSamplerConfig(samplerConfig);
+        args.setBatchTimeOut(batchTimeOut);
+        args.setClassLoaderContext(classLoaderContext);
         args.write(prot);
         prot.writeMessageEnd();
       }
 
-      public org.apache.accumulo.core.data.thrift.InitialScan getResult() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException {
+      public org.apache.accumulo.core.data.thrift.InitialScan getResult() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException {
         if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
           throw new IllegalStateException("Method call not finished!");
         }
@@ -1046,7 +1078,7 @@
         prot.writeMessageEnd();
       }
 
-      public org.apache.accumulo.core.data.thrift.ScanResult getResult() throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, org.apache.thrift.TException {
+      public org.apache.accumulo.core.data.thrift.ScanResult getResult() throws NoSuchScanIDException, NotServingTabletException, TooManyFilesException, TSampleNotPresentException, org.apache.thrift.TException {
         if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
           throw new IllegalStateException("Method call not finished!");
         }
@@ -1073,7 +1105,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("closeScan", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("closeScan", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         closeScan_args args = new closeScan_args();
         args.setTinfo(tinfo);
         args.setScanID(scanID);
@@ -1090,9 +1122,9 @@
       }
     }
 
-    public void startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+    public void startMultiScan(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
       checkReady();
-      startMultiScan_call method_call = new startMultiScan_call(tinfo, credentials, batch, columns, ssiList, ssio, authorizations, waitForWrites, resultHandler, this, ___protocolFactory, ___transport);
+      startMultiScan_call method_call = new startMultiScan_call(tinfo, credentials, batch, columns, ssiList, ssio, authorizations, waitForWrites, samplerConfig, batchTimeOut, classLoaderContext, resultHandler, this, ___protocolFactory, ___transport);
       this.___currentMethod = method_call;
       ___manager.call(method_call);
     }
@@ -1106,7 +1138,10 @@
       private Map<String,Map<String,String>> ssio;
       private List<ByteBuffer> authorizations;
       private boolean waitForWrites;
-      public startMultiScan_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+      private TSamplerConfiguration samplerConfig;
+      private long batchTimeOut;
+      private String classLoaderContext;
+      public startMultiScan_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, Map<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>> batch, List<org.apache.accumulo.core.data.thrift.TColumn> columns, List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, TSamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
         super(client, protocolFactory, transport, resultHandler, false);
         this.tinfo = tinfo;
         this.credentials = credentials;
@@ -1116,6 +1151,9 @@
         this.ssio = ssio;
         this.authorizations = authorizations;
         this.waitForWrites = waitForWrites;
+        this.samplerConfig = samplerConfig;
+        this.batchTimeOut = batchTimeOut;
+        this.classLoaderContext = classLoaderContext;
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
@@ -1129,11 +1167,14 @@
         args.setSsio(ssio);
         args.setAuthorizations(authorizations);
         args.setWaitForWrites(waitForWrites);
+        args.setSamplerConfig(samplerConfig);
+        args.setBatchTimeOut(batchTimeOut);
+        args.setClassLoaderContext(classLoaderContext);
         args.write(prot);
         prot.writeMessageEnd();
       }
 
-      public org.apache.accumulo.core.data.thrift.InitialMultiScan getResult() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, org.apache.thrift.TException {
+      public org.apache.accumulo.core.data.thrift.InitialMultiScan getResult() throws org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException, TSampleNotPresentException, org.apache.thrift.TException {
         if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
           throw new IllegalStateException("Method call not finished!");
         }
@@ -1168,7 +1209,7 @@
         prot.writeMessageEnd();
       }
 
-      public org.apache.accumulo.core.data.thrift.MultiScanResult getResult() throws NoSuchScanIDException, org.apache.thrift.TException {
+      public org.apache.accumulo.core.data.thrift.MultiScanResult getResult() throws NoSuchScanIDException, TSampleNotPresentException, org.apache.thrift.TException {
         if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
           throw new IllegalStateException("Method call not finished!");
         }
@@ -1272,7 +1313,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("applyUpdates", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("applyUpdates", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         applyUpdates_args args = new applyUpdates_args();
         args.setTinfo(tinfo);
         args.setUpdateID(updateID);
@@ -1370,9 +1411,9 @@
       }
     }
 
-    public void startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+    public void startConditionalUpdate(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
       checkReady();
-      startConditionalUpdate_call method_call = new startConditionalUpdate_call(tinfo, credentials, authorizations, tableID, durability, resultHandler, this, ___protocolFactory, ___transport);
+      startConditionalUpdate_call method_call = new startConditionalUpdate_call(tinfo, credentials, authorizations, tableID, durability, classLoaderContext, resultHandler, this, ___protocolFactory, ___transport);
       this.___currentMethod = method_call;
       ___manager.call(method_call);
     }
@@ -1383,13 +1424,15 @@
       private List<ByteBuffer> authorizations;
       private String tableID;
       private TDurability durability;
-      public startConditionalUpdate_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+      private String classLoaderContext;
+      public startConditionalUpdate_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, List<ByteBuffer> authorizations, String tableID, TDurability durability, String classLoaderContext, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
         super(client, protocolFactory, transport, resultHandler, false);
         this.tinfo = tinfo;
         this.credentials = credentials;
         this.authorizations = authorizations;
         this.tableID = tableID;
         this.durability = durability;
+        this.classLoaderContext = classLoaderContext;
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
@@ -1400,6 +1443,7 @@
         args.setAuthorizations(authorizations);
         args.setTableID(tableID);
         args.setDurability(durability);
+        args.setClassLoaderContext(classLoaderContext);
         args.write(prot);
         prot.writeMessageEnd();
       }
@@ -1507,7 +1551,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("closeConditionalUpdate", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("closeConditionalUpdate", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         closeConditionalUpdate_args args = new closeConditionalUpdate_args();
         args.setTinfo(tinfo);
         args.setSessID(sessID);
@@ -1630,7 +1674,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("loadTablet", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("loadTablet", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         loadTablet_args args = new loadTablet_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -1649,9 +1693,9 @@
       }
     }
 
-    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, boolean save, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+    public void unloadTablet(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, TUnloadTabletGoal goal, long requestTime, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
       checkReady();
-      unloadTablet_call method_call = new unloadTablet_call(tinfo, credentials, lock, extent, save, resultHandler, this, ___protocolFactory, ___transport);
+      unloadTablet_call method_call = new unloadTablet_call(tinfo, credentials, lock, extent, goal, requestTime, resultHandler, this, ___protocolFactory, ___transport);
       this.___currentMethod = method_call;
       ___manager.call(method_call);
     }
@@ -1661,24 +1705,27 @@
       private org.apache.accumulo.core.security.thrift.TCredentials credentials;
       private String lock;
       private org.apache.accumulo.core.data.thrift.TKeyExtent extent;
-      private boolean save;
-      public unloadTablet_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, boolean save, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+      private TUnloadTabletGoal goal;
+      private long requestTime;
+      public unloadTablet_call(org.apache.accumulo.core.trace.thrift.TInfo tinfo, org.apache.accumulo.core.security.thrift.TCredentials credentials, String lock, org.apache.accumulo.core.data.thrift.TKeyExtent extent, TUnloadTabletGoal goal, long requestTime, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
         super(client, protocolFactory, transport, resultHandler, true);
         this.tinfo = tinfo;
         this.credentials = credentials;
         this.lock = lock;
         this.extent = extent;
-        this.save = save;
+        this.goal = goal;
+        this.requestTime = requestTime;
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("unloadTablet", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("unloadTablet", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         unloadTablet_args args = new unloadTablet_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
         args.setLock(lock);
         args.setExtent(extent);
-        args.setSave(save);
+        args.setGoal(goal);
+        args.setRequestTime(requestTime);
         args.write(prot);
         prot.writeMessageEnd();
       }
@@ -1717,7 +1764,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("flush", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("flush", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         flush_args args = new flush_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -1759,7 +1806,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("flushTablet", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("flushTablet", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         flushTablet_args args = new flushTablet_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -1799,7 +1846,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("chop", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("chop", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         chop_args args = new chop_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -1843,7 +1890,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("compact", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("compact", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         compact_args args = new compact_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -2029,7 +2076,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("fastHalt", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("fastHalt", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         fastHalt_args args = new fastHalt_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -2136,7 +2183,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("removeLogs", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("removeLogs", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         removeLogs_args args = new removeLogs_args();
         args.setTinfo(tinfo);
         args.setCredentials(credentials);
@@ -2252,13 +2299,15 @@
       public startScan_result getResult(I iface, startScan_args args) throws org.apache.thrift.TException {
         startScan_result result = new startScan_result();
         try {
-          result.success = iface.startScan(args.tinfo, args.credentials, args.extent, args.range, args.columns, args.batchSize, args.ssiList, args.ssio, args.authorizations, args.waitForWrites, args.isolated, args.readaheadThreshold);
+          result.success = iface.startScan(args.tinfo, args.credentials, args.extent, args.range, args.columns, args.batchSize, args.ssiList, args.ssio, args.authorizations, args.waitForWrites, args.isolated, args.readaheadThreshold, args.samplerConfig, args.batchTimeOut, args.classLoaderContext);
         } catch (org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec) {
           result.sec = sec;
         } catch (NotServingTabletException nste) {
           result.nste = nste;
         } catch (TooManyFilesException tmfe) {
           result.tmfe = tmfe;
+        } catch (TSampleNotPresentException tsnpe) {
+          result.tsnpe = tsnpe;
         }
         return result;
       }
@@ -2287,6 +2336,8 @@
           result.nste = nste;
         } catch (TooManyFilesException tmfe) {
           result.tmfe = tmfe;
+        } catch (TSampleNotPresentException tsnpe) {
+          result.tsnpe = tsnpe;
         }
         return result;
       }
@@ -2327,9 +2378,11 @@
       public startMultiScan_result getResult(I iface, startMultiScan_args args) throws org.apache.thrift.TException {
         startMultiScan_result result = new startMultiScan_result();
         try {
-          result.success = iface.startMultiScan(args.tinfo, args.credentials, args.batch, args.columns, args.ssiList, args.ssio, args.authorizations, args.waitForWrites);
+          result.success = iface.startMultiScan(args.tinfo, args.credentials, args.batch, args.columns, args.ssiList, args.ssio, args.authorizations, args.waitForWrites, args.samplerConfig, args.batchTimeOut, args.classLoaderContext);
         } catch (org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec) {
           result.sec = sec;
+        } catch (TSampleNotPresentException tsnpe) {
+          result.tsnpe = tsnpe;
         }
         return result;
       }
@@ -2354,6 +2407,8 @@
           result.success = iface.continueMultiScan(args.tinfo, args.scanID);
         } catch (NoSuchScanIDException nssi) {
           result.nssi = nssi;
+        } catch (TSampleNotPresentException tsnpe) {
+          result.tsnpe = tsnpe;
         }
         return result;
       }
@@ -2495,7 +2550,7 @@
       public startConditionalUpdate_result getResult(I iface, startConditionalUpdate_args args) throws org.apache.thrift.TException {
         startConditionalUpdate_result result = new startConditionalUpdate_result();
         try {
-          result.success = iface.startConditionalUpdate(args.tinfo, args.credentials, args.authorizations, args.tableID, args.durability);
+          result.success = iface.startConditionalUpdate(args.tinfo, args.credentials, args.authorizations, args.tableID, args.durability, args.classLoaderContext);
         } catch (org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec) {
           result.sec = sec;
         }
@@ -2649,7 +2704,7 @@
       }
 
       public org.apache.thrift.TBase getResult(I iface, unloadTablet_args args) throws org.apache.thrift.TException {
-        iface.unloadTablet(args.tinfo, args.credentials, args.lock, args.extent, args.save);
+        iface.unloadTablet(args.tinfo, args.credentials, args.lock, args.extent, args.goal, args.requestTime);
         return null;
       }
     }
@@ -3021,6 +3076,11 @@
                         result.setTmfeIsSet(true);
                         msg = result;
             }
+            else             if (e instanceof TSampleNotPresentException) {
+                        result.tsnpe = (TSampleNotPresentException) e;
+                        result.setTsnpeIsSet(true);
+                        msg = result;
+            }
              else 
             {
               msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
@@ -3042,7 +3102,7 @@
       }
 
       public void start(I iface, startScan_args args, org.apache.thrift.async.AsyncMethodCallback<org.apache.accumulo.core.data.thrift.InitialScan> resultHandler) throws TException {
-        iface.startScan(args.tinfo, args.credentials, args.extent, args.range, args.columns, args.batchSize, args.ssiList, args.ssio, args.authorizations, args.waitForWrites, args.isolated, args.readaheadThreshold,resultHandler);
+        iface.startScan(args.tinfo, args.credentials, args.extent, args.range, args.columns, args.batchSize, args.ssiList, args.ssio, args.authorizations, args.waitForWrites, args.isolated, args.readaheadThreshold, args.samplerConfig, args.batchTimeOut, args.classLoaderContext,resultHandler);
       }
     }
 
@@ -3088,6 +3148,11 @@
                         result.setTmfeIsSet(true);
                         msg = result;
             }
+            else             if (e instanceof TSampleNotPresentException) {
+                        result.tsnpe = (TSampleNotPresentException) e;
+                        result.setTsnpeIsSet(true);
+                        msg = result;
+            }
              else 
             {
               msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
@@ -3173,6 +3238,11 @@
                         result.setSecIsSet(true);
                         msg = result;
             }
+            else             if (e instanceof TSampleNotPresentException) {
+                        result.tsnpe = (TSampleNotPresentException) e;
+                        result.setTsnpeIsSet(true);
+                        msg = result;
+            }
              else 
             {
               msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
@@ -3194,7 +3264,7 @@
       }
 
       public void start(I iface, startMultiScan_args args, org.apache.thrift.async.AsyncMethodCallback<org.apache.accumulo.core.data.thrift.InitialMultiScan> resultHandler) throws TException {
-        iface.startMultiScan(args.tinfo, args.credentials, args.batch, args.columns, args.ssiList, args.ssio, args.authorizations, args.waitForWrites,resultHandler);
+        iface.startMultiScan(args.tinfo, args.credentials, args.batch, args.columns, args.ssiList, args.ssio, args.authorizations, args.waitForWrites, args.samplerConfig, args.batchTimeOut, args.classLoaderContext,resultHandler);
       }
     }
 
@@ -3230,6 +3300,11 @@
                         result.setNssiIsSet(true);
                         msg = result;
             }
+            else             if (e instanceof TSampleNotPresentException) {
+                        result.tsnpe = (TSampleNotPresentException) e;
+                        result.setTsnpeIsSet(true);
+                        msg = result;
+            }
              else 
             {
               msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
@@ -3573,7 +3648,7 @@
       }
 
       public void start(I iface, startConditionalUpdate_args args, org.apache.thrift.async.AsyncMethodCallback<org.apache.accumulo.core.data.thrift.TConditionalSession> resultHandler) throws TException {
-        iface.startConditionalUpdate(args.tinfo, args.credentials, args.authorizations, args.tableID, args.durability,resultHandler);
+        iface.startConditionalUpdate(args.tinfo, args.credentials, args.authorizations, args.tableID, args.durability, args.classLoaderContext,resultHandler);
       }
     }
 
@@ -3882,7 +3957,7 @@
       }
 
       public void start(I iface, unloadTablet_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
-        iface.unloadTablet(args.tinfo, args.credentials, args.lock, args.extent, args.save,resultHandler);
+        iface.unloadTablet(args.tinfo, args.credentials, args.lock, args.extent, args.goal, args.requestTime,resultHandler);
       }
     }
 
@@ -4463,6 +4538,9 @@
     private static final org.apache.thrift.protocol.TField WAIT_FOR_WRITES_FIELD_DESC = new org.apache.thrift.protocol.TField("waitForWrites", org.apache.thrift.protocol.TType.BOOL, (short)9);
     private static final org.apache.thrift.protocol.TField ISOLATED_FIELD_DESC = new org.apache.thrift.protocol.TField("isolated", org.apache.thrift.protocol.TType.BOOL, (short)10);
     private static final org.apache.thrift.protocol.TField READAHEAD_THRESHOLD_FIELD_DESC = new org.apache.thrift.protocol.TField("readaheadThreshold", org.apache.thrift.protocol.TType.I64, (short)12);
+    private static final org.apache.thrift.protocol.TField SAMPLER_CONFIG_FIELD_DESC = new org.apache.thrift.protocol.TField("samplerConfig", org.apache.thrift.protocol.TType.STRUCT, (short)13);
+    private static final org.apache.thrift.protocol.TField BATCH_TIME_OUT_FIELD_DESC = new org.apache.thrift.protocol.TField("batchTimeOut", org.apache.thrift.protocol.TType.I64, (short)14);
+    private static final org.apache.thrift.protocol.TField CLASS_LOADER_CONTEXT_FIELD_DESC = new org.apache.thrift.protocol.TField("classLoaderContext", org.apache.thrift.protocol.TType.STRING, (short)15);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -4482,6 +4560,9 @@
     public boolean waitForWrites; // required
     public boolean isolated; // required
     public long readaheadThreshold; // required
+    public TSamplerConfiguration samplerConfig; // required
+    public long batchTimeOut; // required
+    public String classLoaderContext; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
@@ -4496,7 +4577,10 @@
       AUTHORIZATIONS((short)8, "authorizations"),
       WAIT_FOR_WRITES((short)9, "waitForWrites"),
       ISOLATED((short)10, "isolated"),
-      READAHEAD_THRESHOLD((short)12, "readaheadThreshold");
+      READAHEAD_THRESHOLD((short)12, "readaheadThreshold"),
+      SAMPLER_CONFIG((short)13, "samplerConfig"),
+      BATCH_TIME_OUT((short)14, "batchTimeOut"),
+      CLASS_LOADER_CONTEXT((short)15, "classLoaderContext");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -4535,6 +4619,12 @@
             return ISOLATED;
           case 12: // READAHEAD_THRESHOLD
             return READAHEAD_THRESHOLD;
+          case 13: // SAMPLER_CONFIG
+            return SAMPLER_CONFIG;
+          case 14: // BATCH_TIME_OUT
+            return BATCH_TIME_OUT;
+          case 15: // CLASS_LOADER_CONTEXT
+            return CLASS_LOADER_CONTEXT;
           default:
             return null;
         }
@@ -4579,6 +4669,7 @@
     private static final int __WAITFORWRITES_ISSET_ID = 1;
     private static final int __ISOLATED_ISSET_ID = 2;
     private static final int __READAHEADTHRESHOLD_ISSET_ID = 3;
+    private static final int __BATCHTIMEOUT_ISSET_ID = 4;
     private byte __isset_bitfield = 0;
     public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
     static {
@@ -4614,6 +4705,12 @@
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.BOOL)));
       tmpMap.put(_Fields.READAHEAD_THRESHOLD, new org.apache.thrift.meta_data.FieldMetaData("readaheadThreshold", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
+      tmpMap.put(_Fields.SAMPLER_CONFIG, new org.apache.thrift.meta_data.FieldMetaData("samplerConfig", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, TSamplerConfiguration.class)));
+      tmpMap.put(_Fields.BATCH_TIME_OUT, new org.apache.thrift.meta_data.FieldMetaData("batchTimeOut", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
+      tmpMap.put(_Fields.CLASS_LOADER_CONTEXT, new org.apache.thrift.meta_data.FieldMetaData("classLoaderContext", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(startScan_args.class, metaDataMap);
     }
@@ -4633,7 +4730,10 @@
       List<ByteBuffer> authorizations,
       boolean waitForWrites,
       boolean isolated,
-      long readaheadThreshold)
+      long readaheadThreshold,
+      TSamplerConfiguration samplerConfig,
+      long batchTimeOut,
+      String classLoaderContext)
     {
       this();
       this.tinfo = tinfo;
@@ -4652,6 +4752,10 @@
       setIsolatedIsSet(true);
       this.readaheadThreshold = readaheadThreshold;
       setReadaheadThresholdIsSet(true);
+      this.samplerConfig = samplerConfig;
+      this.batchTimeOut = batchTimeOut;
+      setBatchTimeOutIsSet(true);
+      this.classLoaderContext = classLoaderContext;
     }
 
     /**
@@ -4708,6 +4812,13 @@
       this.waitForWrites = other.waitForWrites;
       this.isolated = other.isolated;
       this.readaheadThreshold = other.readaheadThreshold;
+      if (other.isSetSamplerConfig()) {
+        this.samplerConfig = new TSamplerConfiguration(other.samplerConfig);
+      }
+      this.batchTimeOut = other.batchTimeOut;
+      if (other.isSetClassLoaderContext()) {
+        this.classLoaderContext = other.classLoaderContext;
+      }
     }
 
     public startScan_args deepCopy() {
@@ -4732,6 +4843,10 @@
       this.isolated = false;
       setReadaheadThresholdIsSet(false);
       this.readaheadThreshold = 0;
+      this.samplerConfig = null;
+      setBatchTimeOutIsSet(false);
+      this.batchTimeOut = 0;
+      this.classLoaderContext = null;
     }
 
     public org.apache.accumulo.core.trace.thrift.TInfo getTinfo() {
@@ -5074,6 +5189,77 @@
       __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __READAHEADTHRESHOLD_ISSET_ID, value);
     }
 
+    public TSamplerConfiguration getSamplerConfig() {
+      return this.samplerConfig;
+    }
+
+    public startScan_args setSamplerConfig(TSamplerConfiguration samplerConfig) {
+      this.samplerConfig = samplerConfig;
+      return this;
+    }
+
+    public void unsetSamplerConfig() {
+      this.samplerConfig = null;
+    }
+
+    /** Returns true if field samplerConfig is set (has been assigned a value) and false otherwise */
+    public boolean isSetSamplerConfig() {
+      return this.samplerConfig != null;
+    }
+
+    public void setSamplerConfigIsSet(boolean value) {
+      if (!value) {
+        this.samplerConfig = null;
+      }
+    }
+
+    public long getBatchTimeOut() {
+      return this.batchTimeOut;
+    }
+
+    public startScan_args setBatchTimeOut(long batchTimeOut) {
+      this.batchTimeOut = batchTimeOut;
+      setBatchTimeOutIsSet(true);
+      return this;
+    }
+
+    public void unsetBatchTimeOut() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __BATCHTIMEOUT_ISSET_ID);
+    }
+
+    /** Returns true if field batchTimeOut is set (has been assigned a value) and false otherwise */
+    public boolean isSetBatchTimeOut() {
+      return EncodingUtils.testBit(__isset_bitfield, __BATCHTIMEOUT_ISSET_ID);
+    }
+
+    public void setBatchTimeOutIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __BATCHTIMEOUT_ISSET_ID, value);
+    }
+
+    public String getClassLoaderContext() {
+      return this.classLoaderContext;
+    }
+
+    public startScan_args setClassLoaderContext(String classLoaderContext) {
+      this.classLoaderContext = classLoaderContext;
+      return this;
+    }
+
+    public void unsetClassLoaderContext() {
+      this.classLoaderContext = null;
+    }
+
+    /** Returns true if field classLoaderContext is set (has been assigned a value) and false otherwise */
+    public boolean isSetClassLoaderContext() {
+      return this.classLoaderContext != null;
+    }
+
+    public void setClassLoaderContextIsSet(boolean value) {
+      if (!value) {
+        this.classLoaderContext = null;
+      }
+    }
+
     public void setFieldValue(_Fields field, Object value) {
       switch (field) {
       case TINFO:
@@ -5172,6 +5358,30 @@
         }
         break;
 
+      case SAMPLER_CONFIG:
+        if (value == null) {
+          unsetSamplerConfig();
+        } else {
+          setSamplerConfig((TSamplerConfiguration)value);
+        }
+        break;
+
+      case BATCH_TIME_OUT:
+        if (value == null) {
+          unsetBatchTimeOut();
+        } else {
+          setBatchTimeOut((Long)value);
+        }
+        break;
+
+      case CLASS_LOADER_CONTEXT:
+        if (value == null) {
+          unsetClassLoaderContext();
+        } else {
+          setClassLoaderContext((String)value);
+        }
+        break;
+
       }
     }
 
@@ -5193,7 +5403,7 @@
         return getColumns();
 
       case BATCH_SIZE:
-        return Integer.valueOf(getBatchSize());
+        return getBatchSize();
 
       case SSI_LIST:
         return getSsiList();
@@ -5205,13 +5415,22 @@
         return getAuthorizations();
 
       case WAIT_FOR_WRITES:
-        return Boolean.valueOf(isWaitForWrites());
+        return isWaitForWrites();
 
       case ISOLATED:
-        return Boolean.valueOf(isIsolated());
+        return isIsolated();
 
       case READAHEAD_THRESHOLD:
-        return Long.valueOf(getReadaheadThreshold());
+        return getReadaheadThreshold();
+
+      case SAMPLER_CONFIG:
+        return getSamplerConfig();
+
+      case BATCH_TIME_OUT:
+        return getBatchTimeOut();
+
+      case CLASS_LOADER_CONTEXT:
+        return getClassLoaderContext();
 
       }
       throw new IllegalStateException();
@@ -5248,6 +5467,12 @@
         return isSetIsolated();
       case READAHEAD_THRESHOLD:
         return isSetReadaheadThreshold();
+      case SAMPLER_CONFIG:
+        return isSetSamplerConfig();
+      case BATCH_TIME_OUT:
+        return isSetBatchTimeOut();
+      case CLASS_LOADER_CONTEXT:
+        return isSetClassLoaderContext();
       }
       throw new IllegalStateException();
     }
@@ -5373,12 +5598,116 @@
           return false;
       }
 
+      boolean this_present_samplerConfig = true && this.isSetSamplerConfig();
+      boolean that_present_samplerConfig = true && that.isSetSamplerConfig();
+      if (this_present_samplerConfig || that_present_samplerConfig) {
+        if (!(this_present_samplerConfig && that_present_samplerConfig))
+          return false;
+        if (!this.samplerConfig.equals(that.samplerConfig))
+          return false;
+      }
+
+      boolean this_present_batchTimeOut = true;
+      boolean that_present_batchTimeOut = true;
+      if (this_present_batchTimeOut || that_present_batchTimeOut) {
+        if (!(this_present_batchTimeOut && that_present_batchTimeOut))
+          return false;
+        if (this.batchTimeOut != that.batchTimeOut)
+          return false;
+      }
+
+      boolean this_present_classLoaderContext = true && this.isSetClassLoaderContext();
+      boolean that_present_classLoaderContext = true && that.isSetClassLoaderContext();
+      if (this_present_classLoaderContext || that_present_classLoaderContext) {
+        if (!(this_present_classLoaderContext && that_present_classLoaderContext))
+          return false;
+        if (!this.classLoaderContext.equals(that.classLoaderContext))
+          return false;
+      }
+
       return true;
     }
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_extent = true && (isSetExtent());
+      list.add(present_extent);
+      if (present_extent)
+        list.add(extent);
+
+      boolean present_range = true && (isSetRange());
+      list.add(present_range);
+      if (present_range)
+        list.add(range);
+
+      boolean present_columns = true && (isSetColumns());
+      list.add(present_columns);
+      if (present_columns)
+        list.add(columns);
+
+      boolean present_batchSize = true;
+      list.add(present_batchSize);
+      if (present_batchSize)
+        list.add(batchSize);
+
+      boolean present_ssiList = true && (isSetSsiList());
+      list.add(present_ssiList);
+      if (present_ssiList)
+        list.add(ssiList);
+
+      boolean present_ssio = true && (isSetSsio());
+      list.add(present_ssio);
+      if (present_ssio)
+        list.add(ssio);
+
+      boolean present_authorizations = true && (isSetAuthorizations());
+      list.add(present_authorizations);
+      if (present_authorizations)
+        list.add(authorizations);
+
+      boolean present_waitForWrites = true;
+      list.add(present_waitForWrites);
+      if (present_waitForWrites)
+        list.add(waitForWrites);
+
+      boolean present_isolated = true;
+      list.add(present_isolated);
+      if (present_isolated)
+        list.add(isolated);
+
+      boolean present_readaheadThreshold = true;
+      list.add(present_readaheadThreshold);
+      if (present_readaheadThreshold)
+        list.add(readaheadThreshold);
+
+      boolean present_samplerConfig = true && (isSetSamplerConfig());
+      list.add(present_samplerConfig);
+      if (present_samplerConfig)
+        list.add(samplerConfig);
+
+      boolean present_batchTimeOut = true;
+      list.add(present_batchTimeOut);
+      if (present_batchTimeOut)
+        list.add(batchTimeOut);
+
+      boolean present_classLoaderContext = true && (isSetClassLoaderContext());
+      list.add(present_classLoaderContext);
+      if (present_classLoaderContext)
+        list.add(classLoaderContext);
+
+      return list.hashCode();
     }
 
     @Override
@@ -5509,6 +5838,36 @@
           return lastComparison;
         }
       }
+      lastComparison = Boolean.valueOf(isSetSamplerConfig()).compareTo(other.isSetSamplerConfig());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSamplerConfig()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.samplerConfig, other.samplerConfig);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetBatchTimeOut()).compareTo(other.isSetBatchTimeOut());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetBatchTimeOut()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.batchTimeOut, other.batchTimeOut);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetClassLoaderContext()).compareTo(other.isSetClassLoaderContext());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetClassLoaderContext()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.classLoaderContext, other.classLoaderContext);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
       return 0;
     }
 
@@ -5593,7 +5952,7 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
       if (!first) sb.append(", ");
@@ -5608,6 +5967,26 @@
       sb.append("readaheadThreshold:");
       sb.append(this.readaheadThreshold);
       first = false;
+      if (!first) sb.append(", ");
+      sb.append("samplerConfig:");
+      if (this.samplerConfig == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.samplerConfig);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("batchTimeOut:");
+      sb.append(this.batchTimeOut);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("classLoaderContext:");
+      if (this.classLoaderContext == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.classLoaderContext);
+      }
+      first = false;
       sb.append(")");
       return sb.toString();
     }
@@ -5627,6 +6006,9 @@
       if (range != null) {
         range.validate();
       }
+      if (samplerConfig != null) {
+        samplerConfig.validate();
+      }
     }
 
     private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
@@ -5704,14 +6086,14 @@
             case 4: // COLUMNS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list106 = iprot.readListBegin();
-                  struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list106.size);
-                  for (int _i107 = 0; _i107 < _list106.size; ++_i107)
+                  org.apache.thrift.protocol.TList _list116 = iprot.readListBegin();
+                  struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list116.size);
+                  org.apache.accumulo.core.data.thrift.TColumn _elem117;
+                  for (int _i118 = 0; _i118 < _list116.size; ++_i118)
                   {
-                    org.apache.accumulo.core.data.thrift.TColumn _elem108;
-                    _elem108 = new org.apache.accumulo.core.data.thrift.TColumn();
-                    _elem108.read(iprot);
-                    struct.columns.add(_elem108);
+                    _elem117 = new org.apache.accumulo.core.data.thrift.TColumn();
+                    _elem117.read(iprot);
+                    struct.columns.add(_elem117);
                   }
                   iprot.readListEnd();
                 }
@@ -5731,14 +6113,14 @@
             case 6: // SSI_LIST
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list109 = iprot.readListBegin();
-                  struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list109.size);
-                  for (int _i110 = 0; _i110 < _list109.size; ++_i110)
+                  org.apache.thrift.protocol.TList _list119 = iprot.readListBegin();
+                  struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list119.size);
+                  org.apache.accumulo.core.data.thrift.IterInfo _elem120;
+                  for (int _i121 = 0; _i121 < _list119.size; ++_i121)
                   {
-                    org.apache.accumulo.core.data.thrift.IterInfo _elem111;
-                    _elem111 = new org.apache.accumulo.core.data.thrift.IterInfo();
-                    _elem111.read(iprot);
-                    struct.ssiList.add(_elem111);
+                    _elem120 = new org.apache.accumulo.core.data.thrift.IterInfo();
+                    _elem120.read(iprot);
+                    struct.ssiList.add(_elem120);
                   }
                   iprot.readListEnd();
                 }
@@ -5750,27 +6132,27 @@
             case 7: // SSIO
               if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
                 {
-                  org.apache.thrift.protocol.TMap _map112 = iprot.readMapBegin();
-                  struct.ssio = new HashMap<String,Map<String,String>>(2*_map112.size);
-                  for (int _i113 = 0; _i113 < _map112.size; ++_i113)
+                  org.apache.thrift.protocol.TMap _map122 = iprot.readMapBegin();
+                  struct.ssio = new HashMap<String,Map<String,String>>(2*_map122.size);
+                  String _key123;
+                  Map<String,String> _val124;
+                  for (int _i125 = 0; _i125 < _map122.size; ++_i125)
                   {
-                    String _key114;
-                    Map<String,String> _val115;
-                    _key114 = iprot.readString();
+                    _key123 = iprot.readString();
                     {
-                      org.apache.thrift.protocol.TMap _map116 = iprot.readMapBegin();
-                      _val115 = new HashMap<String,String>(2*_map116.size);
-                      for (int _i117 = 0; _i117 < _map116.size; ++_i117)
+                      org.apache.thrift.protocol.TMap _map126 = iprot.readMapBegin();
+                      _val124 = new HashMap<String,String>(2*_map126.size);
+                      String _key127;
+                      String _val128;
+                      for (int _i129 = 0; _i129 < _map126.size; ++_i129)
                       {
-                        String _key118;
-                        String _val119;
-                        _key118 = iprot.readString();
-                        _val119 = iprot.readString();
-                        _val115.put(_key118, _val119);
+                        _key127 = iprot.readString();
+                        _val128 = iprot.readString();
+                        _val124.put(_key127, _val128);
                       }
                       iprot.readMapEnd();
                     }
-                    struct.ssio.put(_key114, _val115);
+                    struct.ssio.put(_key123, _val124);
                   }
                   iprot.readMapEnd();
                 }
@@ -5782,13 +6164,13 @@
             case 8: // AUTHORIZATIONS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list120 = iprot.readListBegin();
-                  struct.authorizations = new ArrayList<ByteBuffer>(_list120.size);
-                  for (int _i121 = 0; _i121 < _list120.size; ++_i121)
+                  org.apache.thrift.protocol.TList _list130 = iprot.readListBegin();
+                  struct.authorizations = new ArrayList<ByteBuffer>(_list130.size);
+                  ByteBuffer _elem131;
+                  for (int _i132 = 0; _i132 < _list130.size; ++_i132)
                   {
-                    ByteBuffer _elem122;
-                    _elem122 = iprot.readBinary();
-                    struct.authorizations.add(_elem122);
+                    _elem131 = iprot.readBinary();
+                    struct.authorizations.add(_elem131);
                   }
                   iprot.readListEnd();
                 }
@@ -5821,6 +6203,31 @@
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
+            case 13: // SAMPLER_CONFIG
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.samplerConfig = new TSamplerConfiguration();
+                struct.samplerConfig.read(iprot);
+                struct.setSamplerConfigIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 14: // BATCH_TIME_OUT
+              if (schemeField.type == org.apache.thrift.protocol.TType.I64) {
+                struct.batchTimeOut = iprot.readI64();
+                struct.setBatchTimeOutIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 15: // CLASS_LOADER_CONTEXT
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.classLoaderContext = iprot.readString();
+                struct.setClassLoaderContextIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
             default:
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
           }
@@ -5855,9 +6262,9 @@
           oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.columns.size()));
-            for (org.apache.accumulo.core.data.thrift.TColumn _iter123 : struct.columns)
+            for (org.apache.accumulo.core.data.thrift.TColumn _iter133 : struct.columns)
             {
-              _iter123.write(oprot);
+              _iter133.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -5870,9 +6277,9 @@
           oprot.writeFieldBegin(SSI_LIST_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.ssiList.size()));
-            for (org.apache.accumulo.core.data.thrift.IterInfo _iter124 : struct.ssiList)
+            for (org.apache.accumulo.core.data.thrift.IterInfo _iter134 : struct.ssiList)
             {
-              _iter124.write(oprot);
+              _iter134.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -5882,15 +6289,15 @@
           oprot.writeFieldBegin(SSIO_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.ssio.size()));
-            for (Map.Entry<String, Map<String,String>> _iter125 : struct.ssio.entrySet())
+            for (Map.Entry<String, Map<String,String>> _iter135 : struct.ssio.entrySet())
             {
-              oprot.writeString(_iter125.getKey());
+              oprot.writeString(_iter135.getKey());
               {
-                oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, _iter125.getValue().size()));
-                for (Map.Entry<String, String> _iter126 : _iter125.getValue().entrySet())
+                oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, _iter135.getValue().size()));
+                for (Map.Entry<String, String> _iter136 : _iter135.getValue().entrySet())
                 {
-                  oprot.writeString(_iter126.getKey());
-                  oprot.writeString(_iter126.getValue());
+                  oprot.writeString(_iter136.getKey());
+                  oprot.writeString(_iter136.getValue());
                 }
                 oprot.writeMapEnd();
               }
@@ -5903,9 +6310,9 @@
           oprot.writeFieldBegin(AUTHORIZATIONS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.authorizations.size()));
-            for (ByteBuffer _iter127 : struct.authorizations)
+            for (ByteBuffer _iter137 : struct.authorizations)
             {
-              oprot.writeBinary(_iter127);
+              oprot.writeBinary(_iter137);
             }
             oprot.writeListEnd();
           }
@@ -5925,6 +6332,19 @@
         oprot.writeFieldBegin(READAHEAD_THRESHOLD_FIELD_DESC);
         oprot.writeI64(struct.readaheadThreshold);
         oprot.writeFieldEnd();
+        if (struct.samplerConfig != null) {
+          oprot.writeFieldBegin(SAMPLER_CONFIG_FIELD_DESC);
+          struct.samplerConfig.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldBegin(BATCH_TIME_OUT_FIELD_DESC);
+        oprot.writeI64(struct.batchTimeOut);
+        oprot.writeFieldEnd();
+        if (struct.classLoaderContext != null) {
+          oprot.writeFieldBegin(CLASS_LOADER_CONTEXT_FIELD_DESC);
+          oprot.writeString(struct.classLoaderContext);
+          oprot.writeFieldEnd();
+        }
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -5979,7 +6399,16 @@
         if (struct.isSetReadaheadThreshold()) {
           optionals.set(11);
         }
-        oprot.writeBitSet(optionals, 12);
+        if (struct.isSetSamplerConfig()) {
+          optionals.set(12);
+        }
+        if (struct.isSetBatchTimeOut()) {
+          optionals.set(13);
+        }
+        if (struct.isSetClassLoaderContext()) {
+          optionals.set(14);
+        }
+        oprot.writeBitSet(optionals, 15);
         if (struct.isSetTinfo()) {
           struct.tinfo.write(oprot);
         }
@@ -5995,9 +6424,9 @@
         if (struct.isSetColumns()) {
           {
             oprot.writeI32(struct.columns.size());
-            for (org.apache.accumulo.core.data.thrift.TColumn _iter128 : struct.columns)
+            for (org.apache.accumulo.core.data.thrift.TColumn _iter138 : struct.columns)
             {
-              _iter128.write(oprot);
+              _iter138.write(oprot);
             }
           }
         }
@@ -6007,24 +6436,24 @@
         if (struct.isSetSsiList()) {
           {
             oprot.writeI32(struct.ssiList.size());
-            for (org.apache.accumulo.core.data.thrift.IterInfo _iter129 : struct.ssiList)
+            for (org.apache.accumulo.core.data.thrift.IterInfo _iter139 : struct.ssiList)
             {
-              _iter129.write(oprot);
+              _iter139.write(oprot);
             }
           }
         }
         if (struct.isSetSsio()) {
           {
             oprot.writeI32(struct.ssio.size());
-            for (Map.Entry<String, Map<String,String>> _iter130 : struct.ssio.entrySet())
+            for (Map.Entry<String, Map<String,String>> _iter140 : struct.ssio.entrySet())
             {
-              oprot.writeString(_iter130.getKey());
+              oprot.writeString(_iter140.getKey());
               {
-                oprot.writeI32(_iter130.getValue().size());
-                for (Map.Entry<String, String> _iter131 : _iter130.getValue().entrySet())
+                oprot.writeI32(_iter140.getValue().size());
+                for (Map.Entry<String, String> _iter141 : _iter140.getValue().entrySet())
                 {
-                  oprot.writeString(_iter131.getKey());
-                  oprot.writeString(_iter131.getValue());
+                  oprot.writeString(_iter141.getKey());
+                  oprot.writeString(_iter141.getValue());
                 }
               }
             }
@@ -6033,9 +6462,9 @@
         if (struct.isSetAuthorizations()) {
           {
             oprot.writeI32(struct.authorizations.size());
-            for (ByteBuffer _iter132 : struct.authorizations)
+            for (ByteBuffer _iter142 : struct.authorizations)
             {
-              oprot.writeBinary(_iter132);
+              oprot.writeBinary(_iter142);
             }
           }
         }
@@ -6048,12 +6477,21 @@
         if (struct.isSetReadaheadThreshold()) {
           oprot.writeI64(struct.readaheadThreshold);
         }
+        if (struct.isSetSamplerConfig()) {
+          struct.samplerConfig.write(oprot);
+        }
+        if (struct.isSetBatchTimeOut()) {
+          oprot.writeI64(struct.batchTimeOut);
+        }
+        if (struct.isSetClassLoaderContext()) {
+          oprot.writeString(struct.classLoaderContext);
+        }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, startScan_args struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(12);
+        BitSet incoming = iprot.readBitSet(15);
         if (incoming.get(0)) {
           struct.tinfo = new org.apache.accumulo.core.trace.thrift.TInfo();
           struct.tinfo.read(iprot);
@@ -6076,14 +6514,14 @@
         }
         if (incoming.get(4)) {
           {
-            org.apache.thrift.protocol.TList _list133 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list133.size);
-            for (int _i134 = 0; _i134 < _list133.size; ++_i134)
+            org.apache.thrift.protocol.TList _list143 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list143.size);
+            org.apache.accumulo.core.data.thrift.TColumn _elem144;
+            for (int _i145 = 0; _i145 < _list143.size; ++_i145)
             {
-              org.apache.accumulo.core.data.thrift.TColumn _elem135;
-              _elem135 = new org.apache.accumulo.core.data.thrift.TColumn();
-              _elem135.read(iprot);
-              struct.columns.add(_elem135);
+              _elem144 = new org.apache.accumulo.core.data.thrift.TColumn();
+              _elem144.read(iprot);
+              struct.columns.add(_elem144);
             }
           }
           struct.setColumnsIsSet(true);
@@ -6094,53 +6532,53 @@
         }
         if (incoming.get(6)) {
           {
-            org.apache.thrift.protocol.TList _list136 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list136.size);
-            for (int _i137 = 0; _i137 < _list136.size; ++_i137)
+            org.apache.thrift.protocol.TList _list146 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list146.size);
+            org.apache.accumulo.core.data.thrift.IterInfo _elem147;
+            for (int _i148 = 0; _i148 < _list146.size; ++_i148)
             {
-              org.apache.accumulo.core.data.thrift.IterInfo _elem138;
-              _elem138 = new org.apache.accumulo.core.data.thrift.IterInfo();
-              _elem138.read(iprot);
-              struct.ssiList.add(_elem138);
+              _elem147 = new org.apache.accumulo.core.data.thrift.IterInfo();
+              _elem147.read(iprot);
+              struct.ssiList.add(_elem147);
             }
           }
           struct.setSsiListIsSet(true);
         }
         if (incoming.get(7)) {
           {
-            org.apache.thrift.protocol.TMap _map139 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-            struct.ssio = new HashMap<String,Map<String,String>>(2*_map139.size);
-            for (int _i140 = 0; _i140 < _map139.size; ++_i140)
+            org.apache.thrift.protocol.TMap _map149 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
+            struct.ssio = new HashMap<String,Map<String,String>>(2*_map149.size);
+            String _key150;
+            Map<String,String> _val151;
+            for (int _i152 = 0; _i152 < _map149.size; ++_i152)
             {
-              String _key141;
-              Map<String,String> _val142;
-              _key141 = iprot.readString();
+              _key150 = iprot.readString();
               {
-                org.apache.thrift.protocol.TMap _map143 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-                _val142 = new HashMap<String,String>(2*_map143.size);
-                for (int _i144 = 0; _i144 < _map143.size; ++_i144)
+                org.apache.thrift.protocol.TMap _map153 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+                _val151 = new HashMap<String,String>(2*_map153.size);
+                String _key154;
+                String _val155;
+                for (int _i156 = 0; _i156 < _map153.size; ++_i156)
                 {
-                  String _key145;
-                  String _val146;
-                  _key145 = iprot.readString();
-                  _val146 = iprot.readString();
-                  _val142.put(_key145, _val146);
+                  _key154 = iprot.readString();
+                  _val155 = iprot.readString();
+                  _val151.put(_key154, _val155);
                 }
               }
-              struct.ssio.put(_key141, _val142);
+              struct.ssio.put(_key150, _val151);
             }
           }
           struct.setSsioIsSet(true);
         }
         if (incoming.get(8)) {
           {
-            org.apache.thrift.protocol.TList _list147 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.authorizations = new ArrayList<ByteBuffer>(_list147.size);
-            for (int _i148 = 0; _i148 < _list147.size; ++_i148)
+            org.apache.thrift.protocol.TList _list157 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.authorizations = new ArrayList<ByteBuffer>(_list157.size);
+            ByteBuffer _elem158;
+            for (int _i159 = 0; _i159 < _list157.size; ++_i159)
             {
-              ByteBuffer _elem149;
-              _elem149 = iprot.readBinary();
-              struct.authorizations.add(_elem149);
+              _elem158 = iprot.readBinary();
+              struct.authorizations.add(_elem158);
             }
           }
           struct.setAuthorizationsIsSet(true);
@@ -6157,6 +6595,19 @@
           struct.readaheadThreshold = iprot.readI64();
           struct.setReadaheadThresholdIsSet(true);
         }
+        if (incoming.get(12)) {
+          struct.samplerConfig = new TSamplerConfiguration();
+          struct.samplerConfig.read(iprot);
+          struct.setSamplerConfigIsSet(true);
+        }
+        if (incoming.get(13)) {
+          struct.batchTimeOut = iprot.readI64();
+          struct.setBatchTimeOutIsSet(true);
+        }
+        if (incoming.get(14)) {
+          struct.classLoaderContext = iprot.readString();
+          struct.setClassLoaderContextIsSet(true);
+        }
       }
     }
 
@@ -6169,6 +6620,7 @@
     private static final org.apache.thrift.protocol.TField SEC_FIELD_DESC = new org.apache.thrift.protocol.TField("sec", org.apache.thrift.protocol.TType.STRUCT, (short)1);
     private static final org.apache.thrift.protocol.TField NSTE_FIELD_DESC = new org.apache.thrift.protocol.TField("nste", org.apache.thrift.protocol.TType.STRUCT, (short)2);
     private static final org.apache.thrift.protocol.TField TMFE_FIELD_DESC = new org.apache.thrift.protocol.TField("tmfe", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+    private static final org.apache.thrift.protocol.TField TSNPE_FIELD_DESC = new org.apache.thrift.protocol.TField("tsnpe", org.apache.thrift.protocol.TType.STRUCT, (short)4);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -6180,13 +6632,15 @@
     public org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec; // required
     public NotServingTabletException nste; // required
     public TooManyFilesException tmfe; // required
+    public TSampleNotPresentException tsnpe; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
       SUCCESS((short)0, "success"),
       SEC((short)1, "sec"),
       NSTE((short)2, "nste"),
-      TMFE((short)3, "tmfe");
+      TMFE((short)3, "tmfe"),
+      TSNPE((short)4, "tsnpe");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -6209,6 +6663,8 @@
             return NSTE;
           case 3: // TMFE
             return TMFE;
+          case 4: // TSNPE
+            return TSNPE;
           default:
             return null;
         }
@@ -6260,6 +6716,8 @@
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
       tmpMap.put(_Fields.TMFE, new org.apache.thrift.meta_data.FieldMetaData("tmfe", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.TSNPE, new org.apache.thrift.meta_data.FieldMetaData("tsnpe", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(startScan_result.class, metaDataMap);
     }
@@ -6271,13 +6729,15 @@
       org.apache.accumulo.core.data.thrift.InitialScan success,
       org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec,
       NotServingTabletException nste,
-      TooManyFilesException tmfe)
+      TooManyFilesException tmfe,
+      TSampleNotPresentException tsnpe)
     {
       this();
       this.success = success;
       this.sec = sec;
       this.nste = nste;
       this.tmfe = tmfe;
+      this.tsnpe = tsnpe;
     }
 
     /**
@@ -6296,6 +6756,9 @@
       if (other.isSetTmfe()) {
         this.tmfe = new TooManyFilesException(other.tmfe);
       }
+      if (other.isSetTsnpe()) {
+        this.tsnpe = new TSampleNotPresentException(other.tsnpe);
+      }
     }
 
     public startScan_result deepCopy() {
@@ -6308,6 +6771,7 @@
       this.sec = null;
       this.nste = null;
       this.tmfe = null;
+      this.tsnpe = null;
     }
 
     public org.apache.accumulo.core.data.thrift.InitialScan getSuccess() {
@@ -6406,6 +6870,30 @@
       }
     }
 
+    public TSampleNotPresentException getTsnpe() {
+      return this.tsnpe;
+    }
+
+    public startScan_result setTsnpe(TSampleNotPresentException tsnpe) {
+      this.tsnpe = tsnpe;
+      return this;
+    }
+
+    public void unsetTsnpe() {
+      this.tsnpe = null;
+    }
+
+    /** Returns true if field tsnpe is set (has been assigned a value) and false otherwise */
+    public boolean isSetTsnpe() {
+      return this.tsnpe != null;
+    }
+
+    public void setTsnpeIsSet(boolean value) {
+      if (!value) {
+        this.tsnpe = null;
+      }
+    }
+
     public void setFieldValue(_Fields field, Object value) {
       switch (field) {
       case SUCCESS:
@@ -6440,6 +6928,14 @@
         }
         break;
 
+      case TSNPE:
+        if (value == null) {
+          unsetTsnpe();
+        } else {
+          setTsnpe((TSampleNotPresentException)value);
+        }
+        break;
+
       }
     }
 
@@ -6457,6 +6953,9 @@
       case TMFE:
         return getTmfe();
 
+      case TSNPE:
+        return getTsnpe();
+
       }
       throw new IllegalStateException();
     }
@@ -6476,6 +6975,8 @@
         return isSetNste();
       case TMFE:
         return isSetTmfe();
+      case TSNPE:
+        return isSetTsnpe();
       }
       throw new IllegalStateException();
     }
@@ -6529,12 +7030,48 @@
           return false;
       }
 
+      boolean this_present_tsnpe = true && this.isSetTsnpe();
+      boolean that_present_tsnpe = true && that.isSetTsnpe();
+      if (this_present_tsnpe || that_present_tsnpe) {
+        if (!(this_present_tsnpe && that_present_tsnpe))
+          return false;
+        if (!this.tsnpe.equals(that.tsnpe))
+          return false;
+      }
+
       return true;
     }
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_nste = true && (isSetNste());
+      list.add(present_nste);
+      if (present_nste)
+        list.add(nste);
+
+      boolean present_tmfe = true && (isSetTmfe());
+      list.add(present_tmfe);
+      if (present_tmfe)
+        list.add(tmfe);
+
+      boolean present_tsnpe = true && (isSetTsnpe());
+      list.add(present_tsnpe);
+      if (present_tsnpe)
+        list.add(tsnpe);
+
+      return list.hashCode();
     }
 
     @Override
@@ -6585,6 +7122,16 @@
           return lastComparison;
         }
       }
+      lastComparison = Boolean.valueOf(isSetTsnpe()).compareTo(other.isSetTsnpe());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetTsnpe()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.tsnpe, other.tsnpe);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
       return 0;
     }
 
@@ -6636,6 +7183,14 @@
         sb.append(this.tmfe);
       }
       first = false;
+      if (!first) sb.append(", ");
+      sb.append("tsnpe:");
+      if (this.tsnpe == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tsnpe);
+      }
+      first = false;
       sb.append(")");
       return sb.toString();
     }
@@ -6718,6 +7273,15 @@
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
+            case 4: // TSNPE
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.tsnpe = new TSampleNotPresentException();
+                struct.tsnpe.read(iprot);
+                struct.setTsnpeIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
             default:
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
           }
@@ -6753,6 +7317,11 @@
           struct.tmfe.write(oprot);
           oprot.writeFieldEnd();
         }
+        if (struct.tsnpe != null) {
+          oprot.writeFieldBegin(TSNPE_FIELD_DESC);
+          struct.tsnpe.write(oprot);
+          oprot.writeFieldEnd();
+        }
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -6783,7 +7352,10 @@
         if (struct.isSetTmfe()) {
           optionals.set(3);
         }
-        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetTsnpe()) {
+          optionals.set(4);
+        }
+        oprot.writeBitSet(optionals, 5);
         if (struct.isSetSuccess()) {
           struct.success.write(oprot);
         }
@@ -6796,12 +7368,15 @@
         if (struct.isSetTmfe()) {
           struct.tmfe.write(oprot);
         }
+        if (struct.isSetTsnpe()) {
+          struct.tsnpe.write(oprot);
+        }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, startScan_result struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(4);
+        BitSet incoming = iprot.readBitSet(5);
         if (incoming.get(0)) {
           struct.success = new org.apache.accumulo.core.data.thrift.InitialScan();
           struct.success.read(iprot);
@@ -6822,6 +7397,11 @@
           struct.tmfe.read(iprot);
           struct.setTmfeIsSet(true);
         }
+        if (incoming.get(4)) {
+          struct.tsnpe = new TSampleNotPresentException();
+          struct.tsnpe.read(iprot);
+          struct.setTsnpeIsSet(true);
+        }
       }
     }
 
@@ -7026,7 +7606,7 @@
         return getTinfo();
 
       case SCAN_ID:
-        return Long.valueOf(getScanID());
+        return getScanID();
 
       }
       throw new IllegalStateException();
@@ -7083,7 +7663,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_scanID = true;
+      list.add(present_scanID);
+      if (present_scanID)
+        list.add(scanID);
+
+      return list.hashCode();
     }
 
     @Override
@@ -7291,6 +7883,7 @@
     private static final org.apache.thrift.protocol.TField NSSI_FIELD_DESC = new org.apache.thrift.protocol.TField("nssi", org.apache.thrift.protocol.TType.STRUCT, (short)1);
     private static final org.apache.thrift.protocol.TField NSTE_FIELD_DESC = new org.apache.thrift.protocol.TField("nste", org.apache.thrift.protocol.TType.STRUCT, (short)2);
     private static final org.apache.thrift.protocol.TField TMFE_FIELD_DESC = new org.apache.thrift.protocol.TField("tmfe", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+    private static final org.apache.thrift.protocol.TField TSNPE_FIELD_DESC = new org.apache.thrift.protocol.TField("tsnpe", org.apache.thrift.protocol.TType.STRUCT, (short)4);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -7302,13 +7895,15 @@
     public NoSuchScanIDException nssi; // required
     public NotServingTabletException nste; // required
     public TooManyFilesException tmfe; // required
+    public TSampleNotPresentException tsnpe; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
       SUCCESS((short)0, "success"),
       NSSI((short)1, "nssi"),
       NSTE((short)2, "nste"),
-      TMFE((short)3, "tmfe");
+      TMFE((short)3, "tmfe"),
+      TSNPE((short)4, "tsnpe");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -7331,6 +7926,8 @@
             return NSTE;
           case 3: // TMFE
             return TMFE;
+          case 4: // TSNPE
+            return TSNPE;
           default:
             return null;
         }
@@ -7382,6 +7979,8 @@
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
       tmpMap.put(_Fields.TMFE, new org.apache.thrift.meta_data.FieldMetaData("tmfe", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.TSNPE, new org.apache.thrift.meta_data.FieldMetaData("tsnpe", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(continueScan_result.class, metaDataMap);
     }
@@ -7393,13 +7992,15 @@
       org.apache.accumulo.core.data.thrift.ScanResult success,
       NoSuchScanIDException nssi,
       NotServingTabletException nste,
-      TooManyFilesException tmfe)
+      TooManyFilesException tmfe,
+      TSampleNotPresentException tsnpe)
     {
       this();
       this.success = success;
       this.nssi = nssi;
       this.nste = nste;
       this.tmfe = tmfe;
+      this.tsnpe = tsnpe;
     }
 
     /**
@@ -7418,6 +8019,9 @@
       if (other.isSetTmfe()) {
         this.tmfe = new TooManyFilesException(other.tmfe);
       }
+      if (other.isSetTsnpe()) {
+        this.tsnpe = new TSampleNotPresentException(other.tsnpe);
+      }
     }
 
     public continueScan_result deepCopy() {
@@ -7430,6 +8034,7 @@
       this.nssi = null;
       this.nste = null;
       this.tmfe = null;
+      this.tsnpe = null;
     }
 
     public org.apache.accumulo.core.data.thrift.ScanResult getSuccess() {
@@ -7528,6 +8133,30 @@
       }
     }
 
+    public TSampleNotPresentException getTsnpe() {
+      return this.tsnpe;
+    }
+
+    public continueScan_result setTsnpe(TSampleNotPresentException tsnpe) {
+      this.tsnpe = tsnpe;
+      return this;
+    }
+
+    public void unsetTsnpe() {
+      this.tsnpe = null;
+    }
+
+    /** Returns true if field tsnpe is set (has been assigned a value) and false otherwise */
+    public boolean isSetTsnpe() {
+      return this.tsnpe != null;
+    }
+
+    public void setTsnpeIsSet(boolean value) {
+      if (!value) {
+        this.tsnpe = null;
+      }
+    }
+
     public void setFieldValue(_Fields field, Object value) {
       switch (field) {
       case SUCCESS:
@@ -7562,6 +8191,14 @@
         }
         break;
 
+      case TSNPE:
+        if (value == null) {
+          unsetTsnpe();
+        } else {
+          setTsnpe((TSampleNotPresentException)value);
+        }
+        break;
+
       }
     }
 
@@ -7579,6 +8216,9 @@
       case TMFE:
         return getTmfe();
 
+      case TSNPE:
+        return getTsnpe();
+
       }
       throw new IllegalStateException();
     }
@@ -7598,6 +8238,8 @@
         return isSetNste();
       case TMFE:
         return isSetTmfe();
+      case TSNPE:
+        return isSetTsnpe();
       }
       throw new IllegalStateException();
     }
@@ -7651,12 +8293,48 @@
           return false;
       }
 
+      boolean this_present_tsnpe = true && this.isSetTsnpe();
+      boolean that_present_tsnpe = true && that.isSetTsnpe();
+      if (this_present_tsnpe || that_present_tsnpe) {
+        if (!(this_present_tsnpe && that_present_tsnpe))
+          return false;
+        if (!this.tsnpe.equals(that.tsnpe))
+          return false;
+      }
+
       return true;
     }
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_nssi = true && (isSetNssi());
+      list.add(present_nssi);
+      if (present_nssi)
+        list.add(nssi);
+
+      boolean present_nste = true && (isSetNste());
+      list.add(present_nste);
+      if (present_nste)
+        list.add(nste);
+
+      boolean present_tmfe = true && (isSetTmfe());
+      list.add(present_tmfe);
+      if (present_tmfe)
+        list.add(tmfe);
+
+      boolean present_tsnpe = true && (isSetTsnpe());
+      list.add(present_tsnpe);
+      if (present_tsnpe)
+        list.add(tsnpe);
+
+      return list.hashCode();
     }
 
     @Override
@@ -7707,6 +8385,16 @@
           return lastComparison;
         }
       }
+      lastComparison = Boolean.valueOf(isSetTsnpe()).compareTo(other.isSetTsnpe());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetTsnpe()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.tsnpe, other.tsnpe);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
       return 0;
     }
 
@@ -7758,6 +8446,14 @@
         sb.append(this.tmfe);
       }
       first = false;
+      if (!first) sb.append(", ");
+      sb.append("tsnpe:");
+      if (this.tsnpe == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tsnpe);
+      }
+      first = false;
       sb.append(")");
       return sb.toString();
     }
@@ -7840,6 +8536,15 @@
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
+            case 4: // TSNPE
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.tsnpe = new TSampleNotPresentException();
+                struct.tsnpe.read(iprot);
+                struct.setTsnpeIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
             default:
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
           }
@@ -7875,6 +8580,11 @@
           struct.tmfe.write(oprot);
           oprot.writeFieldEnd();
         }
+        if (struct.tsnpe != null) {
+          oprot.writeFieldBegin(TSNPE_FIELD_DESC);
+          struct.tsnpe.write(oprot);
+          oprot.writeFieldEnd();
+        }
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -7905,7 +8615,10 @@
         if (struct.isSetTmfe()) {
           optionals.set(3);
         }
-        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetTsnpe()) {
+          optionals.set(4);
+        }
+        oprot.writeBitSet(optionals, 5);
         if (struct.isSetSuccess()) {
           struct.success.write(oprot);
         }
@@ -7918,12 +8631,15 @@
         if (struct.isSetTmfe()) {
           struct.tmfe.write(oprot);
         }
+        if (struct.isSetTsnpe()) {
+          struct.tsnpe.write(oprot);
+        }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, continueScan_result struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(4);
+        BitSet incoming = iprot.readBitSet(5);
         if (incoming.get(0)) {
           struct.success = new org.apache.accumulo.core.data.thrift.ScanResult();
           struct.success.read(iprot);
@@ -7944,6 +8660,11 @@
           struct.tmfe.read(iprot);
           struct.setTmfeIsSet(true);
         }
+        if (incoming.get(4)) {
+          struct.tsnpe = new TSampleNotPresentException();
+          struct.tsnpe.read(iprot);
+          struct.setTsnpeIsSet(true);
+        }
       }
     }
 
@@ -8148,7 +8869,7 @@
         return getTinfo();
 
       case SCAN_ID:
-        return Long.valueOf(getScanID());
+        return getScanID();
 
       }
       throw new IllegalStateException();
@@ -8205,7 +8926,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_scanID = true;
+      list.add(present_scanID);
+      if (present_scanID)
+        list.add(scanID);
+
+      return list.hashCode();
     }
 
     @Override
@@ -8417,6 +9150,9 @@
     private static final org.apache.thrift.protocol.TField SSIO_FIELD_DESC = new org.apache.thrift.protocol.TField("ssio", org.apache.thrift.protocol.TType.MAP, (short)5);
     private static final org.apache.thrift.protocol.TField AUTHORIZATIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("authorizations", org.apache.thrift.protocol.TType.LIST, (short)6);
     private static final org.apache.thrift.protocol.TField WAIT_FOR_WRITES_FIELD_DESC = new org.apache.thrift.protocol.TField("waitForWrites", org.apache.thrift.protocol.TType.BOOL, (short)7);
+    private static final org.apache.thrift.protocol.TField SAMPLER_CONFIG_FIELD_DESC = new org.apache.thrift.protocol.TField("samplerConfig", org.apache.thrift.protocol.TType.STRUCT, (short)9);
+    private static final org.apache.thrift.protocol.TField BATCH_TIME_OUT_FIELD_DESC = new org.apache.thrift.protocol.TField("batchTimeOut", org.apache.thrift.protocol.TType.I64, (short)10);
+    private static final org.apache.thrift.protocol.TField CLASS_LOADER_CONTEXT_FIELD_DESC = new org.apache.thrift.protocol.TField("classLoaderContext", org.apache.thrift.protocol.TType.STRING, (short)11);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -8432,6 +9168,9 @@
     public Map<String,Map<String,String>> ssio; // required
     public List<ByteBuffer> authorizations; // required
     public boolean waitForWrites; // required
+    public TSamplerConfiguration samplerConfig; // required
+    public long batchTimeOut; // required
+    public String classLoaderContext; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
@@ -8442,7 +9181,10 @@
       SSI_LIST((short)4, "ssiList"),
       SSIO((short)5, "ssio"),
       AUTHORIZATIONS((short)6, "authorizations"),
-      WAIT_FOR_WRITES((short)7, "waitForWrites");
+      WAIT_FOR_WRITES((short)7, "waitForWrites"),
+      SAMPLER_CONFIG((short)9, "samplerConfig"),
+      BATCH_TIME_OUT((short)10, "batchTimeOut"),
+      CLASS_LOADER_CONTEXT((short)11, "classLoaderContext");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -8473,6 +9215,12 @@
             return AUTHORIZATIONS;
           case 7: // WAIT_FOR_WRITES
             return WAIT_FOR_WRITES;
+          case 9: // SAMPLER_CONFIG
+            return SAMPLER_CONFIG;
+          case 10: // BATCH_TIME_OUT
+            return BATCH_TIME_OUT;
+          case 11: // CLASS_LOADER_CONTEXT
+            return CLASS_LOADER_CONTEXT;
           default:
             return null;
         }
@@ -8514,6 +9262,7 @@
 
     // isset id assignments
     private static final int __WAITFORWRITES_ISSET_ID = 0;
+    private static final int __BATCHTIMEOUT_ISSET_ID = 1;
     private byte __isset_bitfield = 0;
     public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
     static {
@@ -8541,6 +9290,12 @@
               new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING              , true))));
       tmpMap.put(_Fields.WAIT_FOR_WRITES, new org.apache.thrift.meta_data.FieldMetaData("waitForWrites", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.BOOL)));
+      tmpMap.put(_Fields.SAMPLER_CONFIG, new org.apache.thrift.meta_data.FieldMetaData("samplerConfig", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, TSamplerConfiguration.class)));
+      tmpMap.put(_Fields.BATCH_TIME_OUT, new org.apache.thrift.meta_data.FieldMetaData("batchTimeOut", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
+      tmpMap.put(_Fields.CLASS_LOADER_CONTEXT, new org.apache.thrift.meta_data.FieldMetaData("classLoaderContext", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(startMultiScan_args.class, metaDataMap);
     }
@@ -8556,7 +9311,10 @@
       List<org.apache.accumulo.core.data.thrift.IterInfo> ssiList,
       Map<String,Map<String,String>> ssio,
       List<ByteBuffer> authorizations,
-      boolean waitForWrites)
+      boolean waitForWrites,
+      TSamplerConfiguration samplerConfig,
+      long batchTimeOut,
+      String classLoaderContext)
     {
       this();
       this.tinfo = tinfo;
@@ -8568,6 +9326,10 @@
       this.authorizations = authorizations;
       this.waitForWrites = waitForWrites;
       setWaitForWritesIsSet(true);
+      this.samplerConfig = samplerConfig;
+      this.batchTimeOut = batchTimeOut;
+      setBatchTimeOutIsSet(true);
+      this.classLoaderContext = classLoaderContext;
     }
 
     /**
@@ -8618,6 +9380,13 @@
         this.authorizations = __this__authorizations;
       }
       this.waitForWrites = other.waitForWrites;
+      if (other.isSetSamplerConfig()) {
+        this.samplerConfig = new TSamplerConfiguration(other.samplerConfig);
+      }
+      this.batchTimeOut = other.batchTimeOut;
+      if (other.isSetClassLoaderContext()) {
+        this.classLoaderContext = other.classLoaderContext;
+      }
     }
 
     public startMultiScan_args deepCopy() {
@@ -8635,6 +9404,10 @@
       this.authorizations = null;
       setWaitForWritesIsSet(false);
       this.waitForWrites = false;
+      this.samplerConfig = null;
+      setBatchTimeOutIsSet(false);
+      this.batchTimeOut = 0;
+      this.classLoaderContext = null;
     }
 
     public org.apache.accumulo.core.trace.thrift.TInfo getTinfo() {
@@ -8895,6 +9668,77 @@
       __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __WAITFORWRITES_ISSET_ID, value);
     }
 
+    public TSamplerConfiguration getSamplerConfig() {
+      return this.samplerConfig;
+    }
+
+    public startMultiScan_args setSamplerConfig(TSamplerConfiguration samplerConfig) {
+      this.samplerConfig = samplerConfig;
+      return this;
+    }
+
+    public void unsetSamplerConfig() {
+      this.samplerConfig = null;
+    }
+
+    /** Returns true if field samplerConfig is set (has been assigned a value) and false otherwise */
+    public boolean isSetSamplerConfig() {
+      return this.samplerConfig != null;
+    }
+
+    public void setSamplerConfigIsSet(boolean value) {
+      if (!value) {
+        this.samplerConfig = null;
+      }
+    }
+
+    public long getBatchTimeOut() {
+      return this.batchTimeOut;
+    }
+
+    public startMultiScan_args setBatchTimeOut(long batchTimeOut) {
+      this.batchTimeOut = batchTimeOut;
+      setBatchTimeOutIsSet(true);
+      return this;
+    }
+
+    public void unsetBatchTimeOut() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __BATCHTIMEOUT_ISSET_ID);
+    }
+
+    /** Returns true if field batchTimeOut is set (has been assigned a value) and false otherwise */
+    public boolean isSetBatchTimeOut() {
+      return EncodingUtils.testBit(__isset_bitfield, __BATCHTIMEOUT_ISSET_ID);
+    }
+
+    public void setBatchTimeOutIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __BATCHTIMEOUT_ISSET_ID, value);
+    }
+
+    public String getClassLoaderContext() {
+      return this.classLoaderContext;
+    }
+
+    public startMultiScan_args setClassLoaderContext(String classLoaderContext) {
+      this.classLoaderContext = classLoaderContext;
+      return this;
+    }
+
+    public void unsetClassLoaderContext() {
+      this.classLoaderContext = null;
+    }
+
+    /** Returns true if field classLoaderContext is set (has been assigned a value) and false otherwise */
+    public boolean isSetClassLoaderContext() {
+      return this.classLoaderContext != null;
+    }
+
+    public void setClassLoaderContextIsSet(boolean value) {
+      if (!value) {
+        this.classLoaderContext = null;
+      }
+    }
+
     public void setFieldValue(_Fields field, Object value) {
       switch (field) {
       case TINFO:
@@ -8961,6 +9805,30 @@
         }
         break;
 
+      case SAMPLER_CONFIG:
+        if (value == null) {
+          unsetSamplerConfig();
+        } else {
+          setSamplerConfig((TSamplerConfiguration)value);
+        }
+        break;
+
+      case BATCH_TIME_OUT:
+        if (value == null) {
+          unsetBatchTimeOut();
+        } else {
+          setBatchTimeOut((Long)value);
+        }
+        break;
+
+      case CLASS_LOADER_CONTEXT:
+        if (value == null) {
+          unsetClassLoaderContext();
+        } else {
+          setClassLoaderContext((String)value);
+        }
+        break;
+
       }
     }
 
@@ -8988,7 +9856,16 @@
         return getAuthorizations();
 
       case WAIT_FOR_WRITES:
-        return Boolean.valueOf(isWaitForWrites());
+        return isWaitForWrites();
+
+      case SAMPLER_CONFIG:
+        return getSamplerConfig();
+
+      case BATCH_TIME_OUT:
+        return getBatchTimeOut();
+
+      case CLASS_LOADER_CONTEXT:
+        return getClassLoaderContext();
 
       }
       throw new IllegalStateException();
@@ -9017,6 +9894,12 @@
         return isSetAuthorizations();
       case WAIT_FOR_WRITES:
         return isSetWaitForWrites();
+      case SAMPLER_CONFIG:
+        return isSetSamplerConfig();
+      case BATCH_TIME_OUT:
+        return isSetBatchTimeOut();
+      case CLASS_LOADER_CONTEXT:
+        return isSetClassLoaderContext();
       }
       throw new IllegalStateException();
     }
@@ -9106,12 +9989,96 @@
           return false;
       }
 
+      boolean this_present_samplerConfig = true && this.isSetSamplerConfig();
+      boolean that_present_samplerConfig = true && that.isSetSamplerConfig();
+      if (this_present_samplerConfig || that_present_samplerConfig) {
+        if (!(this_present_samplerConfig && that_present_samplerConfig))
+          return false;
+        if (!this.samplerConfig.equals(that.samplerConfig))
+          return false;
+      }
+
+      boolean this_present_batchTimeOut = true;
+      boolean that_present_batchTimeOut = true;
+      if (this_present_batchTimeOut || that_present_batchTimeOut) {
+        if (!(this_present_batchTimeOut && that_present_batchTimeOut))
+          return false;
+        if (this.batchTimeOut != that.batchTimeOut)
+          return false;
+      }
+
+      boolean this_present_classLoaderContext = true && this.isSetClassLoaderContext();
+      boolean that_present_classLoaderContext = true && that.isSetClassLoaderContext();
+      if (this_present_classLoaderContext || that_present_classLoaderContext) {
+        if (!(this_present_classLoaderContext && that_present_classLoaderContext))
+          return false;
+        if (!this.classLoaderContext.equals(that.classLoaderContext))
+          return false;
+      }
+
       return true;
     }
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_batch = true && (isSetBatch());
+      list.add(present_batch);
+      if (present_batch)
+        list.add(batch);
+
+      boolean present_columns = true && (isSetColumns());
+      list.add(present_columns);
+      if (present_columns)
+        list.add(columns);
+
+      boolean present_ssiList = true && (isSetSsiList());
+      list.add(present_ssiList);
+      if (present_ssiList)
+        list.add(ssiList);
+
+      boolean present_ssio = true && (isSetSsio());
+      list.add(present_ssio);
+      if (present_ssio)
+        list.add(ssio);
+
+      boolean present_authorizations = true && (isSetAuthorizations());
+      list.add(present_authorizations);
+      if (present_authorizations)
+        list.add(authorizations);
+
+      boolean present_waitForWrites = true;
+      list.add(present_waitForWrites);
+      if (present_waitForWrites)
+        list.add(waitForWrites);
+
+      boolean present_samplerConfig = true && (isSetSamplerConfig());
+      list.add(present_samplerConfig);
+      if (present_samplerConfig)
+        list.add(samplerConfig);
+
+      boolean present_batchTimeOut = true;
+      list.add(present_batchTimeOut);
+      if (present_batchTimeOut)
+        list.add(batchTimeOut);
+
+      boolean present_classLoaderContext = true && (isSetClassLoaderContext());
+      list.add(present_classLoaderContext);
+      if (present_classLoaderContext)
+        list.add(classLoaderContext);
+
+      return list.hashCode();
     }
 
     @Override
@@ -9202,6 +10169,36 @@
           return lastComparison;
         }
       }
+      lastComparison = Boolean.valueOf(isSetSamplerConfig()).compareTo(other.isSetSamplerConfig());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSamplerConfig()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.samplerConfig, other.samplerConfig);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetBatchTimeOut()).compareTo(other.isSetBatchTimeOut());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetBatchTimeOut()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.batchTimeOut, other.batchTimeOut);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetClassLoaderContext()).compareTo(other.isSetClassLoaderContext());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetClassLoaderContext()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.classLoaderContext, other.classLoaderContext);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
       return 0;
     }
 
@@ -9274,13 +10271,33 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
       if (!first) sb.append(", ");
       sb.append("waitForWrites:");
       sb.append(this.waitForWrites);
       first = false;
+      if (!first) sb.append(", ");
+      sb.append("samplerConfig:");
+      if (this.samplerConfig == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.samplerConfig);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("batchTimeOut:");
+      sb.append(this.batchTimeOut);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("classLoaderContext:");
+      if (this.classLoaderContext == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.classLoaderContext);
+      }
+      first = false;
       sb.append(")");
       return sb.toString();
     }
@@ -9294,6 +10311,9 @@
       if (credentials != null) {
         credentials.validate();
       }
+      if (samplerConfig != null) {
+        samplerConfig.validate();
+      }
     }
 
     private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
@@ -9353,27 +10373,27 @@
             case 2: // BATCH
               if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
                 {
-                  org.apache.thrift.protocol.TMap _map150 = iprot.readMapBegin();
-                  struct.batch = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>>(2*_map150.size);
-                  for (int _i151 = 0; _i151 < _map150.size; ++_i151)
+                  org.apache.thrift.protocol.TMap _map160 = iprot.readMapBegin();
+                  struct.batch = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>>(2*_map160.size);
+                  org.apache.accumulo.core.data.thrift.TKeyExtent _key161;
+                  List<org.apache.accumulo.core.data.thrift.TRange> _val162;
+                  for (int _i163 = 0; _i163 < _map160.size; ++_i163)
                   {
-                    org.apache.accumulo.core.data.thrift.TKeyExtent _key152;
-                    List<org.apache.accumulo.core.data.thrift.TRange> _val153;
-                    _key152 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-                    _key152.read(iprot);
+                    _key161 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+                    _key161.read(iprot);
                     {
-                      org.apache.thrift.protocol.TList _list154 = iprot.readListBegin();
-                      _val153 = new ArrayList<org.apache.accumulo.core.data.thrift.TRange>(_list154.size);
-                      for (int _i155 = 0; _i155 < _list154.size; ++_i155)
+                      org.apache.thrift.protocol.TList _list164 = iprot.readListBegin();
+                      _val162 = new ArrayList<org.apache.accumulo.core.data.thrift.TRange>(_list164.size);
+                      org.apache.accumulo.core.data.thrift.TRange _elem165;
+                      for (int _i166 = 0; _i166 < _list164.size; ++_i166)
                       {
-                        org.apache.accumulo.core.data.thrift.TRange _elem156;
-                        _elem156 = new org.apache.accumulo.core.data.thrift.TRange();
-                        _elem156.read(iprot);
-                        _val153.add(_elem156);
+                        _elem165 = new org.apache.accumulo.core.data.thrift.TRange();
+                        _elem165.read(iprot);
+                        _val162.add(_elem165);
                       }
                       iprot.readListEnd();
                     }
-                    struct.batch.put(_key152, _val153);
+                    struct.batch.put(_key161, _val162);
                   }
                   iprot.readMapEnd();
                 }
@@ -9385,14 +10405,14 @@
             case 3: // COLUMNS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list157 = iprot.readListBegin();
-                  struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list157.size);
-                  for (int _i158 = 0; _i158 < _list157.size; ++_i158)
+                  org.apache.thrift.protocol.TList _list167 = iprot.readListBegin();
+                  struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list167.size);
+                  org.apache.accumulo.core.data.thrift.TColumn _elem168;
+                  for (int _i169 = 0; _i169 < _list167.size; ++_i169)
                   {
-                    org.apache.accumulo.core.data.thrift.TColumn _elem159;
-                    _elem159 = new org.apache.accumulo.core.data.thrift.TColumn();
-                    _elem159.read(iprot);
-                    struct.columns.add(_elem159);
+                    _elem168 = new org.apache.accumulo.core.data.thrift.TColumn();
+                    _elem168.read(iprot);
+                    struct.columns.add(_elem168);
                   }
                   iprot.readListEnd();
                 }
@@ -9404,14 +10424,14 @@
             case 4: // SSI_LIST
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list160 = iprot.readListBegin();
-                  struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list160.size);
-                  for (int _i161 = 0; _i161 < _list160.size; ++_i161)
+                  org.apache.thrift.protocol.TList _list170 = iprot.readListBegin();
+                  struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list170.size);
+                  org.apache.accumulo.core.data.thrift.IterInfo _elem171;
+                  for (int _i172 = 0; _i172 < _list170.size; ++_i172)
                   {
-                    org.apache.accumulo.core.data.thrift.IterInfo _elem162;
-                    _elem162 = new org.apache.accumulo.core.data.thrift.IterInfo();
-                    _elem162.read(iprot);
-                    struct.ssiList.add(_elem162);
+                    _elem171 = new org.apache.accumulo.core.data.thrift.IterInfo();
+                    _elem171.read(iprot);
+                    struct.ssiList.add(_elem171);
                   }
                   iprot.readListEnd();
                 }
@@ -9423,27 +10443,27 @@
             case 5: // SSIO
               if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
                 {
-                  org.apache.thrift.protocol.TMap _map163 = iprot.readMapBegin();
-                  struct.ssio = new HashMap<String,Map<String,String>>(2*_map163.size);
-                  for (int _i164 = 0; _i164 < _map163.size; ++_i164)
+                  org.apache.thrift.protocol.TMap _map173 = iprot.readMapBegin();
+                  struct.ssio = new HashMap<String,Map<String,String>>(2*_map173.size);
+                  String _key174;
+                  Map<String,String> _val175;
+                  for (int _i176 = 0; _i176 < _map173.size; ++_i176)
                   {
-                    String _key165;
-                    Map<String,String> _val166;
-                    _key165 = iprot.readString();
+                    _key174 = iprot.readString();
                     {
-                      org.apache.thrift.protocol.TMap _map167 = iprot.readMapBegin();
-                      _val166 = new HashMap<String,String>(2*_map167.size);
-                      for (int _i168 = 0; _i168 < _map167.size; ++_i168)
+                      org.apache.thrift.protocol.TMap _map177 = iprot.readMapBegin();
+                      _val175 = new HashMap<String,String>(2*_map177.size);
+                      String _key178;
+                      String _val179;
+                      for (int _i180 = 0; _i180 < _map177.size; ++_i180)
                       {
-                        String _key169;
-                        String _val170;
-                        _key169 = iprot.readString();
-                        _val170 = iprot.readString();
-                        _val166.put(_key169, _val170);
+                        _key178 = iprot.readString();
+                        _val179 = iprot.readString();
+                        _val175.put(_key178, _val179);
                       }
                       iprot.readMapEnd();
                     }
-                    struct.ssio.put(_key165, _val166);
+                    struct.ssio.put(_key174, _val175);
                   }
                   iprot.readMapEnd();
                 }
@@ -9455,13 +10475,13 @@
             case 6: // AUTHORIZATIONS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list171 = iprot.readListBegin();
-                  struct.authorizations = new ArrayList<ByteBuffer>(_list171.size);
-                  for (int _i172 = 0; _i172 < _list171.size; ++_i172)
+                  org.apache.thrift.protocol.TList _list181 = iprot.readListBegin();
+                  struct.authorizations = new ArrayList<ByteBuffer>(_list181.size);
+                  ByteBuffer _elem182;
+                  for (int _i183 = 0; _i183 < _list181.size; ++_i183)
                   {
-                    ByteBuffer _elem173;
-                    _elem173 = iprot.readBinary();
-                    struct.authorizations.add(_elem173);
+                    _elem182 = iprot.readBinary();
+                    struct.authorizations.add(_elem182);
                   }
                   iprot.readListEnd();
                 }
@@ -9478,6 +10498,31 @@
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
+            case 9: // SAMPLER_CONFIG
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.samplerConfig = new TSamplerConfiguration();
+                struct.samplerConfig.read(iprot);
+                struct.setSamplerConfigIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 10: // BATCH_TIME_OUT
+              if (schemeField.type == org.apache.thrift.protocol.TType.I64) {
+                struct.batchTimeOut = iprot.readI64();
+                struct.setBatchTimeOutIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 11: // CLASS_LOADER_CONTEXT
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.classLoaderContext = iprot.readString();
+                struct.setClassLoaderContextIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
             default:
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
           }
@@ -9502,14 +10547,14 @@
           oprot.writeFieldBegin(BATCH_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.LIST, struct.batch.size()));
-            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TRange>> _iter174 : struct.batch.entrySet())
+            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TRange>> _iter184 : struct.batch.entrySet())
             {
-              _iter174.getKey().write(oprot);
+              _iter184.getKey().write(oprot);
               {
-                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, _iter174.getValue().size()));
-                for (org.apache.accumulo.core.data.thrift.TRange _iter175 : _iter174.getValue())
+                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, _iter184.getValue().size()));
+                for (org.apache.accumulo.core.data.thrift.TRange _iter185 : _iter184.getValue())
                 {
-                  _iter175.write(oprot);
+                  _iter185.write(oprot);
                 }
                 oprot.writeListEnd();
               }
@@ -9522,9 +10567,9 @@
           oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.columns.size()));
-            for (org.apache.accumulo.core.data.thrift.TColumn _iter176 : struct.columns)
+            for (org.apache.accumulo.core.data.thrift.TColumn _iter186 : struct.columns)
             {
-              _iter176.write(oprot);
+              _iter186.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -9534,9 +10579,9 @@
           oprot.writeFieldBegin(SSI_LIST_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.ssiList.size()));
-            for (org.apache.accumulo.core.data.thrift.IterInfo _iter177 : struct.ssiList)
+            for (org.apache.accumulo.core.data.thrift.IterInfo _iter187 : struct.ssiList)
             {
-              _iter177.write(oprot);
+              _iter187.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -9546,15 +10591,15 @@
           oprot.writeFieldBegin(SSIO_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, struct.ssio.size()));
-            for (Map.Entry<String, Map<String,String>> _iter178 : struct.ssio.entrySet())
+            for (Map.Entry<String, Map<String,String>> _iter188 : struct.ssio.entrySet())
             {
-              oprot.writeString(_iter178.getKey());
+              oprot.writeString(_iter188.getKey());
               {
-                oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, _iter178.getValue().size()));
-                for (Map.Entry<String, String> _iter179 : _iter178.getValue().entrySet())
+                oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, _iter188.getValue().size()));
+                for (Map.Entry<String, String> _iter189 : _iter188.getValue().entrySet())
                 {
-                  oprot.writeString(_iter179.getKey());
-                  oprot.writeString(_iter179.getValue());
+                  oprot.writeString(_iter189.getKey());
+                  oprot.writeString(_iter189.getValue());
                 }
                 oprot.writeMapEnd();
               }
@@ -9567,9 +10612,9 @@
           oprot.writeFieldBegin(AUTHORIZATIONS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.authorizations.size()));
-            for (ByteBuffer _iter180 : struct.authorizations)
+            for (ByteBuffer _iter190 : struct.authorizations)
             {
-              oprot.writeBinary(_iter180);
+              oprot.writeBinary(_iter190);
             }
             oprot.writeListEnd();
           }
@@ -9583,6 +10628,19 @@
           struct.tinfo.write(oprot);
           oprot.writeFieldEnd();
         }
+        if (struct.samplerConfig != null) {
+          oprot.writeFieldBegin(SAMPLER_CONFIG_FIELD_DESC);
+          struct.samplerConfig.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldBegin(BATCH_TIME_OUT_FIELD_DESC);
+        oprot.writeI64(struct.batchTimeOut);
+        oprot.writeFieldEnd();
+        if (struct.classLoaderContext != null) {
+          oprot.writeFieldBegin(CLASS_LOADER_CONTEXT_FIELD_DESC);
+          oprot.writeString(struct.classLoaderContext);
+          oprot.writeFieldEnd();
+        }
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -9625,7 +10683,16 @@
         if (struct.isSetWaitForWrites()) {
           optionals.set(7);
         }
-        oprot.writeBitSet(optionals, 8);
+        if (struct.isSetSamplerConfig()) {
+          optionals.set(8);
+        }
+        if (struct.isSetBatchTimeOut()) {
+          optionals.set(9);
+        }
+        if (struct.isSetClassLoaderContext()) {
+          optionals.set(10);
+        }
+        oprot.writeBitSet(optionals, 11);
         if (struct.isSetTinfo()) {
           struct.tinfo.write(oprot);
         }
@@ -9635,14 +10702,14 @@
         if (struct.isSetBatch()) {
           {
             oprot.writeI32(struct.batch.size());
-            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TRange>> _iter181 : struct.batch.entrySet())
+            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TRange>> _iter191 : struct.batch.entrySet())
             {
-              _iter181.getKey().write(oprot);
+              _iter191.getKey().write(oprot);
               {
-                oprot.writeI32(_iter181.getValue().size());
-                for (org.apache.accumulo.core.data.thrift.TRange _iter182 : _iter181.getValue())
+                oprot.writeI32(_iter191.getValue().size());
+                for (org.apache.accumulo.core.data.thrift.TRange _iter192 : _iter191.getValue())
                 {
-                  _iter182.write(oprot);
+                  _iter192.write(oprot);
                 }
               }
             }
@@ -9651,33 +10718,33 @@
         if (struct.isSetColumns()) {
           {
             oprot.writeI32(struct.columns.size());
-            for (org.apache.accumulo.core.data.thrift.TColumn _iter183 : struct.columns)
+            for (org.apache.accumulo.core.data.thrift.TColumn _iter193 : struct.columns)
             {
-              _iter183.write(oprot);
+              _iter193.write(oprot);
             }
           }
         }
         if (struct.isSetSsiList()) {
           {
             oprot.writeI32(struct.ssiList.size());
-            for (org.apache.accumulo.core.data.thrift.IterInfo _iter184 : struct.ssiList)
+            for (org.apache.accumulo.core.data.thrift.IterInfo _iter194 : struct.ssiList)
             {
-              _iter184.write(oprot);
+              _iter194.write(oprot);
             }
           }
         }
         if (struct.isSetSsio()) {
           {
             oprot.writeI32(struct.ssio.size());
-            for (Map.Entry<String, Map<String,String>> _iter185 : struct.ssio.entrySet())
+            for (Map.Entry<String, Map<String,String>> _iter195 : struct.ssio.entrySet())
             {
-              oprot.writeString(_iter185.getKey());
+              oprot.writeString(_iter195.getKey());
               {
-                oprot.writeI32(_iter185.getValue().size());
-                for (Map.Entry<String, String> _iter186 : _iter185.getValue().entrySet())
+                oprot.writeI32(_iter195.getValue().size());
+                for (Map.Entry<String, String> _iter196 : _iter195.getValue().entrySet())
                 {
-                  oprot.writeString(_iter186.getKey());
-                  oprot.writeString(_iter186.getValue());
+                  oprot.writeString(_iter196.getKey());
+                  oprot.writeString(_iter196.getValue());
                 }
               }
             }
@@ -9686,21 +10753,30 @@
         if (struct.isSetAuthorizations()) {
           {
             oprot.writeI32(struct.authorizations.size());
-            for (ByteBuffer _iter187 : struct.authorizations)
+            for (ByteBuffer _iter197 : struct.authorizations)
             {
-              oprot.writeBinary(_iter187);
+              oprot.writeBinary(_iter197);
             }
           }
         }
         if (struct.isSetWaitForWrites()) {
           oprot.writeBool(struct.waitForWrites);
         }
+        if (struct.isSetSamplerConfig()) {
+          struct.samplerConfig.write(oprot);
+        }
+        if (struct.isSetBatchTimeOut()) {
+          oprot.writeI64(struct.batchTimeOut);
+        }
+        if (struct.isSetClassLoaderContext()) {
+          oprot.writeString(struct.classLoaderContext);
+        }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, startMultiScan_args struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(8);
+        BitSet incoming = iprot.readBitSet(11);
         if (incoming.get(0)) {
           struct.tinfo = new org.apache.accumulo.core.trace.thrift.TInfo();
           struct.tinfo.read(iprot);
@@ -9713,93 +10789,93 @@
         }
         if (incoming.get(2)) {
           {
-            org.apache.thrift.protocol.TMap _map188 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
-            struct.batch = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>>(2*_map188.size);
-            for (int _i189 = 0; _i189 < _map188.size; ++_i189)
+            org.apache.thrift.protocol.TMap _map198 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
+            struct.batch = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TRange>>(2*_map198.size);
+            org.apache.accumulo.core.data.thrift.TKeyExtent _key199;
+            List<org.apache.accumulo.core.data.thrift.TRange> _val200;
+            for (int _i201 = 0; _i201 < _map198.size; ++_i201)
             {
-              org.apache.accumulo.core.data.thrift.TKeyExtent _key190;
-              List<org.apache.accumulo.core.data.thrift.TRange> _val191;
-              _key190 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-              _key190.read(iprot);
+              _key199 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+              _key199.read(iprot);
               {
-                org.apache.thrift.protocol.TList _list192 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-                _val191 = new ArrayList<org.apache.accumulo.core.data.thrift.TRange>(_list192.size);
-                for (int _i193 = 0; _i193 < _list192.size; ++_i193)
+                org.apache.thrift.protocol.TList _list202 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+                _val200 = new ArrayList<org.apache.accumulo.core.data.thrift.TRange>(_list202.size);
+                org.apache.accumulo.core.data.thrift.TRange _elem203;
+                for (int _i204 = 0; _i204 < _list202.size; ++_i204)
                 {
-                  org.apache.accumulo.core.data.thrift.TRange _elem194;
-                  _elem194 = new org.apache.accumulo.core.data.thrift.TRange();
-                  _elem194.read(iprot);
-                  _val191.add(_elem194);
+                  _elem203 = new org.apache.accumulo.core.data.thrift.TRange();
+                  _elem203.read(iprot);
+                  _val200.add(_elem203);
                 }
               }
-              struct.batch.put(_key190, _val191);
+              struct.batch.put(_key199, _val200);
             }
           }
           struct.setBatchIsSet(true);
         }
         if (incoming.get(3)) {
           {
-            org.apache.thrift.protocol.TList _list195 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list195.size);
-            for (int _i196 = 0; _i196 < _list195.size; ++_i196)
+            org.apache.thrift.protocol.TList _list205 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.columns = new ArrayList<org.apache.accumulo.core.data.thrift.TColumn>(_list205.size);
+            org.apache.accumulo.core.data.thrift.TColumn _elem206;
+            for (int _i207 = 0; _i207 < _list205.size; ++_i207)
             {
-              org.apache.accumulo.core.data.thrift.TColumn _elem197;
-              _elem197 = new org.apache.accumulo.core.data.thrift.TColumn();
-              _elem197.read(iprot);
-              struct.columns.add(_elem197);
+              _elem206 = new org.apache.accumulo.core.data.thrift.TColumn();
+              _elem206.read(iprot);
+              struct.columns.add(_elem206);
             }
           }
           struct.setColumnsIsSet(true);
         }
         if (incoming.get(4)) {
           {
-            org.apache.thrift.protocol.TList _list198 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list198.size);
-            for (int _i199 = 0; _i199 < _list198.size; ++_i199)
+            org.apache.thrift.protocol.TList _list208 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.ssiList = new ArrayList<org.apache.accumulo.core.data.thrift.IterInfo>(_list208.size);
+            org.apache.accumulo.core.data.thrift.IterInfo _elem209;
+            for (int _i210 = 0; _i210 < _list208.size; ++_i210)
             {
-              org.apache.accumulo.core.data.thrift.IterInfo _elem200;
-              _elem200 = new org.apache.accumulo.core.data.thrift.IterInfo();
-              _elem200.read(iprot);
-              struct.ssiList.add(_elem200);
+              _elem209 = new org.apache.accumulo.core.data.thrift.IterInfo();
+              _elem209.read(iprot);
+              struct.ssiList.add(_elem209);
             }
           }
           struct.setSsiListIsSet(true);
         }
         if (incoming.get(5)) {
           {
-            org.apache.thrift.protocol.TMap _map201 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-            struct.ssio = new HashMap<String,Map<String,String>>(2*_map201.size);
-            for (int _i202 = 0; _i202 < _map201.size; ++_i202)
+            org.apache.thrift.protocol.TMap _map211 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
+            struct.ssio = new HashMap<String,Map<String,String>>(2*_map211.size);
+            String _key212;
+            Map<String,String> _val213;
+            for (int _i214 = 0; _i214 < _map211.size; ++_i214)
             {
-              String _key203;
-              Map<String,String> _val204;
-              _key203 = iprot.readString();
+              _key212 = iprot.readString();
               {
-                org.apache.thrift.protocol.TMap _map205 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-                _val204 = new HashMap<String,String>(2*_map205.size);
-                for (int _i206 = 0; _i206 < _map205.size; ++_i206)
+                org.apache.thrift.protocol.TMap _map215 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+                _val213 = new HashMap<String,String>(2*_map215.size);
+                String _key216;
+                String _val217;
+                for (int _i218 = 0; _i218 < _map215.size; ++_i218)
                 {
-                  String _key207;
-                  String _val208;
-                  _key207 = iprot.readString();
-                  _val208 = iprot.readString();
-                  _val204.put(_key207, _val208);
+                  _key216 = iprot.readString();
+                  _val217 = iprot.readString();
+                  _val213.put(_key216, _val217);
                 }
               }
-              struct.ssio.put(_key203, _val204);
+              struct.ssio.put(_key212, _val213);
             }
           }
           struct.setSsioIsSet(true);
         }
         if (incoming.get(6)) {
           {
-            org.apache.thrift.protocol.TList _list209 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.authorizations = new ArrayList<ByteBuffer>(_list209.size);
-            for (int _i210 = 0; _i210 < _list209.size; ++_i210)
+            org.apache.thrift.protocol.TList _list219 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.authorizations = new ArrayList<ByteBuffer>(_list219.size);
+            ByteBuffer _elem220;
+            for (int _i221 = 0; _i221 < _list219.size; ++_i221)
             {
-              ByteBuffer _elem211;
-              _elem211 = iprot.readBinary();
-              struct.authorizations.add(_elem211);
+              _elem220 = iprot.readBinary();
+              struct.authorizations.add(_elem220);
             }
           }
           struct.setAuthorizationsIsSet(true);
@@ -9808,6 +10884,19 @@
           struct.waitForWrites = iprot.readBool();
           struct.setWaitForWritesIsSet(true);
         }
+        if (incoming.get(8)) {
+          struct.samplerConfig = new TSamplerConfiguration();
+          struct.samplerConfig.read(iprot);
+          struct.setSamplerConfigIsSet(true);
+        }
+        if (incoming.get(9)) {
+          struct.batchTimeOut = iprot.readI64();
+          struct.setBatchTimeOutIsSet(true);
+        }
+        if (incoming.get(10)) {
+          struct.classLoaderContext = iprot.readString();
+          struct.setClassLoaderContextIsSet(true);
+        }
       }
     }
 
@@ -9818,6 +10907,7 @@
 
     private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.STRUCT, (short)0);
     private static final org.apache.thrift.protocol.TField SEC_FIELD_DESC = new org.apache.thrift.protocol.TField("sec", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField TSNPE_FIELD_DESC = new org.apache.thrift.protocol.TField("tsnpe", org.apache.thrift.protocol.TType.STRUCT, (short)2);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -9827,11 +10917,13 @@
 
     public org.apache.accumulo.core.data.thrift.InitialMultiScan success; // required
     public org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec; // required
+    public TSampleNotPresentException tsnpe; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
       SUCCESS((short)0, "success"),
-      SEC((short)1, "sec");
+      SEC((short)1, "sec"),
+      TSNPE((short)2, "tsnpe");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -9850,6 +10942,8 @@
             return SUCCESS;
           case 1: // SEC
             return SEC;
+          case 2: // TSNPE
+            return TSNPE;
           default:
             return null;
         }
@@ -9897,6 +10991,8 @@
           new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, org.apache.accumulo.core.data.thrift.InitialMultiScan.class)));
       tmpMap.put(_Fields.SEC, new org.apache.thrift.meta_data.FieldMetaData("sec", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.TSNPE, new org.apache.thrift.meta_data.FieldMetaData("tsnpe", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(startMultiScan_result.class, metaDataMap);
     }
@@ -9906,11 +11002,13 @@
 
     public startMultiScan_result(
       org.apache.accumulo.core.data.thrift.InitialMultiScan success,
-      org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec)
+      org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException sec,
+      TSampleNotPresentException tsnpe)
     {
       this();
       this.success = success;
       this.sec = sec;
+      this.tsnpe = tsnpe;
     }
 
     /**
@@ -9923,6 +11021,9 @@
       if (other.isSetSec()) {
         this.sec = new org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException(other.sec);
       }
+      if (other.isSetTsnpe()) {
+        this.tsnpe = new TSampleNotPresentException(other.tsnpe);
+      }
     }
 
     public startMultiScan_result deepCopy() {
@@ -9933,6 +11034,7 @@
     public void clear() {
       this.success = null;
       this.sec = null;
+      this.tsnpe = null;
     }
 
     public org.apache.accumulo.core.data.thrift.InitialMultiScan getSuccess() {
@@ -9983,6 +11085,30 @@
       }
     }
 
+    public TSampleNotPresentException getTsnpe() {
+      return this.tsnpe;
+    }
+
+    public startMultiScan_result setTsnpe(TSampleNotPresentException tsnpe) {
+      this.tsnpe = tsnpe;
+      return this;
+    }
+
+    public void unsetTsnpe() {
+      this.tsnpe = null;
+    }
+
+    /** Returns true if field tsnpe is set (has been assigned a value) and false otherwise */
+    public boolean isSetTsnpe() {
+      return this.tsnpe != null;
+    }
+
+    public void setTsnpeIsSet(boolean value) {
+      if (!value) {
+        this.tsnpe = null;
+      }
+    }
+
     public void setFieldValue(_Fields field, Object value) {
       switch (field) {
       case SUCCESS:
@@ -10001,6 +11127,14 @@
         }
         break;
 
+      case TSNPE:
+        if (value == null) {
+          unsetTsnpe();
+        } else {
+          setTsnpe((TSampleNotPresentException)value);
+        }
+        break;
+
       }
     }
 
@@ -10012,6 +11146,9 @@
       case SEC:
         return getSec();
 
+      case TSNPE:
+        return getTsnpe();
+
       }
       throw new IllegalStateException();
     }
@@ -10027,6 +11164,8 @@
         return isSetSuccess();
       case SEC:
         return isSetSec();
+      case TSNPE:
+        return isSetTsnpe();
       }
       throw new IllegalStateException();
     }
@@ -10062,12 +11201,38 @@
           return false;
       }
 
+      boolean this_present_tsnpe = true && this.isSetTsnpe();
+      boolean that_present_tsnpe = true && that.isSetTsnpe();
+      if (this_present_tsnpe || that_present_tsnpe) {
+        if (!(this_present_tsnpe && that_present_tsnpe))
+          return false;
+        if (!this.tsnpe.equals(that.tsnpe))
+          return false;
+      }
+
       return true;
     }
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_tsnpe = true && (isSetTsnpe());
+      list.add(present_tsnpe);
+      if (present_tsnpe)
+        list.add(tsnpe);
+
+      return list.hashCode();
     }
 
     @Override
@@ -10098,6 +11263,16 @@
           return lastComparison;
         }
       }
+      lastComparison = Boolean.valueOf(isSetTsnpe()).compareTo(other.isSetTsnpe());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetTsnpe()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.tsnpe, other.tsnpe);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
       return 0;
     }
 
@@ -10133,6 +11308,14 @@
         sb.append(this.sec);
       }
       first = false;
+      if (!first) sb.append(", ");
+      sb.append("tsnpe:");
+      if (this.tsnpe == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tsnpe);
+      }
+      first = false;
       sb.append(")");
       return sb.toString();
     }
@@ -10197,6 +11380,15 @@
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
+            case 2: // TSNPE
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.tsnpe = new TSampleNotPresentException();
+                struct.tsnpe.read(iprot);
+                struct.setTsnpeIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
             default:
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
           }
@@ -10222,6 +11414,11 @@
           struct.sec.write(oprot);
           oprot.writeFieldEnd();
         }
+        if (struct.tsnpe != null) {
+          oprot.writeFieldBegin(TSNPE_FIELD_DESC);
+          struct.tsnpe.write(oprot);
+          oprot.writeFieldEnd();
+        }
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -10246,19 +11443,25 @@
         if (struct.isSetSec()) {
           optionals.set(1);
         }
-        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetTsnpe()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
         if (struct.isSetSuccess()) {
           struct.success.write(oprot);
         }
         if (struct.isSetSec()) {
           struct.sec.write(oprot);
         }
+        if (struct.isSetTsnpe()) {
+          struct.tsnpe.write(oprot);
+        }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, startMultiScan_result struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(2);
+        BitSet incoming = iprot.readBitSet(3);
         if (incoming.get(0)) {
           struct.success = new org.apache.accumulo.core.data.thrift.InitialMultiScan();
           struct.success.read(iprot);
@@ -10269,6 +11472,11 @@
           struct.sec.read(iprot);
           struct.setSecIsSet(true);
         }
+        if (incoming.get(2)) {
+          struct.tsnpe = new TSampleNotPresentException();
+          struct.tsnpe.read(iprot);
+          struct.setTsnpeIsSet(true);
+        }
       }
     }
 
@@ -10473,7 +11681,7 @@
         return getTinfo();
 
       case SCAN_ID:
-        return Long.valueOf(getScanID());
+        return getScanID();
 
       }
       throw new IllegalStateException();
@@ -10530,7 +11738,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_scanID = true;
+      list.add(present_scanID);
+      if (present_scanID)
+        list.add(scanID);
+
+      return list.hashCode();
     }
 
     @Override
@@ -10736,6 +11956,7 @@
 
     private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.STRUCT, (short)0);
     private static final org.apache.thrift.protocol.TField NSSI_FIELD_DESC = new org.apache.thrift.protocol.TField("nssi", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField TSNPE_FIELD_DESC = new org.apache.thrift.protocol.TField("tsnpe", org.apache.thrift.protocol.TType.STRUCT, (short)2);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -10745,11 +11966,13 @@
 
     public org.apache.accumulo.core.data.thrift.MultiScanResult success; // required
     public NoSuchScanIDException nssi; // required
+    public TSampleNotPresentException tsnpe; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
       SUCCESS((short)0, "success"),
-      NSSI((short)1, "nssi");
+      NSSI((short)1, "nssi"),
+      TSNPE((short)2, "tsnpe");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -10768,6 +11991,8 @@
             return SUCCESS;
           case 1: // NSSI
             return NSSI;
+          case 2: // TSNPE
+            return TSNPE;
           default:
             return null;
         }
@@ -10815,6 +12040,8 @@
           new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, org.apache.accumulo.core.data.thrift.MultiScanResult.class)));
       tmpMap.put(_Fields.NSSI, new org.apache.thrift.meta_data.FieldMetaData("nssi", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.TSNPE, new org.apache.thrift.meta_data.FieldMetaData("tsnpe", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(continueMultiScan_result.class, metaDataMap);
     }
@@ -10824,11 +12051,13 @@
 
     public continueMultiScan_result(
       org.apache.accumulo.core.data.thrift.MultiScanResult success,
-      NoSuchScanIDException nssi)
+      NoSuchScanIDException nssi,
+      TSampleNotPresentException tsnpe)
     {
       this();
       this.success = success;
       this.nssi = nssi;
+      this.tsnpe = tsnpe;
     }
 
     /**
@@ -10841,6 +12070,9 @@
       if (other.isSetNssi()) {
         this.nssi = new NoSuchScanIDException(other.nssi);
       }
+      if (other.isSetTsnpe()) {
+        this.tsnpe = new TSampleNotPresentException(other.tsnpe);
+      }
     }
 
     public continueMultiScan_result deepCopy() {
@@ -10851,6 +12083,7 @@
     public void clear() {
       this.success = null;
       this.nssi = null;
+      this.tsnpe = null;
     }
 
     public org.apache.accumulo.core.data.thrift.MultiScanResult getSuccess() {
@@ -10901,6 +12134,30 @@
       }
     }
 
+    public TSampleNotPresentException getTsnpe() {
+      return this.tsnpe;
+    }
+
+    public continueMultiScan_result setTsnpe(TSampleNotPresentException tsnpe) {
+      this.tsnpe = tsnpe;
+      return this;
+    }
+
+    public void unsetTsnpe() {
+      this.tsnpe = null;
+    }
+
+    /** Returns true if field tsnpe is set (has been assigned a value) and false otherwise */
+    public boolean isSetTsnpe() {
+      return this.tsnpe != null;
+    }
+
+    public void setTsnpeIsSet(boolean value) {
+      if (!value) {
+        this.tsnpe = null;
+      }
+    }
+
     public void setFieldValue(_Fields field, Object value) {
       switch (field) {
       case SUCCESS:
@@ -10919,6 +12176,14 @@
         }
         break;
 
+      case TSNPE:
+        if (value == null) {
+          unsetTsnpe();
+        } else {
+          setTsnpe((TSampleNotPresentException)value);
+        }
+        break;
+
       }
     }
 
@@ -10930,6 +12195,9 @@
       case NSSI:
         return getNssi();
 
+      case TSNPE:
+        return getTsnpe();
+
       }
       throw new IllegalStateException();
     }
@@ -10945,6 +12213,8 @@
         return isSetSuccess();
       case NSSI:
         return isSetNssi();
+      case TSNPE:
+        return isSetTsnpe();
       }
       throw new IllegalStateException();
     }
@@ -10980,12 +12250,38 @@
           return false;
       }
 
+      boolean this_present_tsnpe = true && this.isSetTsnpe();
+      boolean that_present_tsnpe = true && that.isSetTsnpe();
+      if (this_present_tsnpe || that_present_tsnpe) {
+        if (!(this_present_tsnpe && that_present_tsnpe))
+          return false;
+        if (!this.tsnpe.equals(that.tsnpe))
+          return false;
+      }
+
       return true;
     }
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_nssi = true && (isSetNssi());
+      list.add(present_nssi);
+      if (present_nssi)
+        list.add(nssi);
+
+      boolean present_tsnpe = true && (isSetTsnpe());
+      list.add(present_tsnpe);
+      if (present_tsnpe)
+        list.add(tsnpe);
+
+      return list.hashCode();
     }
 
     @Override
@@ -11016,6 +12312,16 @@
           return lastComparison;
         }
       }
+      lastComparison = Boolean.valueOf(isSetTsnpe()).compareTo(other.isSetTsnpe());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetTsnpe()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.tsnpe, other.tsnpe);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
       return 0;
     }
 
@@ -11051,6 +12357,14 @@
         sb.append(this.nssi);
       }
       first = false;
+      if (!first) sb.append(", ");
+      sb.append("tsnpe:");
+      if (this.tsnpe == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tsnpe);
+      }
+      first = false;
       sb.append(")");
       return sb.toString();
     }
@@ -11115,6 +12429,15 @@
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
+            case 2: // TSNPE
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.tsnpe = new TSampleNotPresentException();
+                struct.tsnpe.read(iprot);
+                struct.setTsnpeIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
             default:
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
           }
@@ -11140,6 +12463,11 @@
           struct.nssi.write(oprot);
           oprot.writeFieldEnd();
         }
+        if (struct.tsnpe != null) {
+          oprot.writeFieldBegin(TSNPE_FIELD_DESC);
+          struct.tsnpe.write(oprot);
+          oprot.writeFieldEnd();
+        }
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -11164,19 +12492,25 @@
         if (struct.isSetNssi()) {
           optionals.set(1);
         }
-        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetTsnpe()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
         if (struct.isSetSuccess()) {
           struct.success.write(oprot);
         }
         if (struct.isSetNssi()) {
           struct.nssi.write(oprot);
         }
+        if (struct.isSetTsnpe()) {
+          struct.tsnpe.write(oprot);
+        }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, continueMultiScan_result struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(2);
+        BitSet incoming = iprot.readBitSet(3);
         if (incoming.get(0)) {
           struct.success = new org.apache.accumulo.core.data.thrift.MultiScanResult();
           struct.success.read(iprot);
@@ -11187,6 +12521,11 @@
           struct.nssi.read(iprot);
           struct.setNssiIsSet(true);
         }
+        if (incoming.get(2)) {
+          struct.tsnpe = new TSampleNotPresentException();
+          struct.tsnpe.read(iprot);
+          struct.setTsnpeIsSet(true);
+        }
       }
     }
 
@@ -11391,7 +12730,7 @@
         return getTinfo();
 
       case SCAN_ID:
-        return Long.valueOf(getScanID());
+        return getScanID();
 
       }
       throw new IllegalStateException();
@@ -11448,7 +12787,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_scanID = true;
+      list.add(present_scanID);
+      if (present_scanID)
+        list.add(scanID);
+
+      return list.hashCode();
     }
 
     @Override
@@ -11844,7 +13195,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_nssi = true && (isSetNssi());
+      list.add(present_nssi);
+      if (present_nssi)
+        list.add(nssi);
+
+      return list.hashCode();
     }
 
     @Override
@@ -12334,7 +13692,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_durability = true && (isSetDurability());
+      list.add(present_durability);
+      if (present_durability)
+        list.add(durability.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -12487,7 +13862,7 @@
               break;
             case 3: // DURABILITY
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.durability = TDurability.findByValue(iprot.readI32());
+                struct.durability = org.apache.accumulo.core.tabletserver.thrift.TDurability.findByValue(iprot.readI32());
                 struct.setDurabilityIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -12577,7 +13952,7 @@
           struct.setCredentialsIsSet(true);
         }
         if (incoming.get(2)) {
-          struct.durability = TDurability.findByValue(iprot.readI32());
+          struct.durability = org.apache.accumulo.core.tabletserver.thrift.TDurability.findByValue(iprot.readI32());
           struct.setDurabilityIsSet(true);
         }
       }
@@ -12781,7 +14156,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Long.valueOf(getSuccess());
+        return getSuccess();
 
       case SEC:
         return getSec();
@@ -12841,7 +14216,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13350,7 +14737,7 @@
         return getTinfo();
 
       case UPDATE_ID:
-        return Long.valueOf(getUpdateID());
+        return getUpdateID();
 
       case KEY_EXTENT:
         return getKeyExtent();
@@ -13435,7 +14822,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_updateID = true;
+      list.add(present_updateID);
+      if (present_updateID)
+        list.add(updateID);
+
+      boolean present_keyExtent = true && (isSetKeyExtent());
+      list.add(present_keyExtent);
+      if (present_keyExtent)
+        list.add(keyExtent);
+
+      boolean present_mutations = true && (isSetMutations());
+      list.add(present_mutations);
+      if (present_mutations)
+        list.add(mutations);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13613,14 +15022,14 @@
             case 4: // MUTATIONS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list212 = iprot.readListBegin();
-                  struct.mutations = new ArrayList<org.apache.accumulo.core.data.thrift.TMutation>(_list212.size);
-                  for (int _i213 = 0; _i213 < _list212.size; ++_i213)
+                  org.apache.thrift.protocol.TList _list222 = iprot.readListBegin();
+                  struct.mutations = new ArrayList<org.apache.accumulo.core.data.thrift.TMutation>(_list222.size);
+                  org.apache.accumulo.core.data.thrift.TMutation _elem223;
+                  for (int _i224 = 0; _i224 < _list222.size; ++_i224)
                   {
-                    org.apache.accumulo.core.data.thrift.TMutation _elem214;
-                    _elem214 = new org.apache.accumulo.core.data.thrift.TMutation();
-                    _elem214.read(iprot);
-                    struct.mutations.add(_elem214);
+                    _elem223 = new org.apache.accumulo.core.data.thrift.TMutation();
+                    _elem223.read(iprot);
+                    struct.mutations.add(_elem223);
                   }
                   iprot.readListEnd();
                 }
@@ -13661,9 +15070,9 @@
           oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.mutations.size()));
-            for (org.apache.accumulo.core.data.thrift.TMutation _iter215 : struct.mutations)
+            for (org.apache.accumulo.core.data.thrift.TMutation _iter225 : struct.mutations)
             {
-              _iter215.write(oprot);
+              _iter225.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -13712,9 +15121,9 @@
         if (struct.isSetMutations()) {
           {
             oprot.writeI32(struct.mutations.size());
-            for (org.apache.accumulo.core.data.thrift.TMutation _iter216 : struct.mutations)
+            for (org.apache.accumulo.core.data.thrift.TMutation _iter226 : struct.mutations)
             {
-              _iter216.write(oprot);
+              _iter226.write(oprot);
             }
           }
         }
@@ -13740,14 +15149,14 @@
         }
         if (incoming.get(3)) {
           {
-            org.apache.thrift.protocol.TList _list217 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.mutations = new ArrayList<org.apache.accumulo.core.data.thrift.TMutation>(_list217.size);
-            for (int _i218 = 0; _i218 < _list217.size; ++_i218)
+            org.apache.thrift.protocol.TList _list227 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.mutations = new ArrayList<org.apache.accumulo.core.data.thrift.TMutation>(_list227.size);
+            org.apache.accumulo.core.data.thrift.TMutation _elem228;
+            for (int _i229 = 0; _i229 < _list227.size; ++_i229)
             {
-              org.apache.accumulo.core.data.thrift.TMutation _elem219;
-              _elem219 = new org.apache.accumulo.core.data.thrift.TMutation();
-              _elem219.read(iprot);
-              struct.mutations.add(_elem219);
+              _elem228 = new org.apache.accumulo.core.data.thrift.TMutation();
+              _elem228.read(iprot);
+              struct.mutations.add(_elem228);
             }
           }
           struct.setMutationsIsSet(true);
@@ -13956,7 +15365,7 @@
         return getTinfo();
 
       case UPDATE_ID:
-        return Long.valueOf(getUpdateID());
+        return getUpdateID();
 
       }
       throw new IllegalStateException();
@@ -14013,7 +15422,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_updateID = true;
+      list.add(present_updateID);
+      if (present_updateID)
+        list.add(updateID);
+
+      return list.hashCode();
     }
 
     @Override
@@ -14468,7 +15889,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_nssi = true && (isSetNssi());
+      list.add(present_nssi);
+      if (present_nssi)
+        list.add(nssi);
+
+      return list.hashCode();
     }
 
     @Override
@@ -15122,7 +16555,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_keyExtent = true && (isSetKeyExtent());
+      list.add(present_keyExtent);
+      if (present_keyExtent)
+        list.add(keyExtent);
+
+      boolean present_mutation = true && (isSetMutation());
+      list.add(present_mutation);
+      if (present_mutation)
+        list.add(mutation);
+
+      boolean present_durability = true && (isSetDurability());
+      list.add(present_durability);
+      if (present_durability)
+        list.add(durability.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -15335,7 +16795,7 @@
               break;
             case 5: // DURABILITY
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.durability = TDurability.findByValue(iprot.readI32());
+                struct.durability = org.apache.accumulo.core.tabletserver.thrift.TDurability.findByValue(iprot.readI32());
                 struct.setDurabilityIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -15457,7 +16917,7 @@
           struct.setMutationIsSet(true);
         }
         if (incoming.get(4)) {
-          struct.durability = TDurability.findByValue(iprot.readI32());
+          struct.durability = org.apache.accumulo.core.tabletserver.thrift.TDurability.findByValue(iprot.readI32());
           struct.setDurabilityIsSet(true);
         }
       }
@@ -15778,7 +17238,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_nste = true && (isSetNste());
+      list.add(present_nste);
+      if (present_nste)
+        list.add(nste);
+
+      boolean present_cve = true && (isSetCve());
+      list.add(present_cve);
+      if (present_cve)
+        list.add(cve);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16033,6 +17510,7 @@
     private static final org.apache.thrift.protocol.TField AUTHORIZATIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("authorizations", org.apache.thrift.protocol.TType.LIST, (short)3);
     private static final org.apache.thrift.protocol.TField TABLE_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("tableID", org.apache.thrift.protocol.TType.STRING, (short)4);
     private static final org.apache.thrift.protocol.TField DURABILITY_FIELD_DESC = new org.apache.thrift.protocol.TField("durability", org.apache.thrift.protocol.TType.I32, (short)5);
+    private static final org.apache.thrift.protocol.TField CLASS_LOADER_CONTEXT_FIELD_DESC = new org.apache.thrift.protocol.TField("classLoaderContext", org.apache.thrift.protocol.TType.STRING, (short)6);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -16049,6 +17527,7 @@
      * @see TDurability
      */
     public TDurability durability; // required
+    public String classLoaderContext; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
@@ -16060,7 +17539,8 @@
        * 
        * @see TDurability
        */
-      DURABILITY((short)5, "durability");
+      DURABILITY((short)5, "durability"),
+      CLASS_LOADER_CONTEXT((short)6, "classLoaderContext");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -16085,6 +17565,8 @@
             return TABLE_ID;
           case 5: // DURABILITY
             return DURABILITY;
+          case 6: // CLASS_LOADER_CONTEXT
+            return CLASS_LOADER_CONTEXT;
           default:
             return null;
         }
@@ -16139,6 +17621,8 @@
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
       tmpMap.put(_Fields.DURABILITY, new org.apache.thrift.meta_data.FieldMetaData("durability", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, TDurability.class)));
+      tmpMap.put(_Fields.CLASS_LOADER_CONTEXT, new org.apache.thrift.meta_data.FieldMetaData("classLoaderContext", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(startConditionalUpdate_args.class, metaDataMap);
     }
@@ -16151,7 +17635,8 @@
       org.apache.accumulo.core.security.thrift.TCredentials credentials,
       List<ByteBuffer> authorizations,
       String tableID,
-      TDurability durability)
+      TDurability durability,
+      String classLoaderContext)
     {
       this();
       this.tinfo = tinfo;
@@ -16159,6 +17644,7 @@
       this.authorizations = authorizations;
       this.tableID = tableID;
       this.durability = durability;
+      this.classLoaderContext = classLoaderContext;
     }
 
     /**
@@ -16181,6 +17667,9 @@
       if (other.isSetDurability()) {
         this.durability = other.durability;
       }
+      if (other.isSetClassLoaderContext()) {
+        this.classLoaderContext = other.classLoaderContext;
+      }
     }
 
     public startConditionalUpdate_args deepCopy() {
@@ -16194,6 +17683,7 @@
       this.authorizations = null;
       this.tableID = null;
       this.durability = null;
+      this.classLoaderContext = null;
     }
 
     public org.apache.accumulo.core.trace.thrift.TInfo getTinfo() {
@@ -16339,6 +17829,30 @@
       }
     }
 
+    public String getClassLoaderContext() {
+      return this.classLoaderContext;
+    }
+
+    public startConditionalUpdate_args setClassLoaderContext(String classLoaderContext) {
+      this.classLoaderContext = classLoaderContext;
+      return this;
+    }
+
+    public void unsetClassLoaderContext() {
+      this.classLoaderContext = null;
+    }
+
+    /** Returns true if field classLoaderContext is set (has been assigned a value) and false otherwise */
+    public boolean isSetClassLoaderContext() {
+      return this.classLoaderContext != null;
+    }
+
+    public void setClassLoaderContextIsSet(boolean value) {
+      if (!value) {
+        this.classLoaderContext = null;
+      }
+    }
+
     public void setFieldValue(_Fields field, Object value) {
       switch (field) {
       case TINFO:
@@ -16381,6 +17895,14 @@
         }
         break;
 
+      case CLASS_LOADER_CONTEXT:
+        if (value == null) {
+          unsetClassLoaderContext();
+        } else {
+          setClassLoaderContext((String)value);
+        }
+        break;
+
       }
     }
 
@@ -16401,6 +17923,9 @@
       case DURABILITY:
         return getDurability();
 
+      case CLASS_LOADER_CONTEXT:
+        return getClassLoaderContext();
+
       }
       throw new IllegalStateException();
     }
@@ -16422,6 +17947,8 @@
         return isSetTableID();
       case DURABILITY:
         return isSetDurability();
+      case CLASS_LOADER_CONTEXT:
+        return isSetClassLoaderContext();
       }
       throw new IllegalStateException();
     }
@@ -16484,12 +18011,53 @@
           return false;
       }
 
+      boolean this_present_classLoaderContext = true && this.isSetClassLoaderContext();
+      boolean that_present_classLoaderContext = true && that.isSetClassLoaderContext();
+      if (this_present_classLoaderContext || that_present_classLoaderContext) {
+        if (!(this_present_classLoaderContext && that_present_classLoaderContext))
+          return false;
+        if (!this.classLoaderContext.equals(that.classLoaderContext))
+          return false;
+      }
+
       return true;
     }
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_authorizations = true && (isSetAuthorizations());
+      list.add(present_authorizations);
+      if (present_authorizations)
+        list.add(authorizations);
+
+      boolean present_tableID = true && (isSetTableID());
+      list.add(present_tableID);
+      if (present_tableID)
+        list.add(tableID);
+
+      boolean present_durability = true && (isSetDurability());
+      list.add(present_durability);
+      if (present_durability)
+        list.add(durability.getValue());
+
+      boolean present_classLoaderContext = true && (isSetClassLoaderContext());
+      list.add(present_classLoaderContext);
+      if (present_classLoaderContext)
+        list.add(classLoaderContext);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16550,6 +18118,16 @@
           return lastComparison;
         }
       }
+      lastComparison = Boolean.valueOf(isSetClassLoaderContext()).compareTo(other.isSetClassLoaderContext());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetClassLoaderContext()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.classLoaderContext, other.classLoaderContext);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
       return 0;
     }
 
@@ -16590,7 +18168,7 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
       if (!first) sb.append(", ");
@@ -16609,6 +18187,14 @@
         sb.append(this.durability);
       }
       first = false;
+      if (!first) sb.append(", ");
+      sb.append("classLoaderContext:");
+      if (this.classLoaderContext == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.classLoaderContext);
+      }
+      first = false;
       sb.append(")");
       return sb.toString();
     }
@@ -16679,13 +18265,13 @@
             case 3: // AUTHORIZATIONS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list220 = iprot.readListBegin();
-                  struct.authorizations = new ArrayList<ByteBuffer>(_list220.size);
-                  for (int _i221 = 0; _i221 < _list220.size; ++_i221)
+                  org.apache.thrift.protocol.TList _list230 = iprot.readListBegin();
+                  struct.authorizations = new ArrayList<ByteBuffer>(_list230.size);
+                  ByteBuffer _elem231;
+                  for (int _i232 = 0; _i232 < _list230.size; ++_i232)
                   {
-                    ByteBuffer _elem222;
-                    _elem222 = iprot.readBinary();
-                    struct.authorizations.add(_elem222);
+                    _elem231 = iprot.readBinary();
+                    struct.authorizations.add(_elem231);
                   }
                   iprot.readListEnd();
                 }
@@ -16704,12 +18290,20 @@
               break;
             case 5: // DURABILITY
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.durability = TDurability.findByValue(iprot.readI32());
+                struct.durability = org.apache.accumulo.core.tabletserver.thrift.TDurability.findByValue(iprot.readI32());
                 struct.setDurabilityIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
+            case 6: // CLASS_LOADER_CONTEXT
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.classLoaderContext = iprot.readString();
+                struct.setClassLoaderContextIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
             default:
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
           }
@@ -16739,9 +18333,9 @@
           oprot.writeFieldBegin(AUTHORIZATIONS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.authorizations.size()));
-            for (ByteBuffer _iter223 : struct.authorizations)
+            for (ByteBuffer _iter233 : struct.authorizations)
             {
-              oprot.writeBinary(_iter223);
+              oprot.writeBinary(_iter233);
             }
             oprot.writeListEnd();
           }
@@ -16757,6 +18351,11 @@
           oprot.writeI32(struct.durability.getValue());
           oprot.writeFieldEnd();
         }
+        if (struct.classLoaderContext != null) {
+          oprot.writeFieldBegin(CLASS_LOADER_CONTEXT_FIELD_DESC);
+          oprot.writeString(struct.classLoaderContext);
+          oprot.writeFieldEnd();
+        }
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -16790,7 +18389,10 @@
         if (struct.isSetDurability()) {
           optionals.set(4);
         }
-        oprot.writeBitSet(optionals, 5);
+        if (struct.isSetClassLoaderContext()) {
+          optionals.set(5);
+        }
+        oprot.writeBitSet(optionals, 6);
         if (struct.isSetTinfo()) {
           struct.tinfo.write(oprot);
         }
@@ -16800,9 +18402,9 @@
         if (struct.isSetAuthorizations()) {
           {
             oprot.writeI32(struct.authorizations.size());
-            for (ByteBuffer _iter224 : struct.authorizations)
+            for (ByteBuffer _iter234 : struct.authorizations)
             {
-              oprot.writeBinary(_iter224);
+              oprot.writeBinary(_iter234);
             }
           }
         }
@@ -16812,12 +18414,15 @@
         if (struct.isSetDurability()) {
           oprot.writeI32(struct.durability.getValue());
         }
+        if (struct.isSetClassLoaderContext()) {
+          oprot.writeString(struct.classLoaderContext);
+        }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, startConditionalUpdate_args struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(5);
+        BitSet incoming = iprot.readBitSet(6);
         if (incoming.get(0)) {
           struct.tinfo = new org.apache.accumulo.core.trace.thrift.TInfo();
           struct.tinfo.read(iprot);
@@ -16830,13 +18435,13 @@
         }
         if (incoming.get(2)) {
           {
-            org.apache.thrift.protocol.TList _list225 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.authorizations = new ArrayList<ByteBuffer>(_list225.size);
-            for (int _i226 = 0; _i226 < _list225.size; ++_i226)
+            org.apache.thrift.protocol.TList _list235 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.authorizations = new ArrayList<ByteBuffer>(_list235.size);
+            ByteBuffer _elem236;
+            for (int _i237 = 0; _i237 < _list235.size; ++_i237)
             {
-              ByteBuffer _elem227;
-              _elem227 = iprot.readBinary();
-              struct.authorizations.add(_elem227);
+              _elem236 = iprot.readBinary();
+              struct.authorizations.add(_elem236);
             }
           }
           struct.setAuthorizationsIsSet(true);
@@ -16846,9 +18451,13 @@
           struct.setTableIDIsSet(true);
         }
         if (incoming.get(4)) {
-          struct.durability = TDurability.findByValue(iprot.readI32());
+          struct.durability = org.apache.accumulo.core.tabletserver.thrift.TDurability.findByValue(iprot.readI32());
           struct.setDurabilityIsSet(true);
         }
+        if (incoming.get(5)) {
+          struct.classLoaderContext = iprot.readString();
+          struct.setClassLoaderContextIsSet(true);
+        }
       }
     }
 
@@ -17108,7 +18717,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17632,7 +19253,7 @@
         return getTinfo();
 
       case SESS_ID:
-        return Long.valueOf(getSessID());
+        return getSessID();
 
       case MUTATIONS:
         return getMutations();
@@ -17717,7 +19338,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_sessID = true;
+      list.add(present_sessID);
+      if (present_sessID)
+        list.add(sessID);
+
+      boolean present_mutations = true && (isSetMutations());
+      list.add(present_mutations);
+      if (present_mutations)
+        list.add(mutations);
+
+      boolean present_symbols = true && (isSetSymbols());
+      list.add(present_symbols);
+      if (present_symbols)
+        list.add(symbols);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17883,27 +19526,27 @@
             case 3: // MUTATIONS
               if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
                 {
-                  org.apache.thrift.protocol.TMap _map228 = iprot.readMapBegin();
-                  struct.mutations = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TConditionalMutation>>(2*_map228.size);
-                  for (int _i229 = 0; _i229 < _map228.size; ++_i229)
+                  org.apache.thrift.protocol.TMap _map238 = iprot.readMapBegin();
+                  struct.mutations = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TConditionalMutation>>(2*_map238.size);
+                  org.apache.accumulo.core.data.thrift.TKeyExtent _key239;
+                  List<org.apache.accumulo.core.data.thrift.TConditionalMutation> _val240;
+                  for (int _i241 = 0; _i241 < _map238.size; ++_i241)
                   {
-                    org.apache.accumulo.core.data.thrift.TKeyExtent _key230;
-                    List<org.apache.accumulo.core.data.thrift.TConditionalMutation> _val231;
-                    _key230 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-                    _key230.read(iprot);
+                    _key239 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+                    _key239.read(iprot);
                     {
-                      org.apache.thrift.protocol.TList _list232 = iprot.readListBegin();
-                      _val231 = new ArrayList<org.apache.accumulo.core.data.thrift.TConditionalMutation>(_list232.size);
-                      for (int _i233 = 0; _i233 < _list232.size; ++_i233)
+                      org.apache.thrift.protocol.TList _list242 = iprot.readListBegin();
+                      _val240 = new ArrayList<org.apache.accumulo.core.data.thrift.TConditionalMutation>(_list242.size);
+                      org.apache.accumulo.core.data.thrift.TConditionalMutation _elem243;
+                      for (int _i244 = 0; _i244 < _list242.size; ++_i244)
                       {
-                        org.apache.accumulo.core.data.thrift.TConditionalMutation _elem234;
-                        _elem234 = new org.apache.accumulo.core.data.thrift.TConditionalMutation();
-                        _elem234.read(iprot);
-                        _val231.add(_elem234);
+                        _elem243 = new org.apache.accumulo.core.data.thrift.TConditionalMutation();
+                        _elem243.read(iprot);
+                        _val240.add(_elem243);
                       }
                       iprot.readListEnd();
                     }
-                    struct.mutations.put(_key230, _val231);
+                    struct.mutations.put(_key239, _val240);
                   }
                   iprot.readMapEnd();
                 }
@@ -17915,13 +19558,13 @@
             case 4: // SYMBOLS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list235 = iprot.readListBegin();
-                  struct.symbols = new ArrayList<String>(_list235.size);
-                  for (int _i236 = 0; _i236 < _list235.size; ++_i236)
+                  org.apache.thrift.protocol.TList _list245 = iprot.readListBegin();
+                  struct.symbols = new ArrayList<String>(_list245.size);
+                  String _elem246;
+                  for (int _i247 = 0; _i247 < _list245.size; ++_i247)
                   {
-                    String _elem237;
-                    _elem237 = iprot.readString();
-                    struct.symbols.add(_elem237);
+                    _elem246 = iprot.readString();
+                    struct.symbols.add(_elem246);
                   }
                   iprot.readListEnd();
                 }
@@ -17957,14 +19600,14 @@
           oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.LIST, struct.mutations.size()));
-            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TConditionalMutation>> _iter238 : struct.mutations.entrySet())
+            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TConditionalMutation>> _iter248 : struct.mutations.entrySet())
             {
-              _iter238.getKey().write(oprot);
+              _iter248.getKey().write(oprot);
               {
-                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, _iter238.getValue().size()));
-                for (org.apache.accumulo.core.data.thrift.TConditionalMutation _iter239 : _iter238.getValue())
+                oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, _iter248.getValue().size()));
+                for (org.apache.accumulo.core.data.thrift.TConditionalMutation _iter249 : _iter248.getValue())
                 {
-                  _iter239.write(oprot);
+                  _iter249.write(oprot);
                 }
                 oprot.writeListEnd();
               }
@@ -17977,9 +19620,9 @@
           oprot.writeFieldBegin(SYMBOLS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.symbols.size()));
-            for (String _iter240 : struct.symbols)
+            for (String _iter250 : struct.symbols)
             {
-              oprot.writeString(_iter240);
+              oprot.writeString(_iter250);
             }
             oprot.writeListEnd();
           }
@@ -18025,14 +19668,14 @@
         if (struct.isSetMutations()) {
           {
             oprot.writeI32(struct.mutations.size());
-            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TConditionalMutation>> _iter241 : struct.mutations.entrySet())
+            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, List<org.apache.accumulo.core.data.thrift.TConditionalMutation>> _iter251 : struct.mutations.entrySet())
             {
-              _iter241.getKey().write(oprot);
+              _iter251.getKey().write(oprot);
               {
-                oprot.writeI32(_iter241.getValue().size());
-                for (org.apache.accumulo.core.data.thrift.TConditionalMutation _iter242 : _iter241.getValue())
+                oprot.writeI32(_iter251.getValue().size());
+                for (org.apache.accumulo.core.data.thrift.TConditionalMutation _iter252 : _iter251.getValue())
                 {
-                  _iter242.write(oprot);
+                  _iter252.write(oprot);
                 }
               }
             }
@@ -18041,9 +19684,9 @@
         if (struct.isSetSymbols()) {
           {
             oprot.writeI32(struct.symbols.size());
-            for (String _iter243 : struct.symbols)
+            for (String _iter253 : struct.symbols)
             {
-              oprot.writeString(_iter243);
+              oprot.writeString(_iter253);
             }
           }
         }
@@ -18064,39 +19707,39 @@
         }
         if (incoming.get(2)) {
           {
-            org.apache.thrift.protocol.TMap _map244 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
-            struct.mutations = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TConditionalMutation>>(2*_map244.size);
-            for (int _i245 = 0; _i245 < _map244.size; ++_i245)
+            org.apache.thrift.protocol.TMap _map254 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
+            struct.mutations = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,List<org.apache.accumulo.core.data.thrift.TConditionalMutation>>(2*_map254.size);
+            org.apache.accumulo.core.data.thrift.TKeyExtent _key255;
+            List<org.apache.accumulo.core.data.thrift.TConditionalMutation> _val256;
+            for (int _i257 = 0; _i257 < _map254.size; ++_i257)
             {
-              org.apache.accumulo.core.data.thrift.TKeyExtent _key246;
-              List<org.apache.accumulo.core.data.thrift.TConditionalMutation> _val247;
-              _key246 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-              _key246.read(iprot);
+              _key255 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+              _key255.read(iprot);
               {
-                org.apache.thrift.protocol.TList _list248 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-                _val247 = new ArrayList<org.apache.accumulo.core.data.thrift.TConditionalMutation>(_list248.size);
-                for (int _i249 = 0; _i249 < _list248.size; ++_i249)
+                org.apache.thrift.protocol.TList _list258 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+                _val256 = new ArrayList<org.apache.accumulo.core.data.thrift.TConditionalMutation>(_list258.size);
+                org.apache.accumulo.core.data.thrift.TConditionalMutation _elem259;
+                for (int _i260 = 0; _i260 < _list258.size; ++_i260)
                 {
-                  org.apache.accumulo.core.data.thrift.TConditionalMutation _elem250;
-                  _elem250 = new org.apache.accumulo.core.data.thrift.TConditionalMutation();
-                  _elem250.read(iprot);
-                  _val247.add(_elem250);
+                  _elem259 = new org.apache.accumulo.core.data.thrift.TConditionalMutation();
+                  _elem259.read(iprot);
+                  _val256.add(_elem259);
                 }
               }
-              struct.mutations.put(_key246, _val247);
+              struct.mutations.put(_key255, _val256);
             }
           }
           struct.setMutationsIsSet(true);
         }
         if (incoming.get(3)) {
           {
-            org.apache.thrift.protocol.TList _list251 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.symbols = new ArrayList<String>(_list251.size);
-            for (int _i252 = 0; _i252 < _list251.size; ++_i252)
+            org.apache.thrift.protocol.TList _list261 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.symbols = new ArrayList<String>(_list261.size);
+            String _elem262;
+            for (int _i263 = 0; _i263 < _list261.size; ++_i263)
             {
-              String _elem253;
-              _elem253 = iprot.readString();
-              struct.symbols.add(_elem253);
+              _elem262 = iprot.readString();
+              struct.symbols.add(_elem262);
             }
           }
           struct.setSymbolsIsSet(true);
@@ -18380,7 +20023,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_nssi = true && (isSetNssi());
+      list.add(present_nssi);
+      if (present_nssi)
+        list.add(nssi);
+
+      return list.hashCode();
     }
 
     @Override
@@ -18492,14 +20147,14 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list254 = iprot.readListBegin();
-                  struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TCMResult>(_list254.size);
-                  for (int _i255 = 0; _i255 < _list254.size; ++_i255)
+                  org.apache.thrift.protocol.TList _list264 = iprot.readListBegin();
+                  struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TCMResult>(_list264.size);
+                  org.apache.accumulo.core.data.thrift.TCMResult _elem265;
+                  for (int _i266 = 0; _i266 < _list264.size; ++_i266)
                   {
-                    org.apache.accumulo.core.data.thrift.TCMResult _elem256;
-                    _elem256 = new org.apache.accumulo.core.data.thrift.TCMResult();
-                    _elem256.read(iprot);
-                    struct.success.add(_elem256);
+                    _elem265 = new org.apache.accumulo.core.data.thrift.TCMResult();
+                    _elem265.read(iprot);
+                    struct.success.add(_elem265);
                   }
                   iprot.readListEnd();
                 }
@@ -18536,9 +20191,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (org.apache.accumulo.core.data.thrift.TCMResult _iter257 : struct.success)
+            for (org.apache.accumulo.core.data.thrift.TCMResult _iter267 : struct.success)
             {
-              _iter257.write(oprot);
+              _iter267.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -18577,9 +20232,9 @@
         if (struct.isSetSuccess()) {
           {
             oprot.writeI32(struct.success.size());
-            for (org.apache.accumulo.core.data.thrift.TCMResult _iter258 : struct.success)
+            for (org.apache.accumulo.core.data.thrift.TCMResult _iter268 : struct.success)
             {
-              _iter258.write(oprot);
+              _iter268.write(oprot);
             }
           }
         }
@@ -18594,14 +20249,14 @@
         BitSet incoming = iprot.readBitSet(2);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list259 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TCMResult>(_list259.size);
-            for (int _i260 = 0; _i260 < _list259.size; ++_i260)
+            org.apache.thrift.protocol.TList _list269 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TCMResult>(_list269.size);
+            org.apache.accumulo.core.data.thrift.TCMResult _elem270;
+            for (int _i271 = 0; _i271 < _list269.size; ++_i271)
             {
-              org.apache.accumulo.core.data.thrift.TCMResult _elem261;
-              _elem261 = new org.apache.accumulo.core.data.thrift.TCMResult();
-              _elem261.read(iprot);
-              struct.success.add(_elem261);
+              _elem270 = new org.apache.accumulo.core.data.thrift.TCMResult();
+              _elem270.read(iprot);
+              struct.success.add(_elem270);
             }
           }
           struct.setSuccessIsSet(true);
@@ -18815,7 +20470,7 @@
         return getTinfo();
 
       case SESS_ID:
-        return Long.valueOf(getSessID());
+        return getSessID();
 
       }
       throw new IllegalStateException();
@@ -18872,7 +20527,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_sessID = true;
+      list.add(present_sessID);
+      if (present_sessID)
+        list.add(sessID);
+
+      return list.hashCode();
     }
 
     @Override
@@ -19203,7 +20870,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -19518,7 +21187,7 @@
         return getTinfo();
 
       case SESS_ID:
-        return Long.valueOf(getSessID());
+        return getSessID();
 
       }
       throw new IllegalStateException();
@@ -19575,7 +21244,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_sessID = true;
+      list.add(present_sessID);
+      if (present_sessID)
+        list.add(sessID);
+
+      return list.hashCode();
     }
 
     @Override
@@ -20124,13 +21805,13 @@
         return getCredentials();
 
       case TID:
-        return Long.valueOf(getTid());
+        return getTid();
 
       case FILES:
         return getFiles();
 
       case SET_TIME:
-        return Boolean.valueOf(isSetTime());
+        return isSetTime();
 
       }
       throw new IllegalStateException();
@@ -20220,7 +21901,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tid = true;
+      list.add(present_tid);
+      if (present_tid)
+        list.add(tid);
+
+      boolean present_files = true && (isSetFiles());
+      list.add(present_files);
+      if (present_files)
+        list.add(files);
+
+      boolean present_setTime = true;
+      list.add(present_setTime);
+      if (present_setTime)
+        list.add(setTime);
+
+      return list.hashCode();
     }
 
     @Override
@@ -20412,29 +22120,29 @@
             case 2: // FILES
               if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
                 {
-                  org.apache.thrift.protocol.TMap _map262 = iprot.readMapBegin();
-                  struct.files = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>>(2*_map262.size);
-                  for (int _i263 = 0; _i263 < _map262.size; ++_i263)
+                  org.apache.thrift.protocol.TMap _map272 = iprot.readMapBegin();
+                  struct.files = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>>(2*_map272.size);
+                  org.apache.accumulo.core.data.thrift.TKeyExtent _key273;
+                  Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo> _val274;
+                  for (int _i275 = 0; _i275 < _map272.size; ++_i275)
                   {
-                    org.apache.accumulo.core.data.thrift.TKeyExtent _key264;
-                    Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo> _val265;
-                    _key264 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-                    _key264.read(iprot);
+                    _key273 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+                    _key273.read(iprot);
                     {
-                      org.apache.thrift.protocol.TMap _map266 = iprot.readMapBegin();
-                      _val265 = new HashMap<String,org.apache.accumulo.core.data.thrift.MapFileInfo>(2*_map266.size);
-                      for (int _i267 = 0; _i267 < _map266.size; ++_i267)
+                      org.apache.thrift.protocol.TMap _map276 = iprot.readMapBegin();
+                      _val274 = new HashMap<String,org.apache.accumulo.core.data.thrift.MapFileInfo>(2*_map276.size);
+                      String _key277;
+                      org.apache.accumulo.core.data.thrift.MapFileInfo _val278;
+                      for (int _i279 = 0; _i279 < _map276.size; ++_i279)
                       {
-                        String _key268;
-                        org.apache.accumulo.core.data.thrift.MapFileInfo _val269;
-                        _key268 = iprot.readString();
-                        _val269 = new org.apache.accumulo.core.data.thrift.MapFileInfo();
-                        _val269.read(iprot);
-                        _val265.put(_key268, _val269);
+                        _key277 = iprot.readString();
+                        _val278 = new org.apache.accumulo.core.data.thrift.MapFileInfo();
+                        _val278.read(iprot);
+                        _val274.put(_key277, _val278);
                       }
                       iprot.readMapEnd();
                     }
-                    struct.files.put(_key264, _val265);
+                    struct.files.put(_key273, _val274);
                   }
                   iprot.readMapEnd();
                 }
@@ -20475,15 +22183,15 @@
           oprot.writeFieldBegin(FILES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.MAP, struct.files.size()));
-            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>> _iter270 : struct.files.entrySet())
+            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>> _iter280 : struct.files.entrySet())
             {
-              _iter270.getKey().write(oprot);
+              _iter280.getKey().write(oprot);
               {
-                oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, _iter270.getValue().size()));
-                for (Map.Entry<String, org.apache.accumulo.core.data.thrift.MapFileInfo> _iter271 : _iter270.getValue().entrySet())
+                oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, _iter280.getValue().size()));
+                for (Map.Entry<String, org.apache.accumulo.core.data.thrift.MapFileInfo> _iter281 : _iter280.getValue().entrySet())
                 {
-                  oprot.writeString(_iter271.getKey());
-                  _iter271.getValue().write(oprot);
+                  oprot.writeString(_iter281.getKey());
+                  _iter281.getValue().write(oprot);
                 }
                 oprot.writeMapEnd();
               }
@@ -20549,15 +22257,15 @@
         if (struct.isSetFiles()) {
           {
             oprot.writeI32(struct.files.size());
-            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>> _iter272 : struct.files.entrySet())
+            for (Map.Entry<org.apache.accumulo.core.data.thrift.TKeyExtent, Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>> _iter282 : struct.files.entrySet())
             {
-              _iter272.getKey().write(oprot);
+              _iter282.getKey().write(oprot);
               {
-                oprot.writeI32(_iter272.getValue().size());
-                for (Map.Entry<String, org.apache.accumulo.core.data.thrift.MapFileInfo> _iter273 : _iter272.getValue().entrySet())
+                oprot.writeI32(_iter282.getValue().size());
+                for (Map.Entry<String, org.apache.accumulo.core.data.thrift.MapFileInfo> _iter283 : _iter282.getValue().entrySet())
                 {
-                  oprot.writeString(_iter273.getKey());
-                  _iter273.getValue().write(oprot);
+                  oprot.writeString(_iter283.getKey());
+                  _iter283.getValue().write(oprot);
                 }
               }
             }
@@ -20588,28 +22296,28 @@
         }
         if (incoming.get(3)) {
           {
-            org.apache.thrift.protocol.TMap _map274 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
-            struct.files = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>>(2*_map274.size);
-            for (int _i275 = 0; _i275 < _map274.size; ++_i275)
+            org.apache.thrift.protocol.TMap _map284 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRUCT, org.apache.thrift.protocol.TType.MAP, iprot.readI32());
+            struct.files = new HashMap<org.apache.accumulo.core.data.thrift.TKeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>>(2*_map284.size);
+            org.apache.accumulo.core.data.thrift.TKeyExtent _key285;
+            Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo> _val286;
+            for (int _i287 = 0; _i287 < _map284.size; ++_i287)
             {
-              org.apache.accumulo.core.data.thrift.TKeyExtent _key276;
-              Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo> _val277;
-              _key276 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-              _key276.read(iprot);
+              _key285 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+              _key285.read(iprot);
               {
-                org.apache.thrift.protocol.TMap _map278 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-                _val277 = new HashMap<String,org.apache.accumulo.core.data.thrift.MapFileInfo>(2*_map278.size);
-                for (int _i279 = 0; _i279 < _map278.size; ++_i279)
+                org.apache.thrift.protocol.TMap _map288 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+                _val286 = new HashMap<String,org.apache.accumulo.core.data.thrift.MapFileInfo>(2*_map288.size);
+                String _key289;
+                org.apache.accumulo.core.data.thrift.MapFileInfo _val290;
+                for (int _i291 = 0; _i291 < _map288.size; ++_i291)
                 {
-                  String _key280;
-                  org.apache.accumulo.core.data.thrift.MapFileInfo _val281;
-                  _key280 = iprot.readString();
-                  _val281 = new org.apache.accumulo.core.data.thrift.MapFileInfo();
-                  _val281.read(iprot);
-                  _val277.put(_key280, _val281);
+                  _key289 = iprot.readString();
+                  _val290 = new org.apache.accumulo.core.data.thrift.MapFileInfo();
+                  _val290.read(iprot);
+                  _val286.put(_key289, _val290);
                 }
               }
-              struct.files.put(_key276, _val277);
+              struct.files.put(_key285, _val286);
             }
           }
           struct.setFilesIsSet(true);
@@ -20897,7 +22605,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -21009,14 +22729,14 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list282 = iprot.readListBegin();
-                  struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list282.size);
-                  for (int _i283 = 0; _i283 < _list282.size; ++_i283)
+                  org.apache.thrift.protocol.TList _list292 = iprot.readListBegin();
+                  struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list292.size);
+                  org.apache.accumulo.core.data.thrift.TKeyExtent _elem293;
+                  for (int _i294 = 0; _i294 < _list292.size; ++_i294)
                   {
-                    org.apache.accumulo.core.data.thrift.TKeyExtent _elem284;
-                    _elem284 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-                    _elem284.read(iprot);
-                    struct.success.add(_elem284);
+                    _elem293 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+                    _elem293.read(iprot);
+                    struct.success.add(_elem293);
                   }
                   iprot.readListEnd();
                 }
@@ -21053,9 +22773,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter285 : struct.success)
+            for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter295 : struct.success)
             {
-              _iter285.write(oprot);
+              _iter295.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -21094,9 +22814,9 @@
         if (struct.isSetSuccess()) {
           {
             oprot.writeI32(struct.success.size());
-            for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter286 : struct.success)
+            for (org.apache.accumulo.core.data.thrift.TKeyExtent _iter296 : struct.success)
             {
-              _iter286.write(oprot);
+              _iter296.write(oprot);
             }
           }
         }
@@ -21111,14 +22831,14 @@
         BitSet incoming = iprot.readBitSet(2);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list287 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list287.size);
-            for (int _i288 = 0; _i288 < _list287.size; ++_i288)
+            org.apache.thrift.protocol.TList _list297 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new ArrayList<org.apache.accumulo.core.data.thrift.TKeyExtent>(_list297.size);
+            org.apache.accumulo.core.data.thrift.TKeyExtent _elem298;
+            for (int _i299 = 0; _i299 < _list297.size; ++_i299)
             {
-              org.apache.accumulo.core.data.thrift.TKeyExtent _elem289;
-              _elem289 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
-              _elem289.read(iprot);
-              struct.success.add(_elem289);
+              _elem298 = new org.apache.accumulo.core.data.thrift.TKeyExtent();
+              _elem298.read(iprot);
+              struct.success.add(_elem298);
             }
           }
           struct.setSuccessIsSet(true);
@@ -21248,7 +22968,7 @@
       this.tinfo = tinfo;
       this.credentials = credentials;
       this.extent = extent;
-      this.splitPoint = splitPoint;
+      this.splitPoint = org.apache.thrift.TBaseHelper.copyBinary(splitPoint);
     }
 
     /**
@@ -21266,7 +22986,6 @@
       }
       if (other.isSetSplitPoint()) {
         this.splitPoint = org.apache.thrift.TBaseHelper.copyBinary(other.splitPoint);
-;
       }
     }
 
@@ -21360,16 +23079,16 @@
     }
 
     public ByteBuffer bufferForSplitPoint() {
-      return splitPoint;
+      return org.apache.thrift.TBaseHelper.copyBinary(splitPoint);
     }
 
     public splitTablet_args setSplitPoint(byte[] splitPoint) {
-      setSplitPoint(splitPoint == null ? (ByteBuffer)null : ByteBuffer.wrap(splitPoint));
+      this.splitPoint = splitPoint == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(splitPoint, splitPoint.length));
       return this;
     }
 
     public splitTablet_args setSplitPoint(ByteBuffer splitPoint) {
-      this.splitPoint = splitPoint;
+      this.splitPoint = org.apache.thrift.TBaseHelper.copyBinary(splitPoint);
       return this;
     }
 
@@ -21516,7 +23235,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_extent = true && (isSetExtent());
+      list.add(present_extent);
+      if (present_extent)
+        list.add(extent);
+
+      boolean present_splitPoint = true && (isSetSplitPoint());
+      list.add(present_splitPoint);
+      if (present_splitPoint)
+        list.add(splitPoint);
+
+      return list.hashCode();
     }
 
     @Override
@@ -22067,7 +23808,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      boolean present_nste = true && (isSetNste());
+      list.add(present_nste);
+      if (present_nste)
+        list.add(nste);
+
+      return list.hashCode();
     }
 
     @Override
@@ -22643,7 +24396,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      boolean present_extent = true && (isSetExtent());
+      list.add(present_extent);
+      if (present_extent)
+        list.add(extent);
+
+      return list.hashCode();
     }
 
     @Override
@@ -22947,7 +24722,8 @@
     private static final org.apache.thrift.protocol.TField CREDENTIALS_FIELD_DESC = new org.apache.thrift.protocol.TField("credentials", org.apache.thrift.protocol.TType.STRUCT, (short)1);
     private static final org.apache.thrift.protocol.TField LOCK_FIELD_DESC = new org.apache.thrift.protocol.TField("lock", org.apache.thrift.protocol.TType.STRING, (short)4);
     private static final org.apache.thrift.protocol.TField EXTENT_FIELD_DESC = new org.apache.thrift.protocol.TField("extent", org.apache.thrift.protocol.TType.STRUCT, (short)2);
-    private static final org.apache.thrift.protocol.TField SAVE_FIELD_DESC = new org.apache.thrift.protocol.TField("save", org.apache.thrift.protocol.TType.BOOL, (short)3);
+    private static final org.apache.thrift.protocol.TField GOAL_FIELD_DESC = new org.apache.thrift.protocol.TField("goal", org.apache.thrift.protocol.TType.I32, (short)6);
+    private static final org.apache.thrift.protocol.TField REQUEST_TIME_FIELD_DESC = new org.apache.thrift.protocol.TField("requestTime", org.apache.thrift.protocol.TType.I64, (short)7);
 
     private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
     static {
@@ -22959,7 +24735,12 @@
     public org.apache.accumulo.core.security.thrift.TCredentials credentials; // required
     public String lock; // required
     public org.apache.accumulo.core.data.thrift.TKeyExtent extent; // required
-    public boolean save; // required
+    /**
+     * 
+     * @see TUnloadTabletGoal
+     */
+    public TUnloadTabletGoal goal; // required
+    public long requestTime; // required
 
     /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
     public enum _Fields implements org.apache.thrift.TFieldIdEnum {
@@ -22967,7 +24748,12 @@
       CREDENTIALS((short)1, "credentials"),
       LOCK((short)4, "lock"),
       EXTENT((short)2, "extent"),
-      SAVE((short)3, "save");
+      /**
+       * 
+       * @see TUnloadTabletGoal
+       */
+      GOAL((short)6, "goal"),
+      REQUEST_TIME((short)7, "requestTime");
 
       private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
 
@@ -22990,8 +24776,10 @@
             return LOCK;
           case 2: // EXTENT
             return EXTENT;
-          case 3: // SAVE
-            return SAVE;
+          case 6: // GOAL
+            return GOAL;
+          case 7: // REQUEST_TIME
+            return REQUEST_TIME;
           default:
             return null;
         }
@@ -23032,7 +24820,7 @@
     }
 
     // isset id assignments
-    private static final int __SAVE_ISSET_ID = 0;
+    private static final int __REQUESTTIME_ISSET_ID = 0;
     private byte __isset_bitfield = 0;
     public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
     static {
@@ -23045,8 +24833,10 @@
           new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
       tmpMap.put(_Fields.EXTENT, new org.apache.thrift.meta_data.FieldMetaData("extent", org.apache.thrift.TFieldRequirementType.DEFAULT, 
           new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, org.apache.accumulo.core.data.thrift.TKeyExtent.class)));
-      tmpMap.put(_Fields.SAVE, new org.apache.thrift.meta_data.FieldMetaData("save", org.apache.thrift.TFieldRequirementType.DEFAULT, 
-          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.BOOL)));
+      tmpMap.put(_Fields.GOAL, new org.apache.thrift.meta_data.FieldMetaData("goal", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, TUnloadTabletGoal.class)));
+      tmpMap.put(_Fields.REQUEST_TIME, new org.apache.thrift.meta_data.FieldMetaData("requestTime", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I64)));
       metaDataMap = Collections.unmodifiableMap(tmpMap);
       org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(unloadTablet_args.class, metaDataMap);
     }
@@ -23059,15 +24849,17 @@
       org.apache.accumulo.core.security.thrift.TCredentials credentials,
       String lock,
       org.apache.accumulo.core.data.thrift.TKeyExtent extent,
-      boolean save)
+      TUnloadTabletGoal goal,
+      long requestTime)
     {
       this();
       this.tinfo = tinfo;
       this.credentials = credentials;
       this.lock = lock;
       this.extent = extent;
-      this.save = save;
-      setSaveIsSet(true);
+      this.goal = goal;
+      this.requestTime = requestTime;
+      setRequestTimeIsSet(true);
     }
 
     /**
@@ -23087,7 +24879,10 @@
       if (other.isSetExtent()) {
         this.extent = new org.apache.accumulo.core.data.thrift.TKeyExtent(other.extent);
       }
-      this.save = other.save;
+      if (other.isSetGoal()) {
+        this.goal = other.goal;
+      }
+      this.requestTime = other.requestTime;
     }
 
     public unloadTablet_args deepCopy() {
@@ -23100,8 +24895,9 @@
       this.credentials = null;
       this.lock = null;
       this.extent = null;
-      setSaveIsSet(false);
-      this.save = false;
+      this.goal = null;
+      setRequestTimeIsSet(false);
+      this.requestTime = 0;
     }
 
     public org.apache.accumulo.core.trace.thrift.TInfo getTinfo() {
@@ -23200,27 +24996,59 @@
       }
     }
 
-    public boolean isSave() {
-      return this.save;
+    /**
+     * 
+     * @see TUnloadTabletGoal
+     */
+    public TUnloadTabletGoal getGoal() {
+      return this.goal;
     }
 
-    public unloadTablet_args setSave(boolean save) {
-      this.save = save;
-      setSaveIsSet(true);
+    /**
+     * 
+     * @see TUnloadTabletGoal
+     */
+    public unloadTablet_args setGoal(TUnloadTabletGoal goal) {
+      this.goal = goal;
       return this;
     }
 
-    public void unsetSave() {
-      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __SAVE_ISSET_ID);
+    public void unsetGoal() {
+      this.goal = null;
     }
 
-    /** Returns true if field save is set (has been assigned a value) and false otherwise */
-    public boolean isSetSave() {
-      return EncodingUtils.testBit(__isset_bitfield, __SAVE_ISSET_ID);
+    /** Returns true if field goal is set (has been assigned a value) and false otherwise */
+    public boolean isSetGoal() {
+      return this.goal != null;
     }
 
-    public void setSaveIsSet(boolean value) {
-      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __SAVE_ISSET_ID, value);
+    public void setGoalIsSet(boolean value) {
+      if (!value) {
+        this.goal = null;
+      }
+    }
+
+    public long getRequestTime() {
+      return this.requestTime;
+    }
+
+    public unloadTablet_args setRequestTime(long requestTime) {
+      this.requestTime = requestTime;
+      setRequestTimeIsSet(true);
+      return this;
+    }
+
+    public void unsetRequestTime() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __REQUESTTIME_ISSET_ID);
+    }
+
+    /** Returns true if field requestTime is set (has been assigned a value) and false otherwise */
+    public boolean isSetRequestTime() {
+      return EncodingUtils.testBit(__isset_bitfield, __REQUESTTIME_ISSET_ID);
+    }
+
+    public void setRequestTimeIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __REQUESTTIME_ISSET_ID, value);
     }
 
     public void setFieldValue(_Fields field, Object value) {
@@ -23257,11 +25085,19 @@
         }
         break;
 
-      case SAVE:
+      case GOAL:
         if (value == null) {
-          unsetSave();
+          unsetGoal();
         } else {
-          setSave((Boolean)value);
+          setGoal((TUnloadTabletGoal)value);
+        }
+        break;
+
+      case REQUEST_TIME:
+        if (value == null) {
+          unsetRequestTime();
+        } else {
+          setRequestTime((Long)value);
         }
         break;
 
@@ -23282,8 +25118,11 @@
       case EXTENT:
         return getExtent();
 
-      case SAVE:
-        return Boolean.valueOf(isSave());
+      case GOAL:
+        return getGoal();
+
+      case REQUEST_TIME:
+        return getRequestTime();
 
       }
       throw new IllegalStateException();
@@ -23304,8 +25143,10 @@
         return isSetLock();
       case EXTENT:
         return isSetExtent();
-      case SAVE:
-        return isSetSave();
+      case GOAL:
+        return isSetGoal();
+      case REQUEST_TIME:
+        return isSetRequestTime();
       }
       throw new IllegalStateException();
     }
@@ -23359,12 +25200,21 @@
           return false;
       }
 
-      boolean this_present_save = true;
-      boolean that_present_save = true;
-      if (this_present_save || that_present_save) {
-        if (!(this_present_save && that_present_save))
+      boolean this_present_goal = true && this.isSetGoal();
+      boolean that_present_goal = true && that.isSetGoal();
+      if (this_present_goal || that_present_goal) {
+        if (!(this_present_goal && that_present_goal))
           return false;
-        if (this.save != that.save)
+        if (!this.goal.equals(that.goal))
+          return false;
+      }
+
+      boolean this_present_requestTime = true;
+      boolean that_present_requestTime = true;
+      if (this_present_requestTime || that_present_requestTime) {
+        if (!(this_present_requestTime && that_present_requestTime))
+          return false;
+        if (this.requestTime != that.requestTime)
           return false;
       }
 
@@ -23373,7 +25223,39 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      boolean present_extent = true && (isSetExtent());
+      list.add(present_extent);
+      if (present_extent)
+        list.add(extent);
+
+      boolean present_goal = true && (isSetGoal());
+      list.add(present_goal);
+      if (present_goal)
+        list.add(goal.getValue());
+
+      boolean present_requestTime = true;
+      list.add(present_requestTime);
+      if (present_requestTime)
+        list.add(requestTime);
+
+      return list.hashCode();
     }
 
     @Override
@@ -23424,12 +25306,22 @@
           return lastComparison;
         }
       }
-      lastComparison = Boolean.valueOf(isSetSave()).compareTo(other.isSetSave());
+      lastComparison = Boolean.valueOf(isSetGoal()).compareTo(other.isSetGoal());
       if (lastComparison != 0) {
         return lastComparison;
       }
-      if (isSetSave()) {
-        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.save, other.save);
+      if (isSetGoal()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.goal, other.goal);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetRequestTime()).compareTo(other.isSetRequestTime());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetRequestTime()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.requestTime, other.requestTime);
         if (lastComparison != 0) {
           return lastComparison;
         }
@@ -23486,8 +25378,16 @@
       }
       first = false;
       if (!first) sb.append(", ");
-      sb.append("save:");
-      sb.append(this.save);
+      sb.append("goal:");
+      if (this.goal == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.goal);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("requestTime:");
+      sb.append(this.requestTime);
       first = false;
       sb.append(")");
       return sb.toString();
@@ -23578,10 +25478,18 @@
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
               break;
-            case 3: // SAVE
-              if (schemeField.type == org.apache.thrift.protocol.TType.BOOL) {
-                struct.save = iprot.readBool();
-                struct.setSaveIsSet(true);
+            case 6: // GOAL
+              if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+                struct.goal = org.apache.accumulo.core.tabletserver.thrift.TUnloadTabletGoal.findByValue(iprot.readI32());
+                struct.setGoalIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 7: // REQUEST_TIME
+              if (schemeField.type == org.apache.thrift.protocol.TType.I64) {
+                struct.requestTime = iprot.readI64();
+                struct.setRequestTimeIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
               }
@@ -23611,9 +25519,6 @@
           struct.extent.write(oprot);
           oprot.writeFieldEnd();
         }
-        oprot.writeFieldBegin(SAVE_FIELD_DESC);
-        oprot.writeBool(struct.save);
-        oprot.writeFieldEnd();
         if (struct.lock != null) {
           oprot.writeFieldBegin(LOCK_FIELD_DESC);
           oprot.writeString(struct.lock);
@@ -23624,6 +25529,14 @@
           struct.tinfo.write(oprot);
           oprot.writeFieldEnd();
         }
+        if (struct.goal != null) {
+          oprot.writeFieldBegin(GOAL_FIELD_DESC);
+          oprot.writeI32(struct.goal.getValue());
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldBegin(REQUEST_TIME_FIELD_DESC);
+        oprot.writeI64(struct.requestTime);
+        oprot.writeFieldEnd();
         oprot.writeFieldStop();
         oprot.writeStructEnd();
       }
@@ -23654,10 +25567,13 @@
         if (struct.isSetExtent()) {
           optionals.set(3);
         }
-        if (struct.isSetSave()) {
+        if (struct.isSetGoal()) {
           optionals.set(4);
         }
-        oprot.writeBitSet(optionals, 5);
+        if (struct.isSetRequestTime()) {
+          optionals.set(5);
+        }
+        oprot.writeBitSet(optionals, 6);
         if (struct.isSetTinfo()) {
           struct.tinfo.write(oprot);
         }
@@ -23670,15 +25586,18 @@
         if (struct.isSetExtent()) {
           struct.extent.write(oprot);
         }
-        if (struct.isSetSave()) {
-          oprot.writeBool(struct.save);
+        if (struct.isSetGoal()) {
+          oprot.writeI32(struct.goal.getValue());
+        }
+        if (struct.isSetRequestTime()) {
+          oprot.writeI64(struct.requestTime);
         }
       }
 
       @Override
       public void read(org.apache.thrift.protocol.TProtocol prot, unloadTablet_args struct) throws org.apache.thrift.TException {
         TTupleProtocol iprot = (TTupleProtocol) prot;
-        BitSet incoming = iprot.readBitSet(5);
+        BitSet incoming = iprot.readBitSet(6);
         if (incoming.get(0)) {
           struct.tinfo = new org.apache.accumulo.core.trace.thrift.TInfo();
           struct.tinfo.read(iprot);
@@ -23699,8 +25618,12 @@
           struct.setExtentIsSet(true);
         }
         if (incoming.get(4)) {
-          struct.save = iprot.readBool();
-          struct.setSaveIsSet(true);
+          struct.goal = org.apache.accumulo.core.tabletserver.thrift.TUnloadTabletGoal.findByValue(iprot.readI32());
+          struct.setGoalIsSet(true);
+        }
+        if (incoming.get(5)) {
+          struct.requestTime = iprot.readI64();
+          struct.setRequestTimeIsSet(true);
         }
       }
     }
@@ -23839,8 +25762,8 @@
       this.credentials = credentials;
       this.lock = lock;
       this.tableId = tableId;
-      this.startRow = startRow;
-      this.endRow = endRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     /**
@@ -23861,11 +25784,9 @@
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
     }
 
@@ -23985,16 +25906,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public flush_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public flush_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -24019,16 +25940,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public flush_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public flush_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -24219,7 +26140,39 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      boolean present_tableId = true && (isSetTableId());
+      list.add(present_tableId);
+      if (present_tableId)
+        list.add(tableId);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      return list.hashCode();
     }
 
     @Override
@@ -24965,7 +26918,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      boolean present_extent = true && (isSetExtent());
+      list.add(present_extent);
+      if (present_extent)
+        list.add(extent);
+
+      return list.hashCode();
     }
 
     @Override
@@ -25634,7 +27609,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      boolean present_extent = true && (isSetExtent());
+      list.add(present_extent);
+      if (present_extent)
+        list.add(extent);
+
+      return list.hashCode();
     }
 
     @Override
@@ -26063,8 +28060,8 @@
       this.credentials = credentials;
       this.lock = lock;
       this.tableId = tableId;
-      this.startRow = startRow;
-      this.endRow = endRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     /**
@@ -26085,11 +28082,9 @@
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
     }
 
@@ -26209,16 +28204,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public compact_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public compact_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -26243,16 +28238,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public compact_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public compact_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -26443,7 +28438,39 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      boolean present_tableId = true && (isSetTableId());
+      list.add(present_tableId);
+      if (present_tableId)
+        list.add(tableId);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      return list.hashCode();
     }
 
     @Override
@@ -27071,7 +29098,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -27535,7 +29574,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -28055,7 +30106,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_tableId = true && (isSetTableId());
+      list.add(present_tableId);
+      if (present_tableId)
+        list.add(tableId);
+
+      return list.hashCode();
     }
 
     @Override
@@ -28580,7 +30648,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -28692,14 +30772,14 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list290 = iprot.readListBegin();
-                  struct.success = new ArrayList<TabletStats>(_list290.size);
-                  for (int _i291 = 0; _i291 < _list290.size; ++_i291)
+                  org.apache.thrift.protocol.TList _list300 = iprot.readListBegin();
+                  struct.success = new ArrayList<TabletStats>(_list300.size);
+                  TabletStats _elem301;
+                  for (int _i302 = 0; _i302 < _list300.size; ++_i302)
                   {
-                    TabletStats _elem292;
-                    _elem292 = new TabletStats();
-                    _elem292.read(iprot);
-                    struct.success.add(_elem292);
+                    _elem301 = new TabletStats();
+                    _elem301.read(iprot);
+                    struct.success.add(_elem301);
                   }
                   iprot.readListEnd();
                 }
@@ -28736,9 +30816,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (TabletStats _iter293 : struct.success)
+            for (TabletStats _iter303 : struct.success)
             {
-              _iter293.write(oprot);
+              _iter303.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -28777,9 +30857,9 @@
         if (struct.isSetSuccess()) {
           {
             oprot.writeI32(struct.success.size());
-            for (TabletStats _iter294 : struct.success)
+            for (TabletStats _iter304 : struct.success)
             {
-              _iter294.write(oprot);
+              _iter304.write(oprot);
             }
           }
         }
@@ -28794,14 +30874,14 @@
         BitSet incoming = iprot.readBitSet(2);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list295 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new ArrayList<TabletStats>(_list295.size);
-            for (int _i296 = 0; _i296 < _list295.size; ++_i296)
+            org.apache.thrift.protocol.TList _list305 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new ArrayList<TabletStats>(_list305.size);
+            TabletStats _elem306;
+            for (int _i307 = 0; _i307 < _list305.size; ++_i307)
             {
-              TabletStats _elem297;
-              _elem297 = new TabletStats();
-              _elem297.read(iprot);
-              struct.success.add(_elem297);
+              _elem306 = new TabletStats();
+              _elem306.read(iprot);
+              struct.success.add(_elem306);
             }
           }
           struct.setSuccessIsSet(true);
@@ -29070,7 +31150,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -29534,7 +31626,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -30054,7 +32158,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      return list.hashCode();
     }
 
     @Override
@@ -30500,7 +32621,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -30974,7 +33102,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_lock = true && (isSetLock());
+      list.add(present_lock);
+      if (present_lock)
+        list.add(lock);
+
+      return list.hashCode();
     }
 
     @Override
@@ -31479,7 +33624,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -31963,7 +34120,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -32075,14 +34244,14 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list298 = iprot.readListBegin();
-                  struct.success = new ArrayList<ActiveScan>(_list298.size);
-                  for (int _i299 = 0; _i299 < _list298.size; ++_i299)
+                  org.apache.thrift.protocol.TList _list308 = iprot.readListBegin();
+                  struct.success = new ArrayList<ActiveScan>(_list308.size);
+                  ActiveScan _elem309;
+                  for (int _i310 = 0; _i310 < _list308.size; ++_i310)
                   {
-                    ActiveScan _elem300;
-                    _elem300 = new ActiveScan();
-                    _elem300.read(iprot);
-                    struct.success.add(_elem300);
+                    _elem309 = new ActiveScan();
+                    _elem309.read(iprot);
+                    struct.success.add(_elem309);
                   }
                   iprot.readListEnd();
                 }
@@ -32119,9 +34288,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (ActiveScan _iter301 : struct.success)
+            for (ActiveScan _iter311 : struct.success)
             {
-              _iter301.write(oprot);
+              _iter311.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -32160,9 +34329,9 @@
         if (struct.isSetSuccess()) {
           {
             oprot.writeI32(struct.success.size());
-            for (ActiveScan _iter302 : struct.success)
+            for (ActiveScan _iter312 : struct.success)
             {
-              _iter302.write(oprot);
+              _iter312.write(oprot);
             }
           }
         }
@@ -32177,14 +34346,14 @@
         BitSet incoming = iprot.readBitSet(2);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list303 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new ArrayList<ActiveScan>(_list303.size);
-            for (int _i304 = 0; _i304 < _list303.size; ++_i304)
+            org.apache.thrift.protocol.TList _list313 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new ArrayList<ActiveScan>(_list313.size);
+            ActiveScan _elem314;
+            for (int _i315 = 0; _i315 < _list313.size; ++_i315)
             {
-              ActiveScan _elem305;
-              _elem305 = new ActiveScan();
-              _elem305.read(iprot);
-              struct.success.add(_elem305);
+              _elem314 = new ActiveScan();
+              _elem314.read(iprot);
+              struct.success.add(_elem314);
             }
           }
           struct.setSuccessIsSet(true);
@@ -32453,7 +34622,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -32937,7 +35118,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_sec = true && (isSetSec());
+      list.add(present_sec);
+      if (present_sec)
+        list.add(sec);
+
+      return list.hashCode();
     }
 
     @Override
@@ -33049,14 +35242,14 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list306 = iprot.readListBegin();
-                  struct.success = new ArrayList<ActiveCompaction>(_list306.size);
-                  for (int _i307 = 0; _i307 < _list306.size; ++_i307)
+                  org.apache.thrift.protocol.TList _list316 = iprot.readListBegin();
+                  struct.success = new ArrayList<ActiveCompaction>(_list316.size);
+                  ActiveCompaction _elem317;
+                  for (int _i318 = 0; _i318 < _list316.size; ++_i318)
                   {
-                    ActiveCompaction _elem308;
-                    _elem308 = new ActiveCompaction();
-                    _elem308.read(iprot);
-                    struct.success.add(_elem308);
+                    _elem317 = new ActiveCompaction();
+                    _elem317.read(iprot);
+                    struct.success.add(_elem317);
                   }
                   iprot.readListEnd();
                 }
@@ -33093,9 +35286,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (ActiveCompaction _iter309 : struct.success)
+            for (ActiveCompaction _iter319 : struct.success)
             {
-              _iter309.write(oprot);
+              _iter319.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -33134,9 +35327,9 @@
         if (struct.isSetSuccess()) {
           {
             oprot.writeI32(struct.success.size());
-            for (ActiveCompaction _iter310 : struct.success)
+            for (ActiveCompaction _iter320 : struct.success)
             {
-              _iter310.write(oprot);
+              _iter320.write(oprot);
             }
           }
         }
@@ -33151,14 +35344,14 @@
         BitSet incoming = iprot.readBitSet(2);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list311 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new ArrayList<ActiveCompaction>(_list311.size);
-            for (int _i312 = 0; _i312 < _list311.size; ++_i312)
+            org.apache.thrift.protocol.TList _list321 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new ArrayList<ActiveCompaction>(_list321.size);
+            ActiveCompaction _elem322;
+            for (int _i323 = 0; _i323 < _list321.size; ++_i323)
             {
-              ActiveCompaction _elem313;
-              _elem313 = new ActiveCompaction();
-              _elem313.read(iprot);
-              struct.success.add(_elem313);
+              _elem322 = new ActiveCompaction();
+              _elem322.read(iprot);
+              struct.success.add(_elem322);
             }
           }
           struct.setSuccessIsSet(true);
@@ -33503,7 +35696,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      boolean present_filenames = true && (isSetFilenames());
+      list.add(present_filenames);
+      if (present_filenames)
+        list.add(filenames);
+
+      return list.hashCode();
     }
 
     @Override
@@ -33657,13 +35867,13 @@
             case 3: // FILENAMES
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list314 = iprot.readListBegin();
-                  struct.filenames = new ArrayList<String>(_list314.size);
-                  for (int _i315 = 0; _i315 < _list314.size; ++_i315)
+                  org.apache.thrift.protocol.TList _list324 = iprot.readListBegin();
+                  struct.filenames = new ArrayList<String>(_list324.size);
+                  String _elem325;
+                  for (int _i326 = 0; _i326 < _list324.size; ++_i326)
                   {
-                    String _elem316;
-                    _elem316 = iprot.readString();
-                    struct.filenames.add(_elem316);
+                    _elem325 = iprot.readString();
+                    struct.filenames.add(_elem325);
                   }
                   iprot.readListEnd();
                 }
@@ -33701,9 +35911,9 @@
           oprot.writeFieldBegin(FILENAMES_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.filenames.size()));
-            for (String _iter317 : struct.filenames)
+            for (String _iter327 : struct.filenames)
             {
-              oprot.writeString(_iter317);
+              oprot.writeString(_iter327);
             }
             oprot.writeListEnd();
           }
@@ -33746,9 +35956,9 @@
         if (struct.isSetFilenames()) {
           {
             oprot.writeI32(struct.filenames.size());
-            for (String _iter318 : struct.filenames)
+            for (String _iter328 : struct.filenames)
             {
-              oprot.writeString(_iter318);
+              oprot.writeString(_iter328);
             }
           }
         }
@@ -33770,13 +35980,13 @@
         }
         if (incoming.get(2)) {
           {
-            org.apache.thrift.protocol.TList _list319 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.filenames = new ArrayList<String>(_list319.size);
-            for (int _i320 = 0; _i320 < _list319.size; ++_i320)
+            org.apache.thrift.protocol.TList _list329 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.filenames = new ArrayList<String>(_list329.size);
+            String _elem330;
+            for (int _i331 = 0; _i331 < _list329.size; ++_i331)
             {
-              String _elem321;
-              _elem321 = iprot.readString();
-              struct.filenames.add(_elem321);
+              _elem330 = iprot.readString();
+              struct.filenames.add(_elem330);
             }
           }
           struct.setFilenamesIsSet(true);
@@ -34040,7 +36250,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_credentials = true && (isSetCredentials());
+      list.add(present_credentials);
+      if (present_credentials)
+        list.add(credentials);
+
+      return list.hashCode();
     }
 
     @Override
@@ -34462,7 +36684,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -34556,13 +36785,13 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.thrift.protocol.TList _list322 = iprot.readListBegin();
-                  struct.success = new ArrayList<String>(_list322.size);
-                  for (int _i323 = 0; _i323 < _list322.size; ++_i323)
+                  org.apache.thrift.protocol.TList _list332 = iprot.readListBegin();
+                  struct.success = new ArrayList<String>(_list332.size);
+                  String _elem333;
+                  for (int _i334 = 0; _i334 < _list332.size; ++_i334)
                   {
-                    String _elem324;
-                    _elem324 = iprot.readString();
-                    struct.success.add(_elem324);
+                    _elem333 = iprot.readString();
+                    struct.success.add(_elem333);
                   }
                   iprot.readListEnd();
                 }
@@ -34590,9 +36819,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.success.size()));
-            for (String _iter325 : struct.success)
+            for (String _iter335 : struct.success)
             {
-              oprot.writeString(_iter325);
+              oprot.writeString(_iter335);
             }
             oprot.writeListEnd();
           }
@@ -34623,9 +36852,9 @@
         if (struct.isSetSuccess()) {
           {
             oprot.writeI32(struct.success.size());
-            for (String _iter326 : struct.success)
+            for (String _iter336 : struct.success)
             {
-              oprot.writeString(_iter326);
+              oprot.writeString(_iter336);
             }
           }
         }
@@ -34637,13 +36866,13 @@
         BitSet incoming = iprot.readBitSet(1);
         if (incoming.get(0)) {
           {
-            org.apache.thrift.protocol.TList _list327 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-            struct.success = new ArrayList<String>(_list327.size);
-            for (int _i328 = 0; _i328 < _list327.size; ++_i328)
+            org.apache.thrift.protocol.TList _list337 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.success = new ArrayList<String>(_list337.size);
+            String _elem338;
+            for (int _i339 = 0; _i339 < _list337.size; ++_i339)
             {
-              String _elem329;
-              _elem329 = iprot.readString();
-              struct.success.add(_elem329);
+              _elem338 = iprot.readString();
+              struct.success.add(_elem338);
             }
           }
           struct.setSuccessIsSet(true);
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletStats.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletStats.java
index 310d3b6..95dd551 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletStats.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TabletStats.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TabletStats implements org.apache.thrift.TBase<TabletStats, TabletStats._Fields>, java.io.Serializable, Cloneable, Comparable<TabletStats> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TabletStats implements org.apache.thrift.TBase<TabletStats, TabletStats._Fields>, java.io.Serializable, Cloneable, Comparable<TabletStats> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TabletStats");
 
   private static final org.apache.thrift.protocol.TField EXTENT_FIELD_DESC = new org.apache.thrift.protocol.TField("extent", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -526,16 +529,16 @@
       return getSplits();
 
     case NUM_ENTRIES:
-      return Long.valueOf(getNumEntries());
+      return getNumEntries();
 
     case INGEST_RATE:
-      return Double.valueOf(getIngestRate());
+      return getIngestRate();
 
     case QUERY_RATE:
-      return Double.valueOf(getQueryRate());
+      return getQueryRate();
 
     case SPLIT_CREATION_TIME:
-      return Long.valueOf(getSplitCreationTime());
+      return getSplitCreationTime();
 
     }
     throw new IllegalStateException();
@@ -658,7 +661,49 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    boolean present_majors = true && (isSetMajors());
+    list.add(present_majors);
+    if (present_majors)
+      list.add(majors);
+
+    boolean present_minors = true && (isSetMinors());
+    list.add(present_minors);
+    if (present_minors)
+      list.add(minors);
+
+    boolean present_splits = true && (isSetSplits());
+    list.add(present_splits);
+    if (present_splits)
+      list.add(splits);
+
+    boolean present_numEntries = true;
+    list.add(present_numEntries);
+    if (present_numEntries)
+      list.add(numEntries);
+
+    boolean present_ingestRate = true;
+    list.add(present_ingestRate);
+    if (present_ingestRate)
+      list.add(ingestRate);
+
+    boolean present_queryRate = true;
+    list.add(present_queryRate);
+    if (present_queryRate)
+      list.add(queryRate);
+
+    boolean present_splitCreationTime = true;
+    list.add(present_splitCreationTime);
+    if (present_splitCreationTime)
+      list.add(splitCreationTime);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TooManyFilesException.java b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TooManyFilesException.java
index 203597a..09ef8f8 100644
--- a/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TooManyFilesException.java
+++ b/core/src/main/java/org/apache/accumulo/core/tabletserver/thrift/TooManyFilesException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TooManyFilesException extends TException implements org.apache.thrift.TBase<TooManyFilesException, TooManyFilesException._Fields>, java.io.Serializable, Cloneable, Comparable<TooManyFilesException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TooManyFilesException extends TException implements org.apache.thrift.TBase<TooManyFilesException, TooManyFilesException._Fields>, java.io.Serializable, Cloneable, Comparable<TooManyFilesException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TooManyFilesException");
 
   private static final org.apache.thrift.protocol.TField EXTENT_FIELD_DESC = new org.apache.thrift.protocol.TField("extent", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java b/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java
index 14886f1..f833b11 100644
--- a/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java
+++ b/core/src/main/java/org/apache/accumulo/core/trace/DistributedTrace.java
@@ -52,7 +52,7 @@
   public static final String TRACER_ZK_TIMEOUT = "tracer.zookeeper.timeout";
   public static final String TRACER_ZK_PATH = "tracer.zookeeper.path";
 
-  private static final HashSet<SpanReceiver> receivers = new HashSet<SpanReceiver>();
+  private static final HashSet<SpanReceiver> receivers = new HashSet<>();
 
   /**
    * @deprecated since 1.7, use {@link DistributedTrace#enable(String, String, org.apache.accumulo.core.client.ClientConfiguration)} instead
diff --git a/core/src/main/java/org/apache/accumulo/core/trace/thrift/TInfo.java b/core/src/main/java/org/apache/accumulo/core/trace/thrift/TInfo.java
index fa7933b..f96b634 100644
--- a/core/src/main/java/org/apache/accumulo/core/trace/thrift/TInfo.java
+++ b/core/src/main/java/org/apache/accumulo/core/trace/thrift/TInfo.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TInfo implements org.apache.thrift.TBase<TInfo, TInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TInfo> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TInfo implements org.apache.thrift.TBase<TInfo, TInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TInfo> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TInfo");
 
   private static final org.apache.thrift.protocol.TField TRACE_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("traceId", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -244,10 +247,10 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case TRACE_ID:
-      return Long.valueOf(getTraceId());
+      return getTraceId();
 
     case PARENT_ID:
-      return Long.valueOf(getParentId());
+      return getParentId();
 
     }
     throw new IllegalStateException();
@@ -304,7 +307,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_traceId = true;
+    list.add(present_traceId);
+    if (present_traceId)
+      list.add(traceId);
+
+    boolean present_parentId = true;
+    list.add(present_parentId);
+    if (present_parentId)
+      list.add(parentId);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/trace/wrappers/TraceWrap.java b/core/src/main/java/org/apache/accumulo/core/trace/wrappers/TraceWrap.java
index bbfa386..8f83bd2 100644
--- a/core/src/main/java/org/apache/accumulo/core/trace/wrappers/TraceWrap.java
+++ b/core/src/main/java/org/apache/accumulo/core/trace/wrappers/TraceWrap.java
@@ -39,12 +39,12 @@
 public class TraceWrap {
 
   public static <T> T service(final T instance) {
-    InvocationHandler handler = new RpcServerInvocationHandler<T>(instance);
+    InvocationHandler handler = new RpcServerInvocationHandler<>(instance);
     return wrappedInstance(handler, instance);
   }
 
   public static <T> T client(final T instance) {
-    InvocationHandler handler = new RpcClientInvocationHandler<T>(instance);
+    InvocationHandler handler = new RpcClientInvocationHandler<>(instance);
     return wrappedInstance(handler, instance);
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/util/ByteArraySet.java b/core/src/main/java/org/apache/accumulo/core/util/ByteArraySet.java
index bf177fd..337fe9c 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/ByteArraySet.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/ByteArraySet.java
@@ -38,7 +38,7 @@
   }
 
   public static ByteArraySet fromStrings(Collection<String> c) {
-    List<byte[]> lst = new ArrayList<byte[]>();
+    List<byte[]> lst = new ArrayList<>();
     for (String s : c)
       lst.add(s.getBytes(UTF_8));
     return new ByteArraySet(lst);
diff --git a/core/src/main/java/org/apache/accumulo/core/util/ByteBufferUtil.java b/core/src/main/java/org/apache/accumulo/core/util/ByteBufferUtil.java
index 85c3e12..27188fe 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/ByteBufferUtil.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/ByteBufferUtil.java
@@ -18,6 +18,7 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
+import java.io.ByteArrayInputStream;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.nio.ByteBuffer;
@@ -47,7 +48,7 @@
   public static List<ByteBuffer> toByteBuffers(Collection<byte[]> bytesList) {
     if (bytesList == null)
       return null;
-    ArrayList<ByteBuffer> result = new ArrayList<ByteBuffer>();
+    ArrayList<ByteBuffer> result = new ArrayList<>();
     for (byte[] bytes : bytesList) {
       result.add(ByteBuffer.wrap(bytes));
     }
@@ -57,7 +58,7 @@
   public static List<byte[]> toBytesList(Collection<ByteBuffer> bytesList) {
     if (bytesList == null)
       return null;
-    ArrayList<byte[]> result = new ArrayList<byte[]>(bytesList.size());
+    ArrayList<byte[]> result = new ArrayList<>(bytesList.size());
     for (ByteBuffer bytes : bytesList) {
       result.add(toBytes(bytes));
     }
@@ -102,6 +103,13 @@
     } else {
       out.write(toBytes(buffer));
     }
+  }
 
+  public static ByteArrayInputStream toByteArrayInputStream(ByteBuffer buffer) {
+    if (buffer.hasArray()) {
+      return new ByteArrayInputStream(buffer.array(), buffer.arrayOffset() + buffer.position(), buffer.remaining());
+    } else {
+      return new ByteArrayInputStream(toBytes(buffer));
+    }
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/DeprecationUtil.java b/core/src/main/java/org/apache/accumulo/core/util/DeprecationUtil.java
new file mode 100644
index 0000000..cd798bb
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/util/DeprecationUtil.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util;
+
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.TabletLocator;
+import org.apache.accumulo.core.client.mapreduce.RangeInputSplit;
+
+/**
+ * A utility class for managing deprecated items. This avoids scattering private helper methods all over the code with warnings suppression.
+ *
+ * <p>
+ * This class will never be public API and methods will be removed as soon as they are no longer needed. No methods in this class will, themselves, be
+ * deprecated, because that would propagate the deprecation warning we are trying to avoid.
+ *
+ * <p>
+ * This class should not be used as a substitute for deprecated classes. It should <b>only</b> be used for implementation code which must remain to support the
+ * deprecated features, and <b>only</b> until that feature is removed.
+ */
+public class DeprecationUtil {
+
+  @SuppressWarnings("deprecation")
+  public static boolean isMockInstance(Instance instance) {
+    return instance instanceof org.apache.accumulo.core.client.mock.MockInstance;
+  }
+
+  @SuppressWarnings("deprecation")
+  public static Instance makeMockInstance(String instance) {
+    return new org.apache.accumulo.core.client.mock.MockInstance(instance);
+  }
+
+  @SuppressWarnings("deprecation")
+  public static void setMockInstance(RangeInputSplit split, boolean isMockInstance) {
+    split.setMockInstance(isMockInstance);
+  }
+
+  @SuppressWarnings("deprecation")
+  public static boolean isMockInstanceSet(RangeInputSplit split) {
+    return split.isMockInstance();
+  }
+
+  @SuppressWarnings("deprecation")
+  public static TabletLocator makeMockLocator() {
+    return new org.apache.accumulo.core.client.mock.impl.MockTabletLocator();
+  }
+
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java b/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java
index a4936cf..2063855 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/LocalityGroupUtil.java
@@ -49,7 +49,9 @@
   public static final Set<ByteSequence> EMPTY_CF_SET = Collections.emptySet();
 
   public static Set<ByteSequence> families(Collection<Column> columns) {
-    Set<ByteSequence> result = new HashSet<ByteSequence>(columns.size());
+    if (columns.size() == 0)
+      return EMPTY_CF_SET;
+    Set<ByteSequence> result = new HashSet<>(columns.size());
     for (Column col : columns) {
       result.add(new ArrayByteSequence(col.getColumnFamily()));
     }
@@ -64,13 +66,13 @@
   }
 
   public static Map<String,Set<ByteSequence>> getLocalityGroups(AccumuloConfiguration acuconf) throws LocalityGroupConfigurationError {
-    Map<String,Set<ByteSequence>> result = new HashMap<String,Set<ByteSequence>>();
+    Map<String,Set<ByteSequence>> result = new HashMap<>();
     String[] groups = acuconf.get(Property.TABLE_LOCALITY_GROUPS).split(",");
     for (String group : groups) {
       if (group.length() > 0)
         result.put(group, new HashSet<ByteSequence>());
     }
-    HashSet<ByteSequence> all = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> all = new HashSet<>();
     for (Entry<String,String> entry : acuconf) {
       String property = entry.getKey();
       String value = entry.getValue();
@@ -99,7 +101,7 @@
   }
 
   public static Set<ByteSequence> decodeColumnFamilies(String colFams) throws LocalityGroupConfigurationError {
-    HashSet<ByteSequence> colFamsSet = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> colFamsSet = new HashSet<>();
 
     for (String family : colFams.split(",")) {
       ByteSequence cfbs = decodeColumnFamily(family);
@@ -150,7 +152,7 @@
   }
 
   public static String encodeColumnFamilies(Set<Text> colFams) {
-    SortedSet<String> ecfs = new TreeSet<String>();
+    SortedSet<String> ecfs = new TreeSet<>();
 
     StringBuilder sb = new StringBuilder();
 
@@ -186,11 +188,11 @@
     return ecf;
   }
 
-  private static class PartitionedMutation extends Mutation {
+  public static class PartitionedMutation extends Mutation {
     private byte[] row;
     private List<ColumnUpdate> updates;
 
-    PartitionedMutation(byte[] row, List<ColumnUpdate> updates) {
+    public PartitionedMutation(byte[] row, List<ColumnUpdate> updates) {
       this.row = row;
       this.updates = updates;
     }
@@ -233,7 +235,7 @@
 
     public Partitioner(PreAllocatedArray<Map<ByteSequence,MutableLong>> groups) {
       this.groups = groups;
-      this.colfamToLgidMap = new HashMap<ByteSequence,Integer>();
+      this.colfamToLgidMap = new HashMap<>();
 
       for (int i = 0; i < groups.length; i++) {
         for (ByteSequence cf : groups.get(i).keySet()) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/MapCounter.java b/core/src/main/java/org/apache/accumulo/core/util/MapCounter.java
index 4372cfc..9c504e9 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/MapCounter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/MapCounter.java
@@ -30,7 +30,7 @@
   private HashMap<KT,MutableLong> map;
 
   public MapCounter() {
-    map = new HashMap<KT,MutableLong>();
+    map = new HashMap<>();
   }
 
   public long increment(KT key, long l) {
@@ -68,7 +68,7 @@
 
   public Collection<Long> values() {
     Collection<MutableLong> vals = map.values();
-    ArrayList<Long> ret = new ArrayList<Long>(vals.size());
+    ArrayList<Long> ret = new ArrayList<>(vals.size());
     for (MutableLong ml : vals) {
       ret.add(ml.l);
     }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Merge.java b/core/src/main/java/org/apache/accumulo/core/util/Merge.java
index 9f6f6ab..1af5e37 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Merge.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Merge.java
@@ -51,7 +51,7 @@
     MergeException(Exception ex) {
       super(ex);
     }
-  };
+  }
 
   private static final Logger log = LoggerFactory.getLogger(Merge.class);
 
@@ -120,7 +120,7 @@
       if (table.equals(MetadataTable.NAME)) {
         throw new IllegalArgumentException("cannot merge tablets on the metadata table");
       }
-      List<Size> sizes = new ArrayList<Size>();
+      List<Size> sizes = new ArrayList<>();
       long totalSize = 0;
       // Merge any until you get larger than the goal size, and then merge one less tablet
       Iterator<Size> sizeIterator = getSizeIterator(conn, table, start, end);
@@ -212,7 +212,7 @@
     } catch (Exception e) {
       throw new MergeException(e);
     }
-    scanner.setRange(new KeyExtent(new Text(tableId), end, start).toMetadataRange());
+    scanner.setRange(new KeyExtent(tableId, end, start).toMetadataRange());
     scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
     TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(scanner);
     final Iterator<Entry<Key,Value>> iterator = scanner.iterator();
diff --git a/core/src/main/java/org/apache/accumulo/core/util/OpTimer.java b/core/src/main/java/org/apache/accumulo/core/util/OpTimer.java
index 564a824..33ece1a 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/OpTimer.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/OpTimer.java
@@ -16,37 +16,119 @@
  */
 package org.apache.accumulo.core.util;
 
-import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.TimeUnit;
 
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
-
+/**
+ * Provides a stop watch for timing a single type of event. This code is based on the org.apache.hadoop.util.StopWatch available in hadoop 2.7.0
+ */
 public class OpTimer {
-  private Logger log;
-  private Level level;
-  private long t1;
-  private long opid;
-  private static AtomicLong nextOpid = new AtomicLong();
 
-  public OpTimer(Logger log, Level level) {
-    this.log = log;
-    this.level = level;
+  private boolean isStarted;
+  private long startNanos;
+  private long currentElapsedNanos;
+
+  /**
+   * Create an OpTimer instance. The timer is not running.
+   */
+  public OpTimer() {}
+
+  /**
+   * Returns timer running state
+   *
+   * @return true if timer is running
+   */
+  public boolean isRunning() {
+    return isStarted;
   }
 
-  public OpTimer start(String msg) {
-    opid = nextOpid.getAndIncrement();
-    if (log.isEnabledFor(level))
-      log.log(level, "tid=" + Thread.currentThread().getId() + " oid=" + opid + "  " + msg);
-    t1 = System.currentTimeMillis();
+  /**
+   * Start the timer instance.
+   *
+   * @return this instance for fluent chaining.
+   * @throws IllegalStateException
+   *           if start is called on a running instance.
+   */
+  public OpTimer start() throws IllegalStateException {
+    if (isStarted) {
+      throw new IllegalStateException("OpTimer is already running");
+    }
+    isStarted = true;
+    startNanos = System.nanoTime();
     return this;
   }
 
-  public void stop(String msg) {
-    if (log.isEnabledFor(level)) {
-      long t2 = System.currentTimeMillis();
-      String duration = String.format("%.3f secs", (t2 - t1) / 1000.0);
-      msg = msg.replace("%DURATION%", duration);
-      log.log(level, "tid=" + Thread.currentThread().getId() + " oid=" + opid + "  " + msg);
+  /**
+   * Stop the timer instance.
+   *
+   * @return this instance for fluent chaining.
+   * @throws IllegalStateException
+   *           if stop is called on an instance that is not running.
+   */
+  public OpTimer stop() throws IllegalStateException {
+    if (!isStarted) {
+      throw new IllegalStateException("OpTimer is already stopped");
     }
+    long now = System.nanoTime();
+    isStarted = false;
+    currentElapsedNanos += now - startNanos;
+    return this;
   }
+
+  /**
+   * Stops the timer instance and resets the current elapsed time to 0.
+   *
+   * @return this instance for fluent chaining
+   */
+  public OpTimer reset() {
+    currentElapsedNanos = 0;
+    isStarted = false;
+    return this;
+  }
+
+  /**
+   * Converts the current timer value to the specified unit. Conversion to coarser granularities truncates with loss of precision.
+   *
+   * @param timeUnit
+   *          the time unit to convert to.
+   * @return truncated time in the specified time unit.
+   */
+  public long now(TimeUnit timeUnit) {
+    return timeUnit.convert(now(), TimeUnit.NANOSECONDS);
+  }
+
+  /**
+   * Returns the current elapsed time scaled to the provided time unit. This method does not truncate like {@link #now(TimeUnit)} but returns the value as a
+   * double.
+   *
+   * <p>
+   * Note: this method is not included in the hadoop 2.7 org.apache.hadoop.util.StopWatch class. If that class is adopted, then provisions will be required to
+   * replace this method.
+   *
+   * @param timeUnit
+   *          the time unit to scale the elapsed time to.
+   * @return the elapsed time of this instance scaled to the provided time unit.
+   */
+  public double scale(TimeUnit timeUnit) {
+    return (double) now() / TimeUnit.NANOSECONDS.convert(1L, timeUnit);
+  }
+
+  /**
+   * Returns current timer elapsed time as nanoseconds.
+   *
+   * @return elapsed time in nanoseconds.
+   */
+  public long now() {
+    return isStarted ? System.nanoTime() - startNanos + currentElapsedNanos : currentElapsedNanos;
+  }
+
+  /**
+   * Returns the current elapsed time in nanoseconds as a string.
+   *
+   * @return timer elapsed time as nanoseconds.
+   */
+  @Override
+  public String toString() {
+    return String.valueOf(now());
+  }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Pair.java b/core/src/main/java/org/apache/accumulo/core/util/Pair.java
index 293d126..2d51bcd 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Pair.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Pair.java
@@ -73,11 +73,11 @@
   }
 
   public Entry<A,B> toMapEntry() {
-    return new SimpleImmutableEntry<A,B>(getFirst(), getSecond());
+    return new SimpleImmutableEntry<>(getFirst(), getSecond());
   }
 
   public Pair<B,A> swap() {
-    return new Pair<B,A>(getSecond(), getFirst());
+    return new Pair<>(getSecond(), getFirst());
   }
 
   public static <K2,V2,K1 extends K2,V1 extends V2> Pair<K2,V2> fromEntry(Entry<K1,V1> entry) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/ServerServices.java b/core/src/main/java/org/apache/accumulo/core/util/ServerServices.java
index a07ef76..de39d5c 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/ServerServices.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/ServerServices.java
@@ -32,7 +32,7 @@
   private String stringForm = null;
 
   public ServerServices(String services) {
-    this.services = new EnumMap<Service,String>(Service.class);
+    this.services = new EnumMap<>(Service.class);
 
     String[] addresses = services.split(SERVICE_SEPARATOR);
     for (String address : addresses) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Stat.java b/core/src/main/java/org/apache/accumulo/core/util/Stat.java
index b7d4165..8bdeb63 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Stat.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Stat.java
@@ -16,12 +16,12 @@
  */
 package org.apache.accumulo.core.util;
 
-import org.apache.commons.math.stat.descriptive.StorelessUnivariateStatistic;
-import org.apache.commons.math.stat.descriptive.moment.Mean;
-import org.apache.commons.math.stat.descriptive.moment.StandardDeviation;
-import org.apache.commons.math.stat.descriptive.rank.Max;
-import org.apache.commons.math.stat.descriptive.rank.Min;
-import org.apache.commons.math.stat.descriptive.summary.Sum;
+import org.apache.commons.math3.stat.descriptive.StorelessUnivariateStatistic;
+import org.apache.commons.math3.stat.descriptive.moment.Mean;
+import org.apache.commons.math3.stat.descriptive.moment.StandardDeviation;
+import org.apache.commons.math3.stat.descriptive.rank.Max;
+import org.apache.commons.math3.stat.descriptive.rank.Min;
+import org.apache.commons.math3.stat.descriptive.summary.Sum;
 
 public class Stat {
   Min min;
diff --git a/core/src/main/java/org/apache/accumulo/core/util/StopWatch.java b/core/src/main/java/org/apache/accumulo/core/util/StopWatch.java
index 8abe19e..5a574db 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/StopWatch.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/StopWatch.java
@@ -23,8 +23,8 @@
   EnumMap<K,Long> totalTime;
 
   public StopWatch(Class<K> k) {
-    startTime = new EnumMap<K,Long>(k);
-    totalTime = new EnumMap<K,Long>(k);
+    startTime = new EnumMap<>(k);
+    totalTime = new EnumMap<>(k);
   }
 
   public synchronized void start(K timer) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Validator.java b/core/src/main/java/org/apache/accumulo/core/util/Validator.java
index a5ae156..c1e3c80 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/Validator.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/Validator.java
@@ -16,11 +16,13 @@
  */
 package org.apache.accumulo.core.util;
 
+import com.google.common.base.Predicate;
+
 /**
- * A class that validates arguments of a particular type. Implementations must implement {@link #isValid(Object)} and should override
+ * A class that validates arguments of a particular type. Implementations must implement {@link #apply(Object)} and should override
  * {@link #invalidMessage(Object)}.
  */
-public abstract class Validator<T> {
+public abstract class Validator<T> implements Predicate<T> {
 
   /**
    * Validates an argument.
@@ -32,21 +34,12 @@
    *           if validation fails
    */
   public final T validate(final T argument) {
-    if (!isValid(argument))
+    if (!apply(argument))
       throw new IllegalArgumentException(invalidMessage(argument));
     return argument;
   }
 
   /**
-   * Checks an argument for validity.
-   *
-   * @param argument
-   *          argument to validate
-   * @return true if valid, false if invalid
-   */
-  public abstract boolean isValid(final T argument);
-
-  /**
    * Formulates an exception message for invalid values.
    *
    * @param argument
@@ -72,13 +65,13 @@
     return new Validator<T>() {
 
       @Override
-      public boolean isValid(T argument) {
-        return mine.isValid(argument) && other.isValid(argument);
+      public boolean apply(T argument) {
+        return mine.apply(argument) && other.apply(argument);
       }
 
       @Override
       public String invalidMessage(T argument) {
-        return (mine.isValid(argument) ? other : mine).invalidMessage(argument);
+        return (mine.apply(argument) ? other : mine).invalidMessage(argument);
       }
 
     };
@@ -99,8 +92,8 @@
     return new Validator<T>() {
 
       @Override
-      public boolean isValid(T argument) {
-        return mine.isValid(argument) || other.isValid(argument);
+      public boolean apply(T argument) {
+        return mine.apply(argument) || other.apply(argument);
       }
 
       @Override
@@ -121,8 +114,8 @@
     return new Validator<T>() {
 
       @Override
-      public boolean isValid(T argument) {
-        return !mine.isValid(argument);
+      public boolean apply(T argument) {
+        return !mine.apply(argument);
       }
 
       @Override
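With `Validator<T>` now implementing Guava's `Predicate<T>`, composition via `and`/`or`/`not` delegates to `apply` and picks the failing side's message. A self-contained sketch of that composition, using the JDK's `java.util.function.Predicate` in place of Guava's (the logic is the same; `SimpleValidator` and `ValidatorDemo` are our names, not Accumulo's):

```java
import java.util.function.Predicate;

// Sketch of the Validator-as-Predicate pattern: validate() throws with a
// descriptive message, and and() reports whichever validator rejected.
abstract class SimpleValidator<T> implements Predicate<T> {
  String invalidMessage(T arg) {
    return "Invalid argument " + arg;
  }

  final T validate(T arg) {
    if (!test(arg))
      throw new IllegalArgumentException(invalidMessage(arg));
    return arg;
  }

  final SimpleValidator<T> and(SimpleValidator<T> other) {
    SimpleValidator<T> mine = this;
    return new SimpleValidator<T>() {
      @Override
      public boolean test(T arg) {
        return mine.test(arg) && other.test(arg);
      }

      @Override
      String invalidMessage(T arg) {
        // Report the message of the validator that failed.
        return (mine.test(arg) ? other : mine).invalidMessage(arg);
      }
    };
  }
}

public class ValidatorDemo {
  public static void main(String[] args) {
    SimpleValidator<String> nonEmpty = new SimpleValidator<String>() {
      @Override
      public boolean test(String s) {
        return !s.isEmpty();
      }
    };
    SimpleValidator<String> lowerCase = new SimpleValidator<String>() {
      @Override
      public boolean test(String s) {
        return s.equals(s.toLowerCase());
      }
    };
    // Passes both validators, so the argument is returned unchanged.
    System.out.println(nonEmpty.and(lowerCase).validate("table1"));
  }
}
```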
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/BinaryFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/BinaryFormatter.java
index a5f6a8d..f5cbe39 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/BinaryFormatter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/BinaryFormatter.java
@@ -17,67 +17,50 @@
 package org.apache.accumulo.core.util.format;
 
 import java.util.Map.Entry;
-
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.hadoop.io.Text;
 
+/**
+ * @deprecated Use {@link DefaultFormatter} providing showLength and printTimestamps via {@link FormatterConfig}.
+ */
+@Deprecated
 public class BinaryFormatter extends DefaultFormatter {
-  private static int showLength;
-
+  // this class can probably be replaced by DefaultFormatter, since DefaultFormatter now supports limiting the shown length
   @Override
   public String next() {
     checkState(true);
-    return formatEntry(getScannerIterator().next(), isDoTimestamps());
+    return formatEntry(getScannerIterator().next(), config.willPrintTimestamps(), config.getShownLength());
   }
 
-  // this should be replaced with something like Record.toString();
-  // it would be great if we were able to combine code with DefaultFormatter.formatEntry, but that currently does not respect the showLength option.
-  public static String formatEntry(Entry<Key,Value> entry, boolean showTimestamps) {
+  public static String formatEntry(Entry<Key,Value> entry, boolean printTimestamps, int shownLength) {
     StringBuilder sb = new StringBuilder();
 
     Key key = entry.getKey();
 
     // append row
-    appendText(sb, key.getRow()).append(" ");
+    appendText(sb, key.getRow(), shownLength).append(" ");
 
     // append column family
-    appendText(sb, key.getColumnFamily()).append(":");
+    appendText(sb, key.getColumnFamily(), shownLength).append(":");
 
     // append column qualifier
-    appendText(sb, key.getColumnQualifier()).append(" ");
+    appendText(sb, key.getColumnQualifier(), shownLength).append(" ");
 
     // append visibility expression
     sb.append(new ColumnVisibility(key.getColumnVisibility()));
 
     // append timestamp
-    if (showTimestamps)
+    if (printTimestamps)
       sb.append(" ").append(entry.getKey().getTimestamp());
 
     // append value
     Value value = entry.getValue();
     if (value != null && value.getSize() > 0) {
       sb.append("\t");
-      appendValue(sb, value);
+      appendValue(sb, value, shownLength);
     }
     return sb.toString();
   }
 
-  public static StringBuilder appendText(StringBuilder sb, Text t) {
-    return appendBytes(sb, t.getBytes(), 0, t.getLength());
-  }
-
-  static StringBuilder appendValue(StringBuilder sb, Value value) {
-    return appendBytes(sb, value.get(), 0, value.get().length);
-  }
-
-  static StringBuilder appendBytes(StringBuilder sb, byte ba[], int offset, int len) {
-    int length = Math.min(len, showLength);
-    return DefaultFormatter.appendBytes(sb, ba, offset, length);
-  }
-
-  public static void getlength(int length) {
-    showLength = length;
-  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/DateFormatSupplier.java b/core/src/main/java/org/apache/accumulo/core/util/format/DateFormatSupplier.java
new file mode 100644
index 0000000..9cf50e0
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/DateFormatSupplier.java
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util.format;
+
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.TimeZone;
+
+import com.google.common.base.Supplier;
+
+/**
+ * DateFormatSupplier is a {@code ThreadLocal<DateFormat>} that will set the correct TimeZone when the object is retrieved by {@link #get()}.
+ *
+ * This exists as a way to get around thread safety issues in {@link DateFormat}. This class also contains helper methods that create some useful
+ * DateFormatSuppliers.
+ *
+ * Instances of DateFormatSupplier can be shared, but note that a DateFormat generated from one will be shared by all classes within a Thread.
+ *
+ * In general, the state of a retrieved DateFormat should not be changed, unless it makes sense to only perform a state change within that Thread.
+ */
+public abstract class DateFormatSupplier extends ThreadLocal<DateFormat> implements Supplier<DateFormat> {
+  private TimeZone timeZone;
+
+  public DateFormatSupplier() {
+    timeZone = TimeZone.getDefault();
+  }
+
+  public DateFormatSupplier(TimeZone timeZone) {
+    this.timeZone = timeZone;
+  }
+
+  public TimeZone getTimeZone() {
+    return timeZone;
+  }
+
+  public void setTimeZone(TimeZone timeZone) {
+    this.timeZone = timeZone;
+  }
+
+  /** Always sets the TimeZone, which is a fast operation */
+  @Override
+  public DateFormat get() {
+    final DateFormat df = super.get();
+    df.setTimeZone(timeZone);
+    return df;
+  }
+
+  public static final String HUMAN_READABLE_FORMAT = "yyyy/MM/dd HH:mm:ss.SSS";
+
+  /**
+   * Create a Supplier for {@link FormatterConfig.DefaultDateFormat}s
+   */
+  public static DateFormatSupplier createDefaultFormatSupplier() {
+    return new DateFormatSupplier() {
+      @Override
+      protected DateFormat initialValue() {
+        return new FormatterConfig.DefaultDateFormat();
+      }
+    };
+  }
+
+  /** Create a generator for SimpleDateFormats accepting a dateFormat */
+  public static DateFormatSupplier createSimpleFormatSupplier(final String dateFormat) {
+    return new DateFormatSupplier() {
+      @Override
+      protected SimpleDateFormat initialValue() {
+        return new SimpleDateFormat(dateFormat);
+      }
+    };
+  }
+
+  /** Create a generator for SimpleDateFormats accepting a dateFormat and a TimeZone */
+  public static DateFormatSupplier createSimpleFormatSupplier(final String dateFormat, final TimeZone timeZone) {
+    return new DateFormatSupplier(timeZone) {
+      @Override
+      protected SimpleDateFormat initialValue() {
+        return new SimpleDateFormat(dateFormat);
+      }
+    };
+  }
+}
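The core trick in DateFormatSupplier is combining `ThreadLocal` (to sidestep `DateFormat`'s lack of thread safety) with a cheap per-`get()` time-zone stamp. A runnable sketch of that idea using only JDK types (`java.util.function.Supplier` stands in for Guava's `Supplier`; the class name `TzDateFormatSupplier` is ours):

```java
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.function.Supplier;

// Sketch of the DateFormatSupplier idea: a ThreadLocal<DateFormat> that
// stamps the configured TimeZone on every get(), so each thread gets its
// own (non-thread-safe) DateFormat with a consistent zone.
public class TzDateFormatSupplier extends ThreadLocal<DateFormat> implements Supplier<DateFormat> {
  private volatile TimeZone timeZone = TimeZone.getTimeZone("UTC");
  private final String pattern;

  public TzDateFormatSupplier(String pattern) {
    this.pattern = pattern;
  }

  @Override
  protected DateFormat initialValue() {
    // Called at most once per thread.
    return new SimpleDateFormat(pattern);
  }

  // Setting the zone is a fast operation, so do it on every retrieval.
  @Override
  public DateFormat get() {
    DateFormat df = super.get();
    df.setTimeZone(timeZone);
    return df;
  }

  public static void main(String[] args) {
    TzDateFormatSupplier supplier = new TzDateFormatSupplier("yyyy/MM/dd HH:mm:ss.SSS");
    System.out.println(supplier.get().format(new Date(0L))); // the epoch, in UTC
  }
}
```

Within one thread, repeated `get()` calls return the same `DateFormat` instance, which is why the class javadoc warns against mutating a retrieved format unless the change is intended to be thread-wide.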
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/DateStringFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/DateStringFormatter.java
index 5bcd4a3..63bd536 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/DateStringFormatter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/DateStringFormatter.java
@@ -16,31 +16,45 @@
  */
 package org.apache.accumulo.core.util.format;
 
-import java.text.DateFormat;
-import java.text.SimpleDateFormat;
 import java.util.Map.Entry;
 import java.util.TimeZone;
-
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 
+/**
+ * This class is <strong>not</strong> recommended because {@link #initialize(Iterable, FormatterConfig)} replaces parameters in {@link FormatterConfig}, which
+ * could surprise users.
+ *
+ * This class can be replaced by {@link DefaultFormatter} where FormatterConfig is initialized with a DateFormat set to {@link #DATE_FORMAT}. See
+ * {@link DateFormatSupplier#createSimpleFormatSupplier(String, java.util.TimeZone)}.
+ *
+ * <pre>
+ * final DateFormatSupplier dfSupplier = DateFormatSupplier.createSimpleFormatSupplier(DateFormatSupplier.HUMAN_READABLE_FORMAT, TimeZone.getTimeZone(&quot;UTC&quot;));
+ * final FormatterConfig config = new FormatterConfig().setPrintTimestamps(true).setDateFormatSupplier(dfSupplier);
+ * </pre>
+ */
+@Deprecated
 public class DateStringFormatter implements Formatter {
-  private boolean printTimestamps = false;
-  private DefaultFormatter defaultFormatter = new DefaultFormatter();
 
-  public static final String DATE_FORMAT = "yyyy/MM/dd HH:mm:ss.SSS";
-  // SimpleDataFormat is not thread safe
-  private static final ThreadLocal<DateFormat> formatter = new ThreadLocal<DateFormat>() {
-    @Override
-    protected SimpleDateFormat initialValue() {
-      return new SimpleDateFormat(DATE_FORMAT);
-    }
-  };
+  private DefaultFormatter defaultFormatter;
+  private TimeZone timeZone;
+
+  public static final String DATE_FORMAT = DateFormatSupplier.HUMAN_READABLE_FORMAT;
+
+  public DateStringFormatter() {
+    this(TimeZone.getDefault());
+  }
+
+  public DateStringFormatter(TimeZone timeZone) {
+    this.defaultFormatter = new DefaultFormatter();
+    this.timeZone = timeZone;
+  }
 
   @Override
-  public void initialize(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps) {
-    this.printTimestamps = printTimestamps;
-    defaultFormatter.initialize(scanner, printTimestamps);
+  public void initialize(Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
+    FormatterConfig newConfig = new FormatterConfig(config);
+    newConfig.setDateFormatSupplier(DateFormatSupplier.createSimpleFormatSupplier(DATE_FORMAT, timeZone));
+    defaultFormatter.initialize(scanner, newConfig);
   }
 
   @Override
@@ -50,13 +64,7 @@
 
   @Override
   public String next() {
-    DateFormat timestampformat = null;
-
-    if (printTimestamps) {
-      timestampformat = formatter.get();
-    }
-
-    return defaultFormatter.next(timestampformat);
+    return defaultFormatter.next();
   }
 
   @Override
@@ -64,7 +72,4 @@
     defaultFormatter.remove();
   }
 
-  public void setTimeZone(TimeZone zone) {
-    formatter.get().setTimeZone(zone);
-  }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java
index 5a2f43f..b5df632 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java
@@ -17,12 +17,9 @@
 package org.apache.accumulo.core.util.format;
 
 import java.text.DateFormat;
-import java.text.FieldPosition;
-import java.text.ParsePosition;
 import java.util.Date;
 import java.util.Iterator;
 import java.util.Map.Entry;
-
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.ColumnVisibility;
@@ -30,35 +27,16 @@
 
 public class DefaultFormatter implements Formatter {
   private Iterator<Entry<Key,Value>> si;
-  private boolean doTimestamps;
+  protected FormatterConfig config;
 
-  public static class DefaultDateFormat extends DateFormat {
-    private static final long serialVersionUID = 1L;
-
-    @Override
-    public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) {
-      toAppendTo.append(Long.toString(date.getTime()));
-      return toAppendTo;
-    }
-
-    @Override
-    public Date parse(String source, ParsePosition pos) {
-      return new Date(Long.parseLong(source));
-    }
-  }
-
-  private static final ThreadLocal<DateFormat> formatter = new ThreadLocal<DateFormat>() {
-    @Override
-    protected DateFormat initialValue() {
-      return new DefaultDateFormat();
-    }
-  };
+  /** Used as default DateFormat for some static methods */
+  private static final ThreadLocal<DateFormat> formatter = DateFormatSupplier.createDefaultFormatSupplier();
 
   @Override
-  public void initialize(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps) {
+  public void initialize(Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
     checkState(false);
     si = scanner.iterator();
-    doTimestamps = printTimestamps;
+    this.config = new FormatterConfig(config);
   }
 
   @Override
@@ -69,18 +47,8 @@
 
   @Override
   public String next() {
-    DateFormat timestampFormat = null;
-
-    if (doTimestamps) {
-      timestampFormat = formatter.get();
-    }
-
-    return next(timestampFormat);
-  }
-
-  protected String next(DateFormat timestampFormat) {
     checkState(true);
-    return formatEntry(si.next(), timestampFormat);
+    return formatEntry(si.next());
   }
 
   @Override
@@ -96,7 +64,10 @@
       throw new IllegalStateException("Already initialized");
   }
 
-  // this should be replaced with something like Record.toString();
+  /**
+   * If showTimestamps is true, this will use {@link FormatterConfig.DefaultDateFormat}.<br>
+   * Preferably, use {@link DefaultFormatter#formatEntry(java.util.Map.Entry, org.apache.accumulo.core.util.format.FormatterConfig)}
+   */
   public static String formatEntry(Entry<Key,Value> entry, boolean showTimestamps) {
     DateFormat timestampFormat = null;
 
@@ -115,6 +86,7 @@
     }
   };
 
+  /** Does not show timestamps if timestampFormat is null */
   public static String formatEntry(Entry<Key,Value> entry, DateFormat timestampFormat) {
     StringBuilder sb = new StringBuilder();
     Key key = entry.getKey();
@@ -149,14 +121,55 @@
     return sb.toString();
   }
 
+  public String formatEntry(Entry<Key,Value> entry) {
+    return formatEntry(entry, this.config);
+  }
+
+  public static String formatEntry(Entry<Key,Value> entry, FormatterConfig config) {
+    // originally from BinaryFormatter
+    StringBuilder sb = new StringBuilder();
+    Key key = entry.getKey();
+    Text buffer = new Text();
+
+    final int shownLength = config.getShownLength();
+
+    appendText(sb, key.getRow(buffer), shownLength).append(" ");
+    appendText(sb, key.getColumnFamily(buffer), shownLength).append(":");
+    appendText(sb, key.getColumnQualifier(buffer), shownLength).append(" ");
+    sb.append(new ColumnVisibility(key.getColumnVisibility(buffer)));
+
+    // append timestamp
+    if (config.willPrintTimestamps() && config.getDateFormatSupplier() != null) {
+      tmpDate.get().setTime(entry.getKey().getTimestamp());
+      sb.append(" ").append(config.getDateFormatSupplier().get().format(tmpDate.get()));
+    }
+
+    // append value
+    Value value = entry.getValue();
+    if (value != null && value.getSize() > 0) {
+      sb.append("\t");
+      appendValue(sb, value, shownLength);
+    }
+    return sb.toString();
+
+  }
+
   static StringBuilder appendText(StringBuilder sb, Text t) {
     return appendBytes(sb, t.getBytes(), 0, t.getLength());
   }
 
+  public static StringBuilder appendText(StringBuilder sb, Text t, int shownLength) {
+    return appendBytes(sb, t.getBytes(), 0, t.getLength(), shownLength);
+  }
+
   static StringBuilder appendValue(StringBuilder sb, Value value) {
     return appendBytes(sb, value.get(), 0, value.get().length);
   }
 
+  static StringBuilder appendValue(StringBuilder sb, Value value, int shownLength) {
+    return appendBytes(sb, value.get(), 0, value.get().length, shownLength);
+  }
+
   static StringBuilder appendBytes(StringBuilder sb, byte ba[], int offset, int len) {
     for (int i = 0; i < len; i++) {
       int c = 0xff & ba[offset + i];
@@ -170,11 +183,17 @@
     return sb;
   }
 
+  static StringBuilder appendBytes(StringBuilder sb, byte ba[], int offset, int len, int shownLength) {
+    int length = Math.min(len, shownLength);
+    return DefaultFormatter.appendBytes(sb, ba, offset, length);
+  }
+
   public Iterator<Entry<Key,Value>> getScannerIterator() {
     return si;
   }
 
   protected boolean isDoTimestamps() {
-    return doTimestamps;
+    return config.willPrintTimestamps();
   }
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/Formatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/Formatter.java
index 497cc74..136b252 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/Formatter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/Formatter.java
@@ -23,5 +23,5 @@
 import org.apache.accumulo.core.data.Value;
 
 public interface Formatter extends Iterator<String> {
-  void initialize(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps);
+  void initialize(Iterable<Entry<Key,Value>> scanner, FormatterConfig config);
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/FormatterConfig.java b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterConfig.java
new file mode 100644
index 0000000..0cd5139
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterConfig.java
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util.format;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+import java.text.DateFormat;
+import java.text.FieldPosition;
+import java.text.ParsePosition;
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+import com.google.common.base.Supplier;
+
+/**
+ * Holds configuration settings for a {@link Formatter}
+ */
+public class FormatterConfig {
+
+  private boolean printTimestamps;
+  private int shownLength;
+  private Supplier<DateFormat> dateFormatSupplier;
+
+  /** Formats with milliseconds since epoch */
+  public static class DefaultDateFormat extends SimpleDateFormat {
+    private static final long serialVersionUID = 1L;
+
+    @Override
+    public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) {
+      toAppendTo.append(Long.toString(date.getTime()));
+      return toAppendTo;
+    }
+
+    @Override
+    public Date parse(String source, ParsePosition pos) {
+      return new Date(Long.parseLong(source));
+    }
+  }
+
+  public FormatterConfig() {
+    this.setPrintTimestamps(false);
+    this.doNotLimitShowLength();
+    this.dateFormatSupplier = DateFormatSupplier.createDefaultFormatSupplier();
+  }
+
+  /**
+   * Copies the other config's fields; note the copy still references {@code other}'s dateFormatSupplier.
+   */
+  public FormatterConfig(FormatterConfig other) {
+    this.printTimestamps = other.printTimestamps;
+    this.shownLength = other.shownLength;
+    this.dateFormatSupplier = other.dateFormatSupplier;
+  }
+
+  public boolean willPrintTimestamps() {
+    return printTimestamps;
+  }
+
+  public FormatterConfig setPrintTimestamps(boolean printTimestamps) {
+    this.printTimestamps = printTimestamps;
+    return this;
+  }
+
+  public int getShownLength() {
+    return shownLength;
+  }
+
+  public boolean willLimitShowLength() {
+    return this.shownLength != Integer.MAX_VALUE;
+  }
+
+  /**
+   * If given a negative number, throws an {@link IllegalArgumentException}
+   *
+   * @param shownLength
+   *          maximum length of formatted output
+   * @return {@code this} to allow chaining of set methods
+   */
+  public FormatterConfig setShownLength(int shownLength) {
+    checkArgument(shownLength >= 0, "Shown length cannot be negative");
+    this.shownLength = shownLength;
+    return this;
+  }
+
+  public FormatterConfig doNotLimitShowLength() {
+    this.shownLength = Integer.MAX_VALUE;
+    return this;
+  }
+
+  public Supplier<DateFormat> getDateFormatSupplier() {
+    return dateFormatSupplier;
+  }
+
+  /**
+   * Stores {@code dateFormatSupplier} by reference, so it is recommended that you create a new {@code Supplier} when calling this method if your
+   * {@code Supplier} maintains some kind of state (see {@link DateFormatSupplier}).
+   */
+  public FormatterConfig setDateFormatSupplier(Supplier<DateFormat> dateFormatSupplier) {
+    this.dateFormatSupplier = dateFormatSupplier;
+    return this;
+  }
+}
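The fluent setters in `FormatterConfig` each return `this`, so a config can be built in one chained expression. A self-contained sketch of that pattern, using a hypothetical stand-in `Config` class that mirrors the setters above (not the Accumulo class itself):

```java
// Stand-in mirroring FormatterConfig's fluent setters (hypothetical class,
// not the Accumulo one): each setter returns `this` so calls chain.
class Config {
  private boolean printTimestamps = false;
  private int shownLength = Integer.MAX_VALUE; // MAX_VALUE means "no limit"

  Config setPrintTimestamps(boolean printTimestamps) {
    this.printTimestamps = printTimestamps;
    return this;
  }

  Config setShownLength(int shownLength) {
    if (shownLength < 0)
      throw new IllegalArgumentException("Shown length cannot be negative");
    this.shownLength = shownLength;
    return this;
  }

  boolean willPrintTimestamps() {
    return printTimestamps;
  }

  boolean willLimitShowLength() {
    return shownLength != Integer.MAX_VALUE;
  }

  int getShownLength() {
    return shownLength;
  }
}

class ConfigDemo {
  public static void main(String[] args) {
    // Build the whole configuration in one chained expression.
    Config c = new Config().setPrintTimestamps(true).setShownLength(64);
    System.out.println(c.willPrintTimestamps() + " " + c.getShownLength());
  }
}
```

Returning `this` from every setter is also what lets callers pass a fully configured object to `Formatter.initialize` in a single expression.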
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java
index 7eb542f..9ae1a6c 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/FormatterFactory.java
@@ -26,7 +26,7 @@
 public class FormatterFactory {
   private static final Logger log = LoggerFactory.getLogger(FormatterFactory.class);
 
-  public static Formatter getFormatter(Class<? extends Formatter> formatterClass, Iterable<Entry<Key,Value>> scanner, boolean printTimestamps) {
+  public static Formatter getFormatter(Class<? extends Formatter> formatterClass, Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
     Formatter formatter = null;
     try {
       formatter = formatterClass.newInstance();
@@ -34,12 +34,12 @@
       log.warn("Unable to instantiate formatter. Using default formatter.", e);
       formatter = new DefaultFormatter();
     }
-    formatter.initialize(scanner, printTimestamps);
+    formatter.initialize(scanner, config);
     return formatter;
   }
 
-  public static Formatter getDefaultFormatter(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps) {
-    return getFormatter(DefaultFormatter.class, scanner, printTimestamps);
+  public static Formatter getDefaultFormatter(Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
+    return getFormatter(DefaultFormatter.class, scanner, config);
   }
 
   private FormatterFactory() {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/HexFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/HexFormatter.java
index 65e52d3..54e2598 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/HexFormatter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/HexFormatter.java
@@ -31,7 +31,7 @@
 
   private char chars[] = new char[] {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
   private Iterator<Entry<Key,Value>> iter;
-  private boolean printTimestamps;
+  private FormatterConfig config;
 
   private void toHex(StringBuilder sb, byte[] bin) {
 
@@ -88,7 +88,7 @@
     sb.append(" [");
     sb.append(entry.getKey().getColumnVisibilityData().toString());
     sb.append("] ");
-    if (printTimestamps) {
+    if (config.willPrintTimestamps()) {
       sb.append(Long.toString(entry.getKey().getTimestamp()));
       sb.append("  ");
     }
@@ -103,9 +103,9 @@
   }
 
   @Override
-  public void initialize(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps) {
+  public void initialize(Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
     this.iter = scanner.iterator();
-    this.printTimestamps = printTimestamps;
+    this.config = new FormatterConfig(config);
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatter.java
index 877f164..336e986 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatter.java
@@ -34,7 +34,7 @@
  */
 public class ShardedTableDistributionFormatter extends AggregatingFormatter {
 
-  private Map<String,HashSet<String>> countsByDay = new HashMap<String,HashSet<String>>();
+  private Map<String,HashSet<String>> countsByDay = new HashMap<>();
 
   @Override
   protected void aggregateStats(Entry<Key,Value> entry) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatter.java b/core/src/main/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatter.java
index b3ee3ee..0f4858a 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatter.java
@@ -28,9 +28,9 @@
  * debugging. If used on large result sets it will likely fail.
  */
 public class StatisticsDisplayFormatter extends AggregatingFormatter {
-  private Map<String,Long> classifications = new HashMap<String,Long>();
-  private Map<String,Long> columnFamilies = new HashMap<String,Long>();
-  private Map<String,Long> columnQualifiers = new HashMap<String,Long>();
+  private Map<String,Long> classifications = new HashMap<>();
+  private Map<String,Long> columnFamilies = new HashMap<>();
+  private Map<String,Long> columnQualifiers = new HashMap<>();
   private long total = 0;
 
   @Override
@@ -71,9 +71,9 @@
 
     buf.append(total).append(" entries matched.");
     total = 0;
-    classifications = new HashMap<String,Long>();
-    columnFamilies = new HashMap<String,Long>();
-    columnQualifiers = new HashMap<String,Long>();
+    classifications = new HashMap<>();
+    columnFamilies = new HashMap<>();
+    columnQualifiers = new HashMap<>();
 
     return buf.toString();
   }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/interpret/HexScanInterpreter.java b/core/src/main/java/org/apache/accumulo/core/util/interpret/HexScanInterpreter.java
index 964e8c6..3794f27 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/interpret/HexScanInterpreter.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/interpret/HexScanInterpreter.java
@@ -22,6 +22,6 @@
  * A simple scan interpreter that converts hex to binary. It supports translating the output of {@link HexFormatter} back to binary. The hex input can contain
  * dashes (because {@link HexFormatter} outputs dashes), which are ignored.
  */
-public class HexScanInterpreter extends HexFormatter implements ScanInterpreter {
+public class HexScanInterpreter extends HexFormatter {
 
 }
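The translation the interpreter's javadoc describes (hex text back to bytes, ignoring the dashes that `HexFormatter` emits) can be sketched standalone; `decode` below is a hypothetical helper for illustration, not Accumulo's implementation:

```java
class HexDecodeSketch {
  // Decode hex text back to bytes, ignoring '-' separators
  // (HexFormatter emits dashes between byte groups).
  static byte[] decode(String hex) {
    String clean = hex.replace("-", "");
    byte[] out = new byte[clean.length() / 2];
    for (int i = 0; i < out.length; i++)
      out[i] = (byte) Integer.parseInt(clean.substring(2 * i, 2 * i + 2), 16);
    return out;
  }

  public static void main(String[] args) {
    System.out.println(new String(decode("68-65-6c-6c-6f")));
  }
}
```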
diff --git a/core/src/main/java/org/apache/accumulo/core/util/ratelimit/GuavaRateLimiter.java b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/GuavaRateLimiter.java
new file mode 100644
index 0000000..6e9781d
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/GuavaRateLimiter.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util.ratelimit;
+
+/** Rate limiter from the Guava library. */
+public class GuavaRateLimiter implements RateLimiter {
+  private final com.google.common.util.concurrent.RateLimiter rateLimiter;
+  private long currentRate;
+
+  /**
+   * Constructor
+   *
+   * @param initialRate
+   *          Count of permits which should be made available per second. A nonpositive rate is taken to indicate there should be no limitation on rate.
+   */
+  public GuavaRateLimiter(long initialRate) {
+    this.currentRate = initialRate;
+    this.rateLimiter = com.google.common.util.concurrent.RateLimiter.create(initialRate > 0 ? initialRate : Long.MAX_VALUE);
+  }
+
+  @Override
+  public long getRate() {
+    return currentRate;
+  }
+
+  /**
+   * Change the rate at which permits are made available.
+   *
+   * @param newRate
+   *          Count of permits which should be made available per second. A nonpositive rate is taken to indicate that there should be no limitation on rate.
+   */
+  public void setRate(long newRate) {
+    this.rateLimiter.setRate(newRate > 0 ? newRate : Long.MAX_VALUE);
+    this.currentRate = newRate;
+  }
+
+  @Override
+  public void acquire(long permits) {
+    if (this.currentRate > 0) {
+      while (permits > Integer.MAX_VALUE) {
+        rateLimiter.acquire(Integer.MAX_VALUE);
+        permits -= Integer.MAX_VALUE;
+      }
+      if (permits > 0) {
+        rateLimiter.acquire((int) permits);
+      }
+    }
+  }
+}
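Guava's `RateLimiter.acquire` takes an `int`, which is why the `acquire(long)` above loops, handing over `Integer.MAX_VALUE` permits at a time. That splitting can be isolated as a pure function (`chunks` is a hypothetical helper, named here only for illustration):

```java
import java.util.ArrayList;
import java.util.List;

class AcquireChunks {
  // Split a long permit request into the int-sized chunks that
  // GuavaRateLimiter.acquire(long) would hand to Guava one at a time.
  static List<Integer> chunks(long permits) {
    List<Integer> out = new ArrayList<>();
    while (permits > Integer.MAX_VALUE) {
      out.add(Integer.MAX_VALUE);
      permits -= Integer.MAX_VALUE;
    }
    if (permits > 0)
      out.add((int) permits);
    return out;
  }

  public static void main(String[] args) {
    // A request just over 2 * Integer.MAX_VALUE splits into three chunks.
    System.out.println(chunks(2L * Integer.MAX_VALUE + 5));
  }
}
```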
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/NullRateLimiter.java
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to core/src/main/java/org/apache/accumulo/core/util/ratelimit/NullRateLimiter.java
index 01f5fa8..ac746c8 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/NullRateLimiter.java
@@ -14,19 +14,20 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+package org.apache.accumulo.core.util.ratelimit;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+/** A rate limiter which doesn't actually limit rates at all. */
+public class NullRateLimiter implements RateLimiter {
+  public static final NullRateLimiter INSTANCE = new NullRateLimiter();
 
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
+  private NullRateLimiter() {}
 
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
+  @Override
+  public long getRate() {
+    return 0;
   }
+
+  @Override
+  public void acquire(long permits) {}
+
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/RateLimiter.java
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to core/src/main/java/org/apache/accumulo/core/util/ratelimit/RateLimiter.java
index 01f5fa8..ff64840 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/RateLimiter.java
@@ -14,19 +14,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+package org.apache.accumulo.core.util.ratelimit;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+public interface RateLimiter {
+  /**
+   * Get the current rate of the rate limiter in permits per second, with a nonpositive rate indicating no limit.
+   */
+  public long getRate();
 
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
-
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
-  }
+  /** Sleep until the specified number of permits are available. */
+  public void acquire(long permits);
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/ratelimit/SharedRateLimiterFactory.java b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/SharedRateLimiterFactory.java
new file mode 100644
index 0000000..ac1eec9
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/util/ratelimit/SharedRateLimiterFactory.java
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util.ratelimit;
+
+import com.google.common.collect.ImmutableMap;
+import java.util.Map;
+import java.util.Timer;
+import java.util.TimerTask;
+import java.util.WeakHashMap;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Provides the ability to retrieve a {@link RateLimiter} keyed to a specific string, which will dynamically update its rate according to a specified callback
+ * function.
+ */
+public class SharedRateLimiterFactory {
+  private static final long REPORT_RATE = 60000;
+  private static final long UPDATE_RATE = 1000;
+  private static SharedRateLimiterFactory instance = null;
+  private final Logger log = LoggerFactory.getLogger(SharedRateLimiterFactory.class);
+  private final WeakHashMap<String,SharedRateLimiter> activeLimiters = new WeakHashMap<>();
+
+  private SharedRateLimiterFactory() {}
+
+  /** Get the singleton instance of the SharedRateLimiterFactory. */
+  public static synchronized SharedRateLimiterFactory getInstance() {
+    if (instance == null) {
+      instance = new SharedRateLimiterFactory();
+
+      Timer timer = new Timer("SharedRateLimiterFactory update/report polling");
+
+      // Update periodically
+      timer.schedule(new TimerTask() {
+        @Override
+        public void run() {
+          instance.update();
+        }
+      }, UPDATE_RATE, UPDATE_RATE);
+
+      // Report periodically
+      timer.schedule(new TimerTask() {
+        @Override
+        public void run() {
+          instance.report();
+        }
+      }, REPORT_RATE, REPORT_RATE);
+    }
+    return instance;
+  }
+
+  /**
+   * A callback which provides the current rate for a {@link RateLimiter}.
+   */
+  public static interface RateProvider {
+    /**
+     * Calculate the current rate for the {@link RateLimiter}.
+     *
+     * @return Count of permits which should be provided per second. A nonpositive count is taken to indicate that no rate limiting should be performed.
+     */
+    public long getDesiredRate();
+  }
+
+  /**
+   * Look up the RateLimiter associated with the specified name, or create a new one for that name.
+   *
+   * @param name
+   *          key for the rate limiter
+   * @param rateProvider
+   *          a callback which supplies the rate the limiter should currently enforce.
+   */
+  public RateLimiter create(String name, RateProvider rateProvider) {
+    synchronized (activeLimiters) {
+      if (activeLimiters.containsKey(name)) {
+        SharedRateLimiter limiter = activeLimiters.get(name);
+        return limiter;
+      } else {
+        long initialRate = rateProvider.getDesiredRate();
+        SharedRateLimiter limiter = new SharedRateLimiter(name, rateProvider, initialRate);
+        activeLimiters.put(name, limiter);
+        return limiter;
+      }
+    }
+  }
+
+  /**
+   * Walk through all of the currently active RateLimiters, having each update its current rate. This is called periodically so that we can dynamically update
+   * as configuration changes.
+   */
+  protected void update() {
+    Map<String,SharedRateLimiter> limitersCopy;
+    synchronized (activeLimiters) {
+      limitersCopy = ImmutableMap.copyOf(activeLimiters);
+    }
+    for (Map.Entry<String,SharedRateLimiter> entry : limitersCopy.entrySet()) {
+      try {
+        entry.getValue().update();
+      } catch (Exception ex) {
+        log.error(String.format("Failed to update limiter %s", entry.getKey()), ex);
+      }
+    }
+  }
+
+  /** Walk through all of the currently active RateLimiters, having each report its activity to the debug log. */
+  protected void report() {
+    Map<String,SharedRateLimiter> limitersCopy;
+    synchronized (activeLimiters) {
+      limitersCopy = ImmutableMap.copyOf(activeLimiters);
+    }
+    for (Map.Entry<String,SharedRateLimiter> entry : limitersCopy.entrySet()) {
+      try {
+        entry.getValue().report();
+      } catch (Exception ex) {
+        log.error(String.format("Failed to report limiter %s", entry.getKey()), ex);
+      }
+    }
+  }
+
+  protected class SharedRateLimiter extends GuavaRateLimiter {
+    private volatile long permitsAcquired = 0;
+    private volatile long lastUpdate;
+
+    private final RateProvider rateProvider;
+    private final String name;
+
+    SharedRateLimiter(String name, RateProvider rateProvider, long initialRate) {
+      super(initialRate);
+      this.name = name;
+      this.rateProvider = rateProvider;
+      this.lastUpdate = System.currentTimeMillis();
+    }
+
+    @Override
+    public void acquire(long permits) {
+      super.acquire(permits);
+      permitsAcquired += permits;
+    }
+
+    /** Poll the callback, updating the current rate if necessary. */
+    public void update() {
+      // Reset rate if needed
+      long rate = rateProvider.getDesiredRate();
+      if (rate != getRate()) {
+        setRate(rate);
+      }
+    }
+
+    /** Report the current throughput and usage of this rate limiter to the debug log. */
+    public void report() {
+      if (log.isDebugEnabled()) {
+        long duration = System.currentTimeMillis() - lastUpdate;
+        if (duration == 0)
+          return;
+        lastUpdate = System.currentTimeMillis();
+
+        long sum = permitsAcquired;
+        permitsAcquired = 0;
+
+        if (sum > 0) {
+          log.debug(String.format("RateLimiter '%s': %,d of %,d permits/second", name, sum * 1000L / duration, getRate()));
+        }
+      }
+    }
+  }
+}
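The factory's sharing contract above (one limiter per name, with the rate polled from a `RateProvider` callback) can be sketched without the Guava dependency. `MiniFactory` and its `Limiter` stand-in are hypothetical, used only to show the name-keyed caching:

```java
import java.util.HashMap;
import java.util.Map;

class MiniFactory {
  interface RateProvider {
    long getDesiredRate();
  }

  // Minimal stand-in for SharedRateLimiter: just a mutable rate.
  static class Limiter {
    volatile long rate;

    Limiter(long rate) {
      this.rate = rate;
    }
  }

  private final Map<String,Limiter> active = new HashMap<>();

  // First create() for a name builds a limiter at the provider's current
  // rate; later calls with the same name return that same instance.
  synchronized Limiter create(String name, RateProvider provider) {
    Limiter limiter = active.get(name);
    if (limiter == null) {
      limiter = new Limiter(provider.getDesiredRate());
      active.put(name, limiter);
    }
    return limiter;
  }

  public static void main(String[] args) {
    MiniFactory f = new MiniFactory();
    RateProvider p = new RateProvider() {
      @Override
      public long getDesiredRate() {
        return 100;
      }
    };
    System.out.println(f.create("bulk", p) == f.create("bulk", p));
  }
}
```

The real factory additionally holds its limiters in a `WeakHashMap` (so unused limiters can be collected) and re-polls each provider on a timer, which is what makes the rates dynamic.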
diff --git a/core/src/main/scripts/generate-thrift.sh b/core/src/main/scripts/generate-thrift.sh
index b9f9962..691ea79 100755
--- a/core/src/main/scripts/generate-thrift.sh
+++ b/core/src/main/scripts/generate-thrift.sh
@@ -26,7 +26,7 @@
 #   INCLUDED_MODULES should be an array that includes other Maven modules with src/main/thrift directories
 #   Use INCLUDED_MODULES=(-) in calling scripts that require no other modules
 # ========================================================================================================================
-[[ -z $REQUIRED_THRIFT_VERSION ]] && REQUIRED_THRIFT_VERSION='0.9.1'
+[[ -z $REQUIRED_THRIFT_VERSION ]] && REQUIRED_THRIFT_VERSION='0.9.3'
 [[ -z $INCLUDED_MODULES ]]        && INCLUDED_MODULES=(../server/tracer)
 [[ -z $BASE_OUTPUT_PACKAGE ]]     && BASE_OUTPUT_PACKAGE='org.apache.accumulo.core'
 [[ -z $PACKAGES_TO_GENERATE ]]    && PACKAGES_TO_GENERATE=(gc master tabletserver security client.impl data replication trace)
@@ -65,7 +65,7 @@
 mkdir -p $BUILD_DIR
 rm -rf $BUILD_DIR/gen-java
 for f in src/main/thrift/*.thrift; do
-  thrift ${THRIFT_ARGS} --gen java "$f" || fail unable to generate java thrift classes
+  thrift ${THRIFT_ARGS} --gen java:generated_annotations=undated "$f" || fail unable to generate java thrift classes
   thrift ${THRIFT_ARGS} --gen py "$f" || fail unable to generate python thrift classes
   thrift ${THRIFT_ARGS} --gen rb "$f" || fail unable to generate ruby thrift classes
   thrift ${THRIFT_ARGS} --gen cpp "$f" || fail unable to generate cpp thrift classes
@@ -74,7 +74,7 @@
 # For all generated thrift code, suppress all warnings and add the LICENSE header
 cs='@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"})'
 es='@SuppressWarnings({"unused"})'
-find $BUILD_DIR/gen-java -name '*.java' -print0 | xargs -0 sed -i.orig -e 's/\(public class [A-Z]\)/'"$cs"' \1/'
+find $BUILD_DIR/gen-java -name '*.java' -print0 | xargs -0 sed -i.orig -e 's/"unchecked"/"unchecked", "unused"/'
 find $BUILD_DIR/gen-java -name '*.java' -print0 | xargs -0 sed -i.orig -e 's/\(public enum [A-Z]\)/'"$es"' \1/'
 
 for lang in "${LANGUAGES_TO_GENERATE[@]}"; do
diff --git a/core/src/main/thrift/master.thrift b/core/src/main/thrift/master.thrift
index f7a1dc9..6104fea 100644
--- a/core/src/main/thrift/master.thrift
+++ b/core/src/main/thrift/master.thrift
@@ -48,6 +48,28 @@
   6:double progress
 }
 
+enum BulkImportState {
+   INITIAL
+   # master moves the files into the accumulo area
+   MOVING
+   # slave tserver examines the index of the file
+   PROCESSING
+   # slave tserver assigns the file to tablets
+   ASSIGNING
+   # tserver incorporates file into tablet
+   LOADING
+   # master moves error files into the error directory
+   COPY_FILES
+   # flags and locks removed
+   CLEANUP
+}
+
+struct BulkImportStatus {
+  1:i64 startTime
+  2:string filename
+  3:BulkImportState state
+}
+
 struct TabletServerStatus {
   1:map<string, TableInfo> tableMap
   2:i64 lastContact
@@ -62,6 +84,7 @@
   14:list<RecoveryStatus> logSorts
   15:i64 flushs
   16:i64 syncs
+  17:list<BulkImportStatus> bulkImports
 }
 
 enum MasterState {
@@ -95,6 +118,7 @@
   7:i32 unassignedTablets
   9:set<string> serversShuttingDown
   10:list<DeadServer> deadTabletServers
+  11:list<BulkImportStatus> bulkImports
 }
 
 struct TabletSplit {
diff --git a/core/src/main/thrift/tabletserver.thrift b/core/src/main/thrift/tabletserver.thrift
index 4a31036..7697a2d 100644
--- a/core/src/main/thrift/tabletserver.thrift
+++ b/core/src/main/thrift/tabletserver.thrift
@@ -31,6 +31,10 @@
   1:data.TKeyExtent extent
 }
 
+exception TSampleNotPresentException {
+  1:data.TKeyExtent extent
+}
+
 exception NoSuchScanIDException {
 }
 
@@ -86,6 +90,7 @@
     12:map<string, map<string, string>> ssio  /* Server Side Iterator Options */
     13:list<binary> authorizations
     14:optional i64 scanId
+    15:string classLoaderContext /* name of the classloader context */
 }
 
 enum CompactionType {
@@ -136,6 +141,18 @@
    1:list<TIteratorSetting> iterators;
 }
 
+struct TSamplerConfiguration {
+   1:string className
+   2:map<string, string> options
+}
+
+enum TUnloadTabletGoal {
+   UNKNOWN,
+   UNASSIGNED,
+   SUSPENDED,
+   DELETED
+}
+
 service TabletClientService extends client.ClientService {
   // scan a range of keys
   data.InitialScan startScan(11:trace.TInfo tinfo,
@@ -149,9 +166,12 @@
                              8:list<binary> authorizations
                              9:bool waitForWrites,
                              10:bool isolated,
-                             12:i64 readaheadThreshold)  throws (1:client.ThriftSecurityException sec, 2:NotServingTabletException nste, 3:TooManyFilesException tmfe),
+                             12:i64 readaheadThreshold,
+                             13:TSamplerConfiguration samplerConfig,
+                             14:i64 batchTimeOut,
+                             15:string classLoaderContext /* name of the classloader context */)  throws (1:client.ThriftSecurityException sec, 2:NotServingTabletException nste, 3:TooManyFilesException tmfe, 4:TSampleNotPresentException tsnpe),
                              
-  data.ScanResult continueScan(2:trace.TInfo tinfo, 1:data.ScanID scanID)  throws (1:NoSuchScanIDException nssi, 2:NotServingTabletException nste, 3:TooManyFilesException tmfe),
+  data.ScanResult continueScan(2:trace.TInfo tinfo, 1:data.ScanID scanID)  throws (1:NoSuchScanIDException nssi, 2:NotServingTabletException nste, 3:TooManyFilesException tmfe, 4:TSampleNotPresentException tsnpe),
   oneway void closeScan(2:trace.TInfo tinfo, 1:data.ScanID scanID),
 
   // scan over a series of ranges
@@ -161,9 +181,12 @@
                                   3:list<data.TColumn> columns,
                                   4:list<data.IterInfo> ssiList,
                                   5:map<string, map<string, string>> ssio,
-                                  6:list<binary> authorizations
-                                  7:bool waitForWrites)  throws (1:client.ThriftSecurityException sec),
-  data.MultiScanResult continueMultiScan(2:trace.TInfo tinfo, 1:data.ScanID scanID) throws (1:NoSuchScanIDException nssi),
+                                  6:list<binary> authorizations,
+                                  7:bool waitForWrites,
+                                  9:TSamplerConfiguration samplerConfig,
+                                  10:i64 batchTimeOut,
+                                  11:string classLoaderContext /* name of the classloader context */)  throws (1:client.ThriftSecurityException sec, 2:TSampleNotPresentException tsnpe),
+  data.MultiScanResult continueMultiScan(2:trace.TInfo tinfo, 1:data.ScanID scanID) throws (1:NoSuchScanIDException nssi, 2:TSampleNotPresentException tsnpe),
   void closeMultiScan(2:trace.TInfo tinfo, 1:data.ScanID scanID) throws (1:NoSuchScanIDException nssi),
   
   //the following calls support a batch update to multiple tablets on a tablet server
@@ -177,7 +200,7 @@
             2:NotServingTabletException nste, 
             3:ConstraintViolationException cve),
 
-  data.TConditionalSession startConditionalUpdate(1:trace.TInfo tinfo, 2:security.TCredentials credentials, 3:list<binary> authorizations, 4:string tableID, 5:TDurability durability)
+  data.TConditionalSession startConditionalUpdate(1:trace.TInfo tinfo, 2:security.TCredentials credentials, 3:list<binary> authorizations, 4:string tableID, 5:TDurability durability, 6:string classLoaderContext)
      throws (1:client.ThriftSecurityException sec);
   
   list<data.TCMResult> conditionalUpdate(1:trace.TInfo tinfo, 2:data.UpdateID sessID, 3:data.CMBatch mutations, 4:list<string> symbols)
@@ -191,7 +214,7 @@
   void splitTablet(4:trace.TInfo tinfo, 1:security.TCredentials credentials, 2:data.TKeyExtent extent, 3:binary splitPoint) throws (1:client.ThriftSecurityException sec, 2:NotServingTabletException nste)
  
   oneway void loadTablet(5:trace.TInfo tinfo, 1:security.TCredentials credentials, 4:string lock, 2:data.TKeyExtent extent),
-  oneway void unloadTablet(5:trace.TInfo tinfo, 1:security.TCredentials credentials, 4:string lock, 2:data.TKeyExtent extent, 3:bool save),
+  oneway void unloadTablet(5:trace.TInfo tinfo, 1:security.TCredentials credentials, 4:string lock, 2:data.TKeyExtent extent, 6:TUnloadTabletGoal goal, 7:i64 requestTime),
   oneway void flush(4:trace.TInfo tinfo, 1:security.TCredentials credentials, 3:string lock, 2:string tableId, 5:binary startRow, 6:binary endRow),
   oneway void flushTablet(1:trace.TInfo tinfo, 2:security.TCredentials credentials, 3:string lock, 4:data.TKeyExtent extent),
   oneway void chop(1:trace.TInfo tinfo, 2:security.TCredentials credentials, 3:string lock, 4:data.TKeyExtent extent),
diff --git a/core/src/test/java/org/apache/accumulo/core/client/IteratorSettingTest.java b/core/src/test/java/org/apache/accumulo/core/client/IteratorSettingTest.java
index bbe2415..ddea0b3 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/IteratorSettingTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/IteratorSettingTest.java
@@ -104,7 +104,7 @@
 
     assertNotEquals(setting1, notEqual1);
 
-    Map<String,String> props = new HashMap<String,String>();
+    Map<String,String> props = new HashMap<>();
     props.put("foo", "bar");
     IteratorSetting notEquals2 = new IteratorSetting(100, "Combiner", Combiner.class, props);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/client/RowIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/client/RowIteratorTest.java
index 55e21d1..d8b0696 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/RowIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/RowIteratorTest.java
@@ -36,7 +36,7 @@
 public class RowIteratorTest {
 
   Iterator<Entry<Key,Value>> makeIterator(final String... args) {
-    final Map<Key,Value> result = new TreeMap<Key,Value>();
+    final Map<Key,Value> result = new TreeMap<>();
     for (String s : args) {
       final String parts[] = s.split("[ \t]");
       final Key key = new Key(parts[0], parts[1], parts[2]);
@@ -47,11 +47,11 @@
   }
 
   List<List<Entry<Key,Value>>> getRows(final Iterator<Entry<Key,Value>> iter) {
-    final List<List<Entry<Key,Value>>> result = new ArrayList<List<Entry<Key,Value>>>();
+    final List<List<Entry<Key,Value>>> result = new ArrayList<>();
     final RowIterator riter = new RowIterator(iter);
     while (riter.hasNext()) {
       final Iterator<Entry<Key,Value>> row = riter.next();
-      final List<Entry<Key,Value>> rlist = new ArrayList<Entry<Key,Value>>();
+      final List<Entry<Key,Value>> rlist = new ArrayList<>();
       while (row.hasNext())
         rlist.add(row.next());
       result.add(rlist);
diff --git a/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java b/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java
index a99f415..845439e 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/TestThrift1474.java
@@ -25,7 +25,6 @@
 
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
 import org.apache.accumulo.core.client.impl.thrift.ThriftTest;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.thrift.TException;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.protocol.TProtocol;
@@ -36,6 +35,8 @@
 import org.apache.thrift.transport.TTransport;
 import org.junit.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class TestThrift1474 {
 
   static class TestServer implements ThriftTest.Iface {
@@ -77,7 +78,7 @@
     };
     thread.start();
     while (!server.isServing()) {
-      UtilWaitThread.sleep(10);
+      sleepUninterruptibly(10, TimeUnit.MILLISECONDS);
     }
 
     TTransport transport = new TSocket("localhost", port);
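The hunk above swaps `UtilWaitThread.sleep` for Guava's `sleepUninterruptibly`. As context for the change (not part of the patch), here is a minimal stdlib-only sketch of the semantics that method provides: keep sleeping through interrupts for the full duration, then restore the thread's interrupt flag before returning.

```java
import java.util.concurrent.TimeUnit;

public class SleepDemo {
    // Minimal re-implementation of the behavior Guava's
    // Uninterruptibles.sleepUninterruptibly provides; a sketch, not the
    // actual Guava code.
    public static void sleepUninterruptibly(long duration, TimeUnit unit) {
        boolean interrupted = false;
        try {
            long remainingNanos = unit.toNanos(duration);
            long end = System.nanoTime() + remainingNanos;
            while (remainingNanos > 0) {
                try {
                    TimeUnit.NANOSECONDS.sleep(remainingNanos);
                    remainingNanos = 0;
                } catch (InterruptedException e) {
                    interrupted = true;                  // remember, keep sleeping
                    remainingNanos = end - System.nanoTime();
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt();      // restore the flag
            }
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        sleepUninterruptibly(10, TimeUnit.MILLISECONDS);
        System.out.println(System.nanoTime() - start >= 10_000_000L);
    }
}
```

This is why the busy-wait loop in the test no longer needs to handle `InterruptedException` itself.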
diff --git a/core/src/test/java/org/apache/accumulo/core/client/ZooKeeperInstanceTest.java b/core/src/test/java/org/apache/accumulo/core/client/ZooKeeperInstanceTest.java
index 41e0b92..aecc947 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/ZooKeeperInstanceTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/ZooKeeperInstanceTest.java
@@ -139,7 +139,7 @@
     replay(config);
     zki = new ZooKeeperInstance(config, zcf);
     expect(zc.get(Constants.ZROOT + "/" + IID_STRING)).andReturn("yup".getBytes());
-    List<String> children = new java.util.ArrayList<String>();
+    List<String> children = new java.util.ArrayList<>();
     children.add("child1");
     children.add("child2");
     expect(zc.getChildren(Constants.ZROOT + Constants.ZINSTANCES)).andReturn(children);
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java
index ff9863c..4eb348e 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/ClientContextTest.java
@@ -99,7 +99,7 @@
     clientConf.addProperty(Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.getKey(), absPath);
 
     AccumuloConfiguration accClientConf = ClientContext.convertClientConfig(clientConf);
-    Map<String,String> props = new HashMap<String,String>();
+    Map<String,String> props = new HashMap<>();
     Predicate<String> all = Predicates.alwaysTrue();
     accClientConf.getProperties(props, all);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerImplTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerImplTest.java
index 95b0903..38e3c07 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerImplTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerImplTest.java
@@ -18,11 +18,9 @@
 
 import static org.junit.Assert.assertEquals;
 
-import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.security.Authorizations;
+import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
@@ -32,13 +30,11 @@
  */
 public class ScannerImplTest {
 
-  MockInstance instance;
-  ClientContext context;
+  private ClientContext context;
 
   @Before
   public void setup() {
-    instance = new MockInstance();
-    context = new ClientContext(instance, new Credentials("root", new PasswordToken("")), new ClientConfiguration());
+    context = EasyMock.createMock(ClientContext.class);
   }
 
   @Test
@@ -49,12 +45,14 @@
     s.setReadaheadThreshold(Long.MAX_VALUE);
 
     Assert.assertEquals(Long.MAX_VALUE, s.getReadaheadThreshold());
+    s.close();
   }
 
   @Test(expected = IllegalArgumentException.class)
   public void testInValidReadaheadValues() {
     Scanner s = new ScannerImpl(context, "foo", Authorizations.EMPTY);
     s.setReadaheadThreshold(-1);
+    s.close();
   }
 
   @Test
@@ -62,8 +60,10 @@
     Authorizations expected = new Authorizations("a,b");
     Scanner s = new ScannerImpl(context, "foo", expected);
     assertEquals(expected, s.getAuthorizations());
+    s.close();
   }
 
+  @SuppressWarnings("resource")
   @Test(expected = IllegalArgumentException.class)
   public void testNullAuthorizationsFails() {
     new ScannerImpl(context, "foo", null);
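The `ScannerImplTest` changes above add `s.close()` calls and a `@SuppressWarnings("resource")` where the constructor itself is expected to throw, reflecting that the scanner is now a closeable resource. A small hedged sketch (the `FakeScanner` class is hypothetical, standing in for `ScannerImpl`) of the pattern the edited tests follow:

```java
public class ResourceDemo {
    // Hypothetical stand-in for a scanner-like resource. Like the real
    // ScannerImpl in this diff, it validates its arguments in the
    // constructor and must be closed after use.
    static class FakeScanner implements AutoCloseable {
        boolean closed = false;

        FakeScanner(String auths) {
            if (auths == null)
                throw new IllegalArgumentException("auths must not be null");
        }

        @Override
        public void close() { closed = true; }
    }

    public static boolean demo() {
        FakeScanner s = new FakeScanner("a,b");
        s.close();   // explicit close, as the edited tests now do
        return s.closed;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

When the constructor throws (the `testNullAuthorizationsFails` case), no instance exists to close, which is why that one test suppresses the resource warning instead.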
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerOptionsTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerOptionsTest.java
index 920d687..cfdbe6f 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerOptionsTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/ScannerOptionsTest.java
@@ -38,44 +38,48 @@
    */
   @Test
   public void testAddRemoveIterator() throws Throwable {
-    ScannerOptions options = new ScannerOptions();
-    options.addScanIterator(new IteratorSetting(1, "NAME", WholeRowIterator.class));
-    assertEquals(1, options.serverSideIteratorList.size());
-    options.removeScanIterator("NAME");
-    assertEquals(0, options.serverSideIteratorList.size());
+    try (ScannerOptions options = new ScannerOptions()) {
+      options.addScanIterator(new IteratorSetting(1, "NAME", WholeRowIterator.class));
+      assertEquals(1, options.serverSideIteratorList.size());
+      options.removeScanIterator("NAME");
+      assertEquals(0, options.serverSideIteratorList.size());
+    }
   }
 
   @Test
   public void testIteratorConflict() {
-    ScannerOptions options = new ScannerOptions();
-    options.addScanIterator(new IteratorSetting(1, "NAME", DebugIterator.class));
-    try {
-      options.addScanIterator(new IteratorSetting(2, "NAME", DebugIterator.class));
-      fail();
-    } catch (IllegalArgumentException e) {}
-    try {
-      options.addScanIterator(new IteratorSetting(1, "NAME2", DebugIterator.class));
-      fail();
-    } catch (IllegalArgumentException e) {}
+    try (ScannerOptions options = new ScannerOptions()) {
+      options.addScanIterator(new IteratorSetting(1, "NAME", DebugIterator.class));
+      try {
+        options.addScanIterator(new IteratorSetting(2, "NAME", DebugIterator.class));
+        fail();
+      } catch (IllegalArgumentException e) {}
+      try {
+        options.addScanIterator(new IteratorSetting(1, "NAME2", DebugIterator.class));
+        fail();
+      } catch (IllegalArgumentException e) {}
+    }
   }
 
   @Test
   public void testFetchColumn() {
-    ScannerOptions options = new ScannerOptions();
-    assertEquals(0, options.getFetchedColumns().size());
-    IteratorSetting.Column col = new IteratorSetting.Column(new Text("family"), new Text("qualifier"));
-    options.fetchColumn(col);
-    SortedSet<Column> fetchedColumns = options.getFetchedColumns();
-    assertEquals(1, fetchedColumns.size());
-    Column fetchCol = fetchedColumns.iterator().next();
-    assertEquals(col.getColumnFamily(), new Text(fetchCol.getColumnFamily()));
-    assertEquals(col.getColumnQualifier(), new Text(fetchCol.getColumnQualifier()));
+    try (ScannerOptions options = new ScannerOptions()) {
+      assertEquals(0, options.getFetchedColumns().size());
+      IteratorSetting.Column col = new IteratorSetting.Column(new Text("family"), new Text("qualifier"));
+      options.fetchColumn(col);
+      SortedSet<Column> fetchedColumns = options.getFetchedColumns();
+      assertEquals(1, fetchedColumns.size());
+      Column fetchCol = fetchedColumns.iterator().next();
+      assertEquals(col.getColumnFamily(), new Text(fetchCol.getColumnFamily()));
+      assertEquals(col.getColumnQualifier(), new Text(fetchCol.getColumnQualifier()));
+    }
   }
 
   @Test(expected = IllegalArgumentException.class)
   public void testFetchNullColumn() {
-    ScannerOptions options = new ScannerOptions();
-    // Require a non-null instance of Column
-    options.fetchColumn((IteratorSetting.Column) null);
+    try (ScannerOptions options = new ScannerOptions()) {
+      // Require a non-null instance of Column
+      options.fetchColumn((IteratorSetting.Column) null);
+    }
   }
 }
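The `ScannerOptionsTest` rewrite above wraps each test body in try-with-resources. One property that makes this safe even for `testFetchNullColumn`, where an exception is thrown inside the block: `close()` still runs before the exception propagates. A sketch with a hypothetical `Options` class standing in for `ScannerOptions`:

```java
public class TryWithResourcesDemo {
    static boolean closed = false;

    // Hypothetical stand-in for ScannerOptions; close() runs even when
    // the body of the try-with-resources block throws.
    static class Options implements AutoCloseable {
        void fetchColumn(Object col) {
            if (col == null)
                throw new IllegalArgumentException("col must not be null");
        }

        @Override
        public void close() { closed = true; }
    }

    public static boolean threwAndClosed() {
        try {
            try (Options options = new Options()) {
                options.fetchColumn(null);   // throws inside the block
            }
        } catch (IllegalArgumentException expected) {
            return closed;                   // close() already ran
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(threwAndClosed());
    }
}
```

So `@Test(expected = IllegalArgumentException.class)` still sees the exception, and the resource is not leaked.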
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
index 7a56d1d..1d699c2 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsHelperTest.java
@@ -35,8 +35,10 @@
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.admin.CompactionConfig;
 import org.apache.accumulo.core.client.admin.DiskUsage;
+import org.apache.accumulo.core.client.admin.Locations;
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 import org.apache.accumulo.core.client.admin.TimeType;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.security.Authorizations;
@@ -47,7 +49,7 @@
 public class TableOperationsHelperTest {
 
   static class Tester extends TableOperationsHelper {
-    Map<String,Map<String,String>> settings = new HashMap<String,Map<String,String>>();
+    Map<String,Map<String,String>> settings = new HashMap<>();
 
     @Override
     public SortedSet<String> list() {
@@ -226,6 +228,27 @@
         TableNotFoundException {
       return false;
     }
+
+    @Override
+    public void setSamplerConfiguration(String tableName, SamplerConfiguration samplerConfiguration) throws TableNotFoundException, AccumuloException,
+        AccumuloSecurityException {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public void clearSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public SamplerConfiguration getSamplerConfiguration(String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public Locations locate(String tableName, Collection<Range> ranges) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
+      throw new UnsupportedOperationException();
+    }
   }
 
   protected TableOperationsHelper getHelper() {
@@ -233,12 +256,12 @@
   }
 
   void check(TableOperationsHelper t, String tablename, String[] values) throws Exception {
-    Map<String,String> expected = new TreeMap<String,String>();
+    Map<String,String> expected = new TreeMap<>();
     for (String value : values) {
       String parts[] = value.split("=", 2);
       expected.put(parts[0], parts[1]);
     }
-    Map<String,String> actual = new TreeMap<String,String>();
+    Map<String,String> actual = new TreeMap<>();
     for (Entry<String,String> entry : t.getProperties(tablename)) {
       actual.put(entry.getKey(), entry.getValue());
     }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsImplTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsImplTest.java
index 523d157..825060b 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsImplTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/TableOperationsImplTest.java
@@ -27,7 +27,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.hadoop.io.Text;
 import org.easymock.EasyMock;
 import org.junit.Test;
 
@@ -45,7 +44,7 @@
     Connector connector = EasyMock.createMock(Connector.class);
     Scanner scanner = EasyMock.createMock(Scanner.class);
 
-    Range range = new KeyExtent(new Text("1"), null, null).toMetadataRange();
+    Range range = new KeyExtent("1", null, null).toMetadataRange();
 
     String user = "root";
     PasswordToken token = new PasswordToken("password");
@@ -74,6 +73,7 @@
     // IsolatedScanner -- make the verification pass, not really relevant
     EasyMock.expect(scanner.getRange()).andReturn(range).anyTimes();
     EasyMock.expect(scanner.getTimeout(TimeUnit.MILLISECONDS)).andReturn(Long.MAX_VALUE);
+    EasyMock.expect(scanner.getBatchTimeout(TimeUnit.MILLISECONDS)).andReturn(Long.MAX_VALUE);
     EasyMock.expect(scanner.getBatchSize()).andReturn(1000);
     EasyMock.expect(scanner.getReadaheadThreshold()).andReturn(100l);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java
index 2e78bd8..bab52f6 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/TabletLocatorImplTest.java
@@ -63,10 +63,10 @@
 public class TabletLocatorImplTest {
 
   private static final KeyExtent RTE = RootTable.EXTENT;
-  private static final KeyExtent MTE = new KeyExtent(new Text(MetadataTable.ID), null, RTE.getEndRow());
+  private static final KeyExtent MTE = new KeyExtent(MetadataTable.ID, null, RTE.getEndRow());
 
   static KeyExtent nke(String t, String er, String per) {
-    return new KeyExtent(new Text(t), er == null ? null : new Text(er), per == null ? null : new Text(per));
+    return new KeyExtent(t, er == null ? null : new Text(er), per == null ? null : new Text(per));
   }
 
   static Range nr(String k1, boolean si, String k2, boolean ei) {
@@ -88,13 +88,13 @@
   @SuppressWarnings("unchecked")
   static Map<String,Map<KeyExtent,List<Range>>> createExpectedBinnings(Object... data) {
 
-    Map<String,Map<KeyExtent,List<Range>>> expBinnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+    Map<String,Map<KeyExtent,List<Range>>> expBinnedRanges = new HashMap<>();
 
     for (int i = 0; i < data.length; i += 2) {
       String loc = (String) data[i];
       Object binData[] = (Object[]) data[i + 1];
 
-      HashMap<KeyExtent,List<Range>> binnedKE = new HashMap<KeyExtent,List<Range>>();
+      HashMap<KeyExtent,List<Range>> binnedKE = new HashMap<>();
 
       expBinnedRanges.put(loc, binnedKE);
 
@@ -110,7 +110,7 @@
   }
 
   static TreeMap<KeyExtent,TabletLocation> createMetaCacheKE(Object... data) {
-    TreeMap<KeyExtent,TabletLocation> mcke = new TreeMap<KeyExtent,TabletLocation>();
+    TreeMap<KeyExtent,TabletLocation> mcke = new TreeMap<>();
 
     for (int i = 0; i < data.length; i += 2) {
       KeyExtent ke = (KeyExtent) data[i];
@@ -124,7 +124,7 @@
   static TreeMap<Text,TabletLocation> createMetaCache(Object... data) {
     TreeMap<KeyExtent,TabletLocation> mcke = createMetaCacheKE(data);
 
-    TreeMap<Text,TabletLocation> mc = new TreeMap<Text,TabletLocation>(TabletLocatorImpl.endRowComparator);
+    TreeMap<Text,TabletLocation> mc = new TreeMap<>(TabletLocatorImpl.endRowComparator);
 
     for (Entry<KeyExtent,TabletLocation> entry : mcke.entrySet()) {
       if (entry.getKey().getEndRow() == null)
@@ -143,8 +143,8 @@
     TestTabletLocationObtainer ttlo = new TestTabletLocationObtainer(tservers);
 
     RootTabletLocator rtl = new TestRootTabletLocator();
-    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(new Text(MetadataTable.ID), rtl, ttlo, new YesLockChecker());
-    TabletLocatorImpl tab1TabletCache = new TabletLocatorImpl(new Text(table), rootTabletCache, ttlo, tslc);
+    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(MetadataTable.ID, rtl, ttlo, new YesLockChecker());
+    TabletLocatorImpl tab1TabletCache = new TabletLocatorImpl(table, rootTabletCache, ttlo, tslc);
 
     setLocation(tservers, rootTabLoc, RTE, MTE, metaTabLoc);
 
@@ -182,18 +182,18 @@
   private void runTest(Text tableName, List<Range> ranges, TabletLocatorImpl tab1TabletCache, Map<String,Map<KeyExtent,List<Range>>> expected,
       List<Range> efailures) throws Exception {
 
-    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<String,Map<KeyExtent,List<Range>>>();
+    Map<String,Map<KeyExtent,List<Range>>> binnedRanges = new HashMap<>();
     List<Range> f = tab1TabletCache.binRanges(context, ranges, binnedRanges);
     assertEquals(expected, binnedRanges);
 
-    HashSet<Range> f1 = new HashSet<Range>(f);
-    HashSet<Range> f2 = new HashSet<Range>(efailures);
+    HashSet<Range> f1 = new HashSet<>(f);
+    HashSet<Range> f2 = new HashSet<>(efailures);
 
     assertEquals(f2, f1);
   }
 
   static Set<KeyExtent> nkes(KeyExtent... extents) {
-    HashSet<KeyExtent> kes = new HashSet<KeyExtent>();
+    HashSet<KeyExtent> kes = new HashSet<>();
 
     for (KeyExtent keyExtent : extents) {
       kes.add(keyExtent);
@@ -205,11 +205,11 @@
   static void runTest(TreeMap<Text,TabletLocation> mc, KeyExtent remove, Set<KeyExtent> expected) {
     // copy so same metaCache can be used for multiple test
 
-    mc = new TreeMap<Text,TabletLocation>(mc);
+    mc = new TreeMap<>(mc);
 
     TabletLocatorImpl.removeOverlapping(mc, remove);
 
-    HashSet<KeyExtent> eic = new HashSet<KeyExtent>();
+    HashSet<KeyExtent> eic = new HashSet<>();
     for (TabletLocation tl : mc.values()) {
       eic.add(tl.tablet_extent);
     }
@@ -235,14 +235,14 @@
   }
 
   private void runTest(TabletLocatorImpl metaCache, List<Mutation> ml, Map<String,Map<KeyExtent,List<String>>> emb, String... efailures) throws Exception {
-    Map<String,TabletServerMutations<Mutation>> binnedMutations = new HashMap<String,TabletServerMutations<Mutation>>();
-    List<Mutation> afailures = new ArrayList<Mutation>();
+    Map<String,TabletServerMutations<Mutation>> binnedMutations = new HashMap<>();
+    List<Mutation> afailures = new ArrayList<>();
     metaCache.binMutations(context, ml, binnedMutations, afailures);
 
     verify(emb, binnedMutations);
 
-    ArrayList<String> afs = new ArrayList<String>();
-    ArrayList<String> efs = new ArrayList<String>(Arrays.asList(efailures));
+    ArrayList<String> afs = new ArrayList<>();
+    ArrayList<String> efs = new ArrayList<>(Arrays.asList(efailures));
 
     for (Mutation mutation : afailures) {
       afs.add(new String(mutation.getRow()));
@@ -265,8 +265,8 @@
       assertEquals(etb.keySet(), atb.getMutations().keySet());
 
       for (KeyExtent ke : etb.keySet()) {
-        ArrayList<String> eRows = new ArrayList<String>(etb.get(ke));
-        ArrayList<String> aRows = new ArrayList<String>();
+        ArrayList<String> eRows = new ArrayList<>(etb.get(ke));
+        ArrayList<String> aRows = new ArrayList<>();
 
         for (Mutation m : atb.getMutations().get(ke)) {
           aRows.add(new String(m.getRow()));
@@ -283,7 +283,7 @@
 
   static Map<String,Map<KeyExtent,List<String>>> cemb(Object[]... ols) {
 
-    Map<String,Map<KeyExtent,List<String>>> emb = new HashMap<String,Map<KeyExtent,List<String>>>();
+    Map<String,Map<KeyExtent,List<String>>> emb = new HashMap<>();
 
     for (Object[] ol : ols) {
       String row = (String) ol[0];
@@ -292,13 +292,13 @@
 
       Map<KeyExtent,List<String>> tb = emb.get(server);
       if (tb == null) {
-        tb = new HashMap<KeyExtent,List<String>>();
+        tb = new HashMap<>();
         emb.put(server, tb);
       }
 
       List<String> rl = tb.get(ke);
       if (rl == null) {
-        rl = new ArrayList<String>();
+        rl = new ArrayList<>();
         tb.put(ke, rl);
       }
 
@@ -476,7 +476,7 @@
   }
 
   static class TServers {
-    private final Map<String,Map<KeyExtent,SortedMap<Key,Value>>> tservers = new HashMap<String,Map<KeyExtent,SortedMap<Key,Value>>>();
+    private final Map<String,Map<KeyExtent,SortedMap<Key,Value>>> tservers = new HashMap<>();
   }
 
   static class TestTabletLocationObtainer implements TabletLocationObtainer {
@@ -523,7 +523,7 @@
     public List<TabletLocation> lookupTablets(ClientContext context, String tserver, Map<KeyExtent,List<Range>> map, TabletLocator parent)
         throws AccumuloSecurityException {
 
-      ArrayList<TabletLocation> list = new ArrayList<TabletLocation>();
+      ArrayList<TabletLocation> list = new ArrayList<>();
 
       Map<KeyExtent,SortedMap<Key,Value>> tablets = tservers.get(tserver);
 
@@ -532,10 +532,10 @@
         return list;
       }
 
-      TreeMap<Key,Value> results = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> results = new TreeMap<>();
 
       Set<Entry<KeyExtent,List<Range>>> es = map.entrySet();
-      List<KeyExtent> failures = new ArrayList<KeyExtent>();
+      List<KeyExtent> failures = new ArrayList<>();
       for (Entry<KeyExtent,List<Range>> entry : es) {
         SortedMap<Key,Value> tabletData = tablets.get(entry.getKey());
 
@@ -601,13 +601,13 @@
   static void createEmptyTablet(TServers tservers, String server, KeyExtent tablet) {
     Map<KeyExtent,SortedMap<Key,Value>> tablets = tservers.tservers.get(server);
     if (tablets == null) {
-      tablets = new HashMap<KeyExtent,SortedMap<Key,Value>>();
+      tablets = new HashMap<>();
       tservers.tservers.put(server, tablets);
     }
 
     SortedMap<Key,Value> tabletData = tablets.get(tablet);
     if (tabletData == null) {
-      tabletData = new TreeMap<Key,Value>();
+      tabletData = new TreeMap<>();
       tablets.put(tablet, tabletData);
     } else if (tabletData.size() > 0) {
       throw new RuntimeException("Asked for empty tablet, but non empty tablet exists");
@@ -634,13 +634,13 @@
   static void setLocation(TServers tservers, String server, KeyExtent tablet, KeyExtent ke, String location, String instance) {
     Map<KeyExtent,SortedMap<Key,Value>> tablets = tservers.tservers.get(server);
     if (tablets == null) {
-      tablets = new HashMap<KeyExtent,SortedMap<Key,Value>>();
+      tablets = new HashMap<>();
       tservers.tservers.put(server, tablets);
     }
 
     SortedMap<Key,Value> tabletData = tablets.get(tablet);
     if (tabletData == null) {
-      tabletData = new TreeMap<Key,Value>();
+      tabletData = new TreeMap<>();
       tablets.put(tablet, tabletData);
     }
 
@@ -692,8 +692,8 @@
     TestTabletLocationObtainer ttlo = new TestTabletLocationObtainer(tservers);
 
     RootTabletLocator rtl = new TestRootTabletLocator();
-    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(new Text(MetadataTable.ID), rtl, ttlo, new YesLockChecker());
-    TabletLocatorImpl tab1TabletCache = new TabletLocatorImpl(new Text("tab1"), rootTabletCache, ttlo, new YesLockChecker());
+    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(MetadataTable.ID, rtl, ttlo, new YesLockChecker());
+    TabletLocatorImpl tab1TabletCache = new TabletLocatorImpl("tab1", rootTabletCache, ttlo, new YesLockChecker());
 
     locateTabletTest(tab1TabletCache, "r1", null, null);
 
@@ -770,8 +770,8 @@
     locateTabletTest(tab1TabletCache, "r", tab1e22, "tserver3");
 
     // simulate the metadata table splitting
-    KeyExtent mte1 = new KeyExtent(new Text(MetadataTable.ID), tab1e21.getMetadataEntry(), RTE.getEndRow());
-    KeyExtent mte2 = new KeyExtent(new Text(MetadataTable.ID), null, tab1e21.getMetadataEntry());
+    KeyExtent mte1 = new KeyExtent(MetadataTable.ID, tab1e21.getMetadataEntry(), RTE.getEndRow());
+    KeyExtent mte2 = new KeyExtent(MetadataTable.ID, null, tab1e21.getMetadataEntry());
 
     setLocation(tservers, "tserver4", RTE, mte1, "tserver5");
     setLocation(tservers, "tserver4", RTE, mte2, "tserver6");
@@ -809,8 +809,8 @@
     locateTabletTest(tab1TabletCache, "r", tab1e22, "tserver9");
 
     // simulate a hole in the metadata, caused by a partial split
-    KeyExtent mte11 = new KeyExtent(new Text(MetadataTable.ID), tab1e1.getMetadataEntry(), RTE.getEndRow());
-    KeyExtent mte12 = new KeyExtent(new Text(MetadataTable.ID), tab1e21.getMetadataEntry(), tab1e1.getMetadataEntry());
+    KeyExtent mte11 = new KeyExtent(MetadataTable.ID, tab1e1.getMetadataEntry(), RTE.getEndRow());
+    KeyExtent mte12 = new KeyExtent(MetadataTable.ID, tab1e21.getMetadataEntry(), tab1e1.getMetadataEntry());
     deleteServer(tservers, "tserver10");
     setLocation(tservers, "tserver4", RTE, mte12, "tserver10");
     setLocation(tservers, "tserver10", mte12, tab1e21, "tserver12");
@@ -1228,22 +1228,22 @@
   @Test
   public void testBug1() throws Exception {
     // a bug that occurred while running continuous ingest
-    KeyExtent mte1 = new KeyExtent(new Text(MetadataTable.ID), new Text("0;0bc"), RTE.getEndRow());
-    KeyExtent mte2 = new KeyExtent(new Text(MetadataTable.ID), null, new Text("0;0bc"));
+    KeyExtent mte1 = new KeyExtent(MetadataTable.ID, new Text("0;0bc"), RTE.getEndRow());
+    KeyExtent mte2 = new KeyExtent(MetadataTable.ID, null, new Text("0;0bc"));
 
     TServers tservers = new TServers();
     TestTabletLocationObtainer ttlo = new TestTabletLocationObtainer(tservers);
 
     RootTabletLocator rtl = new TestRootTabletLocator();
-    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(new Text(MetadataTable.ID), rtl, ttlo, new YesLockChecker());
-    TabletLocatorImpl tab0TabletCache = new TabletLocatorImpl(new Text("0"), rootTabletCache, ttlo, new YesLockChecker());
+    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(MetadataTable.ID, rtl, ttlo, new YesLockChecker());
+    TabletLocatorImpl tab0TabletCache = new TabletLocatorImpl("0", rootTabletCache, ttlo, new YesLockChecker());
 
     setLocation(tservers, "tserver1", RTE, mte1, "tserver2");
     setLocation(tservers, "tserver1", RTE, mte2, "tserver3");
 
     // create two tablets that straddle a metadata split point
-    KeyExtent ke1 = new KeyExtent(new Text("0"), new Text("0bbf20e"), null);
-    KeyExtent ke2 = new KeyExtent(new Text("0"), new Text("0bc0756"), new Text("0bbf20e"));
+    KeyExtent ke1 = new KeyExtent("0", new Text("0bbf20e"), null);
+    KeyExtent ke2 = new KeyExtent("0", new Text("0bc0756"), new Text("0bbf20e"));
 
     setLocation(tservers, "tserver2", mte1, ke1, "tserver4");
     setLocation(tservers, "tserver3", mte2, ke2, "tserver5");
@@ -1255,21 +1255,21 @@
   @Test
   public void testBug2() throws Exception {
     // a bug that occurred while running a functional test
-    KeyExtent mte1 = new KeyExtent(new Text(MetadataTable.ID), new Text("~"), RTE.getEndRow());
-    KeyExtent mte2 = new KeyExtent(new Text(MetadataTable.ID), null, new Text("~"));
+    KeyExtent mte1 = new KeyExtent(MetadataTable.ID, new Text("~"), RTE.getEndRow());
+    KeyExtent mte2 = new KeyExtent(MetadataTable.ID, null, new Text("~"));
 
     TServers tservers = new TServers();
     TestTabletLocationObtainer ttlo = new TestTabletLocationObtainer(tservers);
 
     RootTabletLocator rtl = new TestRootTabletLocator();
-    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(new Text(MetadataTable.ID), rtl, ttlo, new YesLockChecker());
-    TabletLocatorImpl tab0TabletCache = new TabletLocatorImpl(new Text("0"), rootTabletCache, ttlo, new YesLockChecker());
+    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(MetadataTable.ID, rtl, ttlo, new YesLockChecker());
+    TabletLocatorImpl tab0TabletCache = new TabletLocatorImpl("0", rootTabletCache, ttlo, new YesLockChecker());
 
     setLocation(tservers, "tserver1", RTE, mte1, "tserver2");
     setLocation(tservers, "tserver1", RTE, mte2, "tserver3");
 
     // create the ~ tablet so it exists
-    Map<KeyExtent,SortedMap<Key,Value>> ts3 = new HashMap<KeyExtent,SortedMap<Key,Value>>();
+    Map<KeyExtent,SortedMap<Key,Value>> ts3 = new HashMap<>();
     ts3.put(mte2, new TreeMap<Key,Value>());
     tservers.tservers.put("tserver3", ts3);
 
@@ -1280,21 +1280,21 @@
   // this test reproduces a problem where empty metadata tablets, that were created by user tablets being merged away, caused locating tablets to fail
   @Test
   public void testBug3() throws Exception {
-    KeyExtent mte1 = new KeyExtent(new Text(MetadataTable.ID), new Text("1;c"), RTE.getEndRow());
-    KeyExtent mte2 = new KeyExtent(new Text(MetadataTable.ID), new Text("1;f"), new Text("1;c"));
-    KeyExtent mte3 = new KeyExtent(new Text(MetadataTable.ID), new Text("1;j"), new Text("1;f"));
-    KeyExtent mte4 = new KeyExtent(new Text(MetadataTable.ID), new Text("1;r"), new Text("1;j"));
-    KeyExtent mte5 = new KeyExtent(new Text(MetadataTable.ID), null, new Text("1;r"));
+    KeyExtent mte1 = new KeyExtent(MetadataTable.ID, new Text("1;c"), RTE.getEndRow());
+    KeyExtent mte2 = new KeyExtent(MetadataTable.ID, new Text("1;f"), new Text("1;c"));
+    KeyExtent mte3 = new KeyExtent(MetadataTable.ID, new Text("1;j"), new Text("1;f"));
+    KeyExtent mte4 = new KeyExtent(MetadataTable.ID, new Text("1;r"), new Text("1;j"));
+    KeyExtent mte5 = new KeyExtent(MetadataTable.ID, null, new Text("1;r"));
 
-    KeyExtent ke1 = new KeyExtent(new Text("1"), null, null);
+    KeyExtent ke1 = new KeyExtent("1", null, null);
 
     TServers tservers = new TServers();
     TestTabletLocationObtainer ttlo = new TestTabletLocationObtainer(tservers);
 
     RootTabletLocator rtl = new TestRootTabletLocator();
 
-    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(new Text(MetadataTable.ID), rtl, ttlo, new YesLockChecker());
-    TabletLocatorImpl tab0TabletCache = new TabletLocatorImpl(new Text("1"), rootTabletCache, ttlo, new YesLockChecker());
+    TabletLocatorImpl rootTabletCache = new TabletLocatorImpl(MetadataTable.ID, rtl, ttlo, new YesLockChecker());
+    TabletLocatorImpl tab0TabletCache = new TabletLocatorImpl("1", rootTabletCache, ttlo, new YesLockChecker());
 
     setLocation(tservers, "tserver1", RTE, mte1, "tserver2");
     setLocation(tservers, "tserver1", RTE, mte2, "tserver3");
@@ -1336,7 +1336,7 @@
   @Test
   public void testLostLock() throws Exception {
 
-    final HashSet<String> activeLocks = new HashSet<String>();
+    final HashSet<String> activeLocks = new HashSet<>();
 
     TServers tservers = new TServers();
     TabletLocatorImpl metaCache = createLocators(tservers, "tserver1", "tserver2", "foo", new TabletServerLockChecker() {
diff --git a/core/src/test/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderTest.java b/core/src/test/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderTest.java
index 23c223e..af4a474 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/impl/TabletServerBatchReaderTest.java
@@ -16,34 +16,32 @@
  */
 package org.apache.accumulo.core.client.impl;
 
+import static org.junit.Assert.assertEquals;
+
 import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.ClientConfiguration;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.security.Authorizations;
+import org.easymock.EasyMock;
 import org.junit.Before;
 import org.junit.Test;
 
-import static org.junit.Assert.assertEquals;
-
 public class TabletServerBatchReaderTest {
 
-  MockInstance instance;
-  ClientContext context;
+  private ClientContext context;
 
   @Before
   public void setup() {
-    instance = new MockInstance();
-    context = new ClientContext(instance, new Credentials("root", new PasswordToken("")), new ClientConfiguration());
+    context = EasyMock.createMock(ClientContext.class);
   }
 
   @Test
   public void testGetAuthorizations() {
     Authorizations expected = new Authorizations("a,b");
-    BatchScanner s = new TabletServerBatchReader(context, "foo", expected, 1);
-    assertEquals(expected, s.getAuthorizations());
+    try (BatchScanner s = new TabletServerBatchReader(context, "foo", expected, 1)) {
+      assertEquals(expected, s.getAuthorizations());
+    }
   }
 
+  @SuppressWarnings("resource")
   @Test(expected = IllegalArgumentException.class)
   public void testNullAuthorizationsFails() {
     new TabletServerBatchReader(context, "foo", null, 1);
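The change above wraps the `BatchScanner` in a try-with-resources block because it is `AutoCloseable`. A minimal self-contained sketch of that pattern (using a hypothetical `TrackingResource`, not an Accumulo class) shows why the rewrite is safer: the resource is closed automatically when the block exits, even on exception.

```java
// Hypothetical stand-in for an AutoCloseable resource such as BatchScanner.
class TrackingResource implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true;
    }
}

public class TryWithResourcesDemo {
    // Returns true if close() ran automatically when the try block exited.
    public static boolean demo() {
        TrackingResource r = new TrackingResource();
        try (TrackingResource held = r) {
            // use the resource here, as the test uses the scanner
        }
        return r.closed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints: true
    }
}
```

The `@SuppressWarnings("resource")` on the negative test below is needed precisely because that test intentionally constructs the scanner outside such a block, expecting the constructor to throw before a resource is acquired.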
diff --git a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/FloatLexicoderTest.java b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/FloatLexicoderTest.java
new file mode 100644
index 0000000..7ac683a
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/FloatLexicoderTest.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.client.lexicoder;
+
+import org.apache.accumulo.core.client.lexicoder.impl.AbstractLexicoderTest;
+
+import java.util.Arrays;
+
+/**
+ * Tests that {@link FloatLexicoder} encodings sort in the same order as the floats themselves.
+ */
+public class FloatLexicoderTest extends AbstractLexicoderTest {
+
+  public void testSortOrder() {
+    assertSortOrder(
+        new FloatLexicoder(),
+        Arrays.asList(Float.MIN_VALUE, Float.MAX_VALUE, Float.NEGATIVE_INFINITY, Float.POSITIVE_INFINITY, 0.0F, 0.01F, 0.001F, 1.0F, -1.0F, -1.1F, -1.01F,
+            Math.nextUp(Float.NEGATIVE_INFINITY), Math.nextAfter(0.0F, Float.NEGATIVE_INFINITY), Math.nextAfter(Float.MAX_VALUE, Float.NEGATIVE_INFINITY)));
+
+  }
+
+  public void testDecode() {
+    assertDecodes(new FloatLexicoder(), Float.MIN_VALUE);
+    assertDecodes(new FloatLexicoder(), Math.nextUp(Float.NEGATIVE_INFINITY));
+    assertDecodes(new FloatLexicoder(), -1.0F);
+    assertDecodes(new FloatLexicoder(), 0.0F);
+    assertDecodes(new FloatLexicoder(), 1.0F);
+    assertDecodes(new FloatLexicoder(), Math.nextAfter(Float.POSITIVE_INFINITY, 0.0F));
+    assertDecodes(new FloatLexicoder(), Float.MAX_VALUE);
+  }
+}
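The sort-order assertions above rely on the float encoding being order-preserving under byte comparison. A minimal sketch of the standard bit-manipulation trick (not Accumulo's actual `FloatLexicoder` source, which delegates to an integer lexicoder) illustrates the idea: flip all bits of negative floats and only the sign bit of non-negative ones, so unsigned comparison of the results matches numeric order.

```java
public class FloatOrderDemo {
    // Map a float to an int whose unsigned order matches the float's numeric order.
    static int encode(float f) {
        int bits = Float.floatToRawIntBits(f);
        // Negative floats: invert every bit; non-negative: flip the sign bit.
        return bits < 0 ? ~bits : bits ^ 0x80000000;
    }

    // True if the encodings of a strictly increasing sequence are strictly increasing.
    static boolean ordered(float[] vals) {
        for (int i = 1; i < vals.length; i++)
            if (Integer.compareUnsigned(encode(vals[i - 1]), encode(vals[i])) >= 0)
                return false;
        return true;
    }

    public static void main(String[] args) {
        float[] vals = {Float.NEGATIVE_INFINITY, -1.0f, -0.001f, 0.0f, 0.001f, 1.0f, Float.POSITIVE_INFINITY};
        System.out.println(ordered(vals)); // prints: true
    }
}
```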
diff --git a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/LexicoderTest.java b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/LexicoderTest.java
index 6d06f97..faf3c23 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/LexicoderTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/LexicoderTest.java
@@ -32,8 +32,8 @@
   }
 
   public <T extends Comparable<T>> void assertSortOrder(Lexicoder<T> lexicoder, Comparator<T> comp, List<T> data) {
-    List<T> list = new ArrayList<T>();
-    List<Text> encList = new ArrayList<Text>();
+    List<T> list = new ArrayList<>();
+    List<Text> encList = new ArrayList<>();
 
     for (T d : data) {
       list.add(d);
@@ -47,7 +47,7 @@
 
     Collections.sort(encList);
 
-    List<T> decodedList = new ArrayList<T>();
+    List<T> decodedList = new ArrayList<>();
 
     for (Text t : encList) {
       decodedList.add(lexicoder.decode(TextUtil.getBytes(t)));
diff --git a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ListLexicoderTest.java b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ListLexicoderTest.java
index 6353863..22cdc50 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ListLexicoderTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ListLexicoderTest.java
@@ -26,11 +26,11 @@
 
 public class ListLexicoderTest extends AbstractLexicoderTest {
 
-  private List<Long> data1 = new ArrayList<Long>();
-  private List<Long> data2 = new ArrayList<Long>();
-  private List<Long> data3 = new ArrayList<Long>();
-  private List<Long> data4 = new ArrayList<Long>();
-  private List<Long> data5 = new ArrayList<Long>();
+  private List<Long> data1 = new ArrayList<>();
+  private List<Long> data2 = new ArrayList<>();
+  private List<Long> data3 = new ArrayList<>();
+  private List<Long> data4 = new ArrayList<>();
+  private List<Long> data5 = new ArrayList<>();
 
   @Override
   public void setUp() {
@@ -52,7 +52,7 @@
   }
 
   public void testSortOrder() {
-    List<List<Long>> data = new ArrayList<List<Long>>();
+    List<List<Long>> data = new ArrayList<>();
 
     // add list in expected sort order
     data.add(data2);
@@ -61,15 +61,15 @@
     data.add(data3);
     data.add(data5);
 
-    TreeSet<Text> sortedEnc = new TreeSet<Text>();
+    TreeSet<Text> sortedEnc = new TreeSet<>();
 
-    ListLexicoder<Long> listLexicoder = new ListLexicoder<Long>(new LongLexicoder());
+    ListLexicoder<Long> listLexicoder = new ListLexicoder<>(new LongLexicoder());
 
     for (List<Long> list : data) {
       sortedEnc.add(new Text(listLexicoder.encode(list)));
     }
 
-    List<List<Long>> unenc = new ArrayList<List<Long>>();
+    List<List<Long>> unenc = new ArrayList<>();
 
     for (Text enc : sortedEnc) {
       unenc.add(listLexicoder.decode(TextUtil.getBytes(enc)));
@@ -80,10 +80,10 @@
   }
 
   public void testDecodes() {
-    assertDecodes(new ListLexicoder<Long>(new LongLexicoder()), data1);
-    assertDecodes(new ListLexicoder<Long>(new LongLexicoder()), data2);
-    assertDecodes(new ListLexicoder<Long>(new LongLexicoder()), data3);
-    assertDecodes(new ListLexicoder<Long>(new LongLexicoder()), data4);
-    assertDecodes(new ListLexicoder<Long>(new LongLexicoder()), data5);
+    assertDecodes(new ListLexicoder<>(new LongLexicoder()), data1);
+    assertDecodes(new ListLexicoder<>(new LongLexicoder()), data2);
+    assertDecodes(new ListLexicoder<>(new LongLexicoder()), data3);
+    assertDecodes(new ListLexicoder<>(new LongLexicoder()), data4);
+    assertDecodes(new ListLexicoder<>(new LongLexicoder()), data5);
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/PairLexicoderTest.java b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/PairLexicoderTest.java
index 0979801..d94793e 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/PairLexicoderTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/PairLexicoderTest.java
@@ -26,20 +26,19 @@
  */
 public class PairLexicoderTest extends AbstractLexicoderTest {
   public void testSortOrder() {
-    PairLexicoder<String,String> plexc = new PairLexicoder<String,String>(new StringLexicoder(), new StringLexicoder());
+    PairLexicoder<String,String> plexc = new PairLexicoder<>(new StringLexicoder(), new StringLexicoder());
 
-    assertSortOrder(plexc, Arrays.asList(new ComparablePair<String,String>("a", "b"), new ComparablePair<String,String>("a", "bc"),
-        new ComparablePair<String,String>("a", "c"), new ComparablePair<String,String>("ab", "c"), new ComparablePair<String,String>("ab", ""),
-        new ComparablePair<String,String>("ab", "d"), new ComparablePair<String,String>("b", "f"), new ComparablePair<String,String>("b", "a")));
+    assertSortOrder(plexc, Arrays.asList(new ComparablePair<>("a", "b"), new ComparablePair<>("a", "bc"), new ComparablePair<>("a", "c"), new ComparablePair<>(
+        "ab", "c"), new ComparablePair<>("ab", ""), new ComparablePair<>("ab", "d"), new ComparablePair<>("b", "f"), new ComparablePair<>("b", "a")));
 
-    PairLexicoder<Long,String> plexc2 = new PairLexicoder<Long,String>(new LongLexicoder(), new StringLexicoder());
+    PairLexicoder<Long,String> plexc2 = new PairLexicoder<>(new LongLexicoder(), new StringLexicoder());
 
-    assertSortOrder(plexc2, Arrays.asList(new ComparablePair<Long,String>(0x100l, "a"), new ComparablePair<Long,String>(0x100l, "ab"),
-        new ComparablePair<Long,String>(0xf0l, "a"), new ComparablePair<Long,String>(0xf0l, "ab")));
+    assertSortOrder(plexc2, Arrays.asList(new ComparablePair<>(0x100l, "a"), new ComparablePair<>(0x100l, "ab"), new ComparablePair<>(0xf0l, "a"),
+        new ComparablePair<>(0xf0l, "ab")));
   }
 
   public void testDecodes() {
-    PairLexicoder<String,String> plexc = new PairLexicoder<String,String>(new StringLexicoder(), new StringLexicoder());
-    assertDecodes(plexc, new ComparablePair<String,String>("a", "b"));
+    PairLexicoder<String,String> plexc = new PairLexicoder<>(new StringLexicoder(), new StringLexicoder());
+    assertDecodes(plexc, new ComparablePair<>("a", "b"));
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoderTest.java b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoderTest.java
index a49409f..df6f131 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoderTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/ReverseLexicoderTest.java
@@ -29,12 +29,12 @@
 public class ReverseLexicoderTest extends AbstractLexicoderTest {
   public void testSortOrder() {
     Comparator<Long> comp = Collections.reverseOrder();
-    assertSortOrder(new ReverseLexicoder<Long>(new LongLexicoder()), comp, Arrays.asList(Long.MIN_VALUE, 0xff1234567890abcdl, 0xffff1234567890abl,
+    assertSortOrder(new ReverseLexicoder<>(new LongLexicoder()), comp, Arrays.asList(Long.MIN_VALUE, 0xff1234567890abcdl, 0xffff1234567890abl,
         0xffffff567890abcdl, 0xffffffff7890abcdl, 0xffffffffff90abcdl, 0xffffffffffffabcdl, 0xffffffffffffffcdl, -1l, 0l, 0x01l, 0x1234l, 0x123456l,
         0x12345678l, 0x1234567890l, 0x1234567890abl, 0x1234567890abcdl, 0x1234567890abcdefl, Long.MAX_VALUE));
 
     Comparator<String> comp2 = Collections.reverseOrder();
-    assertSortOrder(new ReverseLexicoder<String>(new StringLexicoder()), comp2, Arrays.asList("a", "aa", "ab", "b", "aab"));
+    assertSortOrder(new ReverseLexicoder<>(new StringLexicoder()), comp2, Arrays.asList("a", "aa", "ab", "b", "aab"));
 
   }
 
@@ -44,7 +44,7 @@
   @Test
   public void testReverseSortDates() throws UnsupportedEncodingException {
 
-    ReverseLexicoder<Date> revLex = new ReverseLexicoder<Date>(new DateLexicoder());
+    ReverseLexicoder<Date> revLex = new ReverseLexicoder<>(new DateLexicoder());
 
     Calendar cal = Calendar.getInstance();
     cal.set(1920, 1, 2, 3, 4, 5); // create an instance prior to 1970 for ACCUMULO-3385
@@ -65,10 +65,10 @@
   }
 
   public void testDecodes() {
-    assertDecodes(new ReverseLexicoder<Long>(new LongLexicoder()), Long.MIN_VALUE);
-    assertDecodes(new ReverseLexicoder<Long>(new LongLexicoder()), -1l);
-    assertDecodes(new ReverseLexicoder<Long>(new LongLexicoder()), 0l);
-    assertDecodes(new ReverseLexicoder<Long>(new LongLexicoder()), 1l);
-    assertDecodes(new ReverseLexicoder<Long>(new LongLexicoder()), Long.MAX_VALUE);
+    assertDecodes(new ReverseLexicoder<>(new LongLexicoder()), Long.MIN_VALUE);
+    assertDecodes(new ReverseLexicoder<>(new LongLexicoder()), -1l);
+    assertDecodes(new ReverseLexicoder<>(new LongLexicoder()), 0l);
+    assertDecodes(new ReverseLexicoder<>(new LongLexicoder()), 1l);
+    assertDecodes(new ReverseLexicoder<>(new LongLexicoder()), Long.MAX_VALUE);
   }
 }
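The reverse-order assertions above exploit a simple property: complementing every byte of an order-preserving encoding reverses its lexicographic order. A minimal sketch of that idea (Accumulo's `ReverseLexicoder` also escapes and terminates the encoding to handle variable lengths; this version skips that detail and compares equal-length inputs):

```java
public class ReverseOrderDemo {
    // Complement each byte of an encoding.
    static byte[] complement(byte[] enc) {
        byte[] out = new byte[enc.length];
        for (int i = 0; i < enc.length; i++)
            out[i] = (byte) (0xFF - (enc[i] & 0xFF));
        return out;
    }

    // Unsigned lexicographic comparison of byte arrays.
    static int compareBytes(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0)
                return d;
        }
        return a.length - b.length;
    }

    // True if a < b in the forward order and a > b after complementing.
    static boolean reverses(byte[] a, byte[] b) {
        return compareBytes(a, b) < 0 && compareBytes(complement(a), complement(b)) > 0;
    }

    public static void main(String[] args) {
        System.out.println(reverses("aa".getBytes(), "ab".getBytes())); // prints: true
    }
}
```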
diff --git a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoderTest.java b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoderTest.java
index 60c89a3..8424a75 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoderTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/lexicoder/UUIDLexicoderTest.java
@@ -28,7 +28,7 @@
     assertSortOrder(new UUIDLexicoder(),
         Arrays.asList(UUID.randomUUID(), UUID.randomUUID(), UUID.randomUUID(), UUID.randomUUID(), UUID.randomUUID(), UUID.randomUUID()));
 
-    ArrayList<UUID> uuids = new ArrayList<UUID>();
+    ArrayList<UUID> uuids = new ArrayList<>();
 
     for (long ms = -260l; ms < 260l; ms++) {
       for (long ls = -2l; ls < 2; ls++) {
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormatTest.java
index c4a4a29..d85db92 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormatTest.java
@@ -17,199 +17,19 @@
 package org.apache.accumulo.core.client.mapred;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
 
-import java.io.File;
-import java.io.FileFilter;
 import java.io.IOException;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.mapred.JobClient;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.Mapper;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.mapred.lib.IdentityMapper;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-import org.junit.BeforeClass;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TemporaryFolder;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 public class AccumuloFileOutputFormatTest {
-  private static final Logger log = LoggerFactory.getLogger(AccumuloFileOutputFormatTest.class);
-  private static final int JOB_VISIBILITY_CACHE_SIZE = 3000;
-  private static final String PREFIX = AccumuloFileOutputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapred_instance";
-  private static final String BAD_TABLE = PREFIX + "_mapred_bad_table";
-  private static final String TEST_TABLE = PREFIX + "_mapred_test_table";
-  private static final String EMPTY_TABLE = PREFIX + "_mapred_empty_table";
-
-  private static AssertionError e1 = null;
-  private static AssertionError e2 = null;
-
-  @Rule
-  public TemporaryFolder folder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
-
-  @BeforeClass
-  public static void setup() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(EMPTY_TABLE);
-    c.tableOperations().create(TEST_TABLE);
-    c.tableOperations().create(BAD_TABLE);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE, new BatchWriterConfig());
-    Mutation m = new Mutation("Key");
-    m.put("", "", "");
-    bw.addMutation(m);
-    bw.close();
-    bw = c.createBatchWriter(BAD_TABLE, new BatchWriterConfig());
-    m = new Mutation("r1");
-    m.put("cf1", "cq1", "A&B");
-    m.put("cf1", "cq1", "A&B");
-    m.put("cf1", "cq2", "A&");
-    bw.addMutation(m);
-    bw.close();
-  }
-
-  @Test
-  public void testEmptyWrite() throws Exception {
-    handleWriteTests(false);
-  }
-
-  @Test
-  public void testRealWrite() throws Exception {
-    handleWriteTests(true);
-  }
-
-  private static class MRTester extends Configured implements Tool {
-    private static class BadKeyMapper implements Mapper<Key,Value,Key,Value> {
-
-      int index = 0;
-
-      @Override
-      public void map(Key key, Value value, OutputCollector<Key,Value> output, Reporter reporter) throws IOException {
-        try {
-          try {
-            output.collect(key, value);
-            if (index == 2)
-              fail();
-          } catch (Exception e) {
-            log.error(e.toString(), e);
-            assertEquals(2, index);
-          }
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        index++;
-      }
-
-      @Override
-      public void configure(JobConf job) {}
-
-      @Override
-      public void close() throws IOException {
-        try {
-          assertEquals(2, index);
-        } catch (AssertionError e) {
-          e2 = e;
-        }
-      }
-
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table> <outputfile>");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table = args[2];
-
-      JobConf job = new JobConf(getConf());
-      job.setJarByClass(this.getClass());
-      ConfiguratorBase.setVisibilityCacheSize(job, JOB_VISIBILITY_CACHE_SIZE);
-
-      job.setInputFormat(AccumuloInputFormat.class);
-
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloInputFormat.setInputTableName(job, table);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
-      AccumuloFileOutputFormat.setOutputPath(job, new Path(args[3]));
-
-      job.setMapperClass(BAD_TABLE.equals(table) ? BadKeyMapper.class : IdentityMapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormat(AccumuloFileOutputFormat.class);
-
-      job.setNumReduceTasks(0);
-
-      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
-    }
-
-    public static void main(String[] args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
-    }
-  }
-
-  public void handleWriteTests(boolean content) throws Exception {
-    File f = folder.newFile("handleWriteTests");
-    if (f.delete()) {
-      log.debug("Deleted {}", f);
-    }
-    MRTester.main(new String[] {"root", "", content ? TEST_TABLE : EMPTY_TABLE, f.getAbsolutePath()});
-
-    assertTrue(f.exists());
-    File[] files = f.listFiles(new FileFilter() {
-      @Override
-      public boolean accept(File file) {
-        return file.getName().startsWith("part-m-");
-      }
-    });
-    assertNotNull(files);
-    if (content) {
-      assertEquals(1, files.length);
-      assertTrue(files[0].exists());
-    } else {
-      assertEquals(0, files.length);
-    }
-  }
-
-  @Test
-  public void writeBadVisibility() throws Exception {
-    File f = folder.newFile("writeBadVisibility");
-    if (f.delete()) {
-      log.debug("Deleted {}", f);
-    }
-    MRTester.main(new String[] {"root", "", BAD_TABLE, f.getAbsolutePath()});
-    assertNull(e1);
-    assertNull(e2);
-  }
 
   @Test
   public void validateConfiguration() throws IOException, InterruptedException {
@@ -219,6 +39,9 @@
     long c = 50l;
     long d = 10l;
     String e = "snappy";
+    SamplerConfiguration samplerConfig = new SamplerConfiguration(RowSampler.class.getName());
+    samplerConfig.addOption("hasher", "murmur3_32");
+    samplerConfig.addOption("modulus", "109");
 
     JobConf job = new JobConf();
     AccumuloFileOutputFormat.setReplication(job, a);
@@ -226,6 +49,7 @@
     AccumuloFileOutputFormat.setDataBlockSize(job, c);
     AccumuloFileOutputFormat.setIndexBlockSize(job, d);
     AccumuloFileOutputFormat.setCompressionType(job, e);
+    AccumuloFileOutputFormat.setSampler(job, samplerConfig);
 
     AccumuloConfiguration acuconf = FileOutputConfigurator.getAccumuloConfiguration(AccumuloFileOutputFormat.class, job);
 
@@ -234,12 +58,16 @@
     assertEquals(50l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE));
     assertEquals(10l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX));
     assertEquals("snappy", acuconf.get(Property.TABLE_FILE_COMPRESSION_TYPE));
+    assertEquals(new SamplerConfigurationImpl(samplerConfig), SamplerConfigurationImpl.newSamplerConfig(acuconf));
 
     a = 17;
     b = 1300l;
     c = 150l;
     d = 110l;
     e = "lzo";
+    samplerConfig = new SamplerConfiguration(RowSampler.class.getName());
+    samplerConfig.addOption("hasher", "md5");
+    samplerConfig.addOption("modulus", "100003");
 
     job = new JobConf();
     AccumuloFileOutputFormat.setReplication(job, a);
@@ -247,6 +75,7 @@
     AccumuloFileOutputFormat.setDataBlockSize(job, c);
     AccumuloFileOutputFormat.setIndexBlockSize(job, d);
     AccumuloFileOutputFormat.setCompressionType(job, e);
+    AccumuloFileOutputFormat.setSampler(job, samplerConfig);
 
     acuconf = FileOutputConfigurator.getAccumuloConfiguration(AccumuloFileOutputFormat.class, job);
 
@@ -255,6 +84,6 @@
     assertEquals(150l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE));
     assertEquals(110l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX));
     assertEquals("lzo", acuconf.get(Property.TABLE_FILE_COMPRESSION_TYPE));
-
+    assertEquals(new SamplerConfigurationImpl(samplerConfig), SamplerConfigurationImpl.newSamplerConfig(acuconf));
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
index 4e565ee..cb28958 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloInputFormatTest.java
@@ -17,62 +17,29 @@
 package org.apache.accumulo.core.client.mapred;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 import java.io.ByteArrayOutputStream;
 import java.io.DataOutputStream;
-import java.io.File;
 import java.io.IOException;
-import java.util.Collection;
-import java.util.Collections;
 import java.util.List;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.Base64;
-import org.apache.accumulo.core.util.Pair;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapred.InputSplit;
-import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.Mapper;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.mapred.lib.NullOutputFormat;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-import org.apache.log4j.Level;
-import org.junit.Assert;
 import org.junit.Before;
-import org.junit.BeforeClass;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
 
 public class AccumuloInputFormatTest {
 
-  private static final String PREFIX = AccumuloInputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapred_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapred_table_1";
-
   private JobConf job;
 
-  @BeforeClass
-  public static void setupClass() {
-    System.setProperty("hadoop.tmp.dir", System.getProperty("user.dir") + "/target/hadoop-tmp");
-  }
+  @Rule
+  public TestName test = new TestName();
 
   @Before
   public void createJob() {
@@ -203,141 +170,4 @@
     assertTrue(regex.equals(AccumuloInputFormat.getIterators(job).get(0).getName()));
   }
 
-  private static AssertionError e1 = null;
-  private static AssertionError e2 = null;
-
-  private static class MRTester extends Configured implements Tool {
-    private static class TestMapper implements Mapper<Key,Value,Key,Value> {
-      Key key = null;
-      int count = 0;
-
-      @Override
-      public void map(Key k, Value v, OutputCollector<Key,Value> output, Reporter reporter) throws IOException {
-        try {
-          if (key != null)
-            assertEquals(key.getRow().toString(), new String(v.get()));
-          assertEquals(k.getRow(), new Text(String.format("%09x", count + 1)));
-          assertEquals(new String(v.get()), String.format("%09x", count));
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        key = new Key(k);
-        count++;
-      }
-
-      @Override
-      public void configure(JobConf job) {}
-
-      @Override
-      public void close() throws IOException {
-        try {
-          assertEquals(100, count);
-        } catch (AssertionError e) {
-          e2 = e;
-        }
-      }
-
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 3) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table>");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table = args[2];
-
-      JobConf job = new JobConf(getConf());
-      job.setJarByClass(this.getClass());
-
-      job.setInputFormat(AccumuloInputFormat.class);
-
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloInputFormat.setInputTableName(job, table);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
-
-      job.setMapperClass(TestMapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormat(NullOutputFormat.class);
-
-      job.setNumReduceTasks(0);
-
-      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
-    }
-
-    public static void main(String... args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
-    }
-  }
-
-  @Test
-  public void testMap() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
-      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    MRTester.main("root", "", TEST_TABLE_1);
-    assertNull(e1);
-    assertNull(e2);
-  }
-
-  @Test
-  public void testCorrectRangeInputSplits() throws Exception {
-    JobConf job = new JobConf();
-
-    String username = "user", table = "table", instance = "mapred_testCorrectRangeInputSplits";
-    PasswordToken password = new PasswordToken("password");
-    Authorizations auths = new Authorizations("foo");
-    Collection<Pair<Text,Text>> fetchColumns = Collections.singleton(new Pair<Text,Text>(new Text("foo"), new Text("bar")));
-    boolean isolated = true, localIters = true;
-    Level level = Level.WARN;
-
-    Instance inst = new MockInstance(instance);
-    Connector connector = inst.getConnector(username, password);
-    connector.tableOperations().create(table);
-
-    AccumuloInputFormat.setConnectorInfo(job, username, password);
-    AccumuloInputFormat.setInputTableName(job, table);
-    AccumuloInputFormat.setScanAuthorizations(job, auths);
-    AccumuloInputFormat.setMockInstance(job, instance);
-    AccumuloInputFormat.setScanIsolation(job, isolated);
-    AccumuloInputFormat.setLocalIterators(job, localIters);
-    AccumuloInputFormat.fetchColumns(job, fetchColumns);
-    AccumuloInputFormat.setLogLevel(job, level);
-
-    AccumuloInputFormat aif = new AccumuloInputFormat();
-
-    InputSplit[] splits = aif.getSplits(job, 1);
-
-    Assert.assertEquals(1, splits.length);
-
-    InputSplit split = splits[0];
-
-    Assert.assertEquals(RangeInputSplit.class, split.getClass());
-
-    RangeInputSplit risplit = (RangeInputSplit) split;
-
-    Assert.assertEquals(username, risplit.getPrincipal());
-    Assert.assertEquals(table, risplit.getTableName());
-    Assert.assertEquals(password, risplit.getToken());
-    Assert.assertEquals(auths, risplit.getAuths());
-    Assert.assertEquals(instance, risplit.getInstanceName());
-    Assert.assertEquals(isolated, risplit.isIsolatedScan());
-    Assert.assertEquals(localIters, risplit.usesLocalIterators());
-    Assert.assertEquals(fetchColumns, risplit.getFetchedColumns());
-    Assert.assertEquals(level, risplit.getLogLevel());
-  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormatTest.java
index a5545ee..4e75ab6 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloMultiTableInputFormatTest.java
@@ -17,175 +17,50 @@
 package org.apache.accumulo.core.client.mapred;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
 
-import java.io.File;
 import java.io.IOException;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.Mapper;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.mapred.lib.NullOutputFormat;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
 
 public class AccumuloMultiTableInputFormatTest {
 
-  private static final String PREFIX = AccumuloMultiTableInputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapred_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapred_table_1";
-  private static final String TEST_TABLE_2 = PREFIX + "_mapred_table_2";
-
-  private static AssertionError e1 = null;
-  private static AssertionError e2 = null;
-
-  private static class MRTester extends Configured implements Tool {
-    private static class TestMapper implements Mapper<Key,Value,Key,Value> {
-      Key key = null;
-      int count = 0;
-
-      @Override
-      public void map(Key k, Value v, OutputCollector<Key,Value> output, Reporter reporter) throws IOException {
-        try {
-          String tableName = ((RangeInputSplit) reporter.getInputSplit()).getTableName();
-          if (key != null)
-            assertEquals(key.getRow().toString(), new String(v.get()));
-          assertEquals(new Text(String.format("%s_%09x", tableName, count + 1)), k.getRow());
-          assertEquals(String.format("%s_%09x", tableName, count), new String(v.get()));
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        key = new Key(k);
-        count++;
-      }
-
-      @Override
-      public void configure(JobConf job) {}
-
-      @Override
-      public void close() throws IOException {
-        try {
-          assertEquals(100, count);
-        } catch (AssertionError e) {
-          e2 = e;
-        }
-      }
-
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table1> <table2>");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table1 = args[2];
-      String table2 = args[3];
-
-      JobConf job = new JobConf(getConf());
-      job.setJarByClass(this.getClass());
-
-      job.setInputFormat(AccumuloInputFormat.class);
-
-      AccumuloMultiTableInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloMultiTableInputFormat.setMockInstance(job, INSTANCE_NAME);
-
-      InputTableConfig tableConfig1 = new InputTableConfig();
-      InputTableConfig tableConfig2 = new InputTableConfig();
-
-      Map<String,InputTableConfig> configMap = new HashMap<String,InputTableConfig>();
-      configMap.put(table1, tableConfig1);
-      configMap.put(table2, tableConfig2);
-
-      AccumuloMultiTableInputFormat.setInputTableConfigs(job, configMap);
-
-      job.setMapperClass(TestMapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormat(NullOutputFormat.class);
-
-      job.setNumReduceTasks(0);
-
-      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
-    }
-
-    public static void main(String[] args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
-    }
-  }
-
-  @Test
-  public void testMap() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
-    BatchWriter bw2 = c.createBatchWriter(TEST_TABLE_2, new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation t1m = new Mutation(new Text(String.format("%s_%09x", TEST_TABLE_1, i + 1)));
-      t1m.put(new Text(), new Text(), new Value(String.format("%s_%09x", TEST_TABLE_1, i).getBytes()));
-      bw.addMutation(t1m);
-      Mutation t2m = new Mutation(new Text(String.format("%s_%09x", TEST_TABLE_2, i + 1)));
-      t2m.put(new Text(), new Text(), new Value(String.format("%s_%09x", TEST_TABLE_2, i).getBytes()));
-      bw2.addMutation(t2m);
-    }
-    bw.close();
-    bw2.close();
-
-    MRTester.main(new String[] {"root", "", TEST_TABLE_1, TEST_TABLE_2});
-    assertNull(e1);
-    assertNull(e2);
-  }
+  @Rule
+  public TestName testName = new TestName();
 
   /**
    * Verify {@link org.apache.accumulo.core.client.mapreduce.InputTableConfig} objects get correctly serialized in the JobContext.
    */
   @Test
   public void testTableQueryConfigSerialization() throws IOException {
-
+    String table1Name = testName.getMethodName() + "1";
+    String table2Name = testName.getMethodName() + "2";
     JobConf job = new JobConf();
 
     InputTableConfig table1 = new InputTableConfig().setRanges(Collections.singletonList(new Range("a", "b")))
-        .fetchColumns(Collections.singleton(new Pair<Text,Text>(new Text("CF1"), new Text("CQ1"))))
+        .fetchColumns(Collections.singleton(new Pair<>(new Text("CF1"), new Text("CQ1"))))
         .setIterators(Collections.singletonList(new IteratorSetting(50, "iter1", "iterclass1")));
 
     InputTableConfig table2 = new InputTableConfig().setRanges(Collections.singletonList(new Range("a", "b")))
-        .fetchColumns(Collections.singleton(new Pair<Text,Text>(new Text("CF1"), new Text("CQ1"))))
+        .fetchColumns(Collections.singleton(new Pair<>(new Text("CF1"), new Text("CQ1"))))
         .setIterators(Collections.singletonList(new IteratorSetting(50, "iter1", "iterclass1")));
 
-    Map<String,InputTableConfig> configMap = new HashMap<String,InputTableConfig>();
-    configMap.put(TEST_TABLE_1, table1);
-    configMap.put(TEST_TABLE_2, table2);
+    Map<String,InputTableConfig> configMap = new HashMap<>();
+    configMap.put(table1Name, table1);
+    configMap.put(table2Name, table2);
     AccumuloMultiTableInputFormat.setInputTableConfigs(job, configMap);
 
-    assertEquals(table1, AccumuloMultiTableInputFormat.getInputTableConfig(job, TEST_TABLE_1));
-    assertEquals(table2, AccumuloMultiTableInputFormat.getInputTableConfig(job, TEST_TABLE_2));
+    assertEquals(table1, AccumuloMultiTableInputFormat.getInputTableConfig(job, table1Name));
+    assertEquals(table2, AccumuloMultiTableInputFormat.getInputTableConfig(job, table2Name));
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormatTest.java
index fa12227..ab73cfe 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormatTest.java
@@ -1,12 +1,12 @@
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
+ * contributor license agreements. See the NOTICE file distributed with
  * this work for additional information regarding copyright ownership.
  * The ASF licenses this file to You under the Apache License, Version 2.0
  * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
+ * the License. You may obtain a copy of the License at
  *
- *     http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
@@ -17,127 +17,17 @@
 package org.apache.accumulo.core.client.mapred;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
 
-import java.io.File;
 import java.io.IOException;
-import java.util.Iterator;
-import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.Mapper;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
 import org.junit.Test;
 
-/**
- *
- */
 public class AccumuloOutputFormatTest {
-  private static AssertionError e1 = null;
-  private static final String PREFIX = AccumuloOutputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapred_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapred_table_1";
-  private static final String TEST_TABLE_2 = PREFIX + "_mapred_table_2";
-
-  private static class MRTester extends Configured implements Tool {
-    private static class TestMapper implements Mapper<Key,Value,Text,Mutation> {
-      Key key = null;
-      int count = 0;
-      OutputCollector<Text,Mutation> finalOutput;
-
-      @Override
-      public void map(Key k, Value v, OutputCollector<Text,Mutation> output, Reporter reporter) throws IOException {
-        finalOutput = output;
-        try {
-          if (key != null)
-            assertEquals(key.getRow().toString(), new String(v.get()));
-          assertEquals(k.getRow(), new Text(String.format("%09x", count + 1)));
-          assertEquals(new String(v.get()), String.format("%09x", count));
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        key = new Key(k);
-        count++;
-      }
-
-      @Override
-      public void configure(JobConf job) {}
-
-      @Override
-      public void close() throws IOException {
-        Mutation m = new Mutation("total");
-        m.put("", "", Integer.toString(count));
-        finalOutput.collect(new Text(), m);
-      }
-
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <inputtable> <outputtable>");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table1 = args[2];
-      String table2 = args[3];
-
-      JobConf job = new JobConf(getConf());
-      job.setJarByClass(this.getClass());
-
-      job.setInputFormat(AccumuloInputFormat.class);
-
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloInputFormat.setInputTableName(job, table1);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
-
-      job.setMapperClass(TestMapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormat(AccumuloOutputFormat.class);
-      job.setOutputKeyClass(Text.class);
-      job.setOutputValueClass(Mutation.class);
-
-      AccumuloOutputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloOutputFormat.setCreateTables(job, false);
-      AccumuloOutputFormat.setDefaultTableName(job, table2);
-      AccumuloOutputFormat.setMockInstance(job, INSTANCE_NAME);
-
-      job.setNumReduceTasks(0);
-
-      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
-    }
-
-    public static void main(String[] args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
-    }
-  }
 
   @Test
   public void testBWSettings() throws IOException {
@@ -179,28 +69,4 @@
     myAOF.checkOutputSpecs(null, job);
   }
 
-  @Test
-  public void testMR() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
-      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    MRTester.main(new String[] {"root", "", TEST_TABLE_1, TEST_TABLE_2});
-    assertNull(e1);
-
-    Scanner scanner = c.createScanner(TEST_TABLE_2, new Authorizations());
-    Iterator<Entry<Key,Value>> iter = scanner.iterator();
-    assertTrue(iter.hasNext());
-    Entry<Key,Value> entry = iter.next();
-    assertEquals(Integer.parseInt(new String(entry.getValue().get())), 100);
-    assertFalse(iter.hasNext());
-  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java
index f567454..c399fb0 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapred/RangeInputSplitTest.java
@@ -33,6 +33,7 @@
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
@@ -63,13 +64,13 @@
   public void testAllFieldsWritable() throws IOException {
     RangeInputSplit split = new RangeInputSplit("table", "1", new Range(new Key("a"), new Key("b")), new String[] {"localhost"});
 
-    Set<Pair<Text,Text>> fetchedColumns = new HashSet<Pair<Text,Text>>();
+    Set<Pair<Text,Text>> fetchedColumns = new HashSet<>();
 
-    fetchedColumns.add(new Pair<Text,Text>(new Text("colf1"), new Text("colq1")));
-    fetchedColumns.add(new Pair<Text,Text>(new Text("colf2"), new Text("colq2")));
+    fetchedColumns.add(new Pair<>(new Text("colf1"), new Text("colq1")));
+    fetchedColumns.add(new Pair<>(new Text("colf2"), new Text("colq2")));
 
     // Fake some iterators
-    ArrayList<IteratorSetting> iterators = new ArrayList<IteratorSetting>();
+    ArrayList<IteratorSetting> iterators = new ArrayList<>();
     IteratorSetting setting = new IteratorSetting(50, SummingCombiner.class);
     setting.addOption("foo", "bar");
     iterators.add(setting);
@@ -86,7 +87,7 @@
     split.setToken(new PasswordToken("password"));
     split.setPrincipal("root");
     split.setInstanceName("instance");
-    split.setMockInstance(true);
+    DeprecationUtil.setMockInstance(split, true);
     split.setZooKeepers("localhost");
     split.setIterators(iterators);
     split.setLogLevel(Level.WARN);
@@ -112,7 +113,7 @@
     Assert.assertEquals(split.getToken(), newSplit.getToken());
     Assert.assertEquals(split.getPrincipal(), newSplit.getPrincipal());
     Assert.assertEquals(split.getInstanceName(), newSplit.getInstanceName());
-    Assert.assertEquals(split.isMockInstance(), newSplit.isMockInstance());
+    Assert.assertEquals(DeprecationUtil.isMockInstanceSet(split), DeprecationUtil.isMockInstanceSet(newSplit));
     Assert.assertEquals(split.getZooKeepers(), newSplit.getZooKeepers());
     Assert.assertEquals(split.getIterators(), newSplit.getIterators());
     Assert.assertEquals(split.getLogLevel(), newSplit.getLogLevel());
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormatTest.java
index b8b3c47..39d226b 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormatTest.java
@@ -17,187 +17,19 @@
 package org.apache.accumulo.core.client.mapreduce;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
 
-import java.io.File;
-import java.io.FileFilter;
 import java.io.IOException;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.Path;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-import org.junit.BeforeClass;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TemporaryFolder;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 public class AccumuloFileOutputFormatTest {
-  private static final Logger log = LoggerFactory.getLogger(AccumuloFileOutputFormatTest.class);
-  private static final String PREFIX = AccumuloFileOutputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-  private static final String BAD_TABLE = PREFIX + "_mapreduce_bad_table";
-  private static final String TEST_TABLE = PREFIX + "_mapreduce_test_table";
-  private static final String EMPTY_TABLE = PREFIX + "_mapreduce_empty_table";
-
-  private static AssertionError e1 = null;
-  private static AssertionError e2 = null;
-
-  @Rule
-  public TemporaryFolder folder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
-
-  @BeforeClass
-  public static void setup() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(EMPTY_TABLE);
-    c.tableOperations().create(TEST_TABLE);
-    c.tableOperations().create(BAD_TABLE);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE, new BatchWriterConfig());
-    Mutation m = new Mutation("Key");
-    m.put("", "", "");
-    bw.addMutation(m);
-    bw.close();
-    bw = c.createBatchWriter(BAD_TABLE, new BatchWriterConfig());
-    m = new Mutation("r1");
-    m.put("cf1", "cq1", "A&B");
-    m.put("cf1", "cq1", "A&B");
-    m.put("cf1", "cq2", "A&");
-    bw.addMutation(m);
-    bw.close();
-  }
-
-  @Test
-  public void testEmptyWrite() throws Exception {
-    handleWriteTests(false);
-  }
-
-  @Test
-  public void testRealWrite() throws Exception {
-    handleWriteTests(true);
-  }
-
-  private static class MRTester extends Configured implements Tool {
-    private static class BadKeyMapper extends Mapper<Key,Value,Key,Value> {
-      int index = 0;
-
-      @Override
-      protected void map(Key key, Value value, Context context) throws IOException, InterruptedException {
-        try {
-          try {
-            context.write(key, value);
-            if (index == 2)
-              assertTrue(false);
-          } catch (Exception e) {
-            assertEquals(2, index);
-          }
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        index++;
-      }
-
-      @Override
-      protected void cleanup(Context context) throws IOException, InterruptedException {
-        try {
-          assertEquals(2, index);
-        } catch (AssertionError e) {
-          e2 = e;
-        }
-      }
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table> <outputfile>");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table = args[2];
-
-      Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
-      job.setJarByClass(this.getClass());
-
-      job.setInputFormatClass(AccumuloInputFormat.class);
-
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloInputFormat.setInputTableName(job, table);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
-      AccumuloFileOutputFormat.setOutputPath(job, new Path(args[3]));
-
-      job.setMapperClass(BAD_TABLE.equals(table) ? BadKeyMapper.class : Mapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormatClass(AccumuloFileOutputFormat.class);
-
-      job.setNumReduceTasks(0);
-
-      job.waitForCompletion(true);
-
-      return job.isSuccessful() ? 0 : 1;
-    }
-
-    public static void main(String[] args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
-    }
-  }
-
-  public void handleWriteTests(boolean content) throws Exception {
-    File f = folder.newFile("handleWriteTests");
-    if (f.delete()) {
-      log.debug("Deleted {}", f);
-    }
-    MRTester.main(new String[] {"root", "", content ? TEST_TABLE : EMPTY_TABLE, f.getAbsolutePath()});
-
-    assertTrue(f.exists());
-    File[] files = f.listFiles(new FileFilter() {
-      @Override
-      public boolean accept(File file) {
-        return file.getName().startsWith("part-m-");
-      }
-    });
-    assertNotNull(files);
-    if (content) {
-      assertEquals(1, files.length);
-      assertTrue(files[0].exists());
-    } else {
-      assertEquals(0, files.length);
-    }
-  }
-
-  @Test
-  public void writeBadVisibility() throws Exception {
-    File f = folder.newFile("writeBadVisibility");
-    if (f.delete()) {
-      log.debug("Deleted {}", f);
-    }
-    MRTester.main(new String[] {"root", "", BAD_TABLE, f.getAbsolutePath()});
-    assertNull(e1);
-    assertNull(e2);
-  }
 
   @Test
   public void validateConfiguration() throws IOException, InterruptedException {
@@ -207,6 +39,9 @@
     long c = 50l;
     long d = 10l;
     String e = "snappy";
+    SamplerConfiguration samplerConfig = new SamplerConfiguration(RowSampler.class.getName());
+    samplerConfig.addOption("hasher", "murmur3_32");
+    samplerConfig.addOption("modulus", "109");
 
     Job job1 = Job.getInstance();
     AccumuloFileOutputFormat.setReplication(job1, a);
@@ -214,6 +49,7 @@
     AccumuloFileOutputFormat.setDataBlockSize(job1, c);
     AccumuloFileOutputFormat.setIndexBlockSize(job1, d);
     AccumuloFileOutputFormat.setCompressionType(job1, e);
+    AccumuloFileOutputFormat.setSampler(job1, samplerConfig);
 
     AccumuloConfiguration acuconf = FileOutputConfigurator.getAccumuloConfiguration(AccumuloFileOutputFormat.class, job1.getConfiguration());
 
@@ -222,12 +58,16 @@
     assertEquals(50l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE));
     assertEquals(10l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX));
     assertEquals("snappy", acuconf.get(Property.TABLE_FILE_COMPRESSION_TYPE));
+    assertEquals(new SamplerConfigurationImpl(samplerConfig), SamplerConfigurationImpl.newSamplerConfig(acuconf));
 
     a = 17;
     b = 1300l;
     c = 150l;
     d = 110l;
     e = "lzo";
+    samplerConfig = new SamplerConfiguration(RowSampler.class.getName());
+    samplerConfig.addOption("hasher", "md5");
+    samplerConfig.addOption("modulus", "100003");
 
     Job job2 = Job.getInstance();
     AccumuloFileOutputFormat.setReplication(job2, a);
@@ -235,6 +75,7 @@
     AccumuloFileOutputFormat.setDataBlockSize(job2, c);
     AccumuloFileOutputFormat.setIndexBlockSize(job2, d);
     AccumuloFileOutputFormat.setCompressionType(job2, e);
+    AccumuloFileOutputFormat.setSampler(job2, samplerConfig);
 
     acuconf = FileOutputConfigurator.getAccumuloConfiguration(AccumuloFileOutputFormat.class, job2.getConfiguration());
 
@@ -243,6 +84,7 @@
     assertEquals(150l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE));
     assertEquals(110l, acuconf.getMemoryInBytes(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX));
     assertEquals("lzo", acuconf.get(Property.TABLE_FILE_COMPRESSION_TYPE));
+    assertEquals(new SamplerConfigurationImpl(samplerConfig), SamplerConfigurationImpl.newSamplerConfig(acuconf));
 
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
index 83662e8..3eef024 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloInputFormatTest.java
@@ -17,52 +17,27 @@
 package org.apache.accumulo.core.client.mapreduce;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 import java.io.ByteArrayOutputStream;
 import java.io.DataOutputStream;
-import java.io.File;
 import java.io.IOException;
-import java.util.Collection;
-import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapreduce.InputFormat;
-import org.apache.hadoop.mapreduce.InputSplit;
 import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-import org.apache.log4j.Level;
-import org.junit.Assert;
 import org.junit.Test;
 
 public class AccumuloInputFormatTest {
 
-  private static final String PREFIX = AccumuloInputFormatTest.class.getSimpleName();
-
   /**
    * Check that the iterator configuration is getting stored in the Job conf correctly.
    */
@@ -197,232 +172,15 @@
     assertTrue(regex.equals(AccumuloInputFormat.getIterators(job).get(0).getName()));
   }
 
-  private static AssertionError e1 = null;
-  private static AssertionError e2 = null;
-
-  private static class MRTester extends Configured implements Tool {
-    private static class TestMapper extends Mapper<Key,Value,Key,Value> {
-      Key key = null;
-      int count = 0;
-
-      @Override
-      protected void map(Key k, Value v, Context context) throws IOException, InterruptedException {
-        try {
-          if (key != null)
-            assertEquals(key.getRow().toString(), new String(v.get()));
-          assertEquals(k.getRow(), new Text(String.format("%09x", count + 1)));
-          assertEquals(new String(v.get()), String.format("%09x", count));
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        key = new Key(k);
-        count++;
-      }
-
-      @Override
-      protected void cleanup(Context context) throws IOException, InterruptedException {
-        try {
-          assertEquals(100, count);
-        } catch (AssertionError e) {
-          e2 = e;
-        }
-      }
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 5 && args.length != 6) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table> <instanceName> <inputFormatClass> [<batchScan>]");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table = args[2];
-
-      String instanceName = args[3];
-      String inputFormatClassName = args[4];
-      Boolean batchScan = false;
-      if (args.length == 6)
-        batchScan = Boolean.parseBoolean(args[5]);
-
-      @SuppressWarnings("unchecked")
-      Class<? extends InputFormat<?,?>> inputFormatClass = (Class<? extends InputFormat<?,?>>) Class.forName(inputFormatClassName);
-
-      Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
-      job.setJarByClass(this.getClass());
-
-      job.setInputFormatClass(inputFormatClass);
-
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloInputFormat.setInputTableName(job, table);
-      AccumuloInputFormat.setMockInstance(job, instanceName);
-      AccumuloInputFormat.setBatchScan(job, batchScan);
-
-      job.setMapperClass(TestMapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormatClass(NullOutputFormat.class);
-
-      job.setNumReduceTasks(0);
-
-      job.waitForCompletion(true);
-
-      return job.isSuccessful() ? 0 : 1;
-    }
-
-    public static int main(String[] args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      return ToolRunner.run(conf, new MRTester(), args);
-    }
-  }
-
-  @Test
-  public void testMap() throws Exception {
-    final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-    final String TEST_TABLE_1 = PREFIX + "_mapreduce_table_1";
-
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
-      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    Assert.assertEquals(0, MRTester.main(new String[] {"root", "", TEST_TABLE_1, INSTANCE_NAME, AccumuloInputFormat.class.getName()}));
-    assertNull(e1);
-    assertNull(e2);
-  }
-
-  @Test
-  public void testMapWithBatchScanner() throws Exception {
-    final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-    final String TEST_TABLE_2 = PREFIX + "_mapreduce_table_2";
-
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_2, new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
-      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    Assert.assertEquals(0, MRTester.main(new String[] {"root", "", TEST_TABLE_2, INSTANCE_NAME, AccumuloInputFormat.class.getName(), "True"}));
-    assertNull(e1);
-    assertNull(e2);
-  }
-
-  @Test
-  public void testCorrectRangeInputSplits() throws Exception {
-    Job job = Job.getInstance(new Configuration(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
-
-    String username = "user", table = "table", instance = "mapreduce_testCorrectRangeInputSplits";
-    PasswordToken password = new PasswordToken("password");
-    Authorizations auths = new Authorizations("foo");
-    Collection<Pair<Text,Text>> fetchColumns = Collections.singleton(new Pair<Text,Text>(new Text("foo"), new Text("bar")));
-    boolean isolated = true, localIters = true;
-    Level level = Level.WARN;
-
-    Instance inst = new MockInstance(instance);
-    Connector connector = inst.getConnector(username, password);
-    connector.tableOperations().create(table);
-
-    AccumuloInputFormat.setConnectorInfo(job, username, password);
-    AccumuloInputFormat.setInputTableName(job, table);
-    AccumuloInputFormat.setScanAuthorizations(job, auths);
-    AccumuloInputFormat.setMockInstance(job, instance);
-    AccumuloInputFormat.setScanIsolation(job, isolated);
-    AccumuloInputFormat.setLocalIterators(job, localIters);
-    AccumuloInputFormat.fetchColumns(job, fetchColumns);
-    AccumuloInputFormat.setLogLevel(job, level);
-
-    AccumuloInputFormat aif = new AccumuloInputFormat();
-
-    List<InputSplit> splits = aif.getSplits(job);
-
-    Assert.assertEquals(1, splits.size());
-
-    InputSplit split = splits.get(0);
-
-    Assert.assertEquals(RangeInputSplit.class, split.getClass());
-
-    RangeInputSplit risplit = (RangeInputSplit) split;
-
-    Assert.assertEquals(username, risplit.getPrincipal());
-    Assert.assertEquals(table, risplit.getTableName());
-    Assert.assertEquals(password, risplit.getToken());
-    Assert.assertEquals(auths, risplit.getAuths());
-    Assert.assertEquals(instance, risplit.getInstanceName());
-    Assert.assertEquals(isolated, risplit.isIsolatedScan());
-    Assert.assertEquals(localIters, risplit.usesLocalIterators());
-    Assert.assertEquals(fetchColumns, risplit.getFetchedColumns());
-    Assert.assertEquals(level, risplit.getLogLevel());
-  }
-
-  @Test
-  public void testPartialInputSplitDelegationToConfiguration() throws Exception {
-    String user = "testPartialInputSplitUser";
-    PasswordToken password = new PasswordToken("");
-
-    MockInstance mockInstance = new MockInstance("testPartialInputSplitDelegationToConfiguration");
-    Connector c = mockInstance.getConnector(user, password);
-    c.tableOperations().create("testtable");
-    BatchWriter bw = c.createBatchWriter("testtable", new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
-      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    Assert.assertEquals(0,
-        MRTester.main(new String[] {user, "", "testtable", "testPartialInputSplitDelegationToConfiguration", EmptySplitsAccumuloInputFormat.class.getName()}));
-    assertNull(e1);
-    assertNull(e2);
-  }
-
-  @Test
-  public void testPartialFailedInputSplitDelegationToConfiguration() throws Exception {
-    String user = "testPartialFailedInputSplit";
-    PasswordToken password = new PasswordToken("");
-
-    MockInstance mockInstance = new MockInstance("testPartialFailedInputSplitDelegationToConfiguration");
-    Connector c = mockInstance.getConnector(user, password);
-    c.tableOperations().create("testtable");
-    BatchWriter bw = c.createBatchWriter("testtable", new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
-      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    // We should fail before we even get into the Mapper because we can't make the RecordReader
-    Assert.assertEquals(
-        1,
-        MRTester.main(new String[] {user, "", "testtable", "testPartialFailedInputSplitDelegationToConfiguration",
-            BadPasswordSplitsAccumuloInputFormat.class.getName()}));
-    assertNull(e1);
-    assertNull(e2);
-  }
-
   @Test
   public void testEmptyColumnFamily() throws IOException {
     Job job = Job.getInstance();
-    Set<Pair<Text,Text>> cols = new HashSet<Pair<Text,Text>>();
+    Set<Pair<Text,Text>> cols = new HashSet<>();
     cols.add(new Pair<Text,Text>(new Text(""), null));
-    cols.add(new Pair<Text,Text>(new Text("foo"), new Text("bar")));
-    cols.add(new Pair<Text,Text>(new Text(""), new Text("bar")));
-    cols.add(new Pair<Text,Text>(new Text(""), new Text("")));
-    cols.add(new Pair<Text,Text>(new Text("foo"), new Text("")));
+    cols.add(new Pair<>(new Text("foo"), new Text("bar")));
+    cols.add(new Pair<>(new Text(""), new Text("bar")));
+    cols.add(new Pair<>(new Text(""), new Text("")));
+    cols.add(new Pair<>(new Text("foo"), new Text("")));
     AccumuloInputFormat.fetchColumns(job, cols);
     Set<Pair<Text,Text>> setCols = AccumuloInputFormat.getFetchedColumns(job);
     assertEquals(cols, setCols);
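The hunk above replaces explicit generic type arguments with the Java 7 diamond operator. A minimal sketch of the two equivalent forms follows; it uses `java.util.AbstractMap.SimpleEntry` as a stand-in for Accumulo's `Pair` so the snippet compiles without Hadoop or Accumulo on the classpath.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.HashSet;
import java.util.Set;

public class DiamondDemo {
  // Pre-Java-7 style: type arguments repeated on the right-hand side.
  public static Set<SimpleEntry<String,String>> explicit() {
    Set<SimpleEntry<String,String>> cols = new HashSet<SimpleEntry<String,String>>();
    cols.add(new SimpleEntry<String,String>("foo", "bar"));
    return cols;
  }

  // Java 7+ diamond operator: the compiler infers the same type arguments.
  public static Set<SimpleEntry<String,String>> diamond() {
    Set<SimpleEntry<String,String>> cols = new HashSet<>();
    cols.add(new SimpleEntry<>("foo", "bar"));
    return cols;
  }
}
```

Both methods produce equal sets; the diamond form only removes redundant source text, so a mechanical rewrite like this diff's cannot change behavior.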
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormatTest.java
index b83bfef..12849fe 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloMultiTableInputFormatTest.java
@@ -17,170 +17,47 @@
 package org.apache.accumulo.core.client.mapreduce;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
 
-import java.io.File;
 import java.io.IOException;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
 
 public class AccumuloMultiTableInputFormatTest {
 
-  private static final String PREFIX = AccumuloMultiTableInputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapreduce_table_1";
-  private static final String TEST_TABLE_2 = PREFIX + "_mapreduce_table_2";
-
-  private static AssertionError e1 = null;
-  private static AssertionError e2 = null;
-
-  private static class MRTester extends Configured implements Tool {
-
-    private static class TestMapper extends Mapper<Key,Value,Key,Value> {
-      Key key = null;
-      int count = 0;
-
-      @Override
-      protected void map(Key k, Value v, Context context) throws IOException, InterruptedException {
-        try {
-          String tableName = ((RangeInputSplit) context.getInputSplit()).getTableName();
-          if (key != null)
-            assertEquals(key.getRow().toString(), new String(v.get()));
-          assertEquals(new Text(String.format("%s_%09x", tableName, count + 1)), k.getRow());
-          assertEquals(String.format("%s_%09x", tableName, count), new String(v.get()));
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        key = new Key(k);
-        count++;
-      }
-
-      @Override
-      protected void cleanup(Context context) throws IOException, InterruptedException {
-        try {
-          assertEquals(100, count);
-        } catch (AssertionError e) {
-          e2 = e;
-        }
-      }
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table1> <table2>");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table1 = args[2];
-      String table2 = args[3];
-
-      Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
-      job.setJarByClass(this.getClass());
-
-      job.setInputFormatClass(AccumuloMultiTableInputFormat.class);
-
-      AccumuloMultiTableInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-
-      InputTableConfig tableConfig1 = new InputTableConfig();
-      InputTableConfig tableConfig2 = new InputTableConfig();
-
-      Map<String,InputTableConfig> configMap = new HashMap<String,InputTableConfig>();
-      configMap.put(table1, tableConfig1);
-      configMap.put(table2, tableConfig2);
-
-      AccumuloMultiTableInputFormat.setInputTableConfigs(job, configMap);
-      AccumuloMultiTableInputFormat.setMockInstance(job, INSTANCE_NAME);
-
-      job.setMapperClass(TestMapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormatClass(NullOutputFormat.class);
-
-      job.setNumReduceTasks(0);
-
-      job.waitForCompletion(true);
-
-      return job.isSuccessful() ? 0 : 1;
-    }
-
-    public static void main(String[] args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
-    }
-  }
-
-  /**
-   * Generate incrementing counts and attach table name to the key/value so that order and multi-table data can be verified.
-   */
-  @Test
-  public void testMap() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
-    BatchWriter bw2 = c.createBatchWriter(TEST_TABLE_2, new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation t1m = new Mutation(new Text(String.format("%s_%09x", TEST_TABLE_1, i + 1)));
-      t1m.put(new Text(), new Text(), new Value(String.format("%s_%09x", TEST_TABLE_1, i).getBytes()));
-      bw.addMutation(t1m);
-      Mutation t2m = new Mutation(new Text(String.format("%s_%09x", TEST_TABLE_2, i + 1)));
-      t2m.put(new Text(), new Text(), new Value(String.format("%s_%09x", TEST_TABLE_2, i).getBytes()));
-      bw2.addMutation(t2m);
-    }
-    bw.close();
-    bw2.close();
-
-    MRTester.main(new String[] {"root", "", TEST_TABLE_1, TEST_TABLE_2});
-    assertNull(e1);
-    assertNull(e2);
-  }
+  @Rule
+  public TestName testName = new TestName();
 
   /**
    * Verify {@link InputTableConfig} objects get correctly serialized in the JobContext.
    */
   @Test
   public void testInputTableConfigSerialization() throws IOException {
+    String table1 = testName.getMethodName() + "1";
+    String table2 = testName.getMethodName() + "2";
     Job job = Job.getInstance();
 
     InputTableConfig tableConfig = new InputTableConfig().setRanges(Collections.singletonList(new Range("a", "b")))
-        .fetchColumns(Collections.singleton(new Pair<Text,Text>(new Text("CF1"), new Text("CQ1"))))
+        .fetchColumns(Collections.singleton(new Pair<>(new Text("CF1"), new Text("CQ1"))))
         .setIterators(Collections.singletonList(new IteratorSetting(50, "iter1", "iterclass1")));
 
-    Map<String,InputTableConfig> configMap = new HashMap<String,InputTableConfig>();
-    configMap.put(TEST_TABLE_1, tableConfig);
-    configMap.put(TEST_TABLE_2, tableConfig);
+    Map<String,InputTableConfig> configMap = new HashMap<>();
+    configMap.put(table1, tableConfig);
+    configMap.put(table2, tableConfig);
 
     AccumuloMultiTableInputFormat.setInputTableConfigs(job, configMap);
 
-    assertEquals(tableConfig, AccumuloMultiTableInputFormat.getInputTableConfig(job, TEST_TABLE_1));
-    assertEquals(tableConfig, AccumuloMultiTableInputFormat.getInputTableConfig(job, TEST_TABLE_2));
+    assertEquals(tableConfig, AccumuloMultiTableInputFormat.getInputTableConfig(job, table1));
+    assertEquals(tableConfig, AccumuloMultiTableInputFormat.getInputTableConfig(job, table2));
   }
 
 }
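The surviving `testInputTableConfigSerialization` verifies that `InputTableConfig` objects survive being written into and read back from the job context. The sketch below illustrates the underlying Hadoop-style `write`/read round trip with a tiny self-contained stand-in; `TableConfig` and its two string fields are hypothetical, whereas the real class carries ranges, fetched columns, and iterator settings and implements `Writable`.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class WritableRoundTrip {
  // Hypothetical stand-in for a Writable-style config (the real
  // InputTableConfig serializes much richer state).
  static class TableConfig {
    final String startRow, endRow;
    TableConfig(String s, String e) { startRow = s; endRow = e; }
    void write(DataOutput out) throws IOException {
      out.writeUTF(startRow);
      out.writeUTF(endRow);
    }
    static TableConfig read(DataInput in) throws IOException {
      return new TableConfig(in.readUTF(), in.readUTF());
    }
    @Override public boolean equals(Object o) {
      return o instanceof TableConfig && ((TableConfig) o).startRow.equals(startRow)
          && ((TableConfig) o).endRow.equals(endRow);
    }
    @Override public int hashCode() { return startRow.hashCode() ^ endRow.hashCode(); }
  }

  // Serialize to bytes, deserialize, and compare -- the property the test asserts.
  public static boolean roundTrips(TableConfig cfg) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    cfg.write(new DataOutputStream(bytes));
    TableConfig back = TableConfig.read(
        new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
    return cfg.equals(back);
  }
}
```

Equality-after-round-trip is the whole contract here, which is why the test compares the deserialized config against the original with `assertEquals` rather than inspecting individual fields.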
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormatTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormatTest.java
index 242bba6..94ef555 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormatTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormatTest.java
@@ -17,120 +17,17 @@
 package org.apache.accumulo.core.client.mapreduce;
 
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
 
-import java.io.File;
 import java.io.IOException;
-import java.util.Iterator;
-import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.JobContext;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
 import org.junit.Test;
 
-/**
- *
- */
 public class AccumuloOutputFormatTest {
-  private static AssertionError e1 = null;
-  private static final String PREFIX = AccumuloOutputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapreduce_table_1";
-  private static final String TEST_TABLE_2 = PREFIX + "_mapreduce_table_2";
-
-  private static class MRTester extends Configured implements Tool {
-    private static class TestMapper extends Mapper<Key,Value,Text,Mutation> {
-      Key key = null;
-      int count = 0;
-
-      @Override
-      protected void map(Key k, Value v, Context context) throws IOException, InterruptedException {
-        try {
-          if (key != null)
-            assertEquals(key.getRow().toString(), new String(v.get()));
-          assertEquals(k.getRow(), new Text(String.format("%09x", count + 1)));
-          assertEquals(new String(v.get()), String.format("%09x", count));
-        } catch (AssertionError e) {
-          e1 = e;
-        }
-        key = new Key(k);
-        count++;
-      }
-
-      @Override
-      protected void cleanup(Context context) throws IOException, InterruptedException {
-        Mutation m = new Mutation("total");
-        m.put("", "", Integer.toString(count));
-        context.write(new Text(), m);
-      }
-    }
-
-    @Override
-    public int run(String[] args) throws Exception {
-
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <inputtable> <outputtable>");
-      }
-
-      String user = args[0];
-      String pass = args[1];
-      String table1 = args[2];
-      String table2 = args[3];
-
-      Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
-      job.setJarByClass(this.getClass());
-
-      job.setInputFormatClass(AccumuloInputFormat.class);
-
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloInputFormat.setInputTableName(job, table1);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
-
-      job.setMapperClass(TestMapper.class);
-      job.setMapOutputKeyClass(Key.class);
-      job.setMapOutputValueClass(Value.class);
-      job.setOutputFormatClass(AccumuloOutputFormat.class);
-      job.setOutputKeyClass(Text.class);
-      job.setOutputValueClass(Mutation.class);
-
-      AccumuloOutputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
-      AccumuloOutputFormat.setCreateTables(job, false);
-      AccumuloOutputFormat.setDefaultTableName(job, table2);
-      AccumuloOutputFormat.setMockInstance(job, INSTANCE_NAME);
-
-      job.setNumReduceTasks(0);
-
-      job.waitForCompletion(true);
-
-      return job.isSuccessful() ? 0 : 1;
-    }
-
-    public static void main(String[] args) throws Exception {
-      Configuration conf = new Configuration();
-      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
-    }
-  }
 
   @Test
   public void testBWSettings() throws IOException {
@@ -172,28 +69,4 @@
     myAOF.checkOutputSpecs(job);
   }
 
-  @Test
-  public void testMR() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
-    for (int i = 0; i < 100; i++) {
-      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
-      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    MRTester.main(new String[] {"root", "", TEST_TABLE_1, TEST_TABLE_2});
-    assertNull(e1);
-
-    Scanner scanner = c.createScanner(TEST_TABLE_2, new Authorizations());
-    Iterator<Entry<Key,Value>> iter = scanner.iterator();
-    assertTrue(iter.hasNext());
-    Entry<Key,Value> entry = iter.next();
-    assertEquals(Integer.parseInt(new String(entry.getValue().get())), 100);
-    assertFalse(iter.hasNext());
-  }
 }
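The retained `testBWSettings` checks that batch-writer options set on the output format can be read back from the job. A self-contained sketch of that set-then-get round trip is below; the property keys and the `Properties`-backed "job configuration" are assumptions for illustration only, not the real `AccumuloOutputFormat` storage format.

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class BwSettingsDemo {
  // Hypothetical serializer: writes batch-writer settings into a
  // configuration, the way the output format stores them on the Job.
  static void setBatchWriterOptions(Properties jobConf, long maxMemory, long maxLatencyMs) {
    jobConf.setProperty("bw.maxMemory", Long.toString(maxMemory));
    jobConf.setProperty("bw.maxLatencyMs", Long.toString(maxLatencyMs));
  }

  // Hypothetical deserializer: reads the settings back out.
  static long[] getBatchWriterOptions(Properties jobConf) {
    return new long[] {
        Long.parseLong(jobConf.getProperty("bw.maxMemory")),
        Long.parseLong(jobConf.getProperty("bw.maxLatencyMs"))};
  }

  // Set 1 MiB / 30 s, then read both values back.
  public static long[] roundTrip() {
    Properties jobConf = new Properties();
    setBatchWriterOptions(jobConf, 1024 * 1024L, TimeUnit.SECONDS.toMillis(30));
    return getBatchWriterOptions(jobConf);
  }
}
```

Because the settings only exist as serialized configuration entries once the job is submitted, asserting the round trip (rather than the in-memory object) is what actually protects against regressions.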
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/BadPasswordSplitsAccumuloInputFormat.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/BadPasswordSplitsAccumuloInputFormat.java
deleted file mode 100644
index 9028d94..0000000
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/BadPasswordSplitsAccumuloInputFormat.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mapreduce;
-
-import java.io.IOException;
-import java.util.List;
-
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.hadoop.mapreduce.InputSplit;
-import org.apache.hadoop.mapreduce.JobContext;
-
-/**
- * AccumuloInputFormat which returns an "empty" RangeInputSplit
- */
-public class BadPasswordSplitsAccumuloInputFormat extends AccumuloInputFormat {
-
-  @Override
-  public List<InputSplit> getSplits(JobContext context) throws IOException {
-    List<InputSplit> splits = super.getSplits(context);
-
-    for (InputSplit split : splits) {
-      org.apache.accumulo.core.client.mapreduce.RangeInputSplit rangeSplit = (org.apache.accumulo.core.client.mapreduce.RangeInputSplit) split;
-      rangeSplit.setToken(new PasswordToken("anythingelse"));
-    }
-
-    return splits;
-  }
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/EmptySplitsAccumuloInputFormat.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/EmptySplitsAccumuloInputFormat.java
deleted file mode 100644
index dd531c0..0000000
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/EmptySplitsAccumuloInputFormat.java
+++ /dev/null
@@ -1,45 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.client.mapreduce;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.hadoop.mapreduce.InputSplit;
-import org.apache.hadoop.mapreduce.JobContext;
-
-/**
- * AccumuloInputFormat which returns an "empty" RangeInputSplit
- */
-public class EmptySplitsAccumuloInputFormat extends AccumuloInputFormat {
-
-  @Override
-  public List<InputSplit> getSplits(JobContext context) throws IOException {
-    List<InputSplit> oldSplits = super.getSplits(context);
-    List<InputSplit> newSplits = new ArrayList<InputSplit>(oldSplits.size());
-
-    // Copy only the necessary information
-    for (InputSplit oldSplit : oldSplits) {
-      org.apache.accumulo.core.client.mapreduce.RangeInputSplit newSplit = new org.apache.accumulo.core.client.mapreduce.RangeInputSplit(
-          (org.apache.accumulo.core.client.mapreduce.RangeInputSplit) oldSplit);
-      newSplits.add(newSplit);
-    }
-
-    return newSplits;
-  }
-}
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/InputTableConfigTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/InputTableConfigTest.java
index 4953654..f6f7b2f 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/InputTableConfigTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/InputTableConfigTest.java
@@ -66,7 +66,7 @@
 
   @Test
   public void testSerialization_ranges() throws IOException {
-    List<Range> ranges = new ArrayList<Range>();
+    List<Range> ranges = new ArrayList<>();
     ranges.add(new Range("a", "b"));
     ranges.add(new Range("c", "d"));
     tableQueryConfig.setRanges(ranges);
@@ -79,8 +79,8 @@
 
   @Test
   public void testSerialization_columns() throws IOException {
-    Set<Pair<Text,Text>> columns = new HashSet<Pair<Text,Text>>();
-    columns.add(new Pair<Text,Text>(new Text("cf1"), new Text("cq1")));
+    Set<Pair<Text,Text>> columns = new HashSet<>();
+    columns.add(new Pair<>(new Text("cf1"), new Text("cq1")));
     columns.add(new Pair<Text,Text>(new Text("cf2"), null));
     tableQueryConfig.fetchColumns(columns);
 
@@ -92,7 +92,7 @@
 
   @Test
   public void testSerialization_iterators() throws IOException {
-    List<IteratorSetting> settings = new ArrayList<IteratorSetting>();
+    List<IteratorSetting> settings = new ArrayList<>();
     settings.add(new IteratorSetting(50, "iter", "iterclass"));
     settings.add(new IteratorSetting(55, "iter2", "iterclass2"));
     tableQueryConfig.setIterators(settings);
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java
index 833e594..0eb8010 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/RangeInputSplitTest.java
@@ -33,6 +33,7 @@
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
@@ -65,13 +66,13 @@
   public void testAllFieldsWritable() throws IOException {
     RangeInputSplit split = new RangeInputSplit("table", "1", new Range(new Key("a"), new Key("b")), new String[] {"localhost"});
 
-    Set<Pair<Text,Text>> fetchedColumns = new HashSet<Pair<Text,Text>>();
+    Set<Pair<Text,Text>> fetchedColumns = new HashSet<>();
 
-    fetchedColumns.add(new Pair<Text,Text>(new Text("colf1"), new Text("colq1")));
-    fetchedColumns.add(new Pair<Text,Text>(new Text("colf2"), new Text("colq2")));
+    fetchedColumns.add(new Pair<>(new Text("colf1"), new Text("colq1")));
+    fetchedColumns.add(new Pair<>(new Text("colf2"), new Text("colq2")));
 
     // Fake some iterators
-    ArrayList<IteratorSetting> iterators = new ArrayList<IteratorSetting>();
+    ArrayList<IteratorSetting> iterators = new ArrayList<>();
     IteratorSetting setting = new IteratorSetting(50, SummingCombiner.class);
     setting.addOption("foo", "bar");
     iterators.add(setting);
@@ -89,7 +90,7 @@
     split.setToken(new PasswordToken("password"));
     split.setPrincipal("root");
     split.setInstanceName("instance");
-    split.setMockInstance(true);
+    DeprecationUtil.setMockInstance(split, true);
     split.setZooKeepers("localhost");
     split.setIterators(iterators);
     split.setLogLevel(Level.WARN);
@@ -116,7 +117,7 @@
     Assert.assertEquals(split.getToken(), newSplit.getToken());
     Assert.assertEquals(split.getPrincipal(), newSplit.getPrincipal());
     Assert.assertEquals(split.getInstanceName(), newSplit.getInstanceName());
-    Assert.assertEquals(split.isMockInstance(), newSplit.isMockInstance());
+    Assert.assertEquals(DeprecationUtil.isMockInstanceSet(split), DeprecationUtil.isMockInstanceSet(newSplit));
     Assert.assertEquals(split.getZooKeepers(), newSplit.getZooKeepers());
     Assert.assertEquals(split.getIterators(), newSplit.getIterators());
     Assert.assertEquals(split.getLogLevel(), newSplit.getLogLevel());
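This hunk routes the deprecated mock-instance accessors through `DeprecationUtil` instead of calling them directly. The sketch below shows the general shim pattern that such a utility follows, so that `@SuppressWarnings("deprecation")` lives in one class and every caller stays warning-free; `LegacySplit` and `ShimUtil` are hypothetical stand-ins, not the real Accumulo classes.

```java
public class DeprecationShimDemo {
  // Stand-in for a class whose mock-instance accessors are deprecated,
  // as RangeInputSplit's are in this codebase.
  static class LegacySplit {
    private boolean mock;
    @Deprecated public void setMockInstance(boolean b) { mock = b; }
    @Deprecated public boolean isMockInstance() { return mock; }
  }

  // All deprecated calls are funneled through this one utility, so the
  // suppression annotation appears in a single, auditable place.
  static final class ShimUtil {
    @SuppressWarnings("deprecation")
    static void setMockInstance(LegacySplit split, boolean value) {
      split.setMockInstance(value);
    }

    @SuppressWarnings("deprecation")
    static boolean isMockInstanceSet(LegacySplit split) {
      return split.isMockInstance();
    }
  }

  public static boolean demo() {
    LegacySplit split = new LegacySplit();
    ShimUtil.setMockInstance(split, true); // caller compiles without warnings
    return ShimUtil.isMockInstanceSet(split);
  }
}
```

When the deprecated API is finally removed, only the shim needs to change, which is presumably why both `RangeInputSplitTest` and `BatchInputSplitTest` were converted in the same commit.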
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java
index 4f3caf0..17c781d 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/impl/BatchInputSplitTest.java
@@ -27,13 +27,13 @@
 import java.util.Set;
 
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.mapreduce.impl.BatchInputSplit;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.iterators.user.WholeRowIterator;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
@@ -68,13 +68,13 @@
     Range[] ranges = new Range[] {new Range(new Key("a"), new Key("b"))};
     BatchInputSplit split = new BatchInputSplit("table", "1", Arrays.asList(ranges), new String[] {"localhost"});
 
-    Set<Pair<Text,Text>> fetchedColumns = new HashSet<Pair<Text,Text>>();
+    Set<Pair<Text,Text>> fetchedColumns = new HashSet<>();
 
-    fetchedColumns.add(new Pair<Text,Text>(new Text("colf1"), new Text("colq1")));
-    fetchedColumns.add(new Pair<Text,Text>(new Text("colf2"), new Text("colq2")));
+    fetchedColumns.add(new Pair<>(new Text("colf1"), new Text("colq1")));
+    fetchedColumns.add(new Pair<>(new Text("colf2"), new Text("colq2")));
 
     // Fake some iterators
-    ArrayList<IteratorSetting> iterators = new ArrayList<IteratorSetting>();
+    ArrayList<IteratorSetting> iterators = new ArrayList<>();
     IteratorSetting setting = new IteratorSetting(50, SummingCombiner.class);
     setting.addOption("foo", "bar");
     iterators.add(setting);
@@ -88,7 +88,7 @@
     split.setFetchedColumns(fetchedColumns);
     split.setToken(new PasswordToken("password"));
     split.setPrincipal("root");
-    split.setMockInstance(true);
+    DeprecationUtil.setMockInstance(split, true);
     split.setInstanceName("instance");
     split.setZooKeepers("localhost");
     split.setIterators(iterators);
@@ -113,7 +113,7 @@
     Assert.assertEquals(split.getToken(), newSplit.getToken());
     Assert.assertEquals(split.getPrincipal(), newSplit.getPrincipal());
     Assert.assertEquals(split.getInstanceName(), newSplit.getInstanceName());
-    Assert.assertEquals(split.isMockInstance(), newSplit.isMockInstance());
+    Assert.assertEquals(DeprecationUtil.isMockInstanceSet(split), DeprecationUtil.isMockInstanceSet(newSplit));
     Assert.assertEquals(split.getZooKeepers(), newSplit.getZooKeepers());
     Assert.assertEquals(split.getIterators(), newSplit.getIterators());
     Assert.assertEquals(split.getLogLevel(), newSplit.getLogLevel());
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
index 751421a..a7e5e0a 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBaseTest.java
@@ -26,7 +26,6 @@
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
@@ -100,15 +99,17 @@
     // assertEquals(1234000, ((ZooKeeperInstance) instance).getZooKeepersSessionTimeOut());
   }
 
+  @SuppressWarnings("deprecation")
   @Test
   public void testSetMockInstance() {
+    Class<?> mockClass = org.apache.accumulo.core.client.mock.MockInstance.class;
     Configuration conf = new Configuration();
     ConfiguratorBase.setMockInstance(this.getClass(), conf, "testInstanceName");
     assertEquals("testInstanceName", conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.InstanceOpts.NAME)));
     assertEquals(null, conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.InstanceOpts.ZOO_KEEPERS)));
-    assertEquals(MockInstance.class.getSimpleName(), conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.InstanceOpts.TYPE)));
+    assertEquals(mockClass.getSimpleName(), conf.get(ConfiguratorBase.enumToConfKey(this.getClass(), ConfiguratorBase.InstanceOpts.TYPE)));
     Instance instance = ConfiguratorBase.getInstance(this.getClass(), conf);
-    assertEquals(MockInstance.class.getName(), instance.getClass().getName());
+    assertEquals(mockClass.getName(), instance.getClass().getName());
   }
 
   @Test
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/MockConnectorTest.java b/core/src/test/java/org/apache/accumulo/core/client/mock/MockConnectorTest.java
index 980498e..b70cb00 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/MockConnectorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mock/MockConnectorTest.java
@@ -53,6 +53,7 @@
 
 import com.google.common.collect.Iterators;
 
+@Deprecated
 public class MockConnectorTest {
   Random random = new Random();
 
@@ -124,7 +125,7 @@
 
     Mutation good = new Mutation("good");
     good.put(asText(random.nextInt()), asText(random.nextInt()), new Value("good".getBytes()));
-    List<Mutation> mutations = new ArrayList<Mutation>();
+    List<Mutation> mutations = new ArrayList<>();
     mutations.add(good);
     mutations.add(bad);
     try {
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/MockNamespacesTest.java b/core/src/test/java/org/apache/accumulo/core/client/mock/MockNamespacesTest.java
index 308152e..ca12838 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/MockNamespacesTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mock/MockNamespacesTest.java
@@ -24,7 +24,6 @@
 import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.Map.Entry;
-import java.util.Random;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -47,11 +46,24 @@
 import org.apache.accumulo.core.iterators.Filter;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.security.Authorizations;
+import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
 
+@Deprecated
 public class MockNamespacesTest {
 
-  Random random = new Random();
+  @Rule
+  public TestName test = new TestName();
+
+  private Connector conn;
+
+  @Before
+  public void setupInstance() throws Exception {
+    Instance inst = new MockInstance(test.getMethodName());
+    conn = inst.getConnector("user", new PasswordToken("pass"));
+  }
 
   /**
    * This test creates a table without specifying a namespace. In this case, it puts the table into the default namespace.
@@ -59,12 +71,10 @@
   @Test
   public void testDefaultNamespace() throws Exception {
     String tableName = "test";
-    Instance instance = new MockInstance("default");
-    Connector c = instance.getConnector("user", new PasswordToken("pass"));
 
-    assertTrue(c.namespaceOperations().exists(Namespaces.DEFAULT_NAMESPACE));
-    c.tableOperations().create(tableName);
-    assertTrue(c.tableOperations().exists(tableName));
+    assertTrue(conn.namespaceOperations().exists(Namespaces.DEFAULT_NAMESPACE));
+    conn.tableOperations().create(tableName);
+    assertTrue(conn.tableOperations().exists(tableName));
   }
 
   /**
@@ -78,38 +88,35 @@
     String tableName1 = namespace + ".table1";
     String tableName2 = namespace + ".table2";
 
-    Instance instance = new MockInstance("createdelete");
-    Connector c = instance.getConnector("user", new PasswordToken("pass"));
+    conn.namespaceOperations().create(namespace);
+    assertTrue(conn.namespaceOperations().exists(namespace));
 
-    c.namespaceOperations().create(namespace);
-    assertTrue(c.namespaceOperations().exists(namespace));
+    conn.tableOperations().create(tableName1);
+    assertTrue(conn.tableOperations().exists(tableName1));
 
-    c.tableOperations().create(tableName1);
-    assertTrue(c.tableOperations().exists(tableName1));
-
-    c.tableOperations().create(tableName2);
-    assertTrue(c.tableOperations().exists(tableName2));
+    conn.tableOperations().create(tableName2);
+    assertTrue(conn.tableOperations().exists(tableName2));
 
     // deleting
     try {
       // can't delete a namespace with tables in it
-      c.namespaceOperations().delete(namespace);
+      conn.namespaceOperations().delete(namespace);
       fail();
     } catch (NamespaceNotEmptyException e) {
       // ignore, supposed to happen
     }
-    assertTrue(c.namespaceOperations().exists(namespace));
-    assertTrue(c.tableOperations().exists(tableName1));
-    assertTrue(c.tableOperations().exists(tableName2));
+    assertTrue(conn.namespaceOperations().exists(namespace));
+    assertTrue(conn.tableOperations().exists(tableName1));
+    assertTrue(conn.tableOperations().exists(tableName2));
 
-    c.tableOperations().delete(tableName2);
-    assertTrue(!c.tableOperations().exists(tableName2));
-    assertTrue(c.namespaceOperations().exists(namespace));
+    conn.tableOperations().delete(tableName2);
+    assertTrue(!conn.tableOperations().exists(tableName2));
+    assertTrue(conn.namespaceOperations().exists(namespace));
 
-    c.tableOperations().delete(tableName1);
-    assertTrue(!c.tableOperations().exists(tableName1));
-    c.namespaceOperations().delete(namespace);
-    assertTrue(!c.namespaceOperations().exists(namespace));
+    conn.tableOperations().delete(tableName1);
+    assertTrue(!conn.tableOperations().exists(tableName1));
+    conn.namespaceOperations().delete(namespace);
+    assertTrue(!conn.namespaceOperations().exists(namespace));
   }
 
   /**
@@ -130,51 +137,48 @@
     String propKey = Property.TABLE_SCAN_MAXMEM.getKey();
     String propVal = "42K";
 
-    Instance instance = new MockInstance("props");
-    Connector c = instance.getConnector("user", new PasswordToken("pass"));
-
-    c.namespaceOperations().create(namespace);
-    c.tableOperations().create(tableName1);
-    c.namespaceOperations().setProperty(namespace, propKey, propVal);
+    conn.namespaceOperations().create(namespace);
+    conn.tableOperations().create(tableName1);
+    conn.namespaceOperations().setProperty(namespace, propKey, propVal);
 
     // check the namespace has the property
-    assertTrue(checkNamespaceHasProp(c, namespace, propKey, propVal));
+    assertTrue(checkNamespaceHasProp(conn, namespace, propKey, propVal));
 
     // check that the table gets it from the namespace
-    assertTrue(checkTableHasProp(c, tableName1, propKey, propVal));
+    assertTrue(checkTableHasProp(conn, tableName1, propKey, propVal));
 
     // test a second table to be sure the first wasn't magical
     // (also, changed the order, the namespace has the property already)
-    c.tableOperations().create(tableName2);
-    assertTrue(checkTableHasProp(c, tableName2, propKey, propVal));
+    conn.tableOperations().create(tableName2);
+    assertTrue(checkTableHasProp(conn, tableName2, propKey, propVal));
 
     // test that table properties override namespace properties
     String propKey2 = Property.TABLE_FILE_MAX.getKey();
     String propVal2 = "42";
     String tablePropVal = "13";
 
-    c.tableOperations().setProperty(tableName2, propKey2, tablePropVal);
-    c.namespaceOperations().setProperty("propchange", propKey2, propVal2);
+    conn.tableOperations().setProperty(tableName2, propKey2, tablePropVal);
+    conn.namespaceOperations().setProperty("propchange", propKey2, propVal2);
 
-    assertTrue(checkTableHasProp(c, tableName2, propKey2, tablePropVal));
+    assertTrue(checkTableHasProp(conn, tableName2, propKey2, tablePropVal));
 
     // now check that you can change the default namespace's properties
     propVal = "13K";
     String tableName = "some_table";
-    c.tableOperations().create(tableName);
-    c.namespaceOperations().setProperty(Namespaces.DEFAULT_NAMESPACE, propKey, propVal);
+    conn.tableOperations().create(tableName);
+    conn.namespaceOperations().setProperty(Namespaces.DEFAULT_NAMESPACE, propKey, propVal);
 
-    assertTrue(checkTableHasProp(c, tableName, propKey, propVal));
+    assertTrue(checkTableHasProp(conn, tableName, propKey, propVal));
 
     // test the properties server-side by configuring an iterator.
     // should not show anything with column-family = 'a'
     String tableName3 = namespace + ".table3";
-    c.tableOperations().create(tableName3);
+    conn.tableOperations().create(tableName3);
 
     IteratorSetting setting = new IteratorSetting(250, "thing", SimpleFilter.class.getName());
-    c.namespaceOperations().attachIterator(namespace, setting);
+    conn.namespaceOperations().attachIterator(namespace, setting);
 
-    BatchWriter bw = c.createBatchWriter(tableName3, new BatchWriterConfig());
+    BatchWriter bw = conn.createBatchWriter(tableName3, new BatchWriterConfig());
     Mutation m = new Mutation("r");
     m.put("a", "b", new Value("abcde".getBytes()));
     bw.addMutation(m);
@@ -197,22 +201,18 @@
     String tableName1 = "renamed.table1";
     // String tableName2 = "cloned.table2";
 
-    Instance instance = new MockInstance("renameclone");
-    Connector c = instance.getConnector("user", new PasswordToken("pass"));
+    conn.tableOperations().create(tableName);
+    conn.namespaceOperations().create(namespace1);
+    conn.namespaceOperations().create(namespace2);
 
-    c.tableOperations().create(tableName);
-    c.namespaceOperations().create(namespace1);
-    c.namespaceOperations().create(namespace2);
+    conn.tableOperations().rename(tableName, tableName1);
 
-    c.tableOperations().rename(tableName, tableName1);
-
-    assertTrue(c.tableOperations().exists(tableName1));
-    assertTrue(!c.tableOperations().exists(tableName));
+    assertTrue(conn.tableOperations().exists(tableName1));
+    assertTrue(!conn.tableOperations().exists(tableName));
 
     // TODO implement clone in mock
     // c.tableOperations().clone(tableName1, tableName2, false, null, null);
     // assertTrue(c.tableOperations().exists(tableName1)); assertTrue(c.tableOperations().exists(tableName2));
-    return;
   }
 
   /**
@@ -224,18 +224,15 @@
     String namespace2 = "n2";
     String table = "t";
 
-    Instance instance = new MockInstance("rename");
-    Connector c = instance.getConnector("user", new PasswordToken("pass"));
+    conn.namespaceOperations().create(namespace1);
+    conn.tableOperations().create(namespace1 + "." + table);
 
-    c.namespaceOperations().create(namespace1);
-    c.tableOperations().create(namespace1 + "." + table);
+    conn.namespaceOperations().rename(namespace1, namespace2);
 
-    c.namespaceOperations().rename(namespace1, namespace2);
-
-    assertTrue(!c.namespaceOperations().exists(namespace1));
-    assertTrue(c.namespaceOperations().exists(namespace2));
-    assertTrue(!c.tableOperations().exists(namespace1 + "." + table));
-    assertTrue(c.tableOperations().exists(namespace2 + "." + table));
+    assertTrue(!conn.namespaceOperations().exists(namespace1));
+    assertTrue(conn.namespaceOperations().exists(namespace2));
+    assertTrue(!conn.tableOperations().exists(namespace1 + "." + table));
+    assertTrue(conn.tableOperations().exists(namespace2 + "." + table));
   }
 
   /**
@@ -243,34 +240,31 @@
    */
   @Test
   public void testNamespaceIterators() throws Exception {
-    Instance instance = new MockInstance("Iterators");
-    Connector c = instance.getConnector("user", new PasswordToken("pass"));
-
     String namespace = "iterator";
     String tableName = namespace + ".table";
     String iter = "thing";
 
-    c.namespaceOperations().create(namespace);
-    c.tableOperations().create(tableName);
+    conn.namespaceOperations().create(namespace);
+    conn.tableOperations().create(tableName);
 
     IteratorSetting setting = new IteratorSetting(250, iter, SimpleFilter.class.getName());
-    HashSet<IteratorScope> scope = new HashSet<IteratorScope>();
+    HashSet<IteratorScope> scope = new HashSet<>();
     scope.add(IteratorScope.scan);
-    c.namespaceOperations().attachIterator(namespace, setting, EnumSet.copyOf(scope));
+    conn.namespaceOperations().attachIterator(namespace, setting, EnumSet.copyOf(scope));
 
-    BatchWriter bw = c.createBatchWriter(tableName, new BatchWriterConfig());
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
     Mutation m = new Mutation("r");
     m.put("a", "b", new Value("abcde".getBytes(UTF_8)));
     bw.addMutation(m);
     bw.flush();
 
-    Scanner s = c.createScanner(tableName, Authorizations.EMPTY);
+    Scanner s = conn.createScanner(tableName, Authorizations.EMPTY);
     System.out.println(s.iterator().next());
     // do scanners work correctly in mock?
     // assertTrue(!s.iterator().hasNext());
 
-    assertTrue(c.namespaceOperations().listIterators(namespace).containsKey(iter));
-    c.namespaceOperations().removeIterator(namespace, iter, EnumSet.copyOf(scope));
+    assertTrue(conn.namespaceOperations().listIterators(namespace).containsKey(iter));
+    conn.namespaceOperations().removeIterator(namespace, iter, EnumSet.copyOf(scope));
   }
 
   private boolean checkTableHasProp(Connector c, String t, String propKey, String propVal) throws AccumuloException, TableNotFoundException {
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/MockTableOperationsTest.java b/core/src/test/java/org/apache/accumulo/core/client/mock/MockTableOperationsTest.java
index 9733bd3..58f3777 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/MockTableOperationsTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mock/MockTableOperationsTest.java
@@ -59,16 +59,29 @@
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
 
 import com.google.common.collect.Iterators;
 
+@Deprecated
 public class MockTableOperationsTest {
 
+  @Rule
+  public TestName test = new TestName();
+
+  private Connector conn;
+
+  @Before
+  public void setupInstance() throws Exception {
+    Instance inst = new MockInstance(test.getMethodName());
+    conn = inst.getConnector("user", new PasswordToken("pass"));
+  }
+
   @Test
   public void testCreateUseVersions() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
-    Instance instance = new MockInstance("topstest");
-    Connector conn = instance.getConnector("user", new PasswordToken("pass"));
     String t = "tableName1";
 
     {
@@ -128,8 +141,6 @@
 
   @Test
   public void testTableNotFound() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
-    Instance instance = new MockInstance("topstest");
-    Connector conn = instance.getConnector("user", new PasswordToken("pass"));
     IteratorSetting setting = new IteratorSetting(100, "myvers", VersioningIterator.class);
     String t = "tableName";
     try {
@@ -161,7 +172,7 @@
       Assert.fail();
     } catch (TableNotFoundException e) {}
     try {
-      conn.tableOperations().removeIterator(t, null, null);
+      conn.tableOperations().removeIterator(t, null, EnumSet.noneOf(IteratorScope.class));
       Assert.fail();
     } catch (TableNotFoundException e) {}
     try {
@@ -188,12 +199,10 @@
   @Test
   public void testImport() throws Throwable {
     ImportTestFilesAndData dataAndFiles = prepareTestFiles();
-    Instance instance = new MockInstance("foo");
-    Connector connector = instance.getConnector("user", new PasswordToken(new byte[0]));
-    TableOperations tableOperations = connector.tableOperations();
+    TableOperations tableOperations = conn.tableOperations();
     tableOperations.create("a_table");
     tableOperations.importDirectory("a_table", dataAndFiles.importPath.toString(), dataAndFiles.failurePath.toString(), false);
-    Scanner scanner = connector.createScanner("a_table", new Authorizations());
+    Scanner scanner = conn.createScanner("a_table", new Authorizations());
     Iterator<Entry<Key,Value>> iterator = scanner.iterator();
     for (int i = 0; i < 5; i++) {
       Assert.assertTrue(iterator.hasNext());
@@ -216,11 +225,12 @@
     fs.delete(tempFile, true);
     fs.mkdirs(failures);
     fs.mkdirs(tempFile.getParent());
-    FileSKVWriter writer = FileOperations.getInstance().openWriter(tempFile.toString(), fs, defaultConf, AccumuloConfiguration.getDefaultConfiguration());
+    FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder().forFile(tempFile.toString(), fs, defaultConf)
+        .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
     writer.startDefaultLocalityGroup();
-    List<Pair<Key,Value>> keyVals = new ArrayList<Pair<Key,Value>>();
+    List<Pair<Key,Value>> keyVals = new ArrayList<>();
     for (int i = 0; i < 5; i++) {
-      keyVals.add(new Pair<Key,Value>(new Key("a" + i, "b" + i, "c" + i, new ColumnVisibility(""), 1000l + i), new Value(Integer.toString(i).getBytes())));
+      keyVals.add(new Pair<>(new Key("a" + i, "b" + i, "c" + i, new ColumnVisibility(""), 1000L + i), new Value(Integer.toString(i).getBytes())));
     }
     for (Pair<Key,Value> keyVal : keyVals) {
       writer.append(keyVal.getFirst(), keyVal.getSecond());
@@ -235,18 +245,14 @@
 
   @Test(expected = TableNotFoundException.class)
   public void testFailsWithNoTable() throws Throwable {
-    Instance instance = new MockInstance("foo");
-    Connector connector = instance.getConnector("user", new PasswordToken(new byte[0]));
-    TableOperations tableOperations = connector.tableOperations();
+    TableOperations tableOperations = conn.tableOperations();
     ImportTestFilesAndData testFiles = prepareTestFiles();
     tableOperations.importDirectory("doesnt_exist_table", testFiles.importPath.toString(), testFiles.failurePath.toString(), false);
   }
 
   @Test(expected = IOException.class)
   public void testFailsWithNonEmptyFailureDirectory() throws Throwable {
-    Instance instance = new MockInstance("foo");
-    Connector connector = instance.getConnector("user", new PasswordToken(new byte[0]));
-    TableOperations tableOperations = connector.tableOperations();
+    TableOperations tableOperations = conn.tableOperations();
     ImportTestFilesAndData testFiles = prepareTestFiles();
     FileSystem fs = testFiles.failurePath.getFileSystem(new Configuration());
     fs.open(testFiles.failurePath.suffix("/something")).close();
@@ -255,11 +261,9 @@
 
   @Test
   public void testDeleteRows() throws Exception {
-    Instance instance = new MockInstance("rows");
-    Connector connector = instance.getConnector("user", new PasswordToken("foo".getBytes()));
-    TableOperations to = connector.tableOperations();
+    TableOperations to = conn.tableOperations();
     to.create("test");
-    BatchWriter bw = connector.createBatchWriter("test", new BatchWriterConfig());
+    BatchWriter bw = conn.createBatchWriter("test", new BatchWriterConfig());
     for (int r = 0; r < 20; r++) {
       Mutation m = new Mutation("" + r);
       for (int c = 0; c < 5; c++) {
@@ -269,7 +273,7 @@
     }
     bw.flush();
     to.deleteRows("test", new Text("1"), new Text("2"));
-    Scanner s = connector.createScanner("test", Authorizations.EMPTY);
+    Scanner s = conn.createScanner("test", Authorizations.EMPTY);
     int oneCnt = 0;
     for (Entry<Key,Value> entry : s) {
       char rowStart = entry.getKey().getRow().toString().charAt(0);
@@ -281,11 +285,9 @@
 
   @Test
   public void testDeleteRowsWithNullKeys() throws Exception {
-    Instance instance = new MockInstance("rows");
-    Connector connector = instance.getConnector("user", new PasswordToken("foo"));
-    TableOperations to = connector.tableOperations();
+    TableOperations to = conn.tableOperations();
     to.create("test2");
-    BatchWriter bw = connector.createBatchWriter("test2", new BatchWriterConfig());
+    BatchWriter bw = conn.createBatchWriter("test2", new BatchWriterConfig());
     for (int r = 0; r < 30; r++) {
       Mutation m = new Mutation(Integer.toString(r));
       for (int c = 0; c < 5; c++) {
@@ -298,7 +300,7 @@
     // test null end
     // will remove rows 4 through 9 (6 * 5 = 30 entries)
     to.deleteRows("test2", new Text("30"), null);
-    Scanner s = connector.createScanner("test2", Authorizations.EMPTY);
+    Scanner s = conn.createScanner("test2", Authorizations.EMPTY);
     int rowCnt = 0;
     for (Entry<Key,Value> entry : s) {
       String rowId = entry.getKey().getRow().toString();
@@ -311,7 +313,7 @@
     // test null start
     // will remove 0-1, 10-19, 2
     to.deleteRows("test2", null, new Text("2"));
-    s = connector.createScanner("test2", Authorizations.EMPTY);
+    s = conn.createScanner("test2", Authorizations.EMPTY);
     rowCnt = 0;
     for (Entry<Key,Value> entry : s) {
       char rowStart = entry.getKey().getRow().toString().charAt(0);
@@ -324,7 +326,7 @@
     // test null start and end
     // deletes everything still left
     to.deleteRows("test2", null, null);
-    s = connector.createScanner("test2", Authorizations.EMPTY);
+    s = conn.createScanner("test2", Authorizations.EMPTY);
     rowCnt = Iterators.size(s.iterator());
     s.close();
     to.delete("test2");
@@ -334,8 +336,6 @@
 
   @Test
   public void testTableIdMap() throws Exception {
-    Instance inst = new MockInstance("testTableIdMap");
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
     TableOperations tops = conn.tableOperations();
     tops.create("foo");
 
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/TestBatchScanner821.java b/core/src/test/java/org/apache/accumulo/core/client/mock/TestBatchScanner821.java
index b03bda9..4f041c9 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/TestBatchScanner821.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/mock/TestBatchScanner821.java
@@ -31,11 +31,23 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.WrappingIterator;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
+@Deprecated
 public class TestBatchScanner821 {
 
+  public static class TransformIterator extends WrappingIterator {
+
+    @Override
+    public Key getTopKey() {
+      Key k = getSource().getTopKey();
+      return new Key(new Text(k.getRow().toString().toLowerCase()), k.getColumnFamily(), k.getColumnQualifier(), k.getColumnVisibility(), k.getTimestamp());
+    }
+  }
+
   @Test
   public void test() throws Exception {
     MockInstance inst = new MockInstance();
diff --git a/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileTest.java b/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileTest.java
new file mode 100644
index 0000000..4993810
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileTest.java
@@ -0,0 +1,626 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.client.rfile;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Random;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.admin.NewTableConfiguration;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.ArrayByteSequence;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileOperations;
+import org.apache.accumulo.core.file.FileSKVIterator;
+import org.apache.accumulo.core.file.rfile.RFile.Reader;
+import org.apache.accumulo.core.iterators.user.RegExFilter;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.io.Text;
+import org.junit.Assert;
+import org.junit.Test;
+
+import com.google.common.collect.ImmutableMap;
+
+public class RFileTest {
+
+  // helper that consumes a boolean return value, to appease findbugs when the result genuinely does not matter
+  private void foo(boolean b) {}
+
+  private String createTmpTestFile() throws IOException {
+    File dir = new File(System.getProperty("user.dir") + "/target/rfile-test");
+    foo(dir.mkdirs());
+    File testFile = File.createTempFile("test", ".rf", dir);
+    foo(testFile.delete());
+    return testFile.getAbsolutePath();
+  }
+
+  String rowStr(int r) {
+    return String.format("%06x", r);
+  }
+
+  String colStr(int c) {
+    return String.format("%04x", c);
+  }
+
+  private SortedMap<Key,Value> createTestData(int rows, int families, int qualifiers) {
+    return createTestData(0, rows, 0, families, qualifiers);
+  }
+
+  private SortedMap<Key,Value> createTestData(int startRow, int rows, int startFamily, int families, int qualifiers) {
+    TreeMap<Key,Value> testData = new TreeMap<>();
+
+    for (int r = 0; r < rows; r++) {
+      String row = rowStr(r + startRow);
+      for (int f = 0; f < families; f++) {
+        String fam = colStr(f + startFamily);
+        for (int q = 0; q < qualifiers; q++) {
+          String qual = colStr(q);
+          Key k = new Key(row, fam, qual);
+          testData.put(k, new Value((k.hashCode() + "").getBytes()));
+        }
+      }
+    }
+
+    return testData;
+  }
+
+  private String createRFile(SortedMap<Key,Value> testData) throws Exception {
+    String testFile = createTmpTestFile();
+
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(FileSystem.getLocal(new Configuration())).build()) {
+      writer.append(testData.entrySet());
+      // TODO ensure compressors are returned
+    }
+
+    return testFile;
+  }
+
+  @Test
+  public void testIndependence() throws Exception {
+    // test to ensure that two iterators allocated from the same RFile scanner are independent.
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+
+    SortedMap<Key,Value> testData = createTestData(10, 10, 10);
+
+    String testFile = createRFile(testData);
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+    Range range1 = Range.exact(rowStr(5));
+    scanner.setRange(range1);
+    Iterator<Entry<Key,Value>> scnIter1 = scanner.iterator();
+    Iterator<Entry<Key,Value>> mapIter1 = testData.subMap(range1.getStartKey(), range1.getEndKey()).entrySet().iterator();
+
+    Range range2 = new Range(rowStr(3), true, rowStr(4), true);
+    scanner.setRange(range2);
+    Iterator<Entry<Key,Value>> scnIter2 = scanner.iterator();
+    Iterator<Entry<Key,Value>> mapIter2 = testData.subMap(range2.getStartKey(), range2.getEndKey()).entrySet().iterator();
+
+    while (scnIter1.hasNext() || scnIter2.hasNext()) {
+      if (scnIter1.hasNext()) {
+        Assert.assertTrue(mapIter1.hasNext());
+        Assert.assertEquals(scnIter1.next(), mapIter1.next());
+      } else {
+        Assert.assertFalse(mapIter1.hasNext());
+      }
+
+      if (scnIter2.hasNext()) {
+        Assert.assertTrue(mapIter2.hasNext());
+        Assert.assertEquals(scnIter2.next(), mapIter2.next());
+      } else {
+        Assert.assertFalse(mapIter2.hasNext());
+      }
+    }
+
+    Assert.assertFalse(mapIter1.hasNext());
+    Assert.assertFalse(mapIter2.hasNext());
+
+    scanner.close();
+  }
+
+  SortedMap<Key,Value> toMap(Scanner scanner) {
+    TreeMap<Key,Value> map = new TreeMap<>();
+    for (Entry<Key,Value> entry : scanner) {
+      map.put(entry.getKey(), entry.getValue());
+    }
+    return map;
+  }
+
+  @Test
+  public void testMultipleSources() throws Exception {
+    SortedMap<Key,Value> testData1 = createTestData(10, 10, 10);
+    SortedMap<Key,Value> testData2 = createTestData(0, 10, 0, 10, 10);
+
+    String testFile1 = createRFile(testData1);
+    String testFile2 = createRFile(testData2);
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    Scanner scanner = RFile.newScanner().from(testFile1, testFile2).withFileSystem(localFs).build();
+
+    TreeMap<Key,Value> expected = new TreeMap<>(testData1);
+    expected.putAll(testData2);
+
+    Assert.assertEquals(expected, toMap(scanner));
+
+    Range range = new Range(rowStr(3), true, rowStr(14), true);
+    scanner.setRange(range);
+    Assert.assertEquals(expected.subMap(range.getStartKey(), range.getEndKey()), toMap(scanner));
+
+    scanner.close();
+  }
+
+  @Test
+  public void testWriterTableProperties() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+
+    String testFile = createTmpTestFile();
+
+    Map<String,String> props = new HashMap<>();
+    props.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "1K");
+    props.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX.getKey(), "1K");
+    RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).withTableProperties(props).build();
+
+    SortedMap<Key,Value> testData1 = createTestData(10, 10, 10);
+    writer.append(testData1.entrySet());
+    writer.close();
+
+    Reader reader = getReader(localFs, testFile);
+    FileSKVIterator iiter = reader.getIndex();
+
+    int count = 0;
+    while (iiter.hasTop()) {
+      count++;
+      iiter.next();
+    }
+
+    // if the block size settings are applied, the writer should create multiple index entries
+    Assert.assertTrue(count > 10);
+
+    reader.close();
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+    Assert.assertEquals(testData1, toMap(scanner));
+    scanner.close();
+  }
+
+  @Test
+  public void testLocalityGroups() throws Exception {
+
+    SortedMap<Key,Value> testData1 = createTestData(0, 10, 0, 2, 10);
+    SortedMap<Key,Value> testData2 = createTestData(0, 10, 2, 1, 10);
+    SortedMap<Key,Value> defaultData = createTestData(0, 10, 3, 7, 10);
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build();
+
+    writer.startNewLocalityGroup("z", colStr(0), colStr(1));
+    writer.append(testData1.entrySet());
+
+    writer.startNewLocalityGroup("h", colStr(2));
+    writer.append(testData2.entrySet());
+
+    writer.startDefaultLocalityGroup();
+    writer.append(defaultData.entrySet());
+
+    writer.close();
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+
+    scanner.fetchColumnFamily(new Text(colStr(0)));
+    scanner.fetchColumnFamily(new Text(colStr(1)));
+    Assert.assertEquals(testData1, toMap(scanner));
+
+    scanner.clearColumns();
+    scanner.fetchColumnFamily(new Text(colStr(2)));
+    Assert.assertEquals(testData2, toMap(scanner));
+
+    scanner.clearColumns();
+    for (int i = 3; i < 10; i++) {
+      scanner.fetchColumnFamily(new Text(colStr(i)));
+    }
+    Assert.assertEquals(defaultData, toMap(scanner));
+
+    scanner.clearColumns();
+    Assert.assertEquals(createTestData(10, 10, 10), toMap(scanner));
+
+    scanner.close();
+
+    Reader reader = getReader(localFs, testFile);
+    Map<String,ArrayList<ByteSequence>> lGroups = reader.getLocalityGroupCF();
+    Assert.assertTrue(lGroups.containsKey("z"));
+    Assert.assertEquals(2, lGroups.get("z").size());
+    Assert.assertTrue(lGroups.get("z").contains(new ArrayByteSequence(colStr(0))));
+    Assert.assertTrue(lGroups.get("z").contains(new ArrayByteSequence(colStr(1))));
+    Assert.assertTrue(lGroups.containsKey("h"));
+    Assert.assertEquals(Arrays.asList(new ArrayByteSequence(colStr(2))), lGroups.get("h"));
+    reader.close();
+  }
+
+  @Test
+  public void testIterators() throws Exception {
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    SortedMap<Key,Value> testData = createTestData(10, 10, 10);
+    String testFile = createRFile(testData);
+
+    IteratorSetting is = new IteratorSetting(50, "regex", RegExFilter.class);
+    RegExFilter.setRegexs(is, ".*00000[78].*", null, null, null, false);
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+    scanner.addScanIterator(is);
+
+    Assert.assertEquals(createTestData(7, 2, 0, 10, 10), toMap(scanner));
+
+    scanner.close();
+  }
+
+  @Test
+  public void testAuths() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build();
+
+    Key k1 = new Key("r1", "f1", "q1", "A&B");
+    Key k2 = new Key("r1", "f1", "q2", "A");
+    Key k3 = new Key("r1", "f1", "q3");
+
+    Value v1 = new Value("p".getBytes());
+    Value v2 = new Value("c".getBytes());
+    Value v3 = new Value("t".getBytes());
+
+    writer.append(k1, v1);
+    writer.append(k2, v2);
+    writer.append(k3, v3);
+    writer.close();
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withAuthorizations(new Authorizations("A")).build();
+    Assert.assertEquals(ImmutableMap.of(k2, v2, k3, v3), toMap(scanner));
+    Assert.assertEquals(new Authorizations("A"), scanner.getAuthorizations());
+    scanner.close();
+
+    scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withAuthorizations(new Authorizations("A", "B")).build();
+    Assert.assertEquals(ImmutableMap.of(k1, v1, k2, v2, k3, v3), toMap(scanner));
+    Assert.assertEquals(new Authorizations("A", "B"), scanner.getAuthorizations());
+    scanner.close();
+
+    scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withAuthorizations(new Authorizations("B")).build();
+    Assert.assertEquals(ImmutableMap.of(k3, v3), toMap(scanner));
+    Assert.assertEquals(new Authorizations("B"), scanner.getAuthorizations());
+    scanner.close();
+  }
+
+  @Test
+  public void testNoSystemIters() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build();
+
+    Key k1 = new Key("r1", "f1", "q1");
+    k1.setTimestamp(3);
+
+    Key k2 = new Key("r1", "f1", "q1");
+    k2.setTimestamp(6);
+    k2.setDeleted(true);
+
+    Value v1 = new Value("p".getBytes());
+    Value v2 = new Value("".getBytes());
+
+    writer.append(k2, v2);
+    writer.append(k1, v1);
+    writer.close();
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+    Assert.assertFalse(scanner.iterator().hasNext());
+    scanner.close();
+
+    scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withoutSystemIterators().build();
+    Assert.assertEquals(ImmutableMap.of(k2, v2, k1, v1), toMap(scanner));
+    scanner.setRange(new Range("r2"));
+    Assert.assertFalse(scanner.iterator().hasNext());
+    scanner.close();
+  }
+
+  @Test
+  public void testBounds() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    SortedMap<Key,Value> testData = createTestData(10, 10, 10);
+    String testFile = createRFile(testData);
+
+    // set a lower bound row
+    Range bounds = new Range(rowStr(3), false, null, true);
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withBounds(bounds).build();
+    Assert.assertEquals(createTestData(4, 6, 0, 10, 10), toMap(scanner));
+    scanner.close();
+
+    // set an upper bound row
+    bounds = new Range(null, false, rowStr(7), true);
+    scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withBounds(bounds).build();
+    Assert.assertEquals(createTestData(8, 10, 10), toMap(scanner));
+    scanner.close();
+
+    // set row bounds
+    bounds = new Range(rowStr(3), false, rowStr(7), true);
+    scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withBounds(bounds).build();
+    Assert.assertEquals(createTestData(4, 4, 0, 10, 10), toMap(scanner));
+    scanner.close();
+
+    // set an exact row and column family bound
+    bounds = Range.exact(rowStr(3), colStr(5));
+    scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withBounds(bounds).build();
+    Assert.assertEquals(createTestData(3, 1, 5, 1, 10), toMap(scanner));
+    scanner.close();
+  }
+
+  @Test
+  public void testScannerTableProperties() throws Exception {
+    NewTableConfiguration ntc = new NewTableConfiguration();
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build();
+
+    Key k1 = new Key("r1", "f1", "q1");
+    k1.setTimestamp(3);
+
+    Key k2 = new Key("r1", "f1", "q1");
+    k2.setTimestamp(6);
+
+    Value v1 = new Value("p".getBytes());
+    Value v2 = new Value("q".getBytes());
+
+    writer.append(k2, v2);
+    writer.append(k1, v1);
+    writer.close();
+
+    // pass in a table config that has the versioning iterator configured
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withTableProperties(ntc.getProperties()).build();
+    Assert.assertEquals(ImmutableMap.of(k2, v2), toMap(scanner));
+    scanner.close();
+
+    scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+    Assert.assertEquals(ImmutableMap.of(k2, v2, k1, v1), toMap(scanner));
+    scanner.close();
+  }
+
+  @Test
+  public void testSampling() throws Exception {
+
+    SortedMap<Key,Value> testData1 = createTestData(1000, 2, 1);
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+
+    SamplerConfiguration sc = new SamplerConfiguration(RowSampler.class).setOptions(ImmutableMap.of("hasher", "murmur3_32", "modulus", "19"));
+
+    RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).withSampler(sc).build();
+    writer.append(testData1.entrySet());
+    writer.close();
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+    scanner.setSamplerConfiguration(sc);
+
+    RowSampler rowSampler = new RowSampler();
+    rowSampler.init(sc);
+
+    SortedMap<Key,Value> sampleData = new TreeMap<>();
+    for (Entry<Key,Value> e : testData1.entrySet()) {
+      if (rowSampler.accept(e.getKey())) {
+        sampleData.put(e.getKey(), e.getValue());
+      }
+    }
+
+    Assert.assertTrue(sampleData.size() < testData1.size());
+
+    Assert.assertEquals(sampleData, toMap(scanner));
+
+    scanner.clearSamplerConfiguration();
+
+    Assert.assertEquals(testData1, toMap(scanner));
+
+    scanner.close();
+  }
+
+  @Test
+  public void testAppendScanner() throws Exception {
+    SortedMap<Key,Value> testData = createTestData(10000, 1, 1);
+    String testFile = createRFile(testData);
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).build();
+
+    String testFile2 = createTmpTestFile();
+    RFileWriter writer = RFile.newWriter().to(testFile2).withFileSystem(localFs).build();
+    writer.append(scanner);
+    writer.close();
+    scanner.close();
+
+    scanner = RFile.newScanner().from(testFile2).withFileSystem(localFs).build();
+    Assert.assertEquals(testData, toMap(scanner));
+    scanner.close();
+  }
+
+  @Test
+  public void testCache() throws Exception {
+    SortedMap<Key,Value> testData = createTestData(10000, 1, 1);
+    String testFile = createRFile(testData);
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    Scanner scanner = RFile.newScanner().from(testFile).withFileSystem(localFs).withIndexCache(1000000).withDataCache(10000000).build();
+
+    Random rand = new Random(5);
+
+    for (int i = 0; i < 100; i++) {
+      int r = rand.nextInt(10000);
+      scanner.setRange(new Range(rowStr(r)));
+      Iterator<Entry<Key,Value>> iter = scanner.iterator();
+      Assert.assertTrue(iter.hasNext());
+      Assert.assertEquals(rowStr(r), iter.next().getKey().getRow().toString());
+      Assert.assertFalse(iter.hasNext());
+    }
+
+    scanner.close();
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testOutOfOrder() throws Exception {
+    // test that the exception declared in the API is thrown
+    Key k1 = new Key("r1", "f1", "q1");
+    Value v1 = new Value("1".getBytes());
+
+    Key k2 = new Key("r2", "f1", "q1");
+    Value v2 = new Value("2".getBytes());
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.append(k2, v2);
+      writer.append(k1, v1);
+    }
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testOutOfOrderIterable() throws Exception {
+    // test that the exception declared in the API is thrown
+    Key k1 = new Key("r1", "f1", "q1");
+    Value v1 = new Value("1".getBytes());
+
+    Key k2 = new Key("r2", "f1", "q1");
+    Value v2 = new Value("2".getBytes());
+
+    ArrayList<Entry<Key,Value>> data = new ArrayList<>();
+    data.add(new AbstractMap.SimpleEntry<>(k2, v2));
+    data.add(new AbstractMap.SimpleEntry<>(k1, v1));
+
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.append(data);
+    }
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testBadVis() throws Exception {
+    // this test has two purposes: ensure an exception is thrown, and ensure the exception documented in the javadoc is the one thrown
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.startDefaultLocalityGroup();
+      Key k1 = new Key("r1", "f1", "q1", "(A&(B");
+      writer.append(k1, new Value("".getBytes()));
+    }
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testBadVisIterable() throws Exception {
+    // test append(iterable) method
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.startDefaultLocalityGroup();
+      Key k1 = new Key("r1", "f1", "q1", "(A&(B");
+      Entry<Key,Value> entry = new AbstractMap.SimpleEntry<>(k1, new Value("".getBytes()));
+      writer.append(Collections.singletonList(entry));
+    }
+  }
+
+  @Test(expected = IllegalStateException.class)
+  public void testDoubleStart() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.startDefaultLocalityGroup();
+      writer.startDefaultLocalityGroup();
+    }
+  }
+
+  @Test(expected = IllegalStateException.class)
+  public void testAppendStartDefault() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.append(new Key("r1", "f1", "q1"), new Value("1".getBytes()));
+      writer.startDefaultLocalityGroup();
+    }
+  }
+
+  @Test(expected = IllegalStateException.class)
+  public void testStartAfter() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      Key k1 = new Key("r1", "f1", "q1");
+      writer.append(k1, new Value("".getBytes()));
+      writer.startNewLocalityGroup("lg1", "fam1");
+    }
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testIllegalColumn() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.startNewLocalityGroup("lg1", "fam1");
+      Key k1 = new Key("r1", "f1", "q1");
+      // should not be able to append column family f1, which is outside locality group lg1
+      writer.append(k1, new Value("".getBytes()));
+    }
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testWrongGroup() throws Exception {
+    LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+    String testFile = createTmpTestFile();
+    try (RFileWriter writer = RFile.newWriter().to(testFile).withFileSystem(localFs).build()) {
+      writer.startNewLocalityGroup("lg1", "fam1");
+      Key k1 = new Key("r1", "fam1", "q1");
+      writer.append(k1, new Value("".getBytes()));
+      writer.startDefaultLocalityGroup();
+      // should not be able to append the column family fam1 to the default locality group
+      Key k2 = new Key("r1", "fam1", "q2");
+      writer.append(k2, new Value("".getBytes()));
+    }
+  }
+
+  private Reader getReader(LocalFileSystem localFs, String testFile) throws IOException {
+    Reader reader = (Reader) FileOperations.getInstance().newReaderBuilder().forFile(testFile).inFileSystem(localFs, localFs.getConf())
+        .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
+    return reader;
+  }
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/client/security/SecurityErrorCodeTest.java b/core/src/test/java/org/apache/accumulo/core/client/security/SecurityErrorCodeTest.java
index 75d4c35..8843e3a 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/security/SecurityErrorCodeTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/security/SecurityErrorCodeTest.java
@@ -28,8 +28,8 @@
 
   @Test
   public void testEnumsSame() {
-    HashSet<String> secNames1 = new HashSet<String>();
-    HashSet<String> secNames2 = new HashSet<String>();
+    HashSet<String> secNames1 = new HashSet<>();
+    HashSet<String> secNames2 = new HashSet<>();
 
     for (SecurityErrorCode sec : SecurityErrorCode.values())
       secNames1.add(sec.name());
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java
index efb080d..f12ba63 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/AccumuloConfigurationTest.java
@@ -72,4 +72,79 @@
     }
     assertTrue("test was a dud, and did nothing", found);
   }
+
+  @Test
+  public void testGetSinglePort() {
+    AccumuloConfiguration c = AccumuloConfiguration.getDefaultConfiguration();
+    ConfigurationCopy cc = new ConfigurationCopy(c);
+    cc.set(Property.TSERV_CLIENTPORT, "9997");
+    int[] ports = cc.getPort(Property.TSERV_CLIENTPORT);
+    assertEquals(1, ports.length);
+    assertEquals(9997, ports[0]);
+  }
+
+  @Test
+  public void testGetAnyPort() {
+    AccumuloConfiguration c = AccumuloConfiguration.getDefaultConfiguration();
+    ConfigurationCopy cc = new ConfigurationCopy(c);
+    cc.set(Property.TSERV_CLIENTPORT, "0");
+    int[] ports = cc.getPort(Property.TSERV_CLIENTPORT);
+    assertEquals(1, ports.length);
+    assertEquals(0, ports[0]);
+  }
+
+  @Test
+  public void testGetInvalidPort() {
+    AccumuloConfiguration c = AccumuloConfiguration.getDefaultConfiguration();
+    ConfigurationCopy cc = new ConfigurationCopy(c);
+    cc.set(Property.TSERV_CLIENTPORT, "1020");
+    int[] ports = cc.getPort(Property.TSERV_CLIENTPORT);
+    assertEquals(1, ports.length);
+    assertEquals(Integer.parseInt(Property.TSERV_CLIENTPORT.getDefaultValue()), ports[0]);
+  }
+
+  @Test
+  public void testGetPortRange() {
+    AccumuloConfiguration c = AccumuloConfiguration.getDefaultConfiguration();
+    ConfigurationCopy cc = new ConfigurationCopy(c);
+    cc.set(Property.TSERV_CLIENTPORT, "9997-9999");
+    int[] ports = cc.getPort(Property.TSERV_CLIENTPORT);
+    assertEquals(3, ports.length);
+    assertEquals(9997, ports[0]);
+    assertEquals(9998, ports[1]);
+    assertEquals(9999, ports[2]);
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testGetPortRangeInvalidLow() {
+    AccumuloConfiguration c = AccumuloConfiguration.getDefaultConfiguration();
+    ConfigurationCopy cc = new ConfigurationCopy(c);
+    cc.set(Property.TSERV_CLIENTPORT, "1020-1026");
+    int[] ports = cc.getPort(Property.TSERV_CLIENTPORT);
+    assertEquals(3, ports.length);
+    assertEquals(1024, ports[0]);
+    assertEquals(1025, ports[1]);
+    assertEquals(1026, ports[2]);
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testGetPortRangeInvalidHigh() {
+    AccumuloConfiguration c = AccumuloConfiguration.getDefaultConfiguration();
+    ConfigurationCopy cc = new ConfigurationCopy(c);
+    cc.set(Property.TSERV_CLIENTPORT, "65533-65538");
+    int[] ports = cc.getPort(Property.TSERV_CLIENTPORT);
+    assertEquals(3, ports.length);
+    assertEquals(65533, ports[0]);
+    assertEquals(65534, ports[1]);
+    assertEquals(65535, ports[2]);
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testGetPortInvalidSyntax() {
+    AccumuloConfiguration c = AccumuloConfiguration.getDefaultConfiguration();
+    ConfigurationCopy cc = new ConfigurationCopy(c);
+    cc.set(Property.TSERV_CLIENTPORT, "[65533,65538]");
+    cc.getPort(Property.TSERV_CLIENTPORT);
+  }
+
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java b/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java
index f34bf3b..7ac2113c 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java
@@ -27,7 +27,7 @@
 
   @Before
   public void setUp() {
-    m = new java.util.HashMap<String,String>();
+    m = new java.util.HashMap<>();
   }
 
   @Test
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShimTest.java b/core/src/test/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShimTest.java
index 37dff20..e540b72 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShimTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/CredentialProviderFactoryShimTest.java
@@ -83,7 +83,7 @@
     List<String> keys = CredentialProviderFactoryShim.getKeys(conf);
     Assert.assertNotNull(keys);
 
-    Assert.assertEquals(expectation.keySet(), new HashSet<String>(keys));
+    Assert.assertEquals(expectation.keySet(), new HashSet<>(keys));
     for (String expectedKey : keys) {
       char[] value = CredentialProviderFactoryShim.getValueFromCredentialProvider(conf, expectedKey);
       Assert.assertNotNull(value);
@@ -96,7 +96,7 @@
     String absPath = getKeyStoreUrl(populatedKeyStore);
     Configuration conf = new Configuration();
     conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, absPath);
-    Map<String,String> expectations = new HashMap<String,String>();
+    Map<String,String> expectations = new HashMap<>();
     expectations.put("key1", "value1");
     expectations.put("key2", "value2");
 
@@ -117,7 +117,7 @@
     String populatedAbsPath = getKeyStoreUrl(populatedKeyStore), emptyAbsPath = getKeyStoreUrl(emptyKeyStore);
     Configuration conf = new Configuration();
     conf.set(CredentialProviderFactoryShim.CREDENTIAL_PROVIDER_PATH, populatedAbsPath + "," + emptyAbsPath);
-    Map<String,String> expectations = new HashMap<String,String>();
+    Map<String,String> expectations = new HashMap<>();
     expectations.put("key1", "value1");
     expectations.put("key2", "value2");
 
@@ -186,7 +186,7 @@
       Configuration cpConf = CredentialProviderFactoryShim.getConfiguration(dfsConfiguration, "jceks://hdfs/accumulo.jceks");
 
       // The values in the keystore
-      Map<String,String> expectations = new HashMap<String,String>();
+      Map<String,String> expectations = new HashMap<>();
       expectations.put("key1", "value1");
       expectations.put("key2", "value2");
 
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java
index 9566f2e..cb6810c 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/DefaultConfigurationTest.java
@@ -42,7 +42,7 @@
   @Test
   public void testGetProperties() {
     Predicate<String> all = Predicates.alwaysTrue();
-    Map<String,String> p = new java.util.HashMap<String,String>();
+    Map<String,String> p = new java.util.HashMap<>();
     c.getProperties(p, all);
     assertEquals(Property.MASTER_CLIENTPORT.getDefaultValue(), p.get(Property.MASTER_CLIENTPORT.getKey()));
   }
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/PropertyTest.java b/core/src/test/java/org/apache/accumulo/core/conf/PropertyTest.java
index 4d1dc70..79f2f21 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/PropertyTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/PropertyTest.java
@@ -36,16 +36,20 @@
 public class PropertyTest {
   @Test
   public void testProperties() {
-    HashSet<String> validPrefixes = new HashSet<String>();
+    HashSet<String> validPrefixes = new HashSet<>();
     for (Property prop : Property.values())
       if (prop.getType().equals(PropertyType.PREFIX))
         validPrefixes.add(prop.getKey());
 
-    HashSet<String> propertyNames = new HashSet<String>();
+    HashSet<String> propertyNames = new HashSet<>();
     for (Property prop : Property.values()) {
       // make sure properties default values match their type
-      assertTrue("Property " + prop + " has invalid default value " + prop.getDefaultValue() + " for type " + prop.getType(),
-          prop.getType().isValidFormat(prop.getDefaultValue()));
+      if (prop.getType() == PropertyType.PREFIX) {
+        assertNull("PREFIX property " + prop.name() + " has unexpected non-null default value.", prop.getDefaultValue());
+      } else {
+        assertTrue("Property " + prop + " has invalid default value " + prop.getDefaultValue() + " for type " + prop.getType(),
+            prop.getType().isValidFormat(prop.getDefaultValue()));
+      }
 
       // make sure property has a description
       assertFalse("Description not set for " + prop, prop.getDescription() == null || prop.getDescription().isEmpty());
@@ -68,7 +72,7 @@
 
   @Test
   public void testPorts() {
-    HashSet<Integer> usedPorts = new HashSet<Integer>();
+    HashSet<Integer> usedPorts = new HashSet<>();
     for (Property prop : Property.values())
       if (prop.getType().equals(PropertyType.PORT)) {
         int port = Integer.parseInt(prop.getDefaultValue());
@@ -94,7 +98,7 @@
 
   @Test
   public void testSensitiveKeys() {
-    final TreeMap<String,String> extras = new TreeMap<String,String>();
+    final TreeMap<String,String> extras = new TreeMap<>();
     extras.put("trace.token.property.blah", "something");
 
     AccumuloConfiguration conf = new DefaultConfiguration() {
@@ -122,14 +126,14 @@
         };
       }
     };
-    TreeSet<String> expected = new TreeSet<String>();
+    TreeSet<String> expected = new TreeSet<>();
     for (Entry<String,String> entry : conf) {
       String key = entry.getKey();
       if (key.equals(Property.INSTANCE_SECRET.getKey()) || key.toLowerCase().contains("password") || key.toLowerCase().endsWith("secret")
           || key.startsWith(Property.TRACE_TOKEN_PROPERTY_PREFIX.getKey()))
         expected.add(key);
     }
-    TreeSet<String> actual = new TreeSet<String>();
+    TreeSet<String> actual = new TreeSet<>();
     for (Entry<String,String> entry : conf) {
       String key = entry.getKey();
       if (Property.isSensitive(key))
@@ -147,9 +151,4 @@
       }
     }
   }
-
-  @Test
-  public void testGCDeadServerWaitSecond() {
-    assertEquals("1h", Property.GC_WAL_DEAD_SERVER_WAIT.getDefaultValue());
-  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java b/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java
index 73174c2..9852ee8 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/PropertyTypeTest.java
@@ -20,15 +20,35 @@
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.lang.reflect.Method;
 import java.util.Arrays;
-import java.util.List;
 
+import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
+
+import com.google.common.base.Function;
+import com.google.common.base.Joiner;
+import com.google.common.base.Predicate;
+import com.google.common.collect.Iterables;
 
 public class PropertyTypeTest {
-  @Test
-  public void testToString() {
-    assertEquals("string", PropertyType.STRING.toString());
+
+  @Rule
+  public TestName testName = new TestName();
+  private PropertyType type = null;
+
+  @Before
+  public void getPropertyTypeForTest() {
+    String tn = testName.getMethodName();
+    if (tn.startsWith("testType")) {
+      try {
+        type = PropertyType.valueOf(tn.substring(8));
+      } catch (IllegalArgumentException e) {
+        throw new AssertionError("Unexpected test method for non-existent " + PropertyType.class.getSimpleName() + "." + tn.substring(8));
+      }
+    }
   }
 
   @Test
@@ -37,52 +57,145 @@
         PropertyType.STRING.getFormatDescription());
   }
 
-  private void typeCheckValidFormat(PropertyType type, String... args) {
-    for (String s : args)
-      assertTrue(s + " should be valid", type.isValidFormat(s));
-  }
-
-  private void typeCheckInvalidFormat(PropertyType type, String... args) {
-    for (String s : args)
-      assertFalse(s + " should be invalid", type.isValidFormat(s));
+  @Test
+  public void testToString() {
+    assertEquals("string", PropertyType.STRING.toString());
   }
 
   @Test
-  public void testTypeFormats() {
-    typeCheckValidFormat(PropertyType.TIMEDURATION, "600", "30s", "45m", "30000ms", "3d", "1h");
-    typeCheckInvalidFormat(PropertyType.TIMEDURATION, "1w", "1h30m", "1s 200ms", "ms", "", "a");
+  public void testFullCoverage() {
+    // This test checks the remainder of the methods in this class to ensure each property type has a corresponding test
+    Iterable<String> types = Iterables.transform(Arrays.asList(PropertyType.values()), new Function<PropertyType,String>() {
+      @Override
+      public String apply(final PropertyType input) {
+        return input.name();
+      }
+    });
+    Iterable<String> typesTested = Iterables.transform(
+        Iterables.filter(Iterables.transform(Arrays.asList(this.getClass().getMethods()), new Function<Method,String>() {
+          @Override
+          public String apply(final Method input) {
+            return input.getName();
+          }
+        }), new Predicate<String>() {
+          @Override
+          public boolean apply(final String input) {
+            return input.startsWith("testType");
+          }
+        }), new Function<String,String>() {
+          @Override
+          public String apply(final String input) {
+            return input.substring(8);
+          }
+        });
+    for (String t : types) {
+      assertTrue(PropertyType.class.getSimpleName() + "." + t + " does not have a test.", Iterables.contains(typesTested, t));
+    }
+    assertEquals(Iterables.size(types), Iterables.size(typesTested));
+  }
 
-    typeCheckValidFormat(PropertyType.MEMORY, "1024", "20B", "100K", "1500M", "2G");
-    typeCheckInvalidFormat(PropertyType.MEMORY, "1M500K", "1M 2K", "1MB", "1.5G", "1,024K", "", "a");
+  private void valid(final String... args) {
+    for (String s : args) {
+      assertTrue(s + " should be valid for " + PropertyType.class.getSimpleName() + "." + type.name(), type.isValidFormat(s));
+    }
+  }
 
-    typeCheckValidFormat(PropertyType.HOSTLIST, "localhost", "server1,server2,server3", "server1:1111,server2:3333", "localhost:1111", "server2:1111",
-        "www.server", "www.server:1111", "www.server.com", "www.server.com:111");
-    typeCheckInvalidFormat(PropertyType.HOSTLIST, ":111", "local host");
+  private void invalid(final String... args) {
+    for (String s : args) {
+      assertFalse(s + " should be invalid for " + PropertyType.class.getSimpleName() + "." + type.name(), type.isValidFormat(s));
+    }
+  }
 
-    typeCheckValidFormat(PropertyType.ABSOLUTEPATH, "/foo", "/foo/c", "/");
-    // in hadoop 2.0 Path only normalizes Windows paths properly when run on a Windows system
+  @Test
+  public void testTypeABSOLUTEPATH() {
+    valid(null, "/foo", "/foo/c", "/", System.getProperty("user.dir"));
+    // in Hadoop 2.x, Path only normalizes Windows paths properly when run on a Windows system
     // this makes the following checks fail
-    if (System.getProperty("os.name").toLowerCase().contains("windows"))
-      typeCheckValidFormat(PropertyType.ABSOLUTEPATH, "d:\\foo12", "c:\\foo\\g", "c:\\foo\\c", "c:\\");
-    typeCheckValidFormat(PropertyType.ABSOLUTEPATH, System.getProperty("user.dir"));
-    typeCheckInvalidFormat(PropertyType.ABSOLUTEPATH, "foo12", "foo/g", "foo\\c");
+    if (System.getProperty("os.name").toLowerCase().contains("windows")) {
+      valid("d:\\foo12", "c:\\foo\\g", "c:\\foo\\c", "c:\\");
+    }
+    invalid("foo12", "foo/g", "foo\\c");
   }
 
   @Test
-  public void testIsValidFormat_RegexAbsent() {
-    // assertTrue(PropertyType.PREFIX.isValidFormat("whatever")); currently forbidden
-    assertTrue(PropertyType.PREFIX.isValidFormat(null));
+  public void testTypeBOOLEAN() {
+    valid(null, "True", "true", "False", "false", "tRUE", "fAlSe");
+    invalid("foobar", "", "F", "T", "1", "0", "f", "t");
   }
 
   @Test
-  public void testBooleans() {
-    List<String> goodValues = Arrays.asList("True", "true", "False", "false");
-    for (String value : goodValues) {
-      assertTrue(value + " should be a valid boolean format", PropertyType.BOOLEAN.isValidFormat(value));
-    }
-    List<String> badValues = Arrays.asList("foobar", "tRUE", "fAlSe");
-    for (String value : badValues) {
-      assertFalse(value + " should not be a valid boolean format", PropertyType.BOOLEAN.isValidFormat(value));
-    }
+  public void testTypeCLASSNAME() {
+    valid(null, "", String.class.getName(), String.class.getName() + "$1", String.class.getName() + "$TestClass");
+    invalid("abc-def", "-", "!@#$%");
   }
+
+  @Test
+  public void testTypeCLASSNAMELIST() {
+    testTypeCLASSNAME(); // test single class name
+    valid(null, Joiner.on(",").join(String.class.getName(), String.class.getName() + "$1", String.class.getName() + "$TestClass"));
+  }
+
+  @Test
+  public void testTypeCOUNT() {
+    valid(null, "0", "1024", Long.toString(Integer.MAX_VALUE));
+    invalid(Long.toString(Integer.MAX_VALUE + 1L), "-65535", "-1");
+  }
+
+  @Test
+  public void testTypeDURABILITY() {
+    valid(null, "none", "log", "flush", "sync");
+    invalid("", "other");
+  }
+
+  @Test
+  public void testTypeFRACTION() {
+    valid(null, "1", "0", "1.0", "25%", "2.5%", "10.2E-3", "10.2E-3%", ".3");
+    invalid("", "other", "20%%", "-0.3", "3.6a", "%25", "3%a");
+  }
+
+  @Test
+  public void testTypeHOSTLIST() {
+    valid(null, "localhost", "server1,server2,server3", "server1:1111,server2:3333", "localhost:1111", "server2:1111", "www.server", "www.server:1111",
+        "www.server.com", "www.server.com:111");
+    invalid(":111", "local host");
+  }
+
+  @Test
+  public void testTypeMEMORY() {
+    valid(null, "1024", "20B", "100K", "1500M", "2G");
+    invalid("1M500K", "1M 2K", "1MB", "1.5G", "1,024K", "", "a");
+  }
+
+  @Test
+  public void testTypePATH() {
+    valid(null, "", "/absolute/path", "relative/path", "/with/trailing/slash/", "with/trailing/slash/");
+  }
+
+  @Test
+  public void testTypePORT() {
+    valid(null, "0", "1024", "30000", "65535");
+    invalid("65536", "-65535", "-1", "1023");
+  }
+
+  @Test
+  public void testTypePREFIX() {
+    invalid(null, "", "whatever");
+  }
+
+  @Test
+  public void testTypeSTRING() {
+    valid(null, "", "whatever");
+  }
+
+  @Test
+  public void testTypeTIMEDURATION() {
+    valid(null, "600", "30s", "45m", "30000ms", "3d", "1h");
+    invalid("1w", "1h30m", "1s 200ms", "ms", "", "a");
+  }
+
+  @Test
+  public void testTypeURI() {
+    valid(null, "", "hdfs://hostname", "file:///path/", "hdfs://example.com:port/path");
+  }
+
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java b/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
index a0d48b9..f89dbfa 100644
--- a/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/conf/SiteConfigurationTest.java
@@ -66,7 +66,7 @@
 
     EasyMock.replay(siteCfg);
 
-    Map<String,String> props = new HashMap<String,String>();
+    Map<String,String> props = new HashMap<>();
     Predicate<String> all = Predicates.alwaysTrue();
     siteCfg.getProperties(props, all);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java b/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java
index b1ac1c5d..79968be 100644
--- a/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/data/KeyExtentTest.java
@@ -44,7 +44,7 @@
 
 public class KeyExtentTest {
   KeyExtent nke(String t, String er, String per) {
-    return new KeyExtent(new Text(t), er == null ? null : new Text(er), per == null ? null : new Text(per));
+    return new KeyExtent(t, er == null ? null : new Text(er), per == null ? null : new Text(per));
   }
 
   KeyExtent ke;
@@ -52,7 +52,7 @@
 
   @Before
   public void setup() {
-    set0 = new TreeSet<KeyExtent>();
+    set0 = new TreeSet<>();
   }
 
   @Test
@@ -62,7 +62,7 @@
     ke = new KeyExtent(flattenedExtent, (Text) null);
 
     assertEquals(new Text("bar"), ke.getEndRow());
-    assertEquals(new Text("foo"), ke.getTableId());
+    assertEquals("foo", ke.getTableId());
     assertNull(ke.getPrevEndRow());
 
     flattenedExtent = new Text("foo<");
@@ -70,7 +70,7 @@
     ke = new KeyExtent(flattenedExtent, (Text) null);
 
     assertNull(ke.getEndRow());
-    assertEquals(new Text("foo"), ke.getTableId());
+    assertEquals("foo", ke.getTableId());
     assertNull(ke.getPrevEndRow());
 
     flattenedExtent = new Text("foo;bar;");
@@ -78,7 +78,7 @@
     ke = new KeyExtent(flattenedExtent, (Text) null);
 
     assertEquals(new Text("bar;"), ke.getEndRow());
-    assertEquals(new Text("foo"), ke.getTableId());
+    assertEquals("foo", ke.getTableId());
     assertNull(ke.getPrevEndRow());
 
   }
@@ -90,7 +90,7 @@
     assertNull(KeyExtent.findContainingExtent(nke("t", "1", null), set0));
     assertNull(KeyExtent.findContainingExtent(nke("t", null, "0"), set0));
 
-    TreeSet<KeyExtent> set1 = new TreeSet<KeyExtent>();
+    TreeSet<KeyExtent> set1 = new TreeSet<>();
 
     set1.add(nke("t", null, null));
 
@@ -99,7 +99,7 @@
     assertEquals(nke("t", null, null), KeyExtent.findContainingExtent(nke("t", "1", null), set1));
     assertEquals(nke("t", null, null), KeyExtent.findContainingExtent(nke("t", null, "0"), set1));
 
-    TreeSet<KeyExtent> set2 = new TreeSet<KeyExtent>();
+    TreeSet<KeyExtent> set2 = new TreeSet<>();
 
     set2.add(nke("t", "g", null));
     set2.add(nke("t", null, "g"));
@@ -123,7 +123,7 @@
     assertEquals(nke("t", null, "g"), KeyExtent.findContainingExtent(nke("t", "z", "h"), set2));
     assertEquals(nke("t", null, "g"), KeyExtent.findContainingExtent(nke("t", null, "h"), set2));
 
-    TreeSet<KeyExtent> set3 = new TreeSet<KeyExtent>();
+    TreeSet<KeyExtent> set3 = new TreeSet<>();
 
     set3.add(nke("t", "g", null));
     set3.add(nke("t", "s", "g"));
@@ -149,7 +149,7 @@
     assertEquals(nke("t", "g", null), KeyExtent.findContainingExtent(nke("t", "f", null), set3));
     assertNull(KeyExtent.findContainingExtent(nke("t", "h", null), set3));
 
-    TreeSet<KeyExtent> set4 = new TreeSet<KeyExtent>();
+    TreeSet<KeyExtent> set4 = new TreeSet<>();
 
     set4.add(nke("t1", "d", null));
     set4.add(nke("t1", "q", "d"));
@@ -185,14 +185,14 @@
 
   @Test
   public void testOverlaps() {
-    SortedMap<KeyExtent,Object> set0 = new TreeMap<KeyExtent,Object>();
+    SortedMap<KeyExtent,Object> set0 = new TreeMap<>();
     set0.put(nke("a", null, null), null);
 
     // Nothing overlaps with the empty set
     assertFalse(overlaps(nke("t", null, null), null));
     assertFalse(overlaps(nke("t", null, null), set0));
 
-    SortedMap<KeyExtent,Object> set1 = new TreeMap<KeyExtent,Object>();
+    SortedMap<KeyExtent,Object> set1 = new TreeMap<>();
 
     // Everything overlaps with the infinite range
     set1.put(nke("t", null, null), null);
@@ -206,7 +206,7 @@
     assertTrue(overlaps(nke("t", null, "a"), set1));
 
     // simple overlaps
-    SortedMap<KeyExtent,Object> set2 = new TreeMap<KeyExtent,Object>();
+    SortedMap<KeyExtent,Object> set2 = new TreeMap<>();
     set2.put(nke("a", null, null), null);
     set2.put(nke("t", "m", "j"), null);
     set2.put(nke("z", null, null), null);
@@ -225,7 +225,7 @@
     assertFalse(overlaps(nke("t", null, "m"), set2));
 
     // infinite overlaps
-    SortedMap<KeyExtent,Object> set3 = new TreeMap<KeyExtent,Object>();
+    SortedMap<KeyExtent,Object> set3 = new TreeMap<>();
     set3.put(nke("t", "j", null), null);
     set3.put(nke("t", null, "m"), null);
     assertTrue(overlaps(nke("t", "k", "a"), set3));
@@ -237,7 +237,7 @@
     // falls between
     assertFalse(overlaps(nke("t", "l", "k"), set3));
 
-    SortedMap<KeyExtent,Object> set4 = new TreeMap<KeyExtent,Object>();
+    SortedMap<KeyExtent,Object> set4 = new TreeMap<>();
     set4.put(nke("t", null, null), null);
     assertTrue(overlaps(nke("t", "k", "a"), set4));
     assertTrue(overlaps(nke("t", "k", null), set4));
diff --git a/core/src/test/java/org/apache/accumulo/core/data/KeyTest.java b/core/src/test/java/org/apache/accumulo/core/data/KeyTest.java
index 9cee691..f14786f 100644
--- a/core/src/test/java/org/apache/accumulo/core/data/KeyTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/data/KeyTest.java
@@ -141,7 +141,7 @@
 
   @Test
   public void testCompressDecompress() {
-    List<KeyValue> kvs = new ArrayList<KeyValue>();
+    List<KeyValue> kvs = new ArrayList<>();
     kvs.add(new KeyValue(new Key(), new byte[] {}));
     kvs.add(new KeyValue(new Key("r"), new byte[] {}));
     kvs.add(new KeyValue(new Key("r", "cf"), new byte[] {}));
@@ -166,4 +166,45 @@
       assertEquals(kv.getKey(), new Key(tkv.getKey()));
     }
   }
+
+  @Test
+  public void testBytesText() {
+    byte[] row = new byte[] {1};
+    Key bytesRowKey = new Key(row);
+    Key textRowKey = new Key(new Text(row));
+    assertEquals(bytesRowKey, textRowKey);
+
+    byte[] colFamily = new byte[] {0, 1};
+    Key bytesColFamilyKey = new Key(row, colFamily);
+    Key textColFamilyKey = new Key(new Text(row), new Text(colFamily));
+    assertEquals(bytesColFamilyKey, textColFamilyKey);
+
+    byte[] colQualifier = new byte[] {0, 0, 1};
+    Key bytesColQualifierKey = new Key(row, colFamily, colQualifier);
+    Key textColQualifierKey = new Key(new Text(row), new Text(colFamily), new Text(colQualifier));
+    assertEquals(bytesColQualifierKey, textColQualifierKey);
+
+    byte[] colVisibility = new byte[] {0, 0, 0, 1};
+    Key bytesColVisibilityKey = new Key(row, colFamily, colQualifier, colVisibility);
+    Key textColVisibilityKey = new Key(new Text(row), new Text(colFamily), new Text(colQualifier), new Text(colVisibility));
+    assertEquals(bytesColVisibilityKey, textColVisibilityKey);
+
+    long ts = 0L;
+    Key bytesTSKey = new Key(row, colFamily, colQualifier, colVisibility, ts);
+    Key textTSKey = new Key(new Text(row), new Text(colFamily), new Text(colQualifier), new Text(colVisibility), ts);
+    assertEquals(bytesTSKey, textTSKey);
+
+    Key bytesTSKey2 = new Key(row, ts);
+    Key textTSKey2 = new Key(new Text(row), ts);
+    assertEquals(bytesTSKey2, textTSKey2);
+
+    Key bytesTSKey3 = new Key(row, colFamily, colQualifier, ts);
+    Key textTSKey3 = new Key(new Text(row), new Text(colFamily), new Text(colQualifier), ts);
+    assertEquals(bytesTSKey3, textTSKey3);
+
+    ColumnVisibility colVisibility2 = new ColumnVisibility("v1");
+    Key bytesColVisibilityKey2 = new Key(row, colFamily, colQualifier, colVisibility2, ts);
+    Key textColVisibilityKey2 = new Key(new Text(row), new Text(colFamily), new Text(colQualifier), colVisibility2, ts);
+    assertEquals(bytesColVisibilityKey2, textColVisibilityKey2);
+  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/data/OldMutation.java b/core/src/test/java/org/apache/accumulo/core/data/OldMutation.java
index a40f4e0..5e7d7bd 100644
--- a/core/src/test/java/org/apache/accumulo/core/data/OldMutation.java
+++ b/core/src/test/java/org/apache/accumulo/core/data/OldMutation.java
@@ -221,7 +221,7 @@
       put(val);
     } else {
       if (values == null)
-        values = new ArrayList<byte[]>();
+        values = new ArrayList<>();
       byte copy[] = new byte[val.length];
       System.arraycopy(val, 0, copy, 0, val.length);
       values.add(copy);
@@ -428,7 +428,7 @@
     if (!valuesPresent) {
       values = null;
     } else {
-      values = new ArrayList<byte[]>();
+      values = new ArrayList<>();
       int numValues = in.readInt();
       for (int i = 0; i < numValues; i++) {
         len = in.readInt();
diff --git a/core/src/test/java/org/apache/accumulo/core/data/RangeTest.java b/core/src/test/java/org/apache/accumulo/core/data/RangeTest.java
index 1e5e985..c4837fe 100644
--- a/core/src/test/java/org/apache/accumulo/core/data/RangeTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/data/RangeTest.java
@@ -49,8 +49,8 @@
   }
 
   private void check(List<Range> rl, List<Range> expected) {
-    HashSet<Range> s1 = new HashSet<Range>(rl);
-    HashSet<Range> s2 = new HashSet<Range>(expected);
+    HashSet<Range> s1 = new HashSet<>(rl);
+    HashSet<Range> s2 = new HashSet<>(expected);
 
     assertTrue("got : " + rl + " expected : " + expected, s1.equals(s2));
   }
@@ -191,31 +191,30 @@
 
   public void testMergeOverlapping22() {
 
-    Range ke1 = new KeyExtent(new Text("tab1"), new Text("Bank"), null).toMetadataRange();
-    Range ke2 = new KeyExtent(new Text("tab1"), new Text("Fails"), new Text("Bank")).toMetadataRange();
-    Range ke3 = new KeyExtent(new Text("tab1"), new Text("Sam"), new Text("Fails")).toMetadataRange();
-    Range ke4 = new KeyExtent(new Text("tab1"), new Text("bails"), new Text("Sam")).toMetadataRange();
-    Range ke5 = new KeyExtent(new Text("tab1"), null, new Text("bails")).toMetadataRange();
+    Range ke1 = new KeyExtent("tab1", new Text("Bank"), null).toMetadataRange();
+    Range ke2 = new KeyExtent("tab1", new Text("Fails"), new Text("Bank")).toMetadataRange();
+    Range ke3 = new KeyExtent("tab1", new Text("Sam"), new Text("Fails")).toMetadataRange();
+    Range ke4 = new KeyExtent("tab1", new Text("bails"), new Text("Sam")).toMetadataRange();
+    Range ke5 = new KeyExtent("tab1", null, new Text("bails")).toMetadataRange();
 
     List<Range> rl = nrl(ke1, ke2, ke3, ke4, ke5);
-    List<Range> expected = nrl(new KeyExtent(new Text("tab1"), null, null).toMetadataRange());
+    List<Range> expected = nrl(new KeyExtent("tab1", null, null).toMetadataRange());
     check(Range.mergeOverlapping(rl), expected);
 
     rl = nrl(ke1, ke2, ke4, ke5);
-    expected = nrl(new KeyExtent(new Text("tab1"), new Text("Fails"), null).toMetadataRange(),
-        new KeyExtent(new Text("tab1"), null, new Text("Sam")).toMetadataRange());
+    expected = nrl(new KeyExtent("tab1", new Text("Fails"), null).toMetadataRange(), new KeyExtent("tab1", null, new Text("Sam")).toMetadataRange());
     check(Range.mergeOverlapping(rl), expected);
 
     rl = nrl(ke2, ke3, ke4, ke5);
-    expected = nrl(new KeyExtent(new Text("tab1"), null, new Text("Bank")).toMetadataRange());
+    expected = nrl(new KeyExtent("tab1", null, new Text("Bank")).toMetadataRange());
     check(Range.mergeOverlapping(rl), expected);
 
     rl = nrl(ke1, ke2, ke3, ke4);
-    expected = nrl(new KeyExtent(new Text("tab1"), new Text("bails"), null).toMetadataRange());
+    expected = nrl(new KeyExtent("tab1", new Text("bails"), null).toMetadataRange());
     check(Range.mergeOverlapping(rl), expected);
 
     rl = nrl(ke2, ke3, ke4);
-    expected = nrl(new KeyExtent(new Text("tab1"), new Text("bails"), new Text("Bank")).toMetadataRange());
+    expected = nrl(new KeyExtent("tab1", new Text("bails"), new Text("Bank")).toMetadataRange());
     check(Range.mergeOverlapping(rl), expected);
   }
 
diff --git a/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java b/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java
index 81e7b08..93fab1f 100644
--- a/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/data/ValueTest.java
@@ -35,6 +35,7 @@
 import java.nio.ByteBuffer;
 import java.util.List;
 
+import org.apache.hadoop.io.Text;
 import org.junit.Before;
 import org.junit.Test;
 
@@ -201,7 +202,7 @@
   @Test
   @Deprecated
   public void testToArray() {
-    List<byte[]> l = new java.util.ArrayList<byte[]>();
+    List<byte[]> l = new java.util.ArrayList<>();
     byte[] one = toBytes("one");
     byte[] two = toBytes("two");
     byte[] three = toBytes("three");
@@ -215,4 +216,28 @@
     assertArrayEquals(two, a[1]);
     assertArrayEquals(three, a[2]);
   }
+
+  @Test
+  public void testString() {
+    Value v1 = new Value("abc");
+    Value v2 = new Value("abc".getBytes(UTF_8));
+    assertEquals(v2, v1);
+  }
+
+  @Test(expected = NullPointerException.class)
+  public void testNullCharSequence() {
+    new Value((CharSequence) null);
+  }
+
+  @Test
+  public void testText() {
+    Value v1 = new Value(new Text("abc"));
+    Value v2 = new Value("abc".getBytes(UTF_8));
+    assertEquals(v2, v1);
+  }
+
+  @Test(expected = NullPointerException.class)
+  public void testNullText() {
+    new Value((Text) null);
+  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/file/BloomFilterLayerLookupTest.java b/core/src/test/java/org/apache/accumulo/core/file/BloomFilterLayerLookupTest.java
index ca388e5..065438c 100644
--- a/core/src/test/java/org/apache/accumulo/core/file/BloomFilterLayerLookupTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/BloomFilterLayerLookupTest.java
@@ -59,12 +59,12 @@
 
   @Test
   public void test() throws IOException {
-    HashSet<Integer> valsSet = new HashSet<Integer>();
+    HashSet<Integer> valsSet = new HashSet<>();
     for (int i = 0; i < 100000; i++) {
       valsSet.add(random.nextInt(Integer.MAX_VALUE));
     }
 
-    ArrayList<Integer> vals = new ArrayList<Integer>(valsSet);
+    ArrayList<Integer> vals = new ArrayList<>(valsSet);
     Collections.sort(vals);
 
     ConfigurationCopy acuconf = new ConfigurationCopy(AccumuloConfiguration.getDefaultConfiguration());
@@ -80,7 +80,7 @@
     // get output file name
     String suffix = FileOperations.getNewFileExtension(acuconf);
     String fname = new File(tempDir.getRoot(), testName + "." + suffix).getAbsolutePath();
-    FileSKVWriter bmfw = FileOperations.getInstance().openWriter(fname, fs, conf, acuconf);
+    FileSKVWriter bmfw = FileOperations.getInstance().newWriterBuilder().forFile(fname, fs, conf).withTableConfiguration(acuconf).build();
 
     // write data to file
     long t1 = System.currentTimeMillis();
@@ -96,7 +96,7 @@
     bmfw.close();
 
     t1 = System.currentTimeMillis();
-    FileSKVIterator bmfr = FileOperations.getInstance().openReader(fname, false, fs, conf, acuconf);
+    FileSKVIterator bmfr = FileOperations.getInstance().newReaderBuilder().forFile(fname, fs, conf).withTableConfiguration(acuconf).build();
     t2 = System.currentTimeMillis();
     LOG.debug("Opened " + fname + " in " + (t2 - t1));
 
diff --git a/core/src/test/java/org/apache/accumulo/core/file/FileOperationsTest.java b/core/src/test/java/org/apache/accumulo/core/file/FileOperationsTest.java
index 3fdeb8a..a8e4b7f 100644
--- a/core/src/test/java/org/apache/accumulo/core/file/FileOperationsTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/FileOperationsTest.java
@@ -51,7 +51,7 @@
       Configuration conf = new Configuration();
       FileSystem fs = FileSystem.getLocal(conf);
       AccumuloConfiguration acuconf = AccumuloConfiguration.getDefaultConfiguration();
-      writer = fileOperations.openWriter(filename, fs, conf, acuconf);
+      writer = fileOperations.newWriterBuilder().forFile(filename, fs, conf).withTableConfiguration(acuconf).build();
       writer.close();
     } catch (Exception ex) {
       caughtException = true;
diff --git a/core/src/test/java/org/apache/accumulo/core/file/rfile/CreateCompatTestFile.java b/core/src/test/java/org/apache/accumulo/core/file/rfile/CreateCompatTestFile.java
index 3eadc06..e7c8b46 100644
--- a/core/src/test/java/org/apache/accumulo/core/file/rfile/CreateCompatTestFile.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/rfile/CreateCompatTestFile.java
@@ -32,7 +32,7 @@
 public class CreateCompatTestFile {
 
   public static Set<ByteSequence> ncfs(String... colFams) {
-    HashSet<ByteSequence> cfs = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> cfs = new HashSet<>();
 
     for (String cf : colFams) {
       cfs.add(new ArrayByteSequence(cf));
@@ -56,7 +56,7 @@
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     FileSystem fs = FileSystem.get(conf);
-    CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(fs, new Path(args[0]), "gz", conf, AccumuloConfiguration.getDefaultConfiguration());
+    CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(fs, new Path(args[0]), "gz", null, conf, AccumuloConfiguration.getDefaultConfiguration());
     RFile.Writer writer = new RFile.Writer(_cbw, 1000);
 
     writer.startNewLocalityGroup("lg1", ncfs(nf("cf_", 1), nf("cf_", 2)));
diff --git a/core/src/test/java/org/apache/accumulo/core/file/rfile/KeyShortenerTest.java b/core/src/test/java/org/apache/accumulo/core/file/rfile/KeyShortenerTest.java
new file mode 100644
index 0000000..67ff70c
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/file/rfile/KeyShortenerTest.java
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.core.file.rfile;
+
+import org.apache.accumulo.core.data.Key;
+import org.junit.Assert;
+import org.junit.Test;
+
+import com.google.common.primitives.Bytes;
+
+public class KeyShortenerTest {
+
+  private static final byte[] E = new byte[0];
+  private static final byte[] FF = new byte[] {(byte) 0xff};
+
+  private void assertBetween(Key p, Key s, Key c) {
+    Assert.assertTrue(p.compareTo(s) < 0);
+    Assert.assertTrue(s.compareTo(c) < 0);
+  }
+
+  private void testKeys(Key prev, Key current, Key expected) {
+    Key sk = KeyShortener.shorten(prev, current);
+    assertBetween(prev, sk, current);
+  }
+
+  /**
+   * append 0xff to end of string
+   */
+  private byte[] aff(String s) {
+    return Bytes.concat(s.getBytes(), FF);
+  }
+
+  /**
+   * append 0x00 to end of string
+   */
+  private byte[] a00(String s) {
+    return Bytes.concat(s.getBytes(), new byte[] {(byte) 0x00});
+  }
+
+  private byte[] toBytes(Object o) {
+    if (o instanceof String) {
+      return ((String) o).getBytes();
+    } else if (o instanceof byte[]) {
+      return (byte[]) o;
+    }
+
+    throw new IllegalArgumentException();
+  }
+
+  private Key nk(Object row, Object fam, Object qual, long ts) {
+    return new Key(toBytes(row), toBytes(fam), toBytes(qual), E, ts);
+  }
+
+  @Test
+  public void testOneCharacterDifference() {
+    // row has char that differs by one byte
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hbhahaha", "f89222", "q90232e"), nk(aff("r321ha"), E, E, 0));
+
+    // family has char that differs by one byte
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hahahaha", "f89322", "q90232e"), nk("r321hahahaha", aff("f892"), E, 0));
+
+    // qualifier has char that differs by one byte
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hahahaha", "f89222", "q91232e"), nk("r321hahahaha", "f89222", aff("q90"), 0));
+  }
+
+  @Test
+  public void testMultiCharacterDifference() {
+    // row has char that differs by two bytes
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hchahaha", "f89222", "q90232e"), nk("r321hb", E, E, 0));
+
+    // family has char that differs by two bytes
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hahahaha", "f89422", "q90232e"), nk("r321hahahaha", "f893", E, 0));
+
+    // qualifier has char that differs by two bytes
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hahahaha", "f89222", "q92232e"), nk("r321hahahaha", "f89222", "q91", 0));
+  }
+
+  @Test
+  public void testOneCharacterDifferenceAndFF() {
+    byte[] ff1 = Bytes.concat(aff("mop"), "b".getBytes());
+    byte[] ff2 = Bytes.concat(aff("mop"), FF, "b".getBytes());
+
+    byte[] eff1 = Bytes.concat(aff("mop"), FF, FF);
+    byte[] eff2 = Bytes.concat(aff("mop"), FF, FF, FF);
+
+    testKeys(nk(ff1, "f89222", "q90232e", 34), new Key("mor56", "f89222", "q90232e"), nk(eff1, E, E, 0));
+    testKeys(nk("r1", ff1, "q90232e", 34), new Key("r1", "mor56", "q90232e"), nk("r1", eff1, E, 0));
+    testKeys(nk("r1", "f1", ff1, 34), new Key("r1", "f1", "mor56"), nk("r1", "f1", eff1, 0));
+
+    testKeys(nk(ff2, "f89222", "q90232e", 34), new Key("mor56", "f89222", "q90232e"), nk(eff2, E, E, 0));
+    testKeys(nk("r1", ff2, "q90232e", 34), new Key("r1", "mor56", "q90232e"), nk("r1", eff2, E, 0));
+    testKeys(nk("r1", "f1", ff2, 34), new Key("r1", "f1", "mor56"), nk("r1", "f1", eff2, 0));
+
+  }
+
+  @Test
+  public void testOneCharacterDifferenceAtEnd() {
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hahahahb", "f89222", "q90232e"), nk(a00("r321hahahaha"), E, E, 0));
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hahahaha", "f89223", "q90232e"), nk("r321hahahaha", a00("f89222"), E, 0));
+    testKeys(new Key("r321hahahaha", "f89222", "q90232e"), new Key("r321hahahaha", "f89222", "q90232f"), nk("r321hahahaha", "f89222", a00("q90232e"), 0));
+  }
+
+  @Test
+  public void testSamePrefix() {
+    testKeys(new Key("r3boot4", "f89222", "q90232e"), new Key("r3boot452", "f89222", "q90232e"), nk(a00("r3boot4"), E, E, 0));
+    testKeys(new Key("r3boot4", "f892", "q90232e"), new Key("r3boot4", "f89222", "q90232e"), nk("r3boot4", a00("f892"), E, 0));
+    testKeys(new Key("r3boot4", "f89222", "q902"), new Key("r3boot4", "f89222", "q90232e"), nk("r3boot4", "f89222", a00("q902"), 0));
+  }
+
+  @Test
+  public void testSamePrefixAnd00() {
+    Key prev = new Key("r3boot4", "f89222", "q90232e");
+    Assert.assertEquals(prev, KeyShortener.shorten(prev, nk(a00("r3boot4"), "f89222", "q90232e", 8)));
+    prev = new Key("r3boot4", "f892", "q90232e");
+    Assert.assertEquals(prev, KeyShortener.shorten(prev, nk("r3boot4", a00("f892"), "q90232e", 8)));
+    prev = new Key("r3boot4", "f89222", "q902");
+    Assert.assertEquals(prev, KeyShortener.shorten(prev, nk("r3boot4", "f89222", a00("q902"), 8)));
+  }
+
+  @Test
+  public void testSanityCheck1() {
+    // prev and shortened equal
+    Key prev = new Key("r001", "f002", "q006");
+    Assert.assertEquals(prev, KeyShortener.sanityCheck(prev, new Key("r002", "f002", "q006"), new Key("r001", "f002", "q006")));
+    // prev and shortened equal, with a larger current
+    Assert.assertEquals(prev, KeyShortener.sanityCheck(prev, new Key("r003", "f002", "q006"), new Key("r001", "f002", "q006")));
+    // current and shortened equal
+    Assert.assertEquals(prev, KeyShortener.sanityCheck(prev, new Key("r003", "f002", "q006"), new Key("r003", "f002", "q006")));
+    // shortened > current
+    Assert.assertEquals(prev, KeyShortener.sanityCheck(prev, new Key("r003", "f002", "q006"), new Key("r004", "f002", "q006")));
+  }
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/file/rfile/MultiLevelIndexTest.java b/core/src/test/java/org/apache/accumulo/core/file/rfile/MultiLevelIndexTest.java
index 6f89454..391bea1 100644
--- a/core/src/test/java/org/apache/accumulo/core/file/rfile/MultiLevelIndexTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/rfile/MultiLevelIndexTest.java
@@ -21,7 +21,6 @@
 import java.util.Random;
 
 import junit.framework.TestCase;
-
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.file.blockfile.ABlockWriter;
@@ -33,6 +32,7 @@
 import org.apache.accumulo.core.file.rfile.MultiLevelIndex.Reader.IndexIterator;
 import org.apache.accumulo.core.file.rfile.MultiLevelIndex.Writer;
 import org.apache.accumulo.core.file.rfile.RFileTest.SeekableByteArrayInputStream;
+import org.apache.accumulo.core.file.streams.PositionedOutputs;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
@@ -55,7 +55,7 @@
     AccumuloConfiguration aconf = AccumuloConfiguration.getDefaultConfiguration();
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     FSDataOutputStream dos = new FSDataOutputStream(baos, new FileSystem.Statistics("a"));
-    CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(dos, "gz", CachedConfiguration.getInstance(), aconf);
+    CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(PositionedOutputs.wrap(dos), "gz", CachedConfiguration.getInstance(), aconf);
 
     BufferedWriter mliw = new BufferedWriter(new Writer(_cbw, maxBlockSize));
 
@@ -77,7 +77,7 @@
     FSDataInputStream in = new FSDataInputStream(bais);
     CachableBlockFile.Reader _cbr = new CachableBlockFile.Reader(in, data.length, CachedConfiguration.getInstance(), aconf);
 
-    Reader reader = new Reader(_cbr, RFile.RINDEX_VER_7);
+    Reader reader = new Reader(_cbr, RFile.RINDEX_VER_8);
     BlockRead rootIn = _cbr.getMetaBlock("root");
     reader.readFields(rootIn);
     rootIn.close();
diff --git a/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileMetricsTest.java b/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileMetricsTest.java
index 7f8c087..92a1d32 100644
--- a/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileMetricsTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileMetricsTest.java
@@ -459,7 +459,7 @@
   public void multiBlockMultiCFNonDefaultAndDefaultLocGroup() throws IOException {
     // test an rfile with multiple column families and multiple blocks in a non-default locality group and the default locality group
 
-    trf.openWriter(false, 20);// Each entry is a block
+    trf.openWriter(false, 10);// Each entry is a block
     Set<ByteSequence> lg1 = new HashSet<>();
     lg1.add(new ArrayByteSequence("cf1"));
     lg1.add(new ArrayByteSequence("cf3"));
diff --git a/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileTest.java b/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileTest.java
index 9fdc086..069077c 100644
--- a/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/rfile/RFileTest.java
@@ -28,17 +28,24 @@
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.util.AbstractMap;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.Comparator;
 import java.util.HashSet;
 import java.util.Iterator;
+import java.util.List;
 import java.util.Map.Entry;
 import java.util.Random;
 import java.util.Set;
 
 import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.Sampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.conf.Property;
@@ -53,11 +60,14 @@
 import org.apache.accumulo.core.file.blockfile.cache.LruBlockCache;
 import org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile;
 import org.apache.accumulo.core.file.rfile.RFile.Reader;
+import org.apache.accumulo.core.file.streams.PositionedOutputs;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.accumulo.core.sample.impl.SamplerFactory;
 import org.apache.accumulo.core.security.crypto.CryptoTest;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.hadoop.conf.Configuration;
@@ -69,15 +79,38 @@
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
+import org.junit.Assert;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
 
+import com.google.common.hash.HashCode;
+import com.google.common.hash.Hasher;
+import com.google.common.hash.Hashing;
 import com.google.common.primitives.Bytes;
 
 public class RFileTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  public static class SampleIE extends BaseIteratorEnvironment {
+
+    private SamplerConfiguration samplerConfig;
+
+    SampleIE(SamplerConfiguration config) {
+      this.samplerConfig = config;
+    }
+
+    @Override
+    public boolean isSamplingEnabled() {
+      return samplerConfig != null;
+    }
+
+    @Override
+    public SamplerConfiguration getSamplerConfiguration() {
+      return samplerConfig;
+    }
+  }
+
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   @Rule
   public TemporaryFolder tempFolder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
@@ -193,15 +226,23 @@
     public void openWriter(boolean startDLG, int blockSize) throws IOException {
       baos = new ByteArrayOutputStream();
       dos = new FSDataOutputStream(baos, new FileSystem.Statistics("a"));
-      CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(dos, "gz", conf, accumuloConfiguration);
-      writer = new RFile.Writer(_cbw, blockSize, 1000);
+      CachableBlockFile.Writer _cbw = new CachableBlockFile.Writer(PositionedOutputs.wrap(dos), "gz", conf, accumuloConfiguration);
+
+      SamplerConfigurationImpl samplerConfig = SamplerConfigurationImpl.newSamplerConfig(accumuloConfiguration);
+      Sampler sampler = null;
+
+      if (samplerConfig != null) {
+        sampler = SamplerFactory.newSampler(samplerConfig, accumuloConfiguration);
+      }
+
+      writer = new RFile.Writer(_cbw, blockSize, 1000, samplerConfig, sampler);
 
       if (startDLG)
         writer.startDefaultLocalityGroup();
     }
 
     public void openWriter() throws IOException {
-      openWriter(true, 1000);
+      openWriter(1000);
     }
 
     public void openWriter(int blockSize) throws IOException {
@@ -222,7 +263,6 @@
     }
 
     public void openReader(boolean cfsi) throws IOException {
-
       int fileLength = 0;
       byte[] data = null;
       data = baos.toByteArray();
@@ -333,8 +373,8 @@
 
     int val = 0;
 
-    ArrayList<Key> expectedKeys = new ArrayList<Key>(10000);
-    ArrayList<Value> expectedValues = new ArrayList<Value>(10000);
+    ArrayList<Key> expectedKeys = new ArrayList<>(10000);
+    ArrayList<Value> expectedValues = new ArrayList<>(10000);
 
     for (int row = 0; row < 4; row++) {
       String rowS = nf("r_", row);
@@ -347,7 +387,7 @@
             for (int ts = 4; ts > 0; ts--) {
               Key k = nk(rowS, cfS, cqS, cvS, ts);
              // check below ensures that when all key sizes are the same, more than one index block is created
-              assertEquals(27, k.getSize());
+              Assert.assertEquals(27, k.getSize());
               k.setDeleted(true);
               Value v = nv("" + val);
               trf.writer.append(k, v);
@@ -355,7 +395,7 @@
               expectedValues.add(v);
 
               k = nk(rowS, cfS, cqS, cvS, ts);
-              assertEquals(27, k.getSize());
+              Assert.assertEquals(27, k.getSize());
               v = nv("" + val);
               trf.writer.append(k, v);
               expectedKeys.add(k);
@@ -471,7 +511,7 @@
       count++;
       iiter.next();
     }
-    assertEquals(20, count);
+    Assert.assertEquals(20, count);
 
     trf.closeReader();
   }
@@ -502,35 +542,35 @@
     try {
       trf.writer.append(nk("r0", "cf1", "cq1", "L1", 55), nv("foo1"));
       assertFalse(true);
-    } catch (IllegalStateException ioe) {
+    } catch (IllegalArgumentException iae) {
 
     }
 
     try {
       trf.writer.append(nk("r1", "cf0", "cq1", "L1", 55), nv("foo1"));
       assertFalse(true);
-    } catch (IllegalStateException ioe) {
+    } catch (IllegalArgumentException iae) {
 
     }
 
     try {
       trf.writer.append(nk("r1", "cf1", "cq0", "L1", 55), nv("foo1"));
       assertFalse(true);
-    } catch (IllegalStateException ioe) {
+    } catch (IllegalArgumentException iae) {
 
     }
 
     try {
       trf.writer.append(nk("r1", "cf1", "cq1", "L0", 55), nv("foo1"));
       assertFalse(true);
-    } catch (IllegalStateException ioe) {
+    } catch (IllegalArgumentException iae) {
 
     }
 
     try {
       trf.writer.append(nk("r1", "cf1", "cq1", "L1", 56), nv("foo1"));
       assertFalse(true);
-    } catch (IllegalStateException ioe) {
+    } catch (IllegalArgumentException iae) {
 
     }
   }
@@ -757,7 +797,7 @@
   }
 
   public static Set<ByteSequence> ncfs(String... colFams) {
-    HashSet<ByteSequence> cfs = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> cfs = new HashSet<>();
 
     for (String cf : colFams) {
       cfs.add(new ArrayByteSequence(cf));
@@ -1219,7 +1259,6 @@
   @Test
   public void test14() throws IOException {
     // test starting locality group after default locality group was started
-
     TestRFile trf = new TestRFile(conf);
 
     trf.openWriter(false);
@@ -1355,7 +1394,7 @@
   }
 
   private Set<ByteSequence> t18ncfs(int... colFams) {
-    HashSet<ByteSequence> cfs = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> cfs = new HashSet<>();
     for (int i : colFams) {
       cfs.add(new ArrayByteSequence(t18ncf(i)));
     }
@@ -1372,7 +1411,7 @@
   private void t18Verify(Set<ByteSequence> cfs, SortedKeyValueIterator<Key,Value> iter, Reader reader, HashSet<ByteSequence> allCf, int eialg, int eealg)
       throws IOException {
 
-    HashSet<ByteSequence> colFamsSeen = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> colFamsSeen = new HashSet<>();
 
     iter.seek(new Range(), cfs, true);
     assertEquals(eialg, reader.getNumLocalityGroupsSeeked());
@@ -1382,7 +1421,7 @@
       iter.next();
     }
 
-    HashSet<ByteSequence> expected = new HashSet<ByteSequence>(allCf);
+    HashSet<ByteSequence> expected = new HashSet<>(allCf);
     expected.retainAll(cfs);
     assertEquals(expected, colFamsSeen);
 
@@ -1395,7 +1434,7 @@
       iter.next();
     }
 
-    HashSet<ByteSequence> nonExcluded = new HashSet<ByteSequence>(allCf);
+    HashSet<ByteSequence> nonExcluded = new HashSet<>(allCf);
     nonExcluded.removeAll(cfs);
     assertEquals(nonExcluded, colFamsSeen);
   }
@@ -1408,7 +1447,7 @@
 
     trf.openWriter(false);
 
-    HashSet<ByteSequence> allCf = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> allCf = new HashSet<>();
 
     trf.writer.startNewLocalityGroup("lg1", t18ncfs(0));
     for (int i = 0; i < 1; i++)
@@ -1571,6 +1610,7 @@
     runVersionTest(3);
     runVersionTest(4);
     runVersionTest(6);
+    runVersionTest(7);
   }
 
   private void runVersionTest(int version) throws IOException {
@@ -1775,10 +1815,298 @@
     conf = null;
   }
 
+  private Key nk(int r, int c) {
+    String row = String.format("r%06d", r);
+    switch (c) {
+      case 0:
+        return new Key(row, "user", "addr");
+      case 1:
+        return new Key(row, "user", "name");
+      default:
+        throw new IllegalArgumentException();
+    }
+  }
+
+  private Value nv(int r, int c) {
+    switch (c) {
+      case 0:
+        return new Value(("123" + r + " west st").getBytes());
+      case 1:
+        return new Value(("bob" + r).getBytes());
+      default:
+        throw new IllegalArgumentException();
+    }
+  }
+
+  private static void hash(Hasher hasher, Key key, Value val) {
+    hasher.putBytes(key.getRowData().toArray());
+    hasher.putBytes(key.getColumnFamilyData().toArray());
+    hasher.putBytes(key.getColumnQualifierData().toArray());
+    hasher.putBytes(key.getColumnVisibilityData().toArray());
+    hasher.putLong(key.getTimestamp());
+    hasher.putBoolean(key.isDeleted());
+    hasher.putBytes(val.get());
+  }
+
+  private static void add(TestRFile trf, Key key, Value val, Hasher dataHasher, List<Entry<Key,Value>> sample, Sampler sampler) throws IOException {
+    if (sampler.accept(key)) {
+      sample.add(new AbstractMap.SimpleImmutableEntry<>(key, val));
+    }
+
+    hash(dataHasher, key, val);
+
+    trf.writer.append(key, val);
+  }
+
+  private List<Entry<Key,Value>> toList(SortedKeyValueIterator<Key,Value> sample) throws IOException {
+    ArrayList<Entry<Key,Value>> ret = new ArrayList<>();
+
+    while (sample.hasTop()) {
+      ret.add(new AbstractMap.SimpleImmutableEntry<>(new Key(sample.getTopKey()), new Value(sample.getTopValue())));
+      sample.next();
+    }
+
+    return ret;
+  }
+
+  private void checkSample(SortedKeyValueIterator<Key,Value> sample, List<Entry<Key,Value>> sampleData) throws IOException {
+    checkSample(sample, sampleData, EMPTY_COL_FAMS, false);
+  }
+
+  private void checkSample(SortedKeyValueIterator<Key,Value> sample, List<Entry<Key,Value>> sampleData, Collection<ByteSequence> columnFamilies,
+      boolean inclusive) throws IOException {
+
+    sample.seek(new Range(), columnFamilies, inclusive);
+    Assert.assertEquals(sampleData, toList(sample));
+
+    Random rand = new Random();
+    long seed = rand.nextLong();
+    rand = new Random(seed);
+
+    // randomly seek sample iterator and verify
+    for (int i = 0; i < 33; i++) {
+      Key startKey = null;
+      boolean startInclusive = false;
+      int startIndex = 0;
+
+      Key endKey = null;
+      boolean endInclusive = false;
+      int endIndex = sampleData.size();
+
+      if (rand.nextBoolean()) {
+        startIndex = rand.nextInt(sampleData.size());
+        startKey = sampleData.get(startIndex).getKey();
+        startInclusive = rand.nextBoolean();
+        if (!startInclusive) {
+          startIndex++;
+        }
+      }
+
+      if (startIndex < endIndex && rand.nextBoolean()) {
+        endIndex -= rand.nextInt(endIndex - startIndex);
+        endKey = sampleData.get(endIndex - 1).getKey();
+        endInclusive = rand.nextBoolean();
+        if (!endInclusive) {
+          endIndex--;
+        }
+      } else if (startIndex == endIndex) {
+        endInclusive = rand.nextBoolean();
+      }
+
+      sample.seek(new Range(startKey, startInclusive, endKey, endInclusive), columnFamilies, inclusive);
+      Assert.assertEquals("seed: " + seed, sampleData.subList(startIndex, endIndex), toList(sample));
+    }
+  }
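The start/end bookkeeping in `checkSample` above maps inclusive/exclusive `Range` bounds onto `subList` indices: an exclusive start skips the bound key itself, while an inclusive end keeps it. A self-contained sketch of that index arithmetic (hypothetical names, not part of the Accumulo API):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeToSubList {
  // Given a sorted list and range bounds expressed as (index, inclusive)
  // pairs, return the sublist a seek over that range should produce.
  // A null startIdx means "from the beginning"; a null endIdx means "to the end".
  public static <T> List<T> slice(List<T> data, Integer startIdx, boolean startInclusive,
      Integer endIdx, boolean endInclusive) {
    int from = 0;
    if (startIdx != null) {
      from = startInclusive ? startIdx : startIdx + 1; // exclusive start skips the bound
    }
    int to = data.size();
    if (endIdx != null) {
      to = endInclusive ? endIdx + 1 : endIdx; // inclusive end keeps the bound
    }
    return new ArrayList<>(data.subList(from, Math.max(from, to)));
  }
}
```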
+
+  @Test
+  public void testSample() throws IOException {
+
+    int num = 10000;
+
+    for (int sampleBufferSize : new int[] {1 << 10, 1 << 20}) {
+      // force sample buffer to flush for smaller data
+      RFile.setSampleBufferSize(sampleBufferSize);
+
+      for (int modulus : new int[] {19, 103, 1019}) {
+        Hasher dataHasher = Hashing.md5().newHasher();
+        List<Entry<Key,Value>> sampleData = new ArrayList<>();
+
+        ConfigurationCopy sampleConf = new ConfigurationCopy(conf == null ? AccumuloConfiguration.getDefaultConfiguration() : conf);
+        sampleConf.set(Property.TABLE_SAMPLER, RowSampler.class.getName());
+        sampleConf.set(Property.TABLE_SAMPLER_OPTS + "hasher", "murmur3_32");
+        sampleConf.set(Property.TABLE_SAMPLER_OPTS + "modulus", modulus + "");
+
+        Sampler sampler = SamplerFactory.newSampler(SamplerConfigurationImpl.newSamplerConfig(sampleConf), sampleConf);
+
+        TestRFile trf = new TestRFile(sampleConf);
+
+        trf.openWriter();
+
+        for (int i = 0; i < num; i++) {
+          add(trf, nk(i, 0), nv(i, 0), dataHasher, sampleData, sampler);
+          add(trf, nk(i, 1), nv(i, 1), dataHasher, sampleData, sampler);
+        }
+
+        HashCode expectedDataHash = dataHasher.hash();
+
+        trf.closeWriter();
+
+        trf.openReader();
+
+        FileSKVIterator sample = trf.reader.getSample(SamplerConfigurationImpl.newSamplerConfig(sampleConf));
+
+        checkSample(sample, sampleData);
+
+        Assert.assertEquals(expectedDataHash, hash(trf.reader));
+
+        SampleIE ie = new SampleIE(SamplerConfigurationImpl.newSamplerConfig(sampleConf).toSamplerConfiguration());
+
+        for (int i = 0; i < 3; i++) {
+          // test opening and closing deep copies a few times.
+          trf.reader.closeDeepCopies();
+
+          sample = trf.reader.getSample(SamplerConfigurationImpl.newSamplerConfig(sampleConf));
+          SortedKeyValueIterator<Key,Value> sampleDC1 = sample.deepCopy(ie);
+          SortedKeyValueIterator<Key,Value> sampleDC2 = sample.deepCopy(ie);
+          SortedKeyValueIterator<Key,Value> sampleDC3 = trf.reader.deepCopy(ie);
+          SortedKeyValueIterator<Key,Value> allDC1 = sampleDC1.deepCopy(new SampleIE(null));
+          SortedKeyValueIterator<Key,Value> allDC2 = sample.deepCopy(new SampleIE(null));
+
+          Assert.assertEquals(expectedDataHash, hash(allDC1));
+          Assert.assertEquals(expectedDataHash, hash(allDC2));
+
+          checkSample(sample, sampleData);
+          checkSample(sampleDC1, sampleData);
+          checkSample(sampleDC2, sampleData);
+          checkSample(sampleDC3, sampleData);
+        }
+
+        trf.reader.closeDeepCopies();
+
+        trf.closeReader();
+      }
+    }
+  }
+
+  private HashCode hash(SortedKeyValueIterator<Key,Value> iter) throws IOException {
+    Hasher dataHasher = Hashing.md5().newHasher();
+    iter.seek(new Range(), EMPTY_COL_FAMS, false);
+    while (iter.hasTop()) {
+      hash(dataHasher, iter.getTopKey(), iter.getTopValue());
+      iter.next();
+    }
+
+    return dataHasher.hash();
+  }
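The `hash` helpers above fingerprint the entire key/value stream with an order-sensitive digest, so the full-data deep copies can be compared to the original reader without materializing all entries. A simplified sketch of the same idea using JDK `MessageDigest` in place of the test's Guava `Hashing.md5()` hasher (hypothetical names; unlike the real helper, this one hashes only two fields per entry):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

public class StreamFingerprint {
  // Digest each (key, value) pair in iteration order; two entry streams hash
  // equal only if they yield the same pairs in the same order.
  public static String fingerprint(List<String[]> entries) {
    try {
      MessageDigest md = MessageDigest.getInstance("MD5");
      for (String[] kv : entries) {
        md.update(kv[0].getBytes(StandardCharsets.UTF_8));
        md.update(kv[1].getBytes(StandardCharsets.UTF_8));
      }
      StringBuilder hex = new StringBuilder();
      for (byte b : md.digest()) {
        hex.append(String.format("%02x", b));
      }
      return hex.toString();
    } catch (NoSuchAlgorithmException e) {
      throw new AssertionError("MD5 is always available", e);
    }
  }
}
```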
+
+  @Test
+  public void testSampleLG() throws IOException {
+
+    int num = 5000;
+
+    for (int sampleBufferSize : new int[] {1 << 10, 1 << 20}) {
+      // force sample buffer to flush for smaller data
+      RFile.setSampleBufferSize(sampleBufferSize);
+
+      for (int modulus : new int[] {19, 103, 1019}) {
+        List<Entry<Key,Value>> sampleDataLG1 = new ArrayList<>();
+        List<Entry<Key,Value>> sampleDataLG2 = new ArrayList<>();
+
+        ConfigurationCopy sampleConf = new ConfigurationCopy(conf == null ? AccumuloConfiguration.getDefaultConfiguration() : conf);
+        sampleConf.set(Property.TABLE_SAMPLER, RowSampler.class.getName());
+        sampleConf.set(Property.TABLE_SAMPLER_OPTS + "hasher", "murmur3_32");
+        sampleConf.set(Property.TABLE_SAMPLER_OPTS + "modulus", modulus + "");
+
+        Sampler sampler = SamplerFactory.newSampler(SamplerConfigurationImpl.newSamplerConfig(sampleConf), sampleConf);
+
+        TestRFile trf = new TestRFile(sampleConf);
+
+        trf.openWriter(false, 1000);
+
+        trf.writer.startNewLocalityGroup("meta-lg", ncfs("metaA", "metaB"));
+        for (int r = 0; r < num; r++) {
+          String row = String.format("r%06d", r);
+          Key k1 = new Key(row, "metaA", "q9", 7);
+          Key k2 = new Key(row, "metaB", "q8", 7);
+          Key k3 = new Key(row, "metaB", "qA", 7);
+
+          Value v1 = new Value(("" + r).getBytes());
+          Value v2 = new Value(("" + r * 93).getBytes());
+          Value v3 = new Value(("" + r * 113).getBytes());
+
+          if (sampler.accept(k1)) {
+            sampleDataLG1.add(new AbstractMap.SimpleImmutableEntry<>(k1, v1));
+            sampleDataLG1.add(new AbstractMap.SimpleImmutableEntry<>(k2, v2));
+            sampleDataLG1.add(new AbstractMap.SimpleImmutableEntry<>(k3, v3));
+          }
+
+          trf.writer.append(k1, v1);
+          trf.writer.append(k2, v2);
+          trf.writer.append(k3, v3);
+        }
+
+        trf.writer.startDefaultLocalityGroup();
+
+        for (int r = 0; r < num; r++) {
+          String row = String.format("r%06d", r);
+          Key k1 = new Key(row, "dataA", "q9", 7);
+
+          Value v1 = new Value(("" + r).getBytes());
+
+          if (sampler.accept(k1)) {
+            sampleDataLG2.add(new AbstractMap.SimpleImmutableEntry<>(k1, v1));
+          }
+
+          trf.writer.append(k1, v1);
+        }
+
+        trf.closeWriter();
+
+        Assert.assertTrue(sampleDataLG1.size() > 0);
+        Assert.assertTrue(sampleDataLG2.size() > 0);
+
+        trf.openReader(false);
+        FileSKVIterator sample = trf.reader.getSample(SamplerConfigurationImpl.newSamplerConfig(sampleConf));
+
+        checkSample(sample, sampleDataLG1, ncfs("metaA", "metaB"), true);
+        checkSample(sample, sampleDataLG1, ncfs("metaA"), true);
+        checkSample(sample, sampleDataLG1, ncfs("metaB"), true);
+        checkSample(sample, sampleDataLG1, ncfs("dataA"), false);
+
+        checkSample(sample, sampleDataLG2, ncfs("metaA", "metaB"), false);
+        checkSample(sample, sampleDataLG2, ncfs("dataA"), true);
+
+        ArrayList<Entry<Key,Value>> allSampleData = new ArrayList<>();
+        allSampleData.addAll(sampleDataLG1);
+        allSampleData.addAll(sampleDataLG2);
+
+        Collections.sort(allSampleData, new Comparator<Entry<Key,Value>>() {
+          @Override
+          public int compare(Entry<Key,Value> o1, Entry<Key,Value> o2) {
+            return o1.getKey().compareTo(o2.getKey());
+          }
+        });
+
+        checkSample(sample, allSampleData, ncfs("dataA", "metaA"), true);
+        checkSample(sample, allSampleData, EMPTY_COL_FAMS, false);
+
+        trf.closeReader();
+      }
+    }
+  }
+
+  @Test
+  public void testEncSample() throws IOException {
+    conf = setAndGetAccumuloConfig(CryptoTest.CRYPTO_ON_CONF);
+    testSample();
+    testSampleLG();
+    conf = null;
+  }
+
   @Test
   public void testBigKeys() throws IOException {
    // this test ensures that big keys do not end up in the index
-    ArrayList<Key> keys = new ArrayList<Key>();
+    ArrayList<Key> keys = new ArrayList<>();
 
     for (int i = 0; i < 1000; i++) {
       String row = String.format("r%06d", i);
@@ -1811,7 +2139,7 @@
     FileSKVIterator iiter = trf.reader.getIndex();
     while (iiter.hasTop()) {
       Key k = iiter.getTopKey();
-      assertTrue(k + " " + k.getSize() + " >= 20", k.getSize() < 20);
+      Assert.assertTrue(k + " " + k.getSize() + " >= 20", k.getSize() < 20);
       iiter.next();
     }
 
@@ -1819,9 +2147,9 @@
 
     for (Key key : keys) {
       trf.reader.seek(new Range(key, null), EMPTY_COL_FAMS, false);
-      assertTrue(trf.reader.hasTop());
-      assertEquals(key, trf.reader.getTopKey());
-      assertEquals(new Value((key.hashCode() + "").getBytes()), trf.reader.getTopValue());
+      Assert.assertTrue(trf.reader.hasTop());
+      Assert.assertEquals(key, trf.reader.getTopKey());
+      Assert.assertEquals(new Value((key.hashCode() + "").getBytes()), trf.reader.getTopValue());
     }
   }
 
@@ -1868,7 +2196,7 @@
 
     // mfw.startDefaultLocalityGroup();
 
-    Text tableExtent = new Text(KeyExtent.getMetadataEntry(new Text(MetadataTable.ID), MetadataSchema.TabletsSection.getRange().getEndKey().getRow()));
+    Text tableExtent = new Text(KeyExtent.getMetadataEntry(MetadataTable.ID, MetadataSchema.TabletsSection.getRange().getEndKey().getRow()));
 
     // table tablet's directory
     Key tableDirKey = new Key(tableExtent, TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.getColumnFamily(),
@@ -1886,7 +2214,7 @@
     mfw.append(tablePrevRowKey, KeyExtent.encodePrevEndRow(null));
 
     // ----------] default tablet info
-    Text defaultExtent = new Text(KeyExtent.getMetadataEntry(new Text(MetadataTable.ID), null));
+    Text defaultExtent = new Text(KeyExtent.getMetadataEntry(MetadataTable.ID, null));
 
     // default's directory
     Key defaultDirKey = new Key(defaultExtent, TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.getColumnFamily(),
diff --git a/core/src/test/java/org/apache/accumulo/core/file/rfile/RelativeKeyTest.java b/core/src/test/java/org/apache/accumulo/core/file/rfile/RelativeKeyTest.java
index e413448..4334ccc 100644
--- a/core/src/test/java/org/apache/accumulo/core/file/rfile/RelativeKeyTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/rfile/RelativeKeyTest.java
@@ -30,6 +30,7 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.rfile.RelativeKey.SkippR;
 import org.apache.accumulo.core.util.MutableByteSequence;
 import org.apache.accumulo.core.util.UnsynchronizedBuffer;
 import org.junit.Before;
@@ -121,9 +122,9 @@
     baos = new ByteArrayOutputStream();
     DataOutputStream out = new DataOutputStream(baos);
 
-    expectedKeys = new ArrayList<Key>(initialListSize);
-    expectedValues = new ArrayList<Value>(initialListSize);
-    expectedPositions = new ArrayList<Integer>(initialListSize);
+    expectedKeys = new ArrayList<>(initialListSize);
+    expectedValues = new ArrayList<>(initialListSize);
+    expectedPositions = new ArrayList<>(initialListSize);
 
     Key prev = null;
     int val = 0;
@@ -178,7 +179,7 @@
     Key currKey = null;
     MutableByteSequence value = new MutableByteSequence(new byte[64], 0, 0);
 
-    RelativeKey.SkippR skippr = RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey);
+    RelativeKey.SkippR skippr = RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey, expectedKeys.size());
     assertEquals(1, skippr.skipped);
     assertEquals(new Key(), skippr.prevKey);
     assertEquals(expectedKeys.get(0), skippr.rk.getKey());
@@ -192,7 +193,7 @@
 
     seekKey = new Key("a", "b", "c", "d", 1);
     seekKey.setDeleted(true);
-    skippr = RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey);
+    skippr = RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey, expectedKeys.size());
     assertEquals(1, skippr.skipped);
     assertEquals(new Key(), skippr.prevKey);
     assertEquals(expectedKeys.get(0), skippr.rk.getKey());
@@ -203,13 +204,23 @@
   }
 
   @Test(expected = EOFException.class)
+  public void testSeekAfterEverythingWrongCount() throws IOException {
+    Key seekKey = new Key("s", "t", "u", "v", 1);
+    Key prevKey = new Key();
+    Key currKey = null;
+    MutableByteSequence value = new MutableByteSequence(new byte[64], 0, 0);
+
+    RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey, expectedKeys.size() + 1);
+  }
+
+  @Test
   public void testSeekAfterEverything() throws IOException {
     Key seekKey = new Key("s", "t", "u", "v", 1);
     Key prevKey = new Key();
     Key currKey = null;
     MutableByteSequence value = new MutableByteSequence(new byte[64], 0, 0);
 
-    RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey);
+    SkippR skippr = RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey, expectedKeys.size());
+    assertEquals(expectedKeys.size(), skippr.skipped);
   }
 
   @Test
@@ -220,7 +231,7 @@
     Key currKey = null;
     MutableByteSequence value = new MutableByteSequence(new byte[64], 0, 0);
 
-    RelativeKey.SkippR skippr = RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey);
+    RelativeKey.SkippR skippr = RelativeKey.fastSkip(in, seekKey, value, prevKey, currKey, expectedKeys.size());
 
     assertEquals(seekIndex + 1, skippr.skipped);
     assertEquals(expectedKeys.get(seekIndex - 1), skippr.prevKey);
@@ -236,14 +247,17 @@
     int i;
     for (i = seekIndex; expectedKeys.get(i).compareTo(fKey) < 0; i++) {}
 
-    skippr = RelativeKey.fastSkip(in, expectedKeys.get(i), value, prevKey, currKey);
+    int left = expectedKeys.size();
+
+    skippr = RelativeKey.fastSkip(in, expectedKeys.get(i), value, prevKey, currKey, expectedKeys.size());
     assertEquals(i + 1, skippr.skipped);
+    left -= skippr.skipped;
     assertEquals(expectedKeys.get(i - 1), skippr.prevKey);
     assertEquals(expectedKeys.get(i), skippr.rk.getKey());
     assertEquals(expectedValues.get(i).toString(), value.toString());
 
     // try fast skipping to our current location
-    skippr = RelativeKey.fastSkip(in, expectedKeys.get(i), value, expectedKeys.get(i - 1), expectedKeys.get(i));
+    skippr = RelativeKey.fastSkip(in, expectedKeys.get(i), value, expectedKeys.get(i - 1), expectedKeys.get(i), left);
     assertEquals(0, skippr.skipped);
     assertEquals(expectedKeys.get(i - 1), skippr.prevKey);
     assertEquals(expectedKeys.get(i), skippr.rk.getKey());
@@ -253,7 +267,7 @@
     fKey = expectedKeys.get(i).followingKey(PartialKey.ROW_COLFAM);
     int j;
     for (j = i; expectedKeys.get(j).compareTo(fKey) < 0; j++) {}
-    skippr = RelativeKey.fastSkip(in, fKey, value, expectedKeys.get(i - 1), expectedKeys.get(i));
+    skippr = RelativeKey.fastSkip(in, fKey, value, expectedKeys.get(i - 1), expectedKeys.get(i), left);
     assertEquals(j - i, skippr.skipped);
     assertEquals(expectedKeys.get(j - 1), skippr.prevKey);
     assertEquals(expectedKeys.get(j), skippr.rk.getKey());
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mock/TransformIterator.java b/core/src/test/java/org/apache/accumulo/core/file/streams/MockRateLimiter.java
similarity index 62%
rename from core/src/test/java/org/apache/accumulo/core/client/mock/TransformIterator.java
rename to core/src/test/java/org/apache/accumulo/core/file/streams/MockRateLimiter.java
index a7e7eef..9574d36 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mock/TransformIterator.java
+++ b/core/src/test/java/org/apache/accumulo/core/file/streams/MockRateLimiter.java
@@ -14,17 +14,25 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client.mock;
+package org.apache.accumulo.core.file.streams;
 
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.iterators.WrappingIterator;
-import org.apache.hadoop.io.Text;
+import java.util.concurrent.atomic.AtomicLong;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
 
-public class TransformIterator extends WrappingIterator {
+public class MockRateLimiter implements RateLimiter {
+  private final AtomicLong permitsAcquired = new AtomicLong();
 
   @Override
-  public Key getTopKey() {
-    Key k = getSource().getTopKey();
-    return new Key(new Text(k.getRow().toString().toLowerCase()), k.getColumnFamily(), k.getColumnQualifier(), k.getColumnVisibility(), k.getTimestamp());
+  public long getRate() {
+    return 0;
+  }
+
+  @Override
+  public void acquire(long permits) {
+    permitsAcquired.addAndGet(permits);
+  }
+
+  public long getPermitsAcquired() {
+    return permitsAcquired.get();
   }
 }
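`MockRateLimiter` only counts permits; the stream wrappers under test are expected to acquire one permit per byte actually transferred. The pattern the `RateLimitedInputStream` test verifies can be sketched with a plain `FilterInputStream` (a simplified stand-in, assuming the real classes in `org.apache.accumulo.core.file.streams` behave this way; all names here are hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

public class CountingLimitDemo {
  interface Limiter {
    void acquire(long permits);
  }

  // Counts permits instead of blocking, mirroring MockRateLimiter.
  static class CountingLimiter implements Limiter {
    final AtomicLong acquired = new AtomicLong();

    @Override
    public void acquire(long permits) {
      acquired.addAndGet(permits);
    }
  }

  // Acquires one permit per byte actually read.
  static class LimitedInput extends FilterInputStream {
    private final Limiter limiter;

    LimitedInput(InputStream in, Limiter limiter) {
      super(in);
      this.limiter = limiter;
    }

    @Override
    public int read() throws IOException {
      int b = super.read();
      if (b >= 0)
        limiter.acquire(1);
      return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
      int n = super.read(buf, off, len);
      if (n > 0)
        limiter.acquire(n);
      return n;
    }
  }

  // Drain an in-memory stream and report how many permits were charged.
  public static long bytesCharged(byte[] data) {
    CountingLimiter limiter = new CountingLimiter();
    try (InputStream in = new LimitedInput(new ByteArrayInputStream(data), limiter)) {
      byte[] buf = new byte[8];
      while (in.read(buf) != -1) {
        // each successful read acquires permits for the bytes returned
      }
    } catch (IOException e) {
      throw new AssertionError(e); // in-memory stream, cannot happen
    }
    return limiter.acquired.get();
  }
}
```

The permits-acquired total should equal the byte count exactly, which is what `permitsAreProperlyAcquired` below asserts against the real wrapper.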
diff --git a/core/src/test/java/org/apache/accumulo/core/file/streams/RateLimitedInputStreamTest.java b/core/src/test/java/org/apache/accumulo/core/file/streams/RateLimitedInputStreamTest.java
new file mode 100644
index 0000000..6baff87
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/file/streams/RateLimitedInputStreamTest.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.file.streams;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Random;
+import org.apache.hadoop.fs.Seekable;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class RateLimitedInputStreamTest {
+
+  @Test
+  public void permitsAreProperlyAcquired() throws Exception {
+    Random randGen = new Random();
+    MockRateLimiter rateLimiter = new MockRateLimiter();
+    long bytesRetrieved = 0;
+    try (InputStream is = new RateLimitedInputStream(new RandomInputStream(), rateLimiter)) {
+      for (int i = 0; i < 100; ++i) {
+        int count = randGen.nextInt(65536);
+        int countRead = is.read(new byte[count]);
+        Assert.assertEquals(count, countRead);
+        bytesRetrieved += count;
+      }
+    }
+    Assert.assertEquals(bytesRetrieved, rateLimiter.getPermitsAcquired());
+  }
+
+  private static class RandomInputStream extends InputStream implements Seekable {
+    private final Random r = new Random();
+
+    @Override
+    public int read() throws IOException {
+      return r.nextInt() & 0xff;
+    }
+
+    @Override
+    public void seek(long pos) throws IOException {
+      throw new UnsupportedOperationException("Not supported yet.");
+    }
+
+    @Override
+    public long getPos() throws IOException {
+      throw new UnsupportedOperationException("Not supported yet.");
+    }
+
+    @Override
+    public boolean seekToNewSource(long targetPos) throws IOException {
+      throw new UnsupportedOperationException("Not supported yet.");
+    }
+
+  }
+
+}
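The test above depends on a `MockRateLimiter` that is not added in this hunk. A minimal sketch of a non-throttling limiter that merely records acquired permits, with class and method names assumed from how the test uses it, could look like:

```java
// Hypothetical stand-in for the MockRateLimiter referenced by the tests:
// it never blocks, it only tallies how many permits callers have acquired,
// so a test can compare the tally against bytes actually transferred.
public class MockRateLimiterSketch {
  private long permitsAcquired = 0;

  // Record a permit acquisition without any rate limiting.
  public void acquire(long permits) {
    permitsAcquired += permits;
  }

  public long getPermitsAcquired() {
    return permitsAcquired;
  }

  public static void main(String[] args) {
    MockRateLimiterSketch limiter = new MockRateLimiterSketch();
    limiter.acquire(100);
    limiter.acquire(28);
    if (limiter.getPermitsAcquired() != 128) {
      throw new AssertionError("expected 128 permits, got " + limiter.getPermitsAcquired());
    }
    System.out.println("ok");
  }
}
```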
diff --git a/core/src/test/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStreamTest.java b/core/src/test/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStreamTest.java
new file mode 100644
index 0000000..9e12354
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStreamTest.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.file.streams;
+
+import com.google.common.io.ByteStreams;
+import com.google.common.io.CountingOutputStream;
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.util.Random;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class RateLimitedOutputStreamTest {
+
+  @Test
+  public void permitsAreProperlyAcquired() throws Exception {
+    Random randGen = new Random();
+    MockRateLimiter rateLimiter = new MockRateLimiter();
+    long bytesWritten = 0;
+    try (RateLimitedOutputStream os = new RateLimitedOutputStream(new NullOutputStream(), rateLimiter)) {
+      for (int i = 0; i < 100; ++i) {
+        byte[] bytes = new byte[Math.abs(randGen.nextInt() % 65536)];
+        os.write(bytes);
+        bytesWritten += bytes.length;
+      }
+      Assert.assertEquals(bytesWritten, os.position());
+    }
+    Assert.assertEquals(bytesWritten, rateLimiter.getPermitsAcquired());
+  }
+
+  public static class NullOutputStream extends FilterOutputStream implements PositionedOutput {
+    public NullOutputStream() {
+      super(new CountingOutputStream(ByteStreams.nullOutputStream()));
+    }
+
+    @Override
+    public long position() throws IOException {
+      return ((CountingOutputStream) out).getCount();
+    }
+  }
+
+}
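The `NullOutputStream` above derives `position()` by wrapping Guava's `CountingOutputStream` around a null sink. A dependency-free sketch of the same byte-counting idea (all names here are illustrative, not the real test types) is:

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal byte-counting null stream: discards all data but tracks how many
// bytes were written, mirroring what the test gets from Guava's
// CountingOutputStream over ByteStreams.nullOutputStream().
public class CountingNullStream extends FilterOutputStream {
  private long count = 0;

  public CountingNullStream() {
    super(new OutputStream() {
      @Override
      public void write(int b) {
        // discard
      }
    });
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    out.write(b, off, len);
    count += len;
  }

  @Override
  public void write(int b) throws IOException {
    out.write(b);
    count++;
  }

  public long position() {
    return count;
  }

  public static void main(String[] args) throws IOException {
    try (CountingNullStream os = new CountingNullStream()) {
      os.write(new byte[1024]); // bulk write
      os.write(7);              // single-byte write
      if (os.position() != 1025) {
        throw new AssertionError("expected position 1025, got " + os.position());
      }
    }
    System.out.println("ok");
  }
}
```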
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/AggregatingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/AggregatingIteratorTest.java
index e39d0d5..09064a5 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/AggregatingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/AggregatingIteratorTest.java
@@ -39,7 +39,7 @@
 
 public class AggregatingIteratorTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   /**
    * @deprecated since 1.4; visible only for testing
@@ -101,7 +101,7 @@
   @Test
   public void test1() throws IOException {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that do not aggregate
     nkv(tm1, 1, 1, 1, 1, false, "2");
@@ -162,7 +162,7 @@
   @SuppressWarnings("deprecation")
   @Test
   public void test2() throws IOException {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, "2");
@@ -171,7 +171,7 @@
 
     AggregatingIterator ai = new AggregatingIterator();
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     opts.put("cf001", SummationAggregator.class.getName());
 
@@ -224,7 +224,7 @@
   @Test
   public void test3() throws IOException {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, "2");
@@ -237,7 +237,7 @@
 
     AggregatingIterator ai = new AggregatingIterator();
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     opts.put("cf001", SummationAggregator.class.getName());
 
@@ -290,7 +290,7 @@
   @Test
   public void test4() throws IOException {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that do not aggregate
     nkv(tm1, 0, 0, 1, 1, false, "7");
@@ -306,7 +306,7 @@
 
     AggregatingIterator ai = new AggregatingIterator();
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     opts.put("cf001", SummationAggregator.class.getName());
 
@@ -367,20 +367,20 @@
     // try aggregating across multiple data sets that contain
     // the exact same keys w/ different values
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     nkv(tm1, 1, 1, 1, 1, false, "2");
 
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
     nkv(tm2, 1, 1, 1, 1, false, "3");
 
-    TreeMap<Key,Value> tm3 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm3 = new TreeMap<>();
     nkv(tm3, 1, 1, 1, 1, false, "4");
 
     AggregatingIterator ai = new AggregatingIterator();
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
     opts.put("cf001", SummationAggregator.class.getName());
 
-    List<SortedKeyValueIterator<Key,Value>> sources = new ArrayList<SortedKeyValueIterator<Key,Value>>(3);
+    List<SortedKeyValueIterator<Key,Value>> sources = new ArrayList<>(3);
     sources.add(new SortedMapIterator(tm1));
     sources.add(new SortedMapIterator(tm2));
     sources.add(new SortedMapIterator(tm3));
@@ -397,7 +397,7 @@
   @SuppressWarnings("deprecation")
   @Test
   public void test6() throws IOException {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, "2");
@@ -406,7 +406,7 @@
 
     AggregatingIterator ai = new AggregatingIterator();
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     opts.put("cf001", SummationAggregator.class.getName());
 
@@ -425,7 +425,7 @@
   public void test7() throws IOException {
     // test that delete is not aggregated
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     nkv(tm1, 1, 1, 1, 2, true, "");
     nkv(tm1, 1, 1, 1, 3, false, "4");
@@ -433,7 +433,7 @@
 
     AggregatingIterator ai = new AggregatingIterator();
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     opts.put("cf001", SummationAggregator.class.getName());
 
@@ -453,7 +453,7 @@
     ai.next();
     assertFalse(ai.hasTop());
 
-    tm1 = new TreeMap<Key,Value>();
+    tm1 = new TreeMap<>();
     nkv(tm1, 1, 1, 1, 2, true, "");
     ai = new AggregatingIterator();
     ai.init(new SortedMapIterator(tm1), opts, new DefaultIteratorEnvironment());
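The bulk of the changes in this file swap explicit generic type arguments for the Java 7 diamond operator; the compiler infers the arguments from the declared type, so behavior is identical. A small self-contained illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Demonstrates that the diamond operator is purely syntactic: both maps
// have the same inferred type and behave identically.
public class DiamondDemo {
  public static void main(String[] args) {
    // Before the patch: type arguments repeated on both sides.
    Map<String,String> before = new HashMap<String,String>();
    // After the patch: the diamond lets the compiler infer <String,String>.
    Map<String,String> after = new HashMap<>();

    before.put("cf001", "SummationAggregator");
    after.put("cf001", "SummationAggregator");

    if (!before.equals(after)) {
      throw new AssertionError("maps should be equal");
    }
    System.out.println("ok");
  }
}
```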
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/DefaultIteratorEnvironment.java b/core/src/test/java/org/apache/accumulo/core/iterators/DefaultIteratorEnvironment.java
index 316823c..3c68196 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/DefaultIteratorEnvironment.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/DefaultIteratorEnvironment.java
@@ -18,17 +18,16 @@
 
 import java.io.IOException;
 
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.system.MapFileIterator;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 
-public class DefaultIteratorEnvironment implements IteratorEnvironment {
+public class DefaultIteratorEnvironment extends BaseIteratorEnvironment {
 
   AccumuloConfiguration conf;
 
@@ -53,23 +52,7 @@
   }
 
   @Override
-  public IteratorScope getIteratorScope() {
-    throw new UnsupportedOperationException();
+  public boolean isSamplingEnabled() {
+    return false;
   }
-
-  @Override
-  public boolean isFullMajorCompaction() {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
-    throw new UnsupportedOperationException();
-  }
-
-  @Override
-  public Authorizations getAuthorizations() {
-    throw new UnsupportedOperationException();
-  }
-
 }
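This refactor replaces per-test stub implementations of `IteratorEnvironment` with a shared `BaseIteratorEnvironment`: an adapter base class whose methods throw `UnsupportedOperationException`, so each test overrides only what it exercises. A generic sketch of the pattern (interface and class names are illustrative, not the real Accumulo types):

```java
// Adapter-base pattern: the base class "implements" everything by throwing,
// so tests override only the one method they actually need, and accidental
// use of an unimplemented method fails loudly.
interface Environment {
  String config();
  boolean samplingEnabled();
}

class BaseEnvironment implements Environment {
  @Override
  public String config() {
    throw new UnsupportedOperationException();
  }

  @Override
  public boolean samplingEnabled() {
    throw new UnsupportedOperationException();
  }
}

public class AdapterBaseDemo {
  public static void main(String[] args) {
    // A test that only needs samplingEnabled() overrides just that method.
    Environment env = new BaseEnvironment() {
      @Override
      public boolean samplingEnabled() {
        return false;
      }
    };
    if (env.samplingEnabled()) {
      throw new AssertionError("sampling should be disabled");
    }
    boolean threw = false;
    try {
      env.config(); // still unimplemented; should fail loudly
    } catch (UnsupportedOperationException expected) {
      threw = true;
    }
    if (!threw) {
      throw new AssertionError("config() should be unsupported");
    }
    System.out.println("ok");
  }
}
```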
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowIteratorTest.java
index 74f7462..34b01bc 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowIteratorTest.java
@@ -22,14 +22,12 @@
 import java.util.Collections;
 import java.util.TreeMap;
 
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.system.CountingIterator;
-import org.apache.accumulo.core.security.Authorizations;
 import org.junit.Test;
 
 public class FirstEntryInRowIteratorTest {
@@ -39,38 +37,7 @@
     org.apache.accumulo.core.iterators.SortedMapIterator source = new SortedMapIterator(sourceMap);
     CountingIterator counter = new CountingIterator(source);
     FirstEntryInRowIterator feiri = new FirstEntryInRowIterator();
-    IteratorEnvironment env = new IteratorEnvironment() {
-
-      @Override
-      public AccumuloConfiguration getConfig() {
-        return null;
-      }
-
-      @Override
-      public IteratorScope getIteratorScope() {
-        return null;
-      }
-
-      @Override
-      public boolean isFullMajorCompaction() {
-        return false;
-      }
-
-      @Override
-      public void registerSideChannel(SortedKeyValueIterator<Key,Value> arg0) {
-
-      }
-
-      @Override
-      public Authorizations getAuthorizations() {
-        return null;
-      }
-
-      @Override
-      public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String arg0) throws IOException {
-        return null;
-      }
-    };
+    IteratorEnvironment env = new BaseIteratorEnvironment();
 
     feiri.init(counter, Collections.singletonMap(FirstEntryInRowIterator.NUM_SCANS_STRING_NAME, Integer.toString(numScans)), env);
 
@@ -84,12 +51,12 @@
 
   @Test
   public void test() throws IOException {
-    TreeMap<Key,Value> sourceMap = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> sourceMap = new TreeMap<>();
     Value emptyValue = new Value("".getBytes());
     sourceMap.put(new Key("r1", "cf", "cq"), emptyValue);
     sourceMap.put(new Key("r2", "cf", "cq"), emptyValue);
     sourceMap.put(new Key("r3", "cf", "cq"), emptyValue);
-    TreeMap<Key,Value> resultMap = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> resultMap = new TreeMap<>();
     long numSourceEntries = sourceMap.size();
     long numNexts = process(sourceMap, resultMap, new Range(), 10);
     assertEquals(numNexts, numSourceEntries);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowTest.java
index 8214c2c..9a5174b 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/FirstEntryInRowTest.java
@@ -34,8 +34,8 @@
 import org.junit.Test;
 
 public class FirstEntryInRowTest {
-  private static final Map<String,String> EMPTY_MAP = new HashMap<String,String>();
-  private static final Collection<ByteSequence> EMPTY_SET = new HashSet<ByteSequence>();
+  private static final Map<String,String> EMPTY_MAP = new HashMap<>();
+  private static final Collection<ByteSequence> EMPTY_SET = new HashSet<>();
 
   private Key nk(String row, String cf, String cq, long time) {
     return new Key(new Text(row), new Text(cf), new Text(cq), time);
@@ -73,7 +73,7 @@
 
   @Test
   public void test1() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq3", 5, "v2");
     put(tm1, "r2", "cf1", "cq1", 5, "v3");
@@ -94,7 +94,7 @@
 
   @Test
   public void test2() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     for (int r = 0; r < 5; r++) {
       for (int cf = r; cf < 100; cf++) {
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/IteratorUtilTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/IteratorUtilTest.java
index 87ad392..c9b9dcf 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/IteratorUtilTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/IteratorUtilTest.java
@@ -40,13 +40,12 @@
 import org.apache.accumulo.core.iterators.system.MultiIteratorTest;
 import org.apache.accumulo.core.iterators.user.AgeOffFilter;
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
-import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Test;
 
 public class IteratorUtilTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   static class WrappedIter implements SortedKeyValueIterator<Key,Value> {
 
@@ -132,14 +131,14 @@
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".addIter", "1," + AddingIter.class.getName());
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".sqIter", "2," + SquaringIter.class.getName());
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     MultiIteratorTest.nkv(tm, 1, 0, false, "1");
     MultiIteratorTest.nkv(tm, 2, 0, false, "2");
 
     SortedMapIterator source = new SortedMapIterator(tm);
 
-    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent(new Text("tab"), null, null), conf,
+    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent("tab", null, null), conf,
         new DefaultIteratorEnvironment(conf));
     iter.seek(new Range(), EMPTY_COL_FAMS, false);
 
@@ -164,14 +163,14 @@
     // try loading for a different scope
     AccumuloConfiguration conf = new ConfigurationCopy();
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     MultiIteratorTest.nkv(tm, 1, 0, false, "1");
     MultiIteratorTest.nkv(tm, 2, 0, false, "2");
 
     SortedMapIterator source = new SortedMapIterator(tm);
 
-    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.majc, source, new KeyExtent(new Text("tab"), null, null), conf,
+    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.majc, source, new KeyExtent("tab", null, null), conf,
         new DefaultIteratorEnvironment(conf));
     iter.seek(new Range(), EMPTY_COL_FAMS, false);
 
@@ -197,7 +196,7 @@
 
     ConfigurationCopy conf = new ConfigurationCopy();
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     MultiIteratorTest.nkv(tm, 1, 0, false, "1");
     MultiIteratorTest.nkv(tm, 2, 0, false, "2");
@@ -207,7 +206,7 @@
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".addIter", "2," + AddingIter.class.getName());
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".sqIter", "1," + SquaringIter.class.getName());
 
-    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent(new Text("tab"), null, null), conf,
+    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent("tab", null, null), conf,
         new DefaultIteratorEnvironment(conf));
     iter.seek(new Range(), EMPTY_COL_FAMS, false);
 
@@ -236,14 +235,14 @@
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".addIter.opt.amount", "7");
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".sqIter", "2," + SquaringIter.class.getName());
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     MultiIteratorTest.nkv(tm, 1, 0, false, "1");
     MultiIteratorTest.nkv(tm, 2, 0, false, "2");
 
     SortedMapIterator source = new SortedMapIterator(tm);
 
-    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent(new Text("tab"), null, null), conf,
+    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent("tab", null, null), conf,
         new DefaultIteratorEnvironment(conf));
     iter.seek(new Range(), EMPTY_COL_FAMS, false);
 
@@ -272,14 +271,14 @@
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".filter.opt.ttl", "100");
     conf.set(Property.TABLE_ITERATOR_PREFIX + IteratorScope.minc.name() + ".filter.opt.currentTime", "1000");
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     MultiIteratorTest.nkv(tm, 1, 850, false, "1");
     MultiIteratorTest.nkv(tm, 2, 950, false, "2");
 
     SortedMapIterator source = new SortedMapIterator(tm);
 
-    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent(new Text("tab"), null, null), conf,
+    SortedKeyValueIterator<Key,Value> iter = IteratorUtil.loadIterators(IteratorScope.minc, source, new KeyExtent("tab", null, null), conf,
         new DefaultIteratorEnvironment(conf));
     iter.seek(new Range(), EMPTY_COL_FAMS, false);
 
@@ -293,7 +292,7 @@
 
   @Test
   public void onlyReadsRelevantIteratorScopeConfigurations() throws Exception {
-    Map<String,String> data = new HashMap<String,String>();
+    Map<String,String> data = new HashMap<>();
 
     // Make some configuration items, one with a bogus scope
     data.put(Property.TABLE_ITERATOR_SCAN_PREFIX + "foo", "50," + SummingCombiner.class.getName());
@@ -303,8 +302,8 @@
 
     AccumuloConfiguration conf = new ConfigurationCopy(data);
 
-    List<IterInfo> iterators = new ArrayList<IterInfo>();
-    Map<String,Map<String,String>> options = new HashMap<String,Map<String,String>>();
+    List<IterInfo> iterators = new ArrayList<>();
+    Map<String,Map<String,String>> options = new HashMap<>();
 
     IteratorUtil.parseIterConf(IteratorScope.scan, iterators, options, conf);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/SortedMapIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/SortedMapIteratorTest.java
new file mode 100644
index 0000000..d4080e1
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/SortedMapIteratorTest.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators;
+
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.junit.Test;
+
+public class SortedMapIteratorTest {
+
+  @Test(expected = SampleNotPresentException.class)
+  public void testSampleNotPresent() {
+    SortedMapIterator smi = new SortedMapIterator(new TreeMap<Key,Value>());
+    smi.deepCopy(new BaseIteratorEnvironment() {
+      @Override
+      public boolean isSamplingEnabled() {
+        return true;
+      }
+
+      @Override
+      public SamplerConfiguration getSamplerConfiguration() {
+        return new SamplerConfiguration(RowSampler.class.getName());
+      }
+    });
+  }
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIteratorTest.java
index fbe7fd5..33c398f 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFamilySkippingIteratorTest.java
@@ -32,7 +32,7 @@
 
 public class ColumnFamilySkippingIteratorTest extends TestCase {
 
-  private static final Collection<ByteSequence> EMPTY_SET = new HashSet<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_SET = new HashSet<>();
 
   Key nk(String row, String cf, String cq, long time) {
     return new Key(new Text(row), new Text(cf), new Text(cq), time);
@@ -62,7 +62,7 @@
   }
 
   public void test1() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq3", 5, "v2");
     put(tm1, "r2", "cf1", "cq1", 5, "v3");
@@ -77,14 +77,14 @@
 
     cfi.seek(new Range(), EMPTY_SET, false);
     assertTrue(cfi.hasTop());
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
     while (cfi.hasTop()) {
       tm2.put(cfi.getTopKey(), cfi.getTopValue());
       cfi.next();
     }
     assertEquals(tm1, tm2);
 
-    HashSet<ByteSequence> colfams = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> colfams = new HashSet<>();
     colfams.add(new ArrayByteSequence("cf2"));
     cfi.seek(new Range(), colfams, true);
     aten(cfi, "r2", "cf2", "cq4", 5, "v4");
@@ -108,7 +108,7 @@
   }
 
   public void test2() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     for (int r = 0; r < 10; r++) {
       for (int cf = 0; cf < 1000; cf++) {
@@ -118,13 +118,13 @@
       }
     }
 
-    HashSet<ByteSequence> allColfams = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> allColfams = new HashSet<>();
     for (int cf = 0; cf < 1000; cf++) {
       allColfams.add(new ArrayByteSequence(String.format("%06d", cf)));
     }
 
     ColumnFamilySkippingIterator cfi = new ColumnFamilySkippingIterator(new SortedMapIterator(tm1));
-    HashSet<ByteSequence> colfams = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> colfams = new HashSet<>();
 
     runTest(cfi, 30000, 0, allColfams, colfams);
 
@@ -162,11 +162,11 @@
   private void runTest(ColumnFamilySkippingIterator cfi, int total, int expected, HashSet<ByteSequence> allColfams, HashSet<ByteSequence> colfams)
       throws Exception {
     cfi.seek(new Range(), colfams, true);
-    HashSet<ByteSequence> excpected1 = new HashSet<ByteSequence>(colfams);
+    HashSet<ByteSequence> excpected1 = new HashSet<>(colfams);
     excpected1.retainAll(allColfams);
     runTest(cfi, expected, excpected1);
 
-    HashSet<ByteSequence> excpected2 = new HashSet<ByteSequence>(allColfams);
+    HashSet<ByteSequence> excpected2 = new HashSet<>(allColfams);
     excpected2.removeAll(colfams);
     cfi.seek(new Range(), colfams, false);
     runTest(cfi, total - expected, excpected2);
@@ -175,7 +175,7 @@
   private void runTest(ColumnFamilySkippingIterator cfi, int expected, HashSet<ByteSequence> colfams) throws Exception {
     int count = 0;
 
-    HashSet<ByteSequence> ocf = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> ocf = new HashSet<>();
 
     while (cfi.hasTop()) {
       count++;
@@ -189,7 +189,7 @@
 
   public void test3() throws Exception {
     // construct test where ColumnFamilySkippingIterator might try to seek past the end of the user supplied range
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     for (int r = 0; r < 3; r++) {
       for (int cf = 4; cf < 1000; cf++) {
@@ -201,7 +201,7 @@
 
     CountingIterator ci = new CountingIterator(new SortedMapIterator(tm1));
     ColumnFamilySkippingIterator cfi = new ColumnFamilySkippingIterator(ci);
-    HashSet<ByteSequence> colfams = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> colfams = new HashSet<>();
     colfams.add(new ArrayByteSequence(String.format("%06d", 4)));
 
     Range range = new Range(nk(0, 4, 0, 6), true, nk(0, 400, 0, 6), true);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFilterTest.java
index 3fd66b4..cfed90f 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/system/ColumnFilterTest.java
@@ -40,7 +40,7 @@
   }
 
   public void test1() {
-    HashSet<Column> columns = new HashSet<Column>();
+    HashSet<Column> columns = new HashSet<>();
 
     columns.add(nc("cf1"));
 
@@ -52,7 +52,7 @@
   }
 
   public void test2() {
-    HashSet<Column> columns = new HashSet<Column>();
+    HashSet<Column> columns = new HashSet<>();
 
     columns.add(nc("cf1"));
     columns.add(nc("cf2", "cq1"));
@@ -65,7 +65,7 @@
   }
 
   public void test3() {
-    HashSet<Column> columns = new HashSet<Column>();
+    HashSet<Column> columns = new HashSet<>();
 
     columns.add(nc("cf2", "cq1"));
 
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/system/DeletingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/system/DeletingIteratorTest.java
index 4fd48d5..9082a36 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/system/DeletingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/system/DeletingIteratorTest.java
@@ -33,7 +33,7 @@
 
 public class DeletingIteratorTest extends TestCase {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   public void test1() {
     Text colf = new Text("a");
@@ -42,7 +42,7 @@
     Value dvDel = new Value("old".getBytes());
     Value dvNew = new Value("new".getBytes());
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
     Key k;
 
     for (int i = 0; i < 2; i++) {
@@ -67,7 +67,7 @@
       DeletingIterator it = new DeletingIterator(new SortedMapIterator(tm), false);
       it.seek(new Range(), EMPTY_COL_FAMS, false);
 
-      TreeMap<Key,Value> tmOut = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> tmOut = new TreeMap<>();
       while (it.hasTop()) {
         tmOut.put(it.getTopKey(), it.getTopValue());
         it.next();
@@ -88,7 +88,7 @@
     try {
       DeletingIterator it = new DeletingIterator(new SortedMapIterator(tm), true);
       it.seek(new Range(), EMPTY_COL_FAMS, false);
-      TreeMap<Key,Value> tmOut = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> tmOut = new TreeMap<>();
       while (it.hasTop()) {
         tmOut.put(it.getTopKey(), it.getTopValue());
         it.next();
@@ -115,7 +115,7 @@
 
   // seek test
   public void test2() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     nkv(tm, "r000", 4, false, "v4");
     nkv(tm, "r000", 3, false, "v3");
@@ -165,7 +165,7 @@
 
   // test delete with same timestamp as existing key
   public void test3() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     nkv(tm, "r000", 3, false, "v3");
     nkv(tm, "r000", 2, false, "v2");
@@ -190,7 +190,7 @@
 
   // test range inclusiveness
   public void test4() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     nkv(tm, "r000", 3, false, "v3");
     nkv(tm, "r000", 2, false, "v2");
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/system/MultiIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/system/MultiIteratorTest.java
index 3fbf92d..8949c92 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/system/MultiIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/system/MultiIteratorTest.java
@@ -22,8 +22,6 @@
 import java.util.List;
 import java.util.TreeMap;
 
-import junit.framework.TestCase;
-
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
@@ -34,9 +32,11 @@
 import org.apache.accumulo.core.util.LocalityGroupUtil;
 import org.apache.hadoop.io.Text;
 
+import junit.framework.TestCase;
+
 public class MultiIteratorTest extends TestCase {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   public static Key nk(int row, long ts) {
     return new Key(nr(row), ts);
@@ -57,7 +57,7 @@
   }
 
   void verify(int start, int end, Key seekKey, Text endRow, Text prevEndRow, boolean init, boolean incrRow, List<TreeMap<Key,Value>> maps) throws IOException {
-    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<SortedKeyValueIterator<Key,Value>>(maps.size());
+    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<>(maps.size());
 
     for (TreeMap<Key,Value> map : maps) {
       iters.add(new SortedMapIterator(map));
@@ -121,14 +121,14 @@
   public void test1() throws IOException {
     // TEST non overlapping inputs
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
-    List<TreeMap<Key,Value>> tmpList = new ArrayList<TreeMap<Key,Value>>(2);
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
+    List<TreeMap<Key,Value>> tmpList = new ArrayList<>(2);
 
     for (int i = 0; i < 4; i++) {
       nkv(tm1, 0, i, false, "v" + i);
     }
     tmpList.add(tm1);
-    tm1 = new TreeMap<Key,Value>();
+    tm1 = new TreeMap<>();
     for (int i = 4; i < 8; i++) {
       nkv(tm1, 0, i, false, "v" + i);
     }
@@ -144,9 +144,9 @@
   public void test2() throws IOException {
     // TEST overlapping inputs
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
-    List<TreeMap<Key,Value>> tmpList = new ArrayList<TreeMap<Key,Value>>(2);
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
+    List<TreeMap<Key,Value>> tmpList = new ArrayList<>(2);
 
     for (int i = 0; i < 8; i++) {
       if (i % 2 == 0)
@@ -167,8 +167,8 @@
   public void test3() throws IOException {
     // TEST single input
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
-    List<TreeMap<Key,Value>> tmpList = new ArrayList<TreeMap<Key,Value>>(2);
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
+    List<TreeMap<Key,Value>> tmpList = new ArrayList<>(2);
 
     for (int i = 0; i < 8; i++) {
       nkv(tm1, 0, i, false, "v" + i);
@@ -186,9 +186,9 @@
   public void test4() throws IOException {
     // TEST empty input
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
-    List<SortedKeyValueIterator<Key,Value>> skvil = new ArrayList<SortedKeyValueIterator<Key,Value>>(1);
+    List<SortedKeyValueIterator<Key,Value>> skvil = new ArrayList<>(1);
     skvil.add(new SortedMapIterator(tm1));
     MultiIterator mi = new MultiIterator(skvil, true);
 
@@ -201,9 +201,9 @@
   public void test5() throws IOException {
     // TEST overlapping inputs AND prevRow AND endRow AND seek
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
-    List<TreeMap<Key,Value>> tmpList = new ArrayList<TreeMap<Key,Value>>(2);
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
+    List<TreeMap<Key,Value>> tmpList = new ArrayList<>(2);
 
     for (int i = 0; i < 8; i++) {
       if (i % 2 == 0)
@@ -257,12 +257,12 @@
 
   public void test6() throws IOException {
     // TEst setting an endKey
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     nkv(tm1, 3, 0, false, "1");
     nkv(tm1, 4, 0, false, "2");
     nkv(tm1, 6, 0, false, "3");
 
-    List<SortedKeyValueIterator<Key,Value>> skvil = new ArrayList<SortedKeyValueIterator<Key,Value>>(1);
+    List<SortedKeyValueIterator<Key,Value>> skvil = new ArrayList<>(1);
     skvil.add(new SortedMapIterator(tm1));
     MultiIterator mi = new MultiIterator(skvil, true);
     mi.seek(new Range(null, true, nk(5, 9), false), EMPTY_COL_FAMS, false);
@@ -330,7 +330,7 @@
 
   public void test7() throws IOException {
     // TEst setting an endKey
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     nkv(tm1, 0, 3, false, "1");
     nkv(tm1, 0, 2, false, "2");
     nkv(tm1, 0, 1, false, "3");
@@ -341,10 +341,10 @@
     nkv(tm1, 2, 1, false, "8");
     nkv(tm1, 2, 0, false, "9");
 
-    List<SortedKeyValueIterator<Key,Value>> skvil = new ArrayList<SortedKeyValueIterator<Key,Value>>(1);
+    List<SortedKeyValueIterator<Key,Value>> skvil = new ArrayList<>(1);
     skvil.add(new SortedMapIterator(tm1));
 
-    KeyExtent extent = new KeyExtent(new Text("tablename"), nr(1), nr(0));
+    KeyExtent extent = new KeyExtent("tablename", nr(1), nr(0));
     MultiIterator mi = new MultiIterator(skvil, extent);
 
     Range r1 = new Range((Text) null, (Text) null);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIteratorTest.java
index 7567871..1ebf9df 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/system/SourceSwitchingIteratorTest.java
@@ -60,7 +60,7 @@
 
     DataSource next;
     SortedKeyValueIterator<Key,Value> iter;
-    List<TestDataSource> copies = new ArrayList<TestDataSource>();
+    List<TestDataSource> copies = new ArrayList<>();
     AtomicBoolean iflag;
 
     TestDataSource(SortedKeyValueIterator<Key,Value> iter) {
@@ -111,7 +111,7 @@
   }
 
   public void test1() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq3", 5, "v2");
     put(tm1, "r2", "cf1", "cq1", 5, "v3");
@@ -128,7 +128,7 @@
   }
 
   public void test2() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq3", 5, "v2");
     put(tm1, "r2", "cf1", "cq1", 5, "v3");
@@ -140,7 +140,7 @@
     ssi.seek(new Range(), new ArrayList<ByteSequence>(), false);
     ane(ssi, "r1", "cf1", "cq1", 5, "v1", true);
 
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
     put(tm2, "r1", "cf1", "cq1", 5, "v4");
     put(tm2, "r1", "cf1", "cq3", 5, "v5");
     put(tm2, "r2", "cf1", "cq1", 5, "v6");
@@ -157,7 +157,7 @@
   public void test3() throws Exception {
     // test switching after a row
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq2", 5, "v2");
     put(tm1, "r1", "cf1", "cq3", 5, "v3");
@@ -172,7 +172,7 @@
     ssi.seek(new Range(), new ArrayList<ByteSequence>(), false);
     ane(ssi, "r1", "cf1", "cq1", 5, "v1", true);
 
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>(tm1);
+    TreeMap<Key,Value> tm2 = new TreeMap<>(tm1);
     put(tm2, "r1", "cf1", "cq5", 5, "v7"); // should not see this because it should not switch until the row is finished
     put(tm2, "r2", "cf1", "cq1", 5, "v8"); // should see this new row after it switches
 
@@ -192,7 +192,7 @@
 
   public void test4() throws Exception {
     // ensure switch is done on initial seek
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq2", 5, "v2");
 
@@ -200,7 +200,7 @@
     TestDataSource tds = new TestDataSource(smi);
     SourceSwitchingIterator ssi = new SourceSwitchingIterator(tds, false);
 
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
     put(tm2, "r1", "cf1", "cq1", 6, "v3");
     put(tm2, "r1", "cf1", "cq2", 6, "v4");
 
@@ -217,7 +217,7 @@
 
   public void test5() throws Exception {
     // esnure switchNow() works w/ deepCopy()
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq2", 5, "v2");
 
@@ -227,7 +227,7 @@
 
     SortedKeyValueIterator<Key,Value> dc1 = ssi.deepCopy(null);
 
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
     put(tm2, "r1", "cf1", "cq1", 6, "v3");
     put(tm2, "r2", "cf1", "cq2", 6, "v4");
 
@@ -248,7 +248,7 @@
 
   public void testSetInterrupt() throws Exception {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
 
     SortedMapIterator smi = new SortedMapIterator(tm1);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/system/TimeSettingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/system/TimeSettingIteratorTest.java
index 3dbe7ca..9a363a1 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/system/TimeSettingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/system/TimeSettingIteratorTest.java
@@ -34,7 +34,7 @@
 
   @Test
   public void test1() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     tm1.put(new Key("r0", "cf1", "cq1", 9l), new Value("v0".getBytes()));
     tm1.put(new Key("r1", "cf1", "cq1", Long.MAX_VALUE), new Value("v1".getBytes()));
@@ -87,7 +87,7 @@
 
   @Test
   public void testAvoidKeyCopy() throws Exception {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     final Key k = new Key("r0", "cf1", "cq1", 9l);
 
     tm1.put(k, new Value("v0".getBytes()));
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/system/VisibilityFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/system/VisibilityFilterTest.java
index 667aa5f..68323c6 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/system/VisibilityFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/system/VisibilityFilterTest.java
@@ -34,7 +34,7 @@
 public class VisibilityFilterTest extends TestCase {
 
   public void testBadVisibility() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     tm.put(new Key("r1", "cf1", "cq1", "A&"), new Value(new byte[0]));
     VisibilityFilter filter = new VisibilityFilter(new SortedMapIterator(tm), new Authorizations("A"), "".getBytes());
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/BigDecimalCombinerTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/BigDecimalCombinerTest.java
index dfcb869..861ce02 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/BigDecimalCombinerTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/BigDecimalCombinerTest.java
@@ -42,7 +42,7 @@
 
 public class BigDecimalCombinerTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
   private static double delta = 0.00001;
 
   Encoder<BigDecimal> encoder;
@@ -53,7 +53,7 @@
   @Before
   public void setup() {
     encoder = new BigDecimalCombiner.BigDecimalEncoder();
-    tm1 = new TreeMap<Key,Value>();
+    tm1 = new TreeMap<>();
     columns = Collections.singletonList(new IteratorSetting.Column("cf001"));
 
     // keys that will aggregate
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/ColumnSliceFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/ColumnSliceFilterTest.java
index 11ad192..698d9ec 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/ColumnSliceFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/ColumnSliceFilterTest.java
@@ -40,9 +40,9 @@
 
 public class ColumnSliceFilterTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
-  private static final SortedMap<Key,Value> TEST_DATA = new TreeMap<Key,Value>();
+  private static final SortedMap<Key,Value> TEST_DATA = new TreeMap<>();
   private static final Key KEY_1 = nkv(TEST_DATA, "boo1", "yup", "20080201", "dog");
   private static final Key KEY_2 = nkv(TEST_DATA, "boo1", "yap", "20080202", "cat");
   private static final Key KEY_3 = nkv(TEST_DATA, "boo2", "yap", "20080203", "hamster");
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java
index a442534..6300532 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/CombinerTest.java
@@ -58,7 +58,7 @@
 
 public class CombinerTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   static class CombinerIteratorEnvironment extends DefaultIteratorEnvironment {
 
@@ -115,7 +115,7 @@
   public void test1() throws IOException {
     Encoder<Long> encoder = LongCombiner.VAR_LEN_ENCODER;
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that do not aggregate
     nkv(tm1, 1, 1, 1, 1, false, 2l, encoder);
@@ -180,7 +180,7 @@
   public void test2() throws IOException {
     Encoder<Long> encoder = LongCombiner.VAR_LEN_ENCODER;
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, 2l, encoder);
@@ -242,7 +242,7 @@
   public void test3() throws IOException {
     Encoder<Long> encoder = LongCombiner.FIXED_LEN_ENCODER;
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, 2l, encoder);
@@ -308,7 +308,7 @@
   public void testDeepCopy() throws IOException {
     Encoder<Long> encoder = LongCombiner.FIXED_LEN_ENCODER;
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, 2l, encoder);
@@ -376,7 +376,7 @@
   public void test4() throws IOException {
     Encoder<Long> encoder = LongCombiner.STRING_ENCODER;
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that do not aggregate
     nkv(tm1, 0, 0, 1, 1, false, 7l, encoder);
@@ -481,13 +481,13 @@
     // try aggregating across multiple data sets that contain
     // the exact same keys w/ different values
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     nkv(tm1, 1, 1, 1, 1, false, 2l, encoder);
 
-    TreeMap<Key,Value> tm2 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm2 = new TreeMap<>();
     nkv(tm2, 1, 1, 1, 1, false, 3l, encoder);
 
-    TreeMap<Key,Value> tm3 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm3 = new TreeMap<>();
     nkv(tm3, 1, 1, 1, 1, false, 4l, encoder);
 
     Combiner ai = new SummingCombiner();
@@ -496,7 +496,7 @@
     LongCombiner.setEncodingType(is, StringEncoder.class);
     Combiner.setColumns(is, Collections.singletonList(new IteratorSetting.Column("cf001")));
 
-    List<SortedKeyValueIterator<Key,Value>> sources = new ArrayList<SortedKeyValueIterator<Key,Value>>(3);
+    List<SortedKeyValueIterator<Key,Value>> sources = new ArrayList<>(3);
     sources.add(new SortedMapIterator(tm1));
     sources.add(new SortedMapIterator(tm2));
     sources.add(new SortedMapIterator(tm3));
@@ -513,7 +513,7 @@
   @Test
   public void test6() throws IOException {
     Encoder<Long> encoder = LongCombiner.VAR_LEN_ENCODER;
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, 2l, encoder);
@@ -542,7 +542,7 @@
 
     // test that delete is not aggregated
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     nkv(tm1, 1, 1, 1, 2, true, 0l, encoder);
     nkv(tm1, 1, 1, 1, 3, false, 4l, encoder);
@@ -570,7 +570,7 @@
     ai.next();
     assertFalse(ai.hasTop());
 
-    tm1 = new TreeMap<Key,Value>();
+    tm1 = new TreeMap<>();
     nkv(tm1, 1, 1, 1, 2, true, 0l, encoder);
     ai = new SummingCombiner();
     ai.init(new SortedMapIterator(tm1), is.getOptions(), SCAN_IE);
@@ -587,7 +587,7 @@
 
   @Test
   public void valueIteratorTest() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
     tm.put(new Key("r", "f", "q", 1), new Value("1".getBytes()));
     tm.put(new Key("r", "f", "q", 2), new Value("2".getBytes()));
     SortedMapIterator smi = new SortedMapIterator(tm);
@@ -600,7 +600,7 @@
 
   @Test
   public void sumAllColumns() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
     tm.put(new Key("r", "count", "a", 1), new Value("1".getBytes()));
     tm.put(new Key("r", "count", "a", 2), new Value("1".getBytes()));
     tm.put(new Key("r", "count", "b", 3), new Value("1".getBytes()));
@@ -636,7 +636,7 @@
   public void maxMinTest() throws IOException {
     Encoder<Long> encoder = LongCombiner.VAR_LEN_ENCODER;
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, 4l, encoder);
@@ -675,7 +675,7 @@
   }
 
   public static List<Long> nal(Long... longs) {
-    List<Long> al = new ArrayList<Long>(longs.length);
+    List<Long> al = new ArrayList<>(longs.length);
     for (Long l : longs) {
       al.add(l);
     }
@@ -692,7 +692,7 @@
       IllegalAccessException {
     Encoder<List<Long>> encoder = encoderClass.newInstance();
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
 
     // keys that aggregate
     nkv(tm1, 1, 1, 1, 1, false, nal(1l, 2l), encoder);
@@ -773,11 +773,11 @@
 
     @Override
     public List<Long> decode(byte[] b) {
-      return new ArrayList<Long>();
+      return new ArrayList<>();
     }
 
     public List<Long> decode(byte[] b, int offset, int len) {
-      return new ArrayList<Long>();
+      return new ArrayList<>();
     }
 
   }
@@ -821,7 +821,7 @@
   }
 
   private TreeMap<Key,Value> readAll(SortedKeyValueIterator<Key,Value> combiner) throws Exception {
-    TreeMap<Key,Value> ret = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> ret = new TreeMap<>();
 
     combiner.seek(new Range(), EMPTY_COL_FAMS, false);
 
@@ -895,7 +895,7 @@
   public void testDeleteHandling() throws Exception {
     Encoder<Long> encoder = LongCombiner.STRING_ENCODER;
 
-    TreeMap<Key,Value> input = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> input = new TreeMap<>();
 
     IteratorEnvironment paritalMajcIe = new CombinerIteratorEnvironment(IteratorScope.majc, false);
     IteratorEnvironment fullMajcIe = new CombinerIteratorEnvironment(IteratorScope.majc, true);
@@ -906,7 +906,7 @@
     nkv(input, 1, 1, 1, 3, false, 2l, encoder);
     nkv(input, 1, 1, 1, 4, false, 9l, encoder);
 
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     nkv(expected, 1, 1, 1, 1, false, 4l, encoder);
     nkv(expected, 1, 1, 1, 2, true, 0l, encoder);
     nkv(expected, 1, 1, 1, 4, false, 11l, encoder);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java
index 0c4ffa2..e7e2266 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/FilterTest.java
@@ -48,8 +48,8 @@
 
 public class FilterTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
-  private static final Map<String,String> EMPTY_OPTS = new HashMap<String,String>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
+  private static final Map<String,String> EMPTY_OPTS = new HashMap<>();
 
   public static class SimpleFilter extends Filter {
     @Override
@@ -85,7 +85,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     for (int i = 0; i < 1000; i++) {
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq);
@@ -120,7 +120,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     for (int i = 0; i < 1000; i++) {
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq);
@@ -157,7 +157,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     for (int i = 0; i < 1000; i++) {
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq);
@@ -185,7 +185,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     for (int i = 0; i < 1000; i++) {
       Key k = new Key(new Text(String.format("%03d", i)), colf, colq);
@@ -218,7 +218,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
     IteratorSetting is = new IteratorSetting(1, ColumnAgeOffFilter.class);
     ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("a"), 901l);
     long ts = System.currentTimeMillis();
@@ -250,11 +250,92 @@
     assertEquals(size(a), 902);
   }
 
+  /**
+   * Test for fix to ACCUMULO-1604: ColumnAgeOffFilter was throwing an error when using negate
+   */
+  @Test
+  public void test2aNegate() throws IOException {
+    Text colf = new Text("a");
+    Text colq = new Text("b");
+    Value dv = new Value();
+    TreeMap<Key,Value> tm = new TreeMap<>();
+    IteratorSetting is = new IteratorSetting(1, ColumnAgeOffFilter.class);
+    ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("a"), 901l);
+    ColumnAgeOffFilter.setNegate(is, true);
+    long ts = System.currentTimeMillis();
+
+    for (long i = 0; i < 1000; i++) {
+      Key k = new Key(new Text(String.format("%03d", i)), colf, colq, ts - i);
+      tm.put(k, dv);
+    }
+    assertTrue(tm.size() == 1000);
+
+    ColumnAgeOffFilter a = new ColumnAgeOffFilter();
+    assertTrue(a.validateOptions(is.getOptions()));
+    a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
+    a.overrideCurrentTime(ts);
+    a.seek(new Range(), EMPTY_COL_FAMS, false);
+    assertEquals(size(a), 98);
+
+    ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("a", "b"), 101l);
+    a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
+    a.overrideCurrentTime(ts);
+    a.seek(new Range(), EMPTY_COL_FAMS, false);
+    assertEquals(size(a), 898);
+
+    ColumnAgeOffFilter.removeTTL(is, new IteratorSetting.Column("a", "b"));
+    a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
+    a = (ColumnAgeOffFilter) a.deepCopy(null);
+    a.overrideCurrentTime(ts);
+    a.seek(new Range(), EMPTY_COL_FAMS, false);
+    assertEquals(size(a), 98);
+  }
+
+  /**
+   * Test for fix to ACCUMULO-1604: ColumnAgeOffFilter was throwing an error when using negate. Covers the case where "negate" is an actual column name.
+   */
+  @Test
+  public void test2b() throws IOException {
+    Text colf = new Text("negate");
+    Text colq = new Text("b");
+    Value dv = new Value();
+    TreeMap<Key,Value> tm = new TreeMap<>();
+    IteratorSetting is = new IteratorSetting(1, ColumnAgeOffFilter.class);
+    ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("negate"), 901l);
+    long ts = System.currentTimeMillis();
+
+    for (long i = 0; i < 1000; i++) {
+      Key k = new Key(new Text(String.format("%03d", i)), colf, colq, ts - i);
+      tm.put(k, dv);
+    }
+    assertTrue(tm.size() == 1000);
+
+    ColumnAgeOffFilter a = new ColumnAgeOffFilter();
+    assertTrue(a.validateOptions(is.getOptions()));
+    a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
+    a.overrideCurrentTime(ts);
+    a.seek(new Range(), EMPTY_COL_FAMS, false);
+    assertEquals(size(a), 902);
+
+    ColumnAgeOffFilter.addTTL(is, new IteratorSetting.Column("negate", "b"), 101l);
+    a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
+    a.overrideCurrentTime(ts);
+    a.seek(new Range(), EMPTY_COL_FAMS, false);
+    assertEquals(size(a), 102);
+
+    ColumnAgeOffFilter.removeTTL(is, new IteratorSetting.Column("negate", "b"));
+    a.init(new SortedMapIterator(tm), is.getOptions(), new DefaultIteratorEnvironment());
+    a = (ColumnAgeOffFilter) a.deepCopy(null);
+    a.overrideCurrentTime(ts);
+    a.seek(new Range(), EMPTY_COL_FAMS, false);
+    assertEquals(size(a), 902);
+  }
+
   @Test
   public void test3() throws IOException {
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
-    HashSet<Column> hsc = new HashSet<Column>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
+    HashSet<Column> hsc = new HashSet<>();
     hsc.add(new Column("c".getBytes(), null, null));
 
     Text colf1 = new Text("a");
@@ -281,14 +362,14 @@
     a.seek(new Range(), EMPTY_COL_FAMS, false);
     assertEquals(size(a), 1000);
 
-    hsc = new HashSet<Column>();
+    hsc = new HashSet<>();
     hsc.add(new Column("a".getBytes(), "b".getBytes(), null));
     a = new ColumnQualifierFilter(new SortedMapIterator(tm), hsc);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
     int size = size(a);
     assertTrue("size was " + size, size == 500);
 
-    hsc = new HashSet<Column>();
+    hsc = new HashSet<>();
     a = new ColumnQualifierFilter(new SortedMapIterator(tm), hsc);
     a.seek(new Range(), EMPTY_COL_FAMS, false);
     size = size(a);
@@ -298,7 +379,7 @@
   @Test
   public void test4() throws IOException {
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     ColumnVisibility le1 = new ColumnVisibility("L1");
     ColumnVisibility le2 = new ColumnVisibility("L0&OFFICIAL");
@@ -320,7 +401,7 @@
   }
 
   private ColumnQualifierFilter ncqf(TreeMap<Key,Value> tm, Column... columns) throws IOException {
-    HashSet<Column> hsc = new HashSet<Column>();
+    HashSet<Column> hsc = new HashSet<>();
 
     for (Column column : columns) {
       hsc.add(column);
@@ -334,7 +415,7 @@
   @Test
   public void test5() throws IOException {
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     tm.put(new Key(new Text(String.format("%03d", 1)), new Text("a"), new Text("x")), dv);
     tm.put(new Key(new Text(String.format("%03d", 2)), new Text("a"), new Text("y")), dv);
@@ -365,7 +446,7 @@
 
   @Test
   public void testNoVisFilter() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
     Value v = new Value();
     for (int i = 0; i < 1000; i++) {
       Key k = new Key(String.format("%03d", i), "a", "b", i % 10 == 0 ? "vis" : "");
@@ -385,7 +466,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     for (int i = 0; i < 100; i++) {
       Key k = new Key(new Text(String.format("%02d", i)), colf, colq);
@@ -482,7 +563,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
     Value dv = new Value();
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     Key k = new Key(new Text("0"), colf, colq);
     tm.put(k, dv);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/GrepIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/GrepIteratorTest.java
index 23af994..3a47f5a 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/GrepIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/GrepIteratorTest.java
@@ -38,14 +38,14 @@
 import org.junit.Test;
 
 public class GrepIteratorTest {
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
   SortedMap<Key,Value> input;
   SortedMap<Key,Value> output;
 
   @Before
   public void init() {
-    input = new TreeMap<Key,Value>();
-    output = new TreeMap<Key,Value>();
+    input = new TreeMap<>();
+    output = new TreeMap<>();
     input.put(new Key("abcdef", "xyz", "xyz", 0), new Value("xyz".getBytes()));
     output.put(new Key("abcdef", "xyz", "xyz", 0), new Value("xyz".getBytes()));
 
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/IndexedDocIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/IndexedDocIteratorTest.java
index 117fcac..cb6d3f7 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/IndexedDocIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/IndexedDocIteratorTest.java
@@ -46,7 +46,7 @@
 
   private static final Logger log = Logger.getLogger(IndexedDocIteratorTest.class);
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
   private static final byte[] nullByte = {0};
 
   private static IteratorEnvironment env = new DefaultIteratorEnvironment();
@@ -71,7 +71,7 @@
     StringBuilder sb = new StringBuilder();
     Random r = new Random();
     Value v = new Value(new byte[0]);
-    TreeMap<Key,Value> map = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> map = new TreeMap<>();
     boolean[] negateMask = new boolean[columnFamilies.length];
 
     for (int i = 0; i < columnFamilies.length; i++) {
@@ -189,7 +189,7 @@
     otherColumnFamilies[3] = new Text("F");
 
     float hitRatio = 0.5f;
-    HashSet<Text> docs = new HashSet<Text>();
+    HashSet<Text> docs = new HashSet<>();
     SortedKeyValueIterator<Key,Value> source = createIteratorStack(hitRatio, NUM_ROWS, NUM_DOCIDS, columnFamilies, otherColumnFamilies, docs);
     IteratorSetting is = new IteratorSetting(1, IndexedDocIterator.class);
     IndexedDocIterator.setColumnFamilies(is, columnFamilies);
@@ -227,7 +227,7 @@
     otherColumnFamilies[3] = new Text("F");
 
     float hitRatio = 0.5f;
-    HashSet<Text> docs = new HashSet<Text>();
+    HashSet<Text> docs = new HashSet<>();
     SortedKeyValueIterator<Key,Value> source = createIteratorStack(hitRatio, NUM_ROWS, NUM_DOCIDS, columnFamilies, otherColumnFamilies, docs);
     IteratorSetting is = new IteratorSetting(1, IndexedDocIterator.class);
     IndexedDocIterator.setColumnFamilies(is, columnFamilies);
@@ -264,10 +264,10 @@
     otherColumnFamilies[3] = new Text("F");
 
     float hitRatio = 0.5f;
-    HashSet<Text> docs = new HashSet<Text>();
+    HashSet<Text> docs = new HashSet<>();
     SortedKeyValueIterator<Key,Value> source = createIteratorStack(hitRatio, NUM_ROWS, NUM_DOCIDS, columnFamilies, otherColumnFamilies, docs);
     SortedKeyValueIterator<Key,Value> source2 = createIteratorStack(hitRatio, NUM_ROWS, NUM_DOCIDS, columnFamilies, otherColumnFamilies, docs);
-    ArrayList<SortedKeyValueIterator<Key,Value>> sourceIters = new ArrayList<SortedKeyValueIterator<Key,Value>>();
+    ArrayList<SortedKeyValueIterator<Key,Value>> sourceIters = new ArrayList<>();
     sourceIters.add(source);
     sourceIters.add(source2);
     MultiIterator mi = new MultiIterator(sourceIters, false);
@@ -310,7 +310,7 @@
     otherColumnFamilies[3] = new Text("F");
 
     float hitRatio = 0.5f;
-    HashSet<Text> docs = new HashSet<Text>();
+    HashSet<Text> docs = new HashSet<>();
     SortedKeyValueIterator<Key,Value> source = createIteratorStack(hitRatio, NUM_ROWS, NUM_DOCIDS, columnFamilies, otherColumnFamilies, docs, negatedColumns);
     IteratorSetting is = new IteratorSetting(1, IndexedDocIterator.class);
     IndexedDocIterator.setColumnFamilies(is, columnFamilies, notFlags);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/IntersectingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/IntersectingIteratorTest.java
index 365cee4..aad2e54 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/IntersectingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/IntersectingIteratorTest.java
@@ -16,28 +16,18 @@
  */
 package org.apache.accumulo.core.iterators.user;
 
+import static org.junit.Assert.assertTrue;
+
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.HashSet;
-import java.util.Iterator;
-import java.util.Map.Entry;
 import java.util.Random;
 import java.util.TreeMap;
 
-import junit.framework.TestCase;
-
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.DefaultIteratorEnvironment;
@@ -45,19 +35,18 @@
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.SortedMapIterator;
 import org.apache.accumulo.core.iterators.system.MultiIterator;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
 
-public class IntersectingIteratorTest extends TestCase {
+public class IntersectingIteratorTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
-  private static final Logger log = Logger.getLogger(IntersectingIterator.class);
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
   private static IteratorEnvironment env = new DefaultIteratorEnvironment();
 
   TreeMap<Key,Value> map;
-  HashSet<Text> docs = new HashSet<Text>();
+  HashSet<Text> docs = new HashSet<>();
   Text[] columnFamilies;
   Text[] negatedColumns;
   Text[] otherColumnFamilies;
@@ -66,15 +55,11 @@
 
   int docid = 0;
 
-  static {
-    log.setLevel(Level.OFF);
-  }
-
   private TreeMap<Key,Value> createSortedMap(float hitRatio, int numRows, int numDocsPerRow, Text[] columnFamilies, Text[] otherColumnFamilies,
       HashSet<Text> docs, Text[] negatedColumns) {
     Random r = new Random();
     Value v = new Value(new byte[0]);
-    TreeMap<Key,Value> map = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> map = new TreeMap<>();
     boolean[] negateMask = new boolean[columnFamilies.length];
 
     for (int i = 0; i < columnFamilies.length; i++) {
@@ -130,16 +115,13 @@
     docid = 0;
   }
 
-  public void testNull() {}
-
-  @Override
-  public void setUp() {
-    Logger.getRootLogger().setLevel(Level.ERROR);
-  }
-
   private static final int NUM_ROWS = 10;
   private static final int NUM_DOCIDS = 1000;
 
+  @Rule
+  public TestName test = new TestName();
+
+  @Test
   public void test1() throws IOException {
     columnFamilies = new Text[2];
     columnFamilies[0] = new Text("C");
@@ -168,6 +150,7 @@
     cleanup();
   }
 
+  @Test
   public void test2() throws IOException {
     columnFamilies = new Text[3];
     columnFamilies[0] = new Text("A");
@@ -197,6 +180,7 @@
     cleanup();
   }
 
+  @Test
   public void test3() throws IOException {
     columnFamilies = new Text[6];
     columnFamilies[0] = new Text("C");
@@ -214,7 +198,7 @@
     float hitRatio = 0.5f;
     SortedKeyValueIterator<Key,Value> source = createIteratorStack(hitRatio, NUM_ROWS, NUM_DOCIDS, columnFamilies, otherColumnFamilies, docs);
     SortedKeyValueIterator<Key,Value> source2 = createIteratorStack(hitRatio, NUM_ROWS, NUM_DOCIDS, columnFamilies, otherColumnFamilies, docs);
-    ArrayList<SortedKeyValueIterator<Key,Value>> sourceIters = new ArrayList<SortedKeyValueIterator<Key,Value>>();
+    ArrayList<SortedKeyValueIterator<Key,Value>> sourceIters = new ArrayList<>();
     sourceIters.add(source);
     sourceIters.add(source2);
     MultiIterator mi = new MultiIterator(sourceIters, false);
@@ -234,6 +218,7 @@
     cleanup();
   }
 
+  @Test
   public void test4() throws IOException {
     columnFamilies = new Text[3];
     notFlags = new boolean[3];
@@ -270,6 +255,7 @@
     cleanup();
   }
 
+  @Test
   public void test6() throws IOException {
     columnFamilies = new Text[1];
     columnFamilies[0] = new Text("C");
@@ -296,29 +282,4 @@
     assertTrue(hitCount == docs.size());
     cleanup();
   }
-
-  public void testWithBatchScanner() throws Exception {
-    Value empty = new Value(new byte[] {});
-    MockInstance inst = new MockInstance("mockabye");
-    Connector connector = inst.getConnector("user", new PasswordToken("pass"));
-    connector.tableOperations().create("index");
-    BatchWriter bw = connector.createBatchWriter("index", new BatchWriterConfig());
-    Mutation m = new Mutation("000012");
-    m.put("rvy", "5000000000000000", empty);
-    m.put("15qh", "5000000000000000", empty);
-    bw.addMutation(m);
-    bw.close();
-
-    BatchScanner bs = connector.createBatchScanner("index", Authorizations.EMPTY, 10);
-    IteratorSetting ii = new IteratorSetting(20, IntersectingIterator.class);
-    IntersectingIterator.setColumnFamilies(ii, new Text[] {new Text("rvy"), new Text("15qh")});
-    bs.addScanIterator(ii);
-    bs.setRanges(Collections.singleton(new Range()));
-    Iterator<Entry<Key,Value>> iterator = bs.iterator();
-    assertTrue(iterator.hasNext());
-    Entry<Key,Value> next = iterator.next();
-    Key key = next.getKey();
-    assertEquals(key.getColumnQualifier(), new Text("5000000000000000"));
-    assertFalse(iterator.hasNext());
-  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/LargeRowFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/LargeRowFilterTest.java
index af610ca..1d5a108 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/LargeRowFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/LargeRowFilterTest.java
@@ -66,18 +66,18 @@
   }
 
   public void testBasic() throws Exception {
-    TreeMap<Key,Value> testData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> testData = new TreeMap<>();
 
     genTestData(testData, 20);
 
     for (int i = 1; i <= 20; i++) {
-      TreeMap<Key,Value> expectedData = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> expectedData = new TreeMap<>();
       genTestData(expectedData, i);
 
       LargeRowFilter lrfi = setupIterator(testData, i, IteratorScope.scan);
       lrfi.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
 
-      TreeMap<Key,Value> filteredData = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> filteredData = new TreeMap<>();
 
       while (lrfi.hasTop()) {
         filteredData.put(lrfi.getTopKey(), lrfi.getTopValue());
@@ -89,17 +89,17 @@
   }
 
   public void testSeek() throws Exception {
-    TreeMap<Key,Value> testData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> testData = new TreeMap<>();
 
     genTestData(testData, 20);
 
     for (int i = 1; i <= 20; i++) {
-      TreeMap<Key,Value> expectedData = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> expectedData = new TreeMap<>();
       genTestData(expectedData, i);
 
       LargeRowFilter lrfi = setupIterator(testData, i, IteratorScope.scan);
 
-      TreeMap<Key,Value> filteredData = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> filteredData = new TreeMap<>();
 
       // seek to each row... rows that exceed max columns should be filtered
       for (int j = 1; j <= i; j++) {
@@ -117,7 +117,7 @@
   }
 
   public void testSeek2() throws Exception {
-    TreeMap<Key,Value> testData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> testData = new TreeMap<>();
 
     genTestData(testData, 20);
 
@@ -130,10 +130,10 @@
 
     lrfi.seek(new Range(new Key(genRow(10), "cf001", genCQ(4), 5), true, new Key(genRow(10)).followingKey(PartialKey.ROW), false),
         LocalityGroupUtil.EMPTY_CF_SET, false);
-    TreeMap<Key,Value> expectedData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expectedData = new TreeMap<>();
     genRow(expectedData, 10, 4, 10);
 
-    TreeMap<Key,Value> filteredData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> filteredData = new TreeMap<>();
     while (lrfi.hasTop()) {
       filteredData.put(lrfi.getTopKey(), lrfi.getTopValue());
       lrfi.next();
@@ -143,14 +143,14 @@
   }
 
   public void testCompaction() throws Exception {
-    TreeMap<Key,Value> testData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> testData = new TreeMap<>();
 
     genTestData(testData, 20);
 
     LargeRowFilter lrfi = setupIterator(testData, 13, IteratorScope.majc);
     lrfi.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
 
-    TreeMap<Key,Value> compactedData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> compactedData = new TreeMap<>();
     while (lrfi.hasTop()) {
       compactedData.put(lrfi.getTopKey(), lrfi.getTopValue());
       lrfi.next();
@@ -167,10 +167,10 @@
     lrfi.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
 
     // only expect to see 13 rows
-    TreeMap<Key,Value> expectedData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expectedData = new TreeMap<>();
     genTestData(expectedData, 13);
 
-    TreeMap<Key,Value> filteredData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> filteredData = new TreeMap<>();
     while (lrfi.hasTop()) {
       filteredData.put(lrfi.getTopKey(), lrfi.getTopValue());
       lrfi.next();
@@ -185,7 +185,7 @@
     assertFalse(lrfi.hasTop());
 
     // test seeking w/ column families
-    HashSet<ByteSequence> colfams = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> colfams = new HashSet<>();
     colfams.add(new ArrayByteSequence("cf001"));
     lrfi.seek(new Range(new Key(genRow(15), "cf001", genCQ(4), 5), true, new Key(genRow(15)).followingKey(PartialKey.ROW), false), colfams, true);
     assertFalse(lrfi.hasTop());
@@ -194,20 +194,20 @@
   // in other test data is generated in such a way that once a row
   // is suppressed, all subsequent rows are suppressed
   public void testSuppressInner() throws Exception {
-    TreeMap<Key,Value> testData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> testData = new TreeMap<>();
     genRow(testData, 1, 0, 2);
     genRow(testData, 2, 0, 50);
     genRow(testData, 3, 0, 15);
     genRow(testData, 4, 0, 5);
 
-    TreeMap<Key,Value> expectedData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expectedData = new TreeMap<>();
     genRow(expectedData, 1, 0, 2);
     genRow(expectedData, 4, 0, 5);
 
     LargeRowFilter lrfi = setupIterator(testData, 13, IteratorScope.scan);
     lrfi.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
 
-    TreeMap<Key,Value> filteredData = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> filteredData = new TreeMap<>();
     while (lrfi.hasTop()) {
       filteredData.put(lrfi.getTopKey(), lrfi.getTopValue());
       lrfi.next();
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/RegExFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/RegExFilterTest.java
index 2649f90..f31514a 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/RegExFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/RegExFilterTest.java
@@ -17,40 +17,27 @@
 package org.apache.accumulo.core.iterators.user;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.TreeMap;
 
-import junit.framework.TestCase;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableExistsException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.DefaultIteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedMapIterator;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class RegExFilterTest extends TestCase {
+public class RegExFilterTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   private Key nkv(TreeMap<Key,Value> tm, String row, String cf, String cq, String val) {
     Key k = nk(row, cf, cq);
@@ -62,8 +49,9 @@
     return new Key(new Text(row), new Text(cf), new Text(cq));
   }
 
+  @Test
   public void test1() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     Key k1 = nkv(tm, "boo1", "yup", "20080201", "dog");
     Key k2 = nkv(tm, "boo1", "yap", "20080202", "cat");
@@ -253,8 +241,8 @@
   }
 
   @Test
-  public void testNullByteInKey() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
-    String table = "nullRegexTest";
+  public void testNullByteInKey() throws IOException {
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     String s1 = "first", s2 = "second";
     byte[] b1 = s1.getBytes(), b2 = s2.getBytes(), ball;
@@ -263,25 +251,17 @@
     ball[b1.length] = (byte) 0;
     System.arraycopy(b2, 0, ball, b1.length + 1, b2.length);
 
-    Instance instance = new MockInstance();
-    Connector conn = instance.getConnector("root", new PasswordToken(new byte[0]));
-
-    conn.tableOperations().create(table);
-    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
-    Mutation m = new Mutation(ball);
-    m.put(new byte[0], new byte[0], new byte[0]);
-    bw.addMutation(m);
-    bw.close();
+    Key key = new Key(ball, new byte[0], new byte[0], new byte[0], 90, false);
+    Value val = new Value(new byte[0]);
+    tm.put(key, val);
 
     IteratorSetting is = new IteratorSetting(5, RegExFilter.class);
     RegExFilter.setRegexs(is, s2, null, null, null, true, true);
 
-    Scanner scanner = conn.createScanner(table, new Authorizations());
-    scanner.addScanIterator(is);
+    RegExFilter filter = new RegExFilter();
+    filter.init(new SortedMapIterator(tm), is.getOptions(), null);
+    filter.seek(new Range(), EMPTY_COL_FAMS, false);
 
-    assertTrue("Client side iterator couldn't find a match when it should have", scanner.iterator().hasNext());
-
-    conn.tableOperations().attachIterator(table, is);
-    assertTrue("server side iterator couldn't find a match when it should have", conn.createScanner(table, new Authorizations()).iterator().hasNext());
+    assertTrue("iterator couldn't find a match when it should have", filter.hasTop());
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowDeletingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowDeletingIteratorTest.java
index a3c1cca..4ec0269 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowDeletingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowDeletingIteratorTest.java
@@ -16,30 +16,26 @@
  */
 package org.apache.accumulo.core.iterators.user;
 
-import java.io.IOException;
 import java.util.ArrayList;
 import java.util.HashSet;
 import java.util.TreeMap;
 
-import junit.framework.TestCase;
-
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
 import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.SortedMapIterator;
 import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.hadoop.io.Text;
 
+import junit.framework.TestCase;
+
 public class RowDeletingIteratorTest extends TestCase {
 
-  public static class TestIE implements IteratorEnvironment {
+  public static class TestIE extends BaseIteratorEnvironment {
 
     private IteratorScope scope;
     private boolean fmc;
@@ -50,11 +46,6 @@
     }
 
     @Override
-    public AccumuloConfiguration getConfig() {
-      return null;
-    }
-
-    @Override
     public IteratorScope getIteratorScope() {
       return scope;
     }
@@ -63,19 +54,6 @@
     public boolean isFullMajorCompaction() {
       return fmc;
     }
-
-    @Override
-    public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
-      return null;
-    }
-
-    @Override
-    public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {}
-
-    @Override
-    public Authorizations getAuthorizations() {
-      return null;
-    }
   }
 
   Key nk(String row, String cf, String cq, long time) {
@@ -98,7 +76,7 @@
 
   public void test1() throws Exception {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "", "", 10, RowDeletingIterator.DELETE_ROW_VALUE);
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq3", 5, "v1");
@@ -137,7 +115,7 @@
 
   public void test2() throws Exception {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "", "", 10, RowDeletingIterator.DELETE_ROW_VALUE);
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq3", 15, "v1");
@@ -178,7 +156,7 @@
 
   public void test3() throws Exception {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "", "", 10, RowDeletingIterator.DELETE_ROW_VALUE);
     put(tm1, "r1", "", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
@@ -188,7 +166,7 @@
     RowDeletingIterator rdi = new RowDeletingIterator();
     rdi.init(new ColumnFamilySkippingIterator(new SortedMapIterator(tm1)), null, new TestIE(IteratorScope.scan, false));
 
-    HashSet<ByteSequence> cols = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> cols = new HashSet<>();
     cols.add(new ArrayByteSequence("cf1".getBytes()));
 
     rdi.seek(new Range(), cols, true);
@@ -208,7 +186,7 @@
 
   public void test4() throws Exception {
 
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
     put(tm1, "r1", "", "", 10, RowDeletingIterator.DELETE_ROW_VALUE);
     put(tm1, "r1", "cf1", "cq1", 5, "v1");
     put(tm1, "r1", "cf1", "cq3", 15, "v1");
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowEncodingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowEncodingIteratorTest.java
index 8f228f5..d531517 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowEncodingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowEncodingIteratorTest.java
@@ -16,26 +16,15 @@
  */
 package org.apache.accumulo.core.iterators.user;
 
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.data.ByteSequence;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorEnvironment;
-import org.apache.accumulo.core.iterators.IteratorUtil;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-import org.apache.accumulo.core.iterators.SortedMapIterator;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.commons.collections.BufferOverflowException;
-import org.apache.hadoop.io.Text;
-import org.junit.Test;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
-
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
@@ -43,23 +32,20 @@
 import java.util.SortedMap;
 import java.util.TreeMap;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorUtil;
+import org.apache.accumulo.core.iterators.SortedMapIterator;
+import org.apache.commons.collections.BufferOverflowException;
+import org.apache.hadoop.io.Text;
+import org.junit.Test;
 
 public class RowEncodingIteratorTest {
 
-  private static final class DummyIteratorEnv implements IteratorEnvironment {
-    @Override
-    public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
-      return null;
-    }
-
-    @Override
-    public AccumuloConfiguration getConfig() {
-      return null;
-    }
-
+  private static final class DummyIteratorEnv extends BaseIteratorEnvironment {
     @Override
     public IteratorUtil.IteratorScope getIteratorScope() {
       return IteratorUtil.IteratorScope.scan;
@@ -69,16 +55,6 @@
     public boolean isFullMajorCompaction() {
       return false;
     }
-
-    @Override
-    public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
-
-    }
-
-    @Override
-    public Authorizations getAuthorizations() {
-      return null;
-    }
   }
 
   private static final class RowEncodingIteratorImpl extends RowEncodingIterator {
@@ -86,9 +62,9 @@
     public static SortedMap<Key,Value> decodeRow(Key rowKey, Value rowValue) throws IOException {
       DataInputStream dis = new DataInputStream(new ByteArrayInputStream(rowValue.get()));
       int numKeys = dis.readInt();
-      List<Key> decodedKeys = new ArrayList<Key>();
-      List<Value> decodedValues = new ArrayList<Value>();
-      SortedMap<Key,Value> out = new TreeMap<Key,Value>();
+      List<Key> decodedKeys = new ArrayList<>();
+      List<Value> decodedValues = new ArrayList<>();
+      SortedMap<Key,Value> out = new TreeMap<>();
       for (int i = 0; i < numKeys; i++) {
         Key k = new Key();
         k.readFields(dis);
@@ -139,26 +115,26 @@
   public void testEncodeAll() throws IOException {
     byte[] kbVal = new byte[1024];
     // This code is shamelessly borrowed from the WholeRowIteratorTest.
-    SortedMap<Key,Value> map1 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map1 = new TreeMap<>();
     pkv(map1, "row1", "cf1", "cq1", "cv1", 5, kbVal);
     pkv(map1, "row1", "cf1", "cq2", "cv1", 6, kbVal);
 
-    SortedMap<Key,Value> map2 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map2 = new TreeMap<>();
     pkv(map2, "row2", "cf1", "cq1", "cv1", 5, kbVal);
     pkv(map2, "row2", "cf1", "cq2", "cv1", 6, kbVal);
 
-    SortedMap<Key,Value> map3 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map3 = new TreeMap<>();
     pkv(map3, "row3", "cf1", "cq1", "cv1", 5, kbVal);
     pkv(map3, "row3", "cf1", "cq2", "cv1", 6, kbVal);
 
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     map.putAll(map1);
     map.putAll(map2);
     map.putAll(map3);
     SortedMapIterator src = new SortedMapIterator(map);
     Range range = new Range(new Text("row1"), true, new Text("row2"), true);
     RowEncodingIteratorImpl iter = new RowEncodingIteratorImpl();
-    Map<String,String> bigBufferOpts = new HashMap<String,String>();
+    Map<String,String> bigBufferOpts = new HashMap<>();
     bigBufferOpts.put(RowEncodingIterator.MAX_BUFFER_SIZE_OPT, "3K");
     iter.init(src, bigBufferOpts, new DummyIteratorEnv());
     iter.seek(range, new ArrayList<ByteSequence>(), false);
@@ -183,16 +159,16 @@
   public void testEncodeSome() throws IOException {
     byte[] kbVal = new byte[1024];
     // This code is shamelessly borrowed from the WholeRowIteratorTest.
-    SortedMap<Key,Value> map1 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map1 = new TreeMap<>();
     pkv(map1, "row1", "cf1", "cq1", "cv1", 5, kbVal);
     pkv(map1, "row1", "cf1", "cq2", "cv1", 6, kbVal);
 
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     map.putAll(map1);
     SortedMapIterator src = new SortedMapIterator(map);
     Range range = new Range(new Text("row1"), true, new Text("row2"), true);
     RowEncodingIteratorImpl iter = new RowEncodingIteratorImpl();
-    Map<String,String> bigBufferOpts = new HashMap<String,String>();
+    Map<String,String> bigBufferOpts = new HashMap<>();
     bigBufferOpts.put(RowEncodingIterator.MAX_BUFFER_SIZE_OPT, "1K");
     iter.init(src, bigBufferOpts, new DummyIteratorEnv());
     iter.seek(range, new ArrayList<ByteSequence>(), false);
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java
index 7914ec0..4294eb3 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/RowFilterTest.java
@@ -25,17 +25,10 @@
 import java.util.HashSet;
 import java.util.LinkedList;
 import java.util.List;
-import java.util.Map.Entry;
 import java.util.Set;
 import java.util.TreeMap;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.ColumnUpdate;
 import org.apache.accumulo.core.data.Key;
@@ -45,13 +38,11 @@
 import org.apache.accumulo.core.iterators.DefaultIteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.SortedMapIterator;
-import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-/**
- *
- */
+import com.google.common.collect.ImmutableSet;
 
 public class RowFilterTest {
 
@@ -74,7 +65,7 @@
       }
 
       // ensure that seeks are confined to the row
-      rowIterator.seek(new Range(), new HashSet<ByteSequence>(), false);
+      rowIterator.seek(new Range(null, false, firstKey == null ? null : firstKey.getRow(), false), new HashSet<ByteSequence>(), false);
       while (rowIterator.hasTop()) {
         sum2 += Integer.parseInt(rowIterator.getTopValue().toString());
         rowIterator.next();
@@ -86,13 +77,13 @@
         rowIterator.next();
       }
 
-      return sum == 2 && sum2 == 2;
+      return sum == 2 && sum2 == 0;
     }
 
   }
 
   public static class RowZeroOrOneFilter extends RowFilter {
-    private static final Set<String> passRows = new HashSet<String>(Arrays.asList("0", "1"));
+    private static final Set<String> passRows = new HashSet<>(Arrays.asList("0", "1"));
 
     @Override
     public boolean acceptRow(SortedKeyValueIterator<Key,Value> rowIterator) throws IOException {
@@ -101,7 +92,7 @@
   }
 
   public static class RowOneOrTwoFilter extends RowFilter {
-    private static final Set<String> passRows = new HashSet<String>(Arrays.asList("1", "2"));
+    private static final Set<String> passRows = new HashSet<>(Arrays.asList("1", "2"));
 
     @Override
     public boolean acceptRow(SortedKeyValueIterator<Key,Value> rowIterator) throws IOException {
@@ -117,7 +108,7 @@
   }
 
   public List<Mutation> createMutations() {
-    List<Mutation> mutations = new LinkedList<Mutation>();
+    List<Mutation> mutations = new LinkedList<>();
     Mutation m = new Mutation("0");
     m.put("cf1", "cq1", "1");
     m.put("cf1", "cq2", "1");
@@ -134,7 +125,7 @@
 
     m = new Mutation("1");
     m.put("cf1", "cq1", "1");
-    m.put("cf1", "cq2", "2");
+    m.put("cf2", "cq2", "2");
     mutations.add(m);
 
     m = new Mutation("2");
@@ -144,7 +135,7 @@
 
     m = new Mutation("3");
     m.put("cf1", "cq1", "0");
-    m.put("cf1", "cq2", "2");
+    m.put("cf2", "cq2", "2");
     mutations.add(m);
 
     m = new Mutation("4");
@@ -166,7 +157,7 @@
 
   public TreeMap<Key,Value> createKeyValues() {
     List<Mutation> mutations = createMutations();
-    TreeMap<Key,Value> keyValues = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> keyValues = new TreeMap<>();
 
     final Text cf = new Text(), cq = new Text();
     for (Mutation m : mutations) {
@@ -187,78 +178,63 @@
 
   @Test
   public void test1() throws Exception {
-    MockInstance instance = new MockInstance("rft1");
-    Connector conn = instance.getConnector("", new PasswordToken(""));
+    ColumnFamilySkippingIterator source = new ColumnFamilySkippingIterator(new SortedMapIterator(createKeyValues()));
 
-    conn.tableOperations().create("table1");
-    BatchWriter bw = conn.createBatchWriter("table1", new BatchWriterConfig());
+    RowFilter filter = new SummingRowFilter();
+    filter.init(source, Collections.<String,String> emptyMap(), new DefaultIteratorEnvironment());
 
-    for (Mutation m : createMutations()) {
-      bw.addMutation(m);
-    }
-    IteratorSetting is = new IteratorSetting(40, SummingRowFilter.class);
-    conn.tableOperations().attachIterator("table1", is);
+    filter.seek(new Range(), Collections.<ByteSequence> emptySet(), false);
 
-    Scanner scanner = conn.createScanner("table1", Authorizations.EMPTY);
-    assertEquals(new HashSet<String>(Arrays.asList("2", "3")), getRows(scanner));
+    assertEquals(new HashSet<>(Arrays.asList("2", "3")), getRows(filter));
 
-    scanner.fetchColumn(new Text("cf1"), new Text("cq2"));
-    assertEquals(new HashSet<String>(Arrays.asList("1", "3")), getRows(scanner));
+    ByteSequence cf = new ArrayByteSequence("cf2");
 
-    scanner.clearColumns();
-    scanner.fetchColumn(new Text("cf1"), new Text("cq1"));
-    assertEquals(new HashSet<String>(), getRows(scanner));
+    filter.seek(new Range(), ImmutableSet.of(cf), true);
+    assertEquals(new HashSet<>(Arrays.asList("1", "3", "0", "4")), getRows(filter));
 
-    scanner.setRange(new Range("0", "4"));
-    scanner.clearColumns();
-    assertEquals(new HashSet<String>(Arrays.asList("2", "3")), getRows(scanner));
+    filter.seek(new Range("0", "4"), Collections.<ByteSequence> emptySet(), false);
+    assertEquals(new HashSet<>(Arrays.asList("2", "3")), getRows(filter));
 
-    scanner.setRange(new Range("2"));
-    scanner.clearColumns();
-    assertEquals(new HashSet<String>(Arrays.asList("2")), getRows(scanner));
+    filter.seek(new Range("2"), Collections.<ByteSequence> emptySet(), false);
+    assertEquals(new HashSet<>(Arrays.asList("2")), getRows(filter));
 
-    scanner.setRange(new Range("4"));
-    scanner.clearColumns();
-    assertEquals(new HashSet<String>(), getRows(scanner));
+    filter.seek(new Range("4"), Collections.<ByteSequence> emptySet(), false);
+    assertEquals(new HashSet<String>(), getRows(filter));
 
-    scanner.setRange(new Range("4"));
-    scanner.clearColumns();
-    scanner.fetchColumn(new Text("cf1"), new Text("cq2"));
-    scanner.fetchColumn(new Text("cf1"), new Text("cq4"));
-    assertEquals(new HashSet<String>(Arrays.asList("4")), getRows(scanner));
+    filter.seek(new Range("4"), ImmutableSet.of(cf), true);
+    assertEquals(new HashSet<>(Arrays.asList("4")), getRows(filter));
 
   }
 
   @Test
   public void testChainedRowFilters() throws Exception {
-    MockInstance instance = new MockInstance("rft1");
-    Connector conn = instance.getConnector("", new PasswordToken(""));
+    SortedMapIterator source = new SortedMapIterator(createKeyValues());
 
-    conn.tableOperations().create("chained_row_filters");
-    BatchWriter bw = conn.createBatchWriter("chained_row_filters", new BatchWriterConfig());
-    for (Mutation m : createMutations()) {
-      bw.addMutation(m);
-    }
-    conn.tableOperations().attachIterator("chained_row_filters", new IteratorSetting(40, "trueFilter1", TrueFilter.class));
-    conn.tableOperations().attachIterator("chained_row_filters", new IteratorSetting(41, "trueFilter2", TrueFilter.class));
-    Scanner scanner = conn.createScanner("chained_row_filters", Authorizations.EMPTY);
-    assertEquals(new HashSet<String>(Arrays.asList("0", "1", "2", "3", "4")), getRows(scanner));
+    RowFilter filter0 = new TrueFilter();
+    filter0.init(source, Collections.<String,String> emptyMap(), new DefaultIteratorEnvironment());
+
+    RowFilter filter = new TrueFilter();
+    filter.init(filter0, Collections.<String,String> emptyMap(), new DefaultIteratorEnvironment());
+
+    filter.seek(new Range(), Collections.<ByteSequence> emptySet(), false);
+
+    assertEquals(new HashSet<>(Arrays.asList("0", "1", "2", "3", "4")), getRows(filter));
   }
 
   @Test
   public void testFilterConjunction() throws Exception {
-    MockInstance instance = new MockInstance("rft1");
-    Connector conn = instance.getConnector("", new PasswordToken(""));
 
-    conn.tableOperations().create("filter_conjunction");
-    BatchWriter bw = conn.createBatchWriter("filter_conjunction", new BatchWriterConfig());
-    for (Mutation m : createMutations()) {
-      bw.addMutation(m);
-    }
-    conn.tableOperations().attachIterator("filter_conjunction", new IteratorSetting(40, "rowZeroOrOne", RowZeroOrOneFilter.class));
-    conn.tableOperations().attachIterator("filter_conjunction", new IteratorSetting(41, "rowOneOrTwo", RowOneOrTwoFilter.class));
-    Scanner scanner = conn.createScanner("filter_conjunction", Authorizations.EMPTY);
-    assertEquals(new HashSet<String>(Arrays.asList("1")), getRows(scanner));
+    SortedMapIterator source = new SortedMapIterator(createKeyValues());
+
+    RowFilter filter0 = new RowZeroOrOneFilter();
+    filter0.init(source, Collections.<String,String> emptyMap(), new DefaultIteratorEnvironment());
+
+    RowFilter filter = new RowOneOrTwoFilter();
+    filter.init(filter0, Collections.<String,String> emptyMap(), new DefaultIteratorEnvironment());
+
+    filter.seek(new Range(), Collections.<ByteSequence> emptySet(), false);
+
+    assertEquals(new HashSet<>(Arrays.asList("1")), getRows(filter));
   }
 
   @Test
@@ -307,10 +283,11 @@
     assertTrue("Expected next key read to be greater than the previous after deepCopy", lastKeyRead.compareTo(finalKeyRead) < 0);
   }
 
-  private HashSet<String> getRows(Scanner scanner) {
-    HashSet<String> rows = new HashSet<String>();
-    for (Entry<Key,Value> entry : scanner) {
-      rows.add(entry.getKey().getRow().toString());
+  private HashSet<String> getRows(RowFilter filter) throws IOException {
+    HashSet<String> rows = new HashSet<>();
+    while (filter.hasTop()) {
+      rows.add(filter.getTopKey().getRowData().toString());
+      filter.next();
     }
     return rows;
   }
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSlice.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSlice.java
new file mode 100644
index 0000000..20d9bcb
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSlice.java
@@ -0,0 +1,415 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators.user;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.accumulo.core.client.lexicoder.Lexicoder;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.SortedMapIterator;
+import org.apache.accumulo.core.iterators.ValueFormatException;
+import org.apache.hadoop.io.Text;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public abstract class TestCfCqSlice {
+
+  private static final Range INFINITY = new Range();
+  private static final Lexicoder<Long> LONG_LEX = new ReadableLongLexicoder(4);
+  private static final AtomicLong ROW_ID_GEN = new AtomicLong();
+
+  private static final boolean easyThereSparky = false;
+  private static final int LR_DIM = easyThereSparky ? 5 : 50;
+
+  private static final Map<String,String> EMPTY_OPTS = Collections.emptyMap();
+  private static final Set<ByteSequence> EMPTY_CF_SET = Collections.emptySet();
+
+  protected abstract Class<? extends SortedKeyValueIterator<Key,Value>> getFilterClass();
+
+  private static TreeMap<Key,Value> data;
+
+  @BeforeClass
+  public static void setupData() {
+    data = createMap(LR_DIM, LR_DIM, LR_DIM);
+  }
+
+  @AfterClass
+  public static void clearData() {
+    data = null;
+  }
+
+  @Test
+  public void testAllRowsFullSlice() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    loadKvs(foundKvs, EMPTY_OPTS, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testSingleRowFullSlice() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    int rowId = LR_DIM / 2;
+    loadKvs(foundKvs, EMPTY_OPTS, Range.exact(new Text(LONG_LEX.encode((long) rowId))));
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (rowId == i) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testAllRowsSlice() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCf = 20;
+    long sliceMinCq = 30;
+    long sliceMaxCf = 25;
+    long sliceMaxCq = 35;
+    assertTrue("slice param must be less than LR_DIM", sliceMinCf < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMinCq < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMaxCf < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMaxCq < LR_DIM);
+    Map<String,String> opts = new HashMap<>();
+    opts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    loadKvs(foundKvs, opts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (j >= sliceMinCf && j <= sliceMaxCf && k >= sliceMinCq && k <= sliceMaxCq) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testSingleColumnSlice() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCf = 20;
+    long sliceMinCq = 20;
+    long sliceMaxCf = 20;
+    long sliceMaxCq = 20;
+    Map<String,String> opts = new HashMap<>();
+    opts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    loadKvs(foundKvs, opts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (j == sliceMinCf && k == sliceMinCq) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testSingleColumnSliceByExclude() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCf = 20;
+    long sliceMinCq = 20;
+    long sliceMaxCf = 22;
+    long sliceMaxCq = 22;
+    Map<String,String> opts = new HashMap<>();
+    opts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_INCLUSIVE, "false");
+    opts.put(CfCqSliceOpts.OPT_MIN_INCLUSIVE, "false");
+    loadKvs(foundKvs, opts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (j == 21 && k == 21) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testAllCfsCqSlice() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCq = 10;
+    long sliceMaxCq = 30;
+    Map<String,String> opts = new HashMap<>();
+    opts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    loadKvs(foundKvs, opts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (k >= sliceMinCq && k <= sliceMaxCq) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testSliceCfsAllCqs() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCf = 10;
+    long sliceMaxCf = 30;
+    Map<String,String> opts = new HashMap<>();
+    opts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    loadKvs(foundKvs, opts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (j >= sliceMinCf && j <= sliceMaxCf) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testEmptySlice() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCf = LR_DIM + 1;
+    long sliceMinCq = LR_DIM + 1;
+    long sliceMaxCf = LR_DIM + 1;
+    long sliceMaxCq = LR_DIM + 1;
+    Map<String,String> opts = new HashMap<>();
+    opts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_INCLUSIVE, "false");
+    opts.put(CfCqSliceOpts.OPT_MIN_INCLUSIVE, "false");
+    loadKvs(foundKvs, opts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testStackedFilters() throws Exception {
+    Map<String,String> firstOpts = new HashMap<>();
+    Map<String,String> secondOpts = new HashMap<>();
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCf = 20;
+    long sliceMaxCf = 25;
+    long sliceMinCq = 30;
+    long sliceMaxCq = 35;
+    assertTrue("slice param must be less than LR_DIM", sliceMinCf < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMinCq < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMaxCf < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMaxCq < LR_DIM);
+    firstOpts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    firstOpts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    secondOpts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    secondOpts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    SortedKeyValueIterator<Key,Value> skvi = getFilterClass().newInstance();
+    skvi.init(new SortedMapIterator(data), firstOpts, null);
+    loadKvs(skvi.deepCopy(null), foundKvs, secondOpts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (j >= sliceMinCf && j <= sliceMaxCf && k >= sliceMinCq && k <= sliceMaxCq) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  @Test
+  public void testSeekMinExclusive() throws Exception {
+    boolean[][][] foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    long sliceMinCf = 20;
+    long sliceMinCq = 30;
+    long sliceMaxCf = 25;
+    long sliceMaxCq = 35;
+    assertTrue("slice param must be less than LR_DIM", sliceMinCf < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMinCq < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMaxCf < LR_DIM);
+    assertTrue("slice param must be less than LR_DIM", sliceMaxCq < LR_DIM);
+    Map<String,String> opts = new HashMap<>();
+    opts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MIN_INCLUSIVE, "false");
+    opts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    Range startsAtMinCf = new Range(new Key(LONG_LEX.encode(0L), LONG_LEX.encode(sliceMinCf), LONG_LEX.encode(sliceMinCq), new byte[] {}, Long.MAX_VALUE), null);
+    loadKvs(foundKvs, opts, startsAtMinCf);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (j > sliceMinCf && j <= sliceMaxCf && k > sliceMinCq && k <= sliceMaxCq) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+    foundKvs = new boolean[LR_DIM][LR_DIM][LR_DIM];
+    sliceMinCq = 0;
+    sliceMaxCq = 10;
+    opts.put(CfCqSliceOpts.OPT_MIN_CF, new String(LONG_LEX.encode(sliceMinCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MIN_INCLUSIVE, "false");
+    opts.put(CfCqSliceOpts.OPT_MIN_CQ, new String(LONG_LEX.encode(sliceMinCq), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CF, new String(LONG_LEX.encode(sliceMaxCf), UTF_8));
+    opts.put(CfCqSliceOpts.OPT_MAX_CQ, new String(LONG_LEX.encode(sliceMaxCq), UTF_8));
+    loadKvs(foundKvs, opts, INFINITY);
+    for (int i = 0; i < LR_DIM; i++) {
+      for (int j = 0; j < LR_DIM; j++) {
+        for (int k = 0; k < LR_DIM; k++) {
+          if (j > sliceMinCf && j <= sliceMaxCf && k > sliceMinCq && k <= sliceMaxCq) {
+            assertTrue("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must be found in scan", foundKvs[i][j][k]);
+          } else {
+            assertFalse("(r, cf, cq) == (" + i + ", " + j + ", " + k + ") must not be found in scan", foundKvs[i][j][k]);
+          }
+        }
+      }
+    }
+  }
+
+  private void loadKvs(boolean[][][] foundKvs, Map<String,String> options, Range range) {
+    loadKvs(new SortedMapIterator(data), foundKvs, options, range);
+  }
+
+  private void loadKvs(SortedKeyValueIterator<Key,Value> parent, boolean[][][] foundKvs, Map<String,String> options, Range range) {
+    try {
+      SortedKeyValueIterator<Key,Value> skvi = getFilterClass().newInstance();
+      skvi.init(parent, options, null);
+      skvi.seek(range, EMPTY_CF_SET, false);
+
+      Random random = new Random();
+
+      while (skvi.hasTop()) {
+        Key k = skvi.getTopKey();
+        int row = LONG_LEX.decode(k.getRow().copyBytes()).intValue();
+        int cf = LONG_LEX.decode(k.getColumnFamily().copyBytes()).intValue();
+        int cq = LONG_LEX.decode(k.getColumnQualifier().copyBytes()).intValue();
+
+        assertFalse("Duplicate " + row + " " + cf + " " + cq, foundKvs[row][cf][cq]);
+        foundKvs[row][cf][cq] = true;
+
+        if (random.nextInt(100) == 0) {
+          skvi.seek(new Range(k, false, range.getEndKey(), range.isEndKeyInclusive()), EMPTY_CF_SET, false);
+        } else {
+          skvi.next();
+        }
+      }
+
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  /**
+   * Rows 0..(LR_DIM - 1) will each have LR_DIM CFs, each with LR_DIM CQs.
+   *
+   * For instance, if LR_DIM is 3, the values laid out by row r and column (cf, cq) are:
+   *
+   * <pre>
+   *   r | (0,0) (0,1) (0,2) (1,0) (1,1) (1,2) (2,0) (2,1) (2,2)
+   *   0 |   0     1     2     3     4     5     6     7     8
+   *   1 |   9    10    11    12    13    14    15    16    17
+   *   2 |  18    19    20    21    22    23    24    25    26
+   * </pre>
+   */
+  static TreeMap<Key,Value> createMap(int numRows, int numCfs, int numCqs) {
+    TreeMap<Key,Value> data = new TreeMap<>();
+    for (int i = 0; i < numRows; i++) {
+      byte[] rowId = LONG_LEX.encode(ROW_ID_GEN.getAndIncrement());
+      for (int j = 0; j < numCfs; j++) {
+        for (int k = 0; k < numCqs; k++) {
+          byte[] cf = LONG_LEX.encode((long) j);
+          byte[] cq = LONG_LEX.encode((long) k);
+          byte[] val = LONG_LEX.encode((long) (i * numCfs + j * numCqs + k));
+          data.put(new Key(rowId, cf, cq, new byte[0], 9), new Value(val));
+        }
+      }
+    }
+    return data;
+  }
+
+  static class ReadableLongLexicoder implements Lexicoder<Long> {
+    final String fmtStr;
+
+    public ReadableLongLexicoder() {
+      this(20);
+    }
+
+    public ReadableLongLexicoder(int numDigits) {
+      fmtStr = "%0" + numDigits + "d";
+    }
+
+    @Override
+    public byte[] encode(Long l) {
+      return String.format(fmtStr, l).getBytes(UTF_8);
+    }
+
+    @Override
+    public Long decode(byte[] b) throws ValueFormatException {
+      return Long.parseLong(new String(b, UTF_8));
+    }
+  }
+}
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSliceFilter.java
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSliceFilter.java
index 01f5fa8..3f92963 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSliceFilter.java
@@ -14,19 +14,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+package org.apache.accumulo.core.iterators.user;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
-
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
+public class TestCfCqSliceFilter extends TestCfCqSlice {
+  @Override
+  protected Class<CfCqSliceFilter> getFilterClass() {
+    return CfCqSliceFilter.class;
   }
 }
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSliceSeekingFilter.java
similarity index 67%
rename from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
rename to core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSliceSeekingFilter.java
index 01f5fa8..314b510 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/TestCfCqSliceSeekingFilter.java
@@ -14,19 +14,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+package org.apache.accumulo.core.iterators.user;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
-
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
+public class TestCfCqSliceSeekingFilter extends TestCfCqSlice {
+  @Override
+  protected Class<CfCqSliceSeekingFilter> getFilterClass() {
+    return CfCqSliceSeekingFilter.class;
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java
index 758f718..d02b7f2 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/TransformingIteratorTest.java
@@ -20,81 +20,89 @@
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
+
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
 
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
 import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.SortedMapIterator;
 import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
+import org.apache.accumulo.core.iterators.system.VisibilityFilter;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.hadoop.io.Text;
+import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
+import com.google.common.collect.ImmutableMap;
+
 public class TransformingIteratorTest {
-  private static final String TABLE_NAME = "test_table";
+
   private static Authorizations authorizations = new Authorizations("vis0", "vis1", "vis2", "vis3", "vis4");
-  private Connector connector;
-  private Scanner scanner;
+  private static final Map<String,String> EMPTY_OPTS = ImmutableMap.of();
+  private TransformingIterator titer;
+
+  private TreeMap<Key,Value> data = new TreeMap<>();
 
   @Before
-  public void setUpMockAccumulo() throws Exception {
-    MockInstance instance = new MockInstance("test");
-    connector = instance.getConnector("user", new PasswordToken("password"));
-    connector.securityOperations().changeUserAuthorizations("user", authorizations);
-
-    if (connector.tableOperations().exists(TABLE_NAME))
-      connector.tableOperations().delete(TABLE_NAME);
-    connector.tableOperations().create(TABLE_NAME);
-    BatchWriterConfig bwCfg = new BatchWriterConfig();
-    bwCfg.setMaxWriteThreads(1);
-
-    BatchWriter bw = connector.createBatchWriter(TABLE_NAME, bwCfg);
-    bw.addMutation(createDefaultMutation("row1"));
-    bw.addMutation(createDefaultMutation("row2"));
-    bw.addMutation(createDefaultMutation("row3"));
-
-    bw.flush();
-    bw.close();
-
-    scanner = connector.createScanner(TABLE_NAME, authorizations);
-    scanner.addScanIterator(new IteratorSetting(20, ReuseIterator.class));
+  public void createData() throws Exception {
+    data.clear();
+    generateRow(data, "row1");
+    generateRow(data, "row2");
+    generateRow(data, "row3");
   }
 
-  private void setUpTransformIterator(Class<? extends TransformingIterator> clazz) {
-    IteratorSetting cfg = new IteratorSetting(21, clazz);
-    cfg.setName("keyTransformIter");
-    TransformingIterator.setAuthorizations(cfg, new Authorizations("vis0", "vis1", "vis2", "vis3"));
-    scanner.addScanIterator(cfg);
+  private void setUpTransformIterator(Class<? extends TransformingIterator> clazz) throws IOException {
+    setUpTransformIterator(clazz, true);
+  }
+
+  private void setUpTransformIterator(Class<? extends TransformingIterator> clazz, boolean setupAuths) throws IOException {
+    SortedMapIterator source = new SortedMapIterator(data);
+    ColumnFamilySkippingIterator cfsi = new ColumnFamilySkippingIterator(source);
+    VisibilityFilter visFilter = new VisibilityFilter(cfsi, authorizations, new byte[0]);
+    ReuseIterator reuserIter = new ReuseIterator();
+    reuserIter.init(visFilter, EMPTY_OPTS, null);
+    try {
+      titer = clazz.newInstance();
+    } catch (InstantiationException | IllegalAccessException e) {
+      throw new RuntimeException(e);
+    }
+
+    IteratorEnvironment iterEnv = EasyMock.createMock(IteratorEnvironment.class);
+    EasyMock.expect(iterEnv.getIteratorScope()).andReturn(IteratorScope.scan).anyTimes();
+    EasyMock.replay(iterEnv);
+
+    Map<String,String> opts;
+    if (setupAuths) {
+      IteratorSetting cfg = new IteratorSetting(21, clazz);
+      TransformingIterator.setAuthorizations(cfg, new Authorizations("vis0", "vis1", "vis2", "vis3"));
+      opts = cfg.getOptions();
+    } else {
+      opts = ImmutableMap.of();
+    }
+    titer.init(reuserIter, opts, iterEnv);
   }
 
   @Test
@@ -105,7 +113,7 @@
     // the same key/value pair for every getTopKey/getTopValue call. The code
     // will always return the final key/value if we didn't copy the original key
     // in the iterator.
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     for (int row = 1; row <= 3; ++row) {
       for (int cf = 1; cf <= 3; ++cf) {
         for (int cq = 1; cq <= 3; ++cq) {
@@ -121,19 +129,18 @@
 
   @Test
   public void testNoRangeScan() throws Exception {
-    List<Class<? extends ReversingKeyTransformingIterator>> classes = new ArrayList<Class<? extends ReversingKeyTransformingIterator>>();
+    List<Class<? extends ReversingKeyTransformingIterator>> classes = new ArrayList<>();
     classes.add(ColFamReversingKeyTransformingIterator.class);
     classes.add(ColQualReversingKeyTransformingIterator.class);
     classes.add(ColVisReversingKeyTransformingIterator.class);
 
     // Test transforming col fam, col qual, col vis
     for (Class<? extends ReversingKeyTransformingIterator> clazz : classes) {
-      scanner.removeScanIterator("keyTransformIter");
       setUpTransformIterator(clazz);
 
       // All rows with visibilities reversed
       TransformingIterator iter = clazz.newInstance();
-      TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+      TreeMap<Key,Value> expected = new TreeMap<>();
       for (int row = 1; row <= 3; ++row) {
         for (int cf = 1; cf <= 3; ++cf) {
           for (int cq = 1; cq <= 3; ++cq) {
@@ -158,9 +165,8 @@
     // Source data has vis1, vis2, vis3 so vis0 is a new one that is introduced.
     // Make sure it shows up in the output with the default test auths which include
     // vis0.
-    scanner.removeScanIterator("keyTransformIter");
     setUpTransformIterator(ColVisReversingKeyTransformingIterator.class);
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     for (int row = 1; row <= 3; ++row) {
       for (int cf = 1; cf <= 3; ++cf) {
         for (int cq = 1; cq <= 3; ++cq) {
@@ -176,13 +182,10 @@
   @Test
   public void testCreatingIllegalVisbility() throws Exception {
     // illegal visibility created by transform should be filtered on scan, even if evaluation is done
-    IteratorSetting cfg = new IteratorSetting(21, IllegalVisKeyTransformingIterator.class);
-    cfg.setName("keyTransformIter");
-    scanner.addScanIterator(cfg);
+    setUpTransformIterator(IllegalVisKeyTransformingIterator.class, false);
     checkExpected(new TreeMap<Key,Value>());
 
    // ensure illegal vis is suppressed when evaluation is done
-    scanner.removeScanIterator("keyTransformIter");
     setUpTransformIterator(IllegalVisKeyTransformingIterator.class);
     checkExpected(new TreeMap<Key,Value>());
   }
@@ -190,26 +193,24 @@
   @Test
   public void testRangeStart() throws Exception {
     setUpTransformIterator(ColVisReversingKeyTransformingIterator.class);
-    scanner.setRange(new Range(new Key("row1", "cf2", "cq2", "vis1"), true, new Key("row1", "cf2", "cq3"), false));
 
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     putExpected(expected, 1, 2, 2, 1, PartialKey.ROW_COLFAM_COLQUAL); // before the range start, but transforms in the range
     putExpected(expected, 1, 2, 2, 2, PartialKey.ROW_COLFAM_COLQUAL);
 
-    checkExpected(expected);
+    checkExpected(new Range(new Key("row1", "cf2", "cq2", "vis1"), true, new Key("row1", "cf2", "cq3"), false), expected);
   }
 
   @Test
   public void testRangeEnd() throws Exception {
     setUpTransformIterator(ColVisReversingKeyTransformingIterator.class);
-    scanner.setRange(new Range(new Key("row1", "cf2", "cq2"), true, new Key("row1", "cf2", "cq2", "vis2"), false));
 
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     // putExpected(expected, 1, 2, 2, 1, part); // transforms vis outside range end
     putExpected(expected, 1, 2, 2, 2, PartialKey.ROW_COLFAM_COLQUAL);
     putExpected(expected, 1, 2, 2, 3, PartialKey.ROW_COLFAM_COLQUAL);
 
-    checkExpected(expected);
+    checkExpected(new Range(new Key("row1", "cf2", "cq2"), true, new Key("row1", "cf2", "cq2", "vis2"), false), expected);
   }
 
   @Test
@@ -218,13 +219,12 @@
     // Set a range that is before all of the untransformed data. However,
     // the data with untransformed col fam cf3 will transform to cf0 and
     // be inside the range.
-    scanner.setRange(new Range(new Key("row1", "cf0"), true, new Key("row1", "cf1"), false));
 
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     for (int cq = 1; cq <= 3; ++cq)
       for (int cv = 1; cv <= 3; ++cv)
         putExpected(expected, 1, 3, cq, cv, PartialKey.ROW);
-    checkExpected(expected);
+    checkExpected(new Range(new Key("row1", "cf0"), true, new Key("row1", "cf1"), false), expected);
   }
 
   @Test
@@ -232,8 +232,7 @@
     // Set a range that's after all data and make sure we don't
     // somehow return something.
     setUpTransformIterator(ColFamReversingKeyTransformingIterator.class);
-    scanner.setRange(new Range(new Key("row4"), null));
-    checkExpected(new TreeMap<Key,Value>());
+    checkExpected(new Range(new Key("row4"), null), new TreeMap<Key,Value>());
   }
 
   @Test
@@ -266,56 +265,47 @@
     // put in the expectations.
     int expectedCF = 1;
     setUpTransformIterator(ColFamReversingKeyTransformingIterator.class);
-    scanner.fetchColumnFamily(new Text("cf2"));
 
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     for (int row = 1; row <= 3; ++row)
       for (int cq = 1; cq <= 3; ++cq)
         for (int cv = 1; cv <= 3; ++cv)
           putExpected(expected, row, expectedCF, cq, cv, PartialKey.ROW);
-    checkExpected(expected);
+    checkExpected(expected, "cf2");
   }
 
   @Test
   public void testDeepCopy() throws Exception {
-    MockInstance instance = new MockInstance("test");
-    Connector connector = instance.getConnector("user", new PasswordToken("password"));
-
-    connector.tableOperations().create("shard_table");
-
-    BatchWriter bw = connector.createBatchWriter("shard_table", new BatchWriterConfig());
-
     ColumnVisibility vis1 = new ColumnVisibility("vis1");
     ColumnVisibility vis3 = new ColumnVisibility("vis3");
 
-    Mutation m1 = new Mutation("shard001");
-    m1.put("foo", "doc02", vis1, "");
-    m1.put("dog", "doc02", vis3, "");
-    m1.put("cat", "doc02", vis3, "");
+    data.clear();
 
-    m1.put("bar", "doc03", vis1, "");
-    m1.put("dog", "doc03", vis3, "");
-    m1.put("cat", "doc03", vis3, "");
+    Value ev = new Value(new byte[0]);
 
-    bw.addMutation(m1);
-    bw.close();
+    data.put(new Key("shard001", "foo", "doc02", vis1, 78), ev);
+    data.put(new Key("shard001", "dog", "doc02", vis3, 78), ev);
+    data.put(new Key("shard001", "cat", "doc02", vis3, 78), ev);
 
-    BatchScanner bs = connector.createBatchScanner("shard_table", authorizations, 1);
+    data.put(new Key("shard001", "bar", "doc03", vis1, 78), ev);
+    data.put(new Key("shard001", "dog", "doc03", vis3, 78), ev);
+    data.put(new Key("shard001", "cat", "doc03", vis3, 78), ev);
 
-    bs.addScanIterator(new IteratorSetting(21, ColVisReversingKeyTransformingIterator.class));
+    setUpTransformIterator(ColVisReversingKeyTransformingIterator.class);
+
+    IntersectingIterator iiIter = new IntersectingIterator();
     IteratorSetting iicfg = new IteratorSetting(22, IntersectingIterator.class);
     IntersectingIterator.setColumnFamilies(iicfg, new Text[] {new Text("foo"), new Text("dog"), new Text("cat")});
-    bs.addScanIterator(iicfg);
-    bs.setRanges(Collections.singleton(new Range()));
+    iiIter.init(titer, iicfg.getOptions(), null);
 
-    Iterator<Entry<Key,Value>> iter = bs.iterator();
-    assertTrue(iter.hasNext());
-    Key docKey = iter.next().getKey();
+    iiIter.seek(new Range(), new HashSet<ByteSequence>(), false);
+
+    assertTrue(iiIter.hasTop());
+    Key docKey = iiIter.getTopKey();
     assertEquals("shard001", docKey.getRowData().toString());
     assertEquals("doc02", docKey.getColumnQualifierData().toString());
-    assertFalse(iter.hasNext());
-
-    bs.close();
+    iiIter.next();
+    assertFalse(iiIter.hasTop());
   }
 
   @Test
@@ -326,14 +316,13 @@
     // put in the expectations.
     int expectedCF = 1;
     setUpTransformIterator(ColFamReversingCompactionKeyTransformingIterator.class);
-    scanner.fetchColumnFamily(new Text("cf2"));
 
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     for (int row = 1; row <= 3; ++row)
       for (int cq = 1; cq <= 3; ++cq)
         for (int cv = 1; cv <= 3; ++cv)
           putExpected(expected, row, expectedCF, cq, cv, PartialKey.ROW);
-    checkExpected(expected);
+    checkExpected(expected, "cf2");
   }
 
   @Test
@@ -343,7 +332,7 @@
     // should still show up.
     setUpTransformIterator(BadVisCompactionKeyTransformingIterator.class);
 
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
     for (int rowID = 1; rowID <= 3; ++rowID) {
       for (int cfID = 1; cfID <= 3; ++cfID) {
         for (int cqID = 1; cqID <= 3; ++cqID) {
@@ -378,9 +367,12 @@
   public void testDupes() throws Exception {
     setUpTransformIterator(DupeTransformingIterator.class);
 
+    titer.seek(new Range(), new HashSet<ByteSequence>(), false);
+
     int count = 0;
-    for (Entry<Key,Value> entry : scanner) {
-      Key key = entry.getKey();
+    while (titer.hasTop()) {
+      Key key = titer.getTopKey();
+      titer.next();
       assertEquals("cf1", key.getColumnFamily().toString());
       assertEquals("cq1", key.getColumnQualifier().toString());
       assertEquals("", key.getColumnVisibility().toString());
@@ -399,7 +391,7 @@
     TransformingIterator.setMaxBufferSize(is, 10000000);
     Assert.assertTrue(ti.validateOptions(is.getOptions()));
 
-    Map<String,String> opts = new HashMap<String,String>();
+    Map<String,String> opts = new HashMap<>();
 
     opts.put(TransformingIterator.MAX_BUFFER_SIZE_OPT, "10M");
    Assert.assertTrue(ti.validateOptions(opts));
@@ -426,13 +418,31 @@
     return key;
   }
 
-  private void checkExpected(TreeMap<Key,Value> expectedEntries) {
-    for (Entry<Key,Value> entry : scanner) {
-      Entry<Key,Value> expected = expectedEntries.pollFirstEntry();
-      Key actualKey = entry.getKey();
-      Value actualValue = entry.getValue();
+  private void checkExpected(Range range, TreeMap<Key,Value> expectedEntries) throws IOException {
+    checkExpected(range, new HashSet<ByteSequence>(), expectedEntries);
+  }
 
-      assertNotNull("Ran out of expected entries on: " + entry, expected);
+  private void checkExpected(TreeMap<Key,Value> expectedEntries, String... familyNames) throws IOException {
+
+    HashSet<ByteSequence> families = new HashSet<>();
+    for (String family : familyNames) {
+      families.add(new ArrayByteSequence(family));
+    }
+
+    checkExpected(new Range(), families, expectedEntries);
+  }
+
+  private void checkExpected(Range range, Set<ByteSequence> families, TreeMap<Key,Value> expectedEntries) throws IOException {
+
+    titer.seek(range, families, !families.isEmpty());
+
+    while (titer.hasTop()) {
+      Entry<Key,Value> expected = expectedEntries.pollFirstEntry();
+      Key actualKey = titer.getTopKey();
+      Value actualValue = titer.getTopValue();
+      titer.next();
+
+      assertNotNull("Ran out of expected entries on: " + actualKey, expected);
       assertEquals("Key mismatch", expected.getKey(), actualKey);
       assertEquals("Value mismatch", expected.getValue(), actualValue);
     }
@@ -477,8 +487,8 @@
     return new Text(sb.toString());
   }
 
-  private static Mutation createDefaultMutation(String row) {
-    Mutation m = new Mutation(row);
+  private static void generateRow(TreeMap<Key,Value> data, String row) {
+
     for (int cfID = 1; cfID <= 3; ++cfID) {
       for (int cqID = 1; cqID <= 3; ++cqID) {
         for (int cvID = 1; cvID <= 3; ++cvID) {
@@ -488,11 +498,13 @@
           long ts = 100 * cfID + 10 * cqID + cvID;
           String val = "val" + ts;
 
-          m.put(cf, cq, new ColumnVisibility(cv), ts, val);
+          Key k = new Key(row, cf, cq, cv, ts);
+          Value v = new Value(val.getBytes());
+          data.put(k, v);
         }
       }
     }
-    return m;
+
   }
 
   private static Key reverseKeyPart(Key originalKey, PartialKey part) {
@@ -571,7 +583,7 @@
 
     @Override
     protected Collection<ByteSequence> untransformColumnFamilies(Collection<ByteSequence> columnFamilies) {
-      HashSet<ByteSequence> untransformed = new HashSet<ByteSequence>();
+      HashSet<ByteSequence> untransformed = new HashSet<>();
       for (ByteSequence cf : columnFamilies)
         untransformed.add(untransformColumnFamily(cf));
       return untransformed;
@@ -587,7 +599,7 @@
   public static class ColFamReversingCompactionKeyTransformingIterator extends ColFamReversingKeyTransformingIterator {
     @Override
     public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
-      env = new MajCIteratorEnvironmentAdapter(env);
+      env = new MajCIteratorEnvironmentAdapter();
       super.init(source, options, env);
     }
   }
@@ -627,7 +639,7 @@
   public static class IllegalVisCompactionKeyTransformingIterator extends IllegalVisKeyTransformingIterator {
     @Override
     public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
-      env = new MajCIteratorEnvironmentAdapter(env);
+      env = new MajCIteratorEnvironmentAdapter();
       super.init(source, options, env);
     }
   }
@@ -653,7 +665,7 @@
   public static class BadVisCompactionKeyTransformingIterator extends BadVisKeyTransformingIterator {
     @Override
     public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
-      env = new MajCIteratorEnvironmentAdapter(env);
+      env = new MajCIteratorEnvironmentAdapter();
       super.init(source, options, env);
     }
   }
@@ -663,6 +675,13 @@
     private Value topValue = new Value();
 
     @Override
+    public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+      ReuseIterator rei = new ReuseIterator();
+      rei.setSource(getSource().deepCopy(env));
+      return rei;
+    }
+
+    @Override
     public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
       super.seek(range, columnFamilies, inclusive);
       loadTop();
@@ -692,41 +711,10 @@
     }
   }
 
-  private static class MajCIteratorEnvironmentAdapter implements IteratorEnvironment {
-    private IteratorEnvironment delegate;
-
-    public MajCIteratorEnvironmentAdapter(IteratorEnvironment delegate) {
-      this.delegate = delegate;
-    }
-
-    @Override
-    public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
-      return delegate.reserveMapFileReader(mapFileName);
-    }
-
-    @Override
-    public AccumuloConfiguration getConfig() {
-      return delegate.getConfig();
-    }
-
+  private static class MajCIteratorEnvironmentAdapter extends BaseIteratorEnvironment {
     @Override
     public IteratorScope getIteratorScope() {
       return IteratorScope.majc;
     }
-
-    @Override
-    public boolean isFullMajorCompaction() {
-      return delegate.isFullMajorCompaction();
-    }
-
-    @Override
-    public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
-      delegate.registerSideChannel(iter);
-    }
-
-    @Override
-    public Authorizations getAuthorizations() {
-      return null;
-    }
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/VersioningIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/VersioningIteratorTest.java
index fa42998..cdd0074 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/VersioningIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/VersioningIteratorTest.java
@@ -40,7 +40,7 @@
 
 public class VersioningIteratorTest {
   // add test for seek function
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
   private static final Encoder<Long> encoder = LongCombiner.FIXED_LEN_ENCODER;
   private static final Logger log = LoggerFactory.getLogger(VersioningIteratorTest.class);
 
@@ -56,7 +56,7 @@
   }
 
   TreeMap<Key,Value> iteratorOverTestData(VersioningIterator it) throws IOException {
-    TreeMap<Key,Value> tmOut = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tmOut = new TreeMap<>();
     while (it.hasTop()) {
       tmOut.put(it.getTopKey(), it.getTopValue());
       it.next();
@@ -70,7 +70,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     createTestData(tm, colf, colq);
 
@@ -101,7 +101,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     createTestData(tm, colf, colq);
 
@@ -137,7 +137,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     createTestData(tm, colf, colq);
 
@@ -186,7 +186,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     createTestData(tm, colf, colq);
 
@@ -215,7 +215,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     createTestData(tm, colf, colq);
 
@@ -237,7 +237,7 @@
     Text colf = new Text("a");
     Text colq = new Text("b");
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     createTestData(tm, colf, colq);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/VisibilityFilterTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/VisibilityFilterTest.java
index 810c355..8de1472 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/VisibilityFilterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/VisibilityFilterTest.java
@@ -39,7 +39,7 @@
 
 public class VisibilityFilterTest {
 
-  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<ByteSequence>();
+  private static final Collection<ByteSequence> EMPTY_COL_FAMS = new ArrayList<>();
 
   private static final Text BAD = new Text("bad");
   private static final Text GOOD = new Text("good");
@@ -50,7 +50,7 @@
   private static final Value EMPTY_VALUE = new Value(new byte[0]);
 
   private TreeMap<Key,Value> createUnprotectedSource(int numPublic, int numHidden) {
-    TreeMap<Key,Value> source = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> source = new TreeMap<>();
     for (int i = 0; i < numPublic; i++)
       source.put(new Key(new Text(String.format("%03d", i)), GOOD, GOOD, EMPTY_VIS), EMPTY_VALUE);
     for (int i = 0; i < numHidden; i++)
@@ -59,7 +59,7 @@
   }
 
   private TreeMap<Key,Value> createPollutedSource(int numGood, int numBad) {
-    TreeMap<Key,Value> source = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> source = new TreeMap<>();
     for (int i = 0; i < numGood; i++)
       source.put(new Key(new Text(String.format("%03d", i)), GOOD, GOOD, GOOD_VIS), EMPTY_VALUE);
     for (int i = 0; i < numBad; i++)
@@ -68,7 +68,7 @@
   }
 
   private TreeMap<Key,Value> createSourceWithHiddenData(int numViewable, int numHidden) {
-    TreeMap<Key,Value> source = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> source = new TreeMap<>();
     for (int i = 0; i < numViewable; i++)
       source.put(new Key(new Text(String.format("%03d", i)), GOOD, GOOD, GOOD_VIS), EMPTY_VALUE);
     for (int i = 0; i < numHidden; i++)
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIteratorTest.java
index f5440cd..882c82a 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIteratorTest.java
@@ -56,9 +56,9 @@
 public class WholeColumnFamilyIteratorTest extends TestCase {
 
   public void testEmptyStuff() throws IOException {
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
-    SortedMap<Key,Value> map2 = new TreeMap<Key,Value>();
-    final Map<Text,Boolean> toInclude = new HashMap<Text,Boolean>();
+    SortedMap<Key,Value> map = new TreeMap<>();
+    SortedMap<Key,Value> map2 = new TreeMap<>();
+    final Map<Text,Boolean> toInclude = new HashMap<>();
     map.put(new Key(new Text("r1"), new Text("cf1"), new Text("cq1"), new Text("cv1"), 1l), new Value("val1".getBytes()));
     map.put(new Key(new Text("r1"), new Text("cf1"), new Text("cq2"), new Text("cv1"), 2l), new Value("val2".getBytes()));
     map.put(new Key(new Text("r2"), new Text("cf1"), new Text("cq1"), new Text("cv1"), 3l), new Value("val3".getBytes()));
@@ -88,7 +88,7 @@
     }
     SortedMapIterator source = new SortedMapIterator(map);
     WholeColumnFamilyIterator iter = new WholeColumnFamilyIterator(source);
-    SortedMap<Key,Value> resultMap = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> resultMap = new TreeMap<>();
     iter.seek(new Range(), new ArrayList<ByteSequence>(), false);
     int numRows = 0;
     while (iter.hasTop()) {
@@ -129,19 +129,19 @@
   }
 
   public void testContinue() throws Exception {
-    SortedMap<Key,Value> map1 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map1 = new TreeMap<>();
     pkv(map1, "row1", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map1, "row1", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map2 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map2 = new TreeMap<>();
     pkv(map2, "row2", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map2, "row2", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map3 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map3 = new TreeMap<>();
     pkv(map3, "row3", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map3, "row3", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     map.putAll(map1);
     map.putAll(map2);
     map.putAll(map3);
@@ -170,14 +170,14 @@
   }
 
   public void testBug1() throws Exception {
-    SortedMap<Key,Value> map1 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map1 = new TreeMap<>();
     pkv(map1, "row1", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map1, "row1", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map2 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map2 = new TreeMap<>();
     pkv(map2, "row2", "cf1", "cq1", "cv1", 5, "foo");
 
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     map.putAll(map1);
     map.putAll(map2);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java b/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java
index b47ef3e..8d37d6c 100644
--- a/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/iterators/user/WholeRowIteratorTest.java
@@ -55,9 +55,9 @@
 
   @Test
   public void testEmptyStuff() throws IOException {
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
-    SortedMap<Key,Value> map2 = new TreeMap<Key,Value>();
-    final Map<Text,Boolean> toInclude = new HashMap<Text,Boolean>();
+    SortedMap<Key,Value> map = new TreeMap<>();
+    SortedMap<Key,Value> map2 = new TreeMap<>();
+    final Map<Text,Boolean> toInclude = new HashMap<>();
     map.put(new Key(new Text("r1"), new Text("cf1"), new Text("cq1"), new Text("cv1"), 1l), new Value("val1".getBytes()));
     map.put(new Key(new Text("r1"), new Text("cf1"), new Text("cq2"), new Text("cv1"), 2l), new Value("val2".getBytes()));
     map.put(new Key(new Text("r2"), new Text("cf1"), new Text("cq1"), new Text("cv1"), 3l), new Value("val3".getBytes()));
@@ -87,7 +87,7 @@
     }
     SortedMapIterator source = new SortedMapIterator(map);
     WholeRowIterator iter = new WholeRowIterator(source);
-    SortedMap<Key,Value> resultMap = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> resultMap = new TreeMap<>();
     iter.seek(new Range(), new ArrayList<ByteSequence>(), false);
     int numRows = 0;
     while (iter.hasTop()) {
@@ -126,19 +126,19 @@
 
   @Test
   public void testContinue() throws Exception {
-    SortedMap<Key,Value> map1 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map1 = new TreeMap<>();
     pkv(map1, "row1", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map1, "row1", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map2 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map2 = new TreeMap<>();
     pkv(map2, "row2", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map2, "row2", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map3 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map3 = new TreeMap<>();
     pkv(map3, "row3", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map3, "row3", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     map.putAll(map1);
     map.putAll(map2);
     map.putAll(map3);
@@ -168,14 +168,14 @@
 
   @Test
   public void testBug1() throws Exception {
-    SortedMap<Key,Value> map1 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map1 = new TreeMap<>();
     pkv(map1, "row1", "cf1", "cq1", "cv1", 5, "foo");
     pkv(map1, "row1", "cf1", "cq2", "cv1", 6, "bar");
 
-    SortedMap<Key,Value> map2 = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map2 = new TreeMap<>();
     pkv(map2, "row2", "cf1", "cq1", "cv1", 5, "foo");
 
-    SortedMap<Key,Value> map = new TreeMap<Key,Value>();
+    SortedMap<Key,Value> map = new TreeMap<>();
     map.putAll(map1);
     map.putAll(map2);
 
diff --git a/core/src/test/java/org/apache/accumulo/core/metadata/MetadataServicerTest.java b/core/src/test/java/org/apache/accumulo/core/metadata/MetadataServicerTest.java
index 0e59025..0a0a940 100644
--- a/core/src/test/java/org/apache/accumulo/core/metadata/MetadataServicerTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/metadata/MetadataServicerTest.java
@@ -21,20 +21,46 @@
 import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
 
+import java.util.HashMap;
+
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.TableExistsException;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.client.impl.ClientContext;
-import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.replication.ReplicationTable;
+import org.easymock.EasyMock;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
 public class MetadataServicerTest {
 
+  private static final String userTableName = "tableName";
+  private static final String userTableId = "tableId";
+  private static ClientContext context;
+
+  @BeforeClass
+  public static void setupContext() throws Exception {
+    HashMap<String,String> tableNameToIdMap = new HashMap<>();
+    tableNameToIdMap.put(RootTable.NAME, RootTable.ID);
+    tableNameToIdMap.put(MetadataTable.NAME, MetadataTable.ID);
+    tableNameToIdMap.put(ReplicationTable.NAME, ReplicationTable.ID);
+    tableNameToIdMap.put(userTableName, userTableId);
+
+    context = EasyMock.createMock(ClientContext.class);
+    Connector conn = EasyMock.createMock(Connector.class);
+    Instance inst = EasyMock.createMock(Instance.class);
+    TableOperations tableOps = EasyMock.createMock(TableOperations.class);
+    EasyMock.expect(tableOps.tableIdMap()).andReturn(tableNameToIdMap).anyTimes();
+    EasyMock.expect(conn.tableOperations()).andReturn(tableOps).anyTimes();
+    EasyMock.expect(context.getInstance()).andReturn(inst).anyTimes();
+    EasyMock.expect(context.getConnector()).andReturn(conn).anyTimes();
+    EasyMock.replay(context, conn, inst, tableOps);
+  }
+
   @Test
   public void checkSystemTableIdentifiers() {
     assertNotEquals(RootTable.ID, MetadataTable.ID);
@@ -43,14 +69,6 @@
 
   @Test
   public void testGetCorrectServicer() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
-    String userTableName = "A";
-    MockInstance instance = new MockInstance("metadataTest");
-    Connector connector = instance.getConnector("root", new PasswordToken(""));
-    connector.tableOperations().create(userTableName);
-    String userTableId = connector.tableOperations().tableIdMap().get(userTableName);
-    Credentials credentials = new Credentials("root", new PasswordToken(""));
-    ClientContext context = new ClientContext(instance, credentials, new ClientConfiguration());
-
     MetadataServicer ms = MetadataServicer.forTableId(context, RootTable.ID);
     assertTrue(ms instanceof ServicerForRootTable);
     assertFalse(ms instanceof TableMetadataServicer);
@@ -62,6 +80,12 @@
     assertEquals(RootTable.NAME, ((TableMetadataServicer) ms).getServicingTableName());
     assertEquals(MetadataTable.ID, ms.getServicedTableId());
 
+    ms = MetadataServicer.forTableId(context, ReplicationTable.ID);
+    assertTrue(ms instanceof ServicerForUserTables);
+    assertTrue(ms instanceof TableMetadataServicer);
+    assertEquals(MetadataTable.NAME, ((TableMetadataServicer) ms).getServicingTableName());
+    assertEquals(ReplicationTable.ID, ms.getServicedTableId());
+
     ms = MetadataServicer.forTableId(context, userTableId);
     assertTrue(ms instanceof ServicerForUserTables);
     assertTrue(ms instanceof TableMetadataServicer);
@@ -79,6 +103,12 @@
     assertEquals(RootTable.NAME, ((TableMetadataServicer) ms).getServicingTableName());
     assertEquals(MetadataTable.ID, ms.getServicedTableId());
 
+    ms = MetadataServicer.forTableName(context, ReplicationTable.NAME);
+    assertTrue(ms instanceof ServicerForUserTables);
+    assertTrue(ms instanceof TableMetadataServicer);
+    assertEquals(MetadataTable.NAME, ((TableMetadataServicer) ms).getServicingTableName());
+    assertEquals(ReplicationTable.ID, ms.getServicedTableId());
+
     ms = MetadataServicer.forTableName(context, userTableName);
     assertTrue(ms instanceof ServicerForUserTables);
     assertTrue(ms instanceof TableMetadataServicer);
diff --git a/core/src/test/java/org/apache/accumulo/core/replication/ReplicationConfigurationUtilTest.java b/core/src/test/java/org/apache/accumulo/core/replication/ReplicationConfigurationUtilTest.java
index c060917..6d87005 100644
--- a/core/src/test/java/org/apache/accumulo/core/replication/ReplicationConfigurationUtilTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/replication/ReplicationConfigurationUtilTest.java
@@ -46,39 +46,39 @@
 
   @Test
   public void rootTableExtent() {
-    KeyExtent extent = new KeyExtent(new Text(RootTable.ID), null, null);
+    KeyExtent extent = new KeyExtent(RootTable.ID, null, null);
     Assert.assertFalse("The root table should never be replicated", ReplicationConfigurationUtil.isEnabled(extent, conf));
   }
 
   @Test
   public void metadataTableExtent() {
-    KeyExtent extent = new KeyExtent(new Text(MetadataTable.ID), null, null);
+    KeyExtent extent = new KeyExtent(MetadataTable.ID, null, null);
     Assert.assertFalse("The metadata table should never be replicated", ReplicationConfigurationUtil.isEnabled(extent, conf));
   }
 
   @Test
   public void rootTableExtentEmptyConf() {
-    KeyExtent extent = new KeyExtent(new Text(RootTable.ID), null, null);
+    KeyExtent extent = new KeyExtent(RootTable.ID, null, null);
     Assert.assertFalse("The root table should never be replicated",
         ReplicationConfigurationUtil.isEnabled(extent, new ConfigurationCopy(new HashMap<String,String>())));
   }
 
   @Test
   public void metadataTableExtentEmptyConf() {
-    KeyExtent extent = new KeyExtent(new Text(MetadataTable.ID), null, null);
+    KeyExtent extent = new KeyExtent(MetadataTable.ID, null, null);
     Assert.assertFalse("The metadata table should never be replicated",
         ReplicationConfigurationUtil.isEnabled(extent, new ConfigurationCopy(new HashMap<String,String>())));
   }
 
   @Test
   public void regularTable() {
-    KeyExtent extent = new KeyExtent(new Text("1"), new Text("b"), new Text("a"));
+    KeyExtent extent = new KeyExtent("1", new Text("b"), new Text("a"));
     Assert.assertTrue("Table should be replicated", ReplicationConfigurationUtil.isEnabled(extent, conf));
   }
 
   @Test
   public void regularNonEnabledTable() {
-    KeyExtent extent = new KeyExtent(new Text("1"), new Text("b"), new Text("a"));
+    KeyExtent extent = new KeyExtent("1", new Text("b"), new Text("a"));
     Assert.assertFalse("Table should not be replicated", ReplicationConfigurationUtil.isEnabled(extent, new ConfigurationCopy(new HashMap<String,String>())));
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/replication/ReplicationSchemaTest.java b/core/src/test/java/org/apache/accumulo/core/replication/ReplicationSchemaTest.java
index 3822641..ca5eaa4 100644
--- a/core/src/test/java/org/apache/accumulo/core/replication/ReplicationSchemaTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/replication/ReplicationSchemaTest.java
@@ -61,18 +61,16 @@
 
   @Test
   public void extractTableId() {
-    Text tableId = new Text("1");
-    Key k = new Key(new Text("foo"), StatusSection.NAME, tableId);
-    Assert.assertEquals(tableId.toString(), StatusSection.getTableId(k));
+    String tableId = "1";
+    Key k = new Key(new Text("foo"), StatusSection.NAME, new Text(tableId));
+    Assert.assertEquals(tableId, StatusSection.getTableId(k));
   }
 
   @Test
   public void extractTableIdUsingText() {
-    Text tableId = new Text("1");
-    Key k = new Key(new Text("foo"), StatusSection.NAME, tableId);
-    Text buffer = new Text();
-    StatusSection.getTableId(k, buffer);
-    Assert.assertEquals(tableId.toString(), buffer.toString());
+    String tableId = "1";
+    Key k = new Key(new Text("foo"), StatusSection.NAME, new Text(tableId));
+    Assert.assertEquals(tableId, StatusSection.getTableId(k));
   }
 
   @Test(expected = NullPointerException.class)
diff --git a/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java b/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java
index 0457caa..bd4b1ba 100644
--- a/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/security/CredentialsTest.java
@@ -30,24 +30,37 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
 import org.apache.accumulo.core.client.security.tokens.NullToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.security.thrift.TCredentials;
+import org.apache.accumulo.core.util.DeprecationUtil;
+import org.easymock.EasyMock;
+import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
 
-/**
- *
- */
 public class CredentialsTest {
 
+  @Rule
+  public TestName test = new TestName();
+
+  private Instance inst;
+
+  @Before
+  public void setupInstance() {
+    inst = EasyMock.createMock(Instance.class);
+    EasyMock.expect(inst.getInstanceID()).andReturn(test.getMethodName()).anyTimes();
+    EasyMock.replay(inst);
+  }
+
   @Test
   public void testToThrift() throws DestroyFailedException {
     // verify thrift serialization
     Credentials creds = new Credentials("test", new PasswordToken("testing"));
-    TCredentials tCreds = creds.toThrift(new MockInstance());
+    TCredentials tCreds = creds.toThrift(inst);
     assertEquals("test", tCreds.getPrincipal());
     assertEquals(PasswordToken.class.getName(), tCreds.getTokenClassName());
     assertArrayEquals(AuthenticationTokenSerializer.serialize(new PasswordToken("testing")), tCreds.getToken());
@@ -55,7 +68,7 @@
     // verify that we can't serialize if it's destroyed
     creds.getToken().destroy();
     try {
-      creds.toThrift(new MockInstance());
+      creds.toThrift(inst);
       fail();
     } catch (Exception e) {
       assertTrue(e instanceof RuntimeException);
@@ -67,14 +80,14 @@
   @Test
   public void roundtripThrift() throws DestroyFailedException {
     Credentials creds = new Credentials("test", new PasswordToken("testing"));
-    TCredentials tCreds = creds.toThrift(new MockInstance());
+    TCredentials tCreds = creds.toThrift(inst);
     Credentials roundtrip = Credentials.fromThrift(tCreds);
     assertEquals("Roundtrip through thirft changed credentials equality", creds, roundtrip);
   }
 
   @Test
   public void testMockConnector() throws AccumuloException, DestroyFailedException, AccumuloSecurityException {
-    Instance inst = new MockInstance();
+    Instance inst = DeprecationUtil.makeMockInstance(test.getMethodName());
     Connector rootConnector = inst.getConnector("root", new PasswordToken());
     PasswordToken testToken = new PasswordToken("testPass");
     rootConnector.securityOperations().createLocalUser("testUser", testToken);
diff --git a/core/src/test/java/org/apache/accumulo/core/util/ByteBufferUtilTest.java b/core/src/test/java/org/apache/accumulo/core/util/ByteBufferUtilTest.java
index 5a8c0dc..85a36fa 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/ByteBufferUtilTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/ByteBufferUtilTest.java
@@ -19,6 +19,7 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
+import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
@@ -54,6 +55,15 @@
     }
 
     Assert.assertEquals(expected, new String(baos.toByteArray(), UTF_8));
+
+    ByteArrayInputStream bais = ByteBufferUtil.toByteArrayInputStream(bb);
+    byte[] buffer = new byte[expected.length()];
+    try {
+      Assert.assertEquals(buffer.length, bais.read(buffer));
+      Assert.assertEquals(expected, new String(buffer, UTF_8));
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
   }
 
   @Test
diff --git a/core/src/test/java/org/apache/accumulo/core/util/LocalityGroupUtilTest.java b/core/src/test/java/org/apache/accumulo/core/util/LocalityGroupUtilTest.java
index 60464a6..c4c46cb 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/LocalityGroupUtilTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/LocalityGroupUtilTest.java
@@ -80,8 +80,8 @@
     assertEquals(ecf, LocalityGroupUtil.encodeColumnFamily(bs2));
 
     // test encoding multiple column fams containing binary data
-    HashSet<Text> in = new HashSet<Text>();
-    HashSet<ByteSequence> in2 = new HashSet<ByteSequence>();
+    HashSet<Text> in = new HashSet<>();
+    HashSet<ByteSequence> in2 = new HashSet<>();
     in.add(new Text(test1));
     in2.add(new ArrayByteSequence(test1));
     in.add(new Text(test2));
diff --git a/core/src/test/java/org/apache/accumulo/core/util/MergeTest.java b/core/src/test/java/org/apache/accumulo/core/util/MergeTest.java
index 5b0dc55..b7804f6 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/MergeTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/MergeTest.java
@@ -32,8 +32,8 @@
 public class MergeTest {
 
   static class MergeTester extends Merge {
-    public List<List<Size>> merges = new ArrayList<List<Size>>();
-    public List<Size> tablets = new ArrayList<Size>();
+    public List<List<Size>> merges = new ArrayList<>();
+    public List<Size> tablets = new ArrayList<>();
 
     MergeTester(Integer... sizes) {
       Text start = null;
@@ -43,7 +43,7 @@
           end = null;
         else
           end = new Text(String.format("%05d", tablets.size()));
-        KeyExtent extent = new KeyExtent(new Text("table"), end, start);
+        KeyExtent extent = new KeyExtent("table", end, start);
         start = end;
         tablets.add(new Size(extent, size));
       }
@@ -95,7 +95,7 @@
 
     @Override
     protected void merge(Connector conn, String table, List<Size> sizes, int numToMerge) throws MergeException {
-      List<Size> merge = new ArrayList<Size>();
+      List<Size> merge = new ArrayList<>();
       for (int i = 0; i < numToMerge; i++) {
         merge.add(sizes.get(i));
       }
diff --git a/core/src/test/java/org/apache/accumulo/core/util/OpTimerTest.java b/core/src/test/java/org/apache/accumulo/core/util/OpTimerTest.java
new file mode 100644
index 0000000..a824497
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/util/OpTimerTest.java
@@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.concurrent.TimeUnit;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Exercise basic timer (org.apache.hadoop.util.StopWatch) functionality. Current usage requires ability to reset timer.
+ */
+public class OpTimerTest {
+
+  private static Logger log = LoggerFactory.getLogger(OpTimerTest.class);
+
+  /**
+   * Validate reset functionality
+   */
+  @Test
+  public void verifyReset() {
+
+    OpTimer timer = new OpTimer().start();
+
+    try {
+      Thread.sleep(50);
+    } catch (InterruptedException ex) {
+      log.info("sleep interrupted");
+      Thread.currentThread().interrupt();
+    }
+
+    timer.stop();
+
+    long tValue = timer.now();
+
+    log.debug("Time value before reset {}", String.format("%.3f ms", timer.scale(TimeUnit.MILLISECONDS)));
+
+    timer.reset().start();
+
+    try {
+      Thread.sleep(1);
+    } catch (InterruptedException ex) {
+      log.info("sleep interrupted");
+      Thread.currentThread().interrupt();
+    }
+
+    timer.stop();
+
+    assertTrue(timer.now() > 0);
+
+    assertTrue(tValue > timer.now());
+
+    timer.reset();
+
+    log.debug("Time value after reset {}", String.format("%.3f ms", timer.scale(TimeUnit.MILLISECONDS)));
+
+    assertEquals(0, timer.now());
+
+  }
+
+  /**
+   * Verify that IllegalStateException is thrown when calling stop on a timer that has not been started.
+   */
+  @Test(expected = IllegalStateException.class)
+  public void verifyExceptionCallingStopWhenNotStarted() {
+
+    OpTimer timer = new OpTimer();
+
+    assertFalse(timer.isRunning());
+
+    // should throw exception - not running
+    timer.stop();
+  }
+
+  /**
+   * Verify that IllegalStateException is thrown when calling start on a running timer.
+   */
+  @Test(expected = IllegalStateException.class)
+  public void verifyExceptionCallingStartWhenRunning() {
+
+    OpTimer timer = new OpTimer().start();
+
+    try {
+      Thread.sleep(50);
+    } catch (InterruptedException ex) {
+      log.info("sleep interrupted");
+      Thread.currentThread().interrupt();
+    }
+
+    assertTrue(timer.isRunning());
+
+    // should throw exception - already running
+    timer.start();
+  }
+
+  /**
+   * Verify that IllegalStateException is thrown when calling stop on a timer that has already been stopped.
+   */
+  @Test(expected = IllegalStateException.class)
+  public void verifyExceptionCallingStopWhenNotRunning() {
+
+    OpTimer timer = new OpTimer().start();
+
+    try {
+      Thread.sleep(50);
+    } catch (InterruptedException ex) {
+      log.info("sleep interrupted");
+      Thread.currentThread().interrupt();
+    }
+
+    assertTrue(timer.isRunning());
+
+    timer.stop();
+
+    assertFalse(timer.isRunning());
+
+    // should throw exception
+    timer.stop();
+  }
+
+  /**
+   * Validate that start / stop accumulates time.
+   */
+  @Test
+  public void verifyElapsed() {
+
+    OpTimer timer = new OpTimer().start();
+
+    try {
+      Thread.sleep(50);
+    } catch (InterruptedException ex) {
+      log.info("sleep interrupted");
+      Thread.currentThread().interrupt();
+    }
+
+    timer.stop();
+
+    long tValue = timer.now();
+
+    log.debug("Time value after first stop {}", String.format("%.3f ms", timer.scale(TimeUnit.MILLISECONDS)));
+
+    timer.start();
+
+    try {
+      Thread.sleep(10);
+    } catch (InterruptedException ex) {
+      log.info("sleep interrupted");
+      Thread.currentThread().interrupt();
+    }
+
+    timer.stop();
+
+    log.debug("Time value after second stop {}", String.format("%.3f ms", timer.scale(TimeUnit.MILLISECONDS)));
+
+    assertTrue(tValue < timer.now());
+
+  }
+
+  /**
+   * Validate that scale returns correct values.
+   */
+  @Test
+  public void scale() {
+    OpTimer timer = new OpTimer().start();
+
+    try {
+      Thread.sleep(50);
+    } catch (InterruptedException ex) {
+      log.info("sleep interrupted");
+      Thread.currentThread().interrupt();
+    }
+
+    timer.stop();
+
+    long tValue = timer.now();
+
+    assertEquals(tValue / 1000000.0, timer.scale(TimeUnit.MILLISECONDS), 0.00000001);
+
+    assertEquals(tValue / 1000000000.0, timer.scale(TimeUnit.SECONDS), 0.00000001);
+
+  }
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/util/PairTest.java b/core/src/test/java/org/apache/accumulo/core/util/PairTest.java
index 60af90e..6effc9e 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/PairTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/PairTest.java
@@ -30,8 +30,8 @@
    */
   @Test
   public void testEqualsObject() {
-    Pair<Integer,String> pair = new Pair<Integer,String>(25, "twenty-five");
-    Pair<Integer,String> pair2 = new Pair<Integer,String>(25, "twenty-five");
+    Pair<Integer,String> pair = new Pair<>(25, "twenty-five");
+    Pair<Integer,String> pair2 = new Pair<>(25, "twenty-five");
     assertEquals(pair, pair2);
   }
 
@@ -40,7 +40,7 @@
    */
   @Test
   public void testGetFirst() {
-    Pair<Integer,String> pair = new Pair<Integer,String>(25, "twenty-five");
+    Pair<Integer,String> pair = new Pair<>(25, "twenty-five");
     assertEquals((Integer) 25, pair.getFirst());
   }
 
@@ -49,7 +49,7 @@
    */
   @Test
   public void testGetSecond() {
-    Pair<Integer,String> pair = new Pair<Integer,String>(25, "twenty-five");
+    Pair<Integer,String> pair = new Pair<>(25, "twenty-five");
     assertEquals("twenty-five", pair.getSecond());
   }
 
@@ -58,7 +58,7 @@
    */
   @Test
   public void testToString() {
-    Pair<Integer,String> pair = new Pair<Integer,String>(25, "twenty-five");
+    Pair<Integer,String> pair = new Pair<>(25, "twenty-five");
     assertEquals("(25,twenty-five)", pair.toString());
   }
 
@@ -67,7 +67,7 @@
    */
   @Test
   public void testToStringStringStringString() {
-    Pair<Integer,String> pair = new Pair<Integer,String>(25, "twenty-five");
+    Pair<Integer,String> pair = new Pair<>(25, "twenty-five");
     assertEquals("---25~~~twenty-five+++", pair.toString("---", "~~~", "+++"));
   }
 
@@ -76,7 +76,7 @@
    */
   @Test
   public void testToMapEntry() {
-    Pair<Integer,String> pair = new Pair<Integer,String>(10, "IO");
+    Pair<Integer,String> pair = new Pair<>(10, "IO");
 
     Entry<Integer,String> entry = pair.toMapEntry();
     assertEquals(pair.getFirst(), entry.getKey());
@@ -88,9 +88,9 @@
    */
   @Test
   public void testSwap() {
-    Pair<Integer,String> pair = new Pair<Integer,String>(25, "twenty-five");
+    Pair<Integer,String> pair = new Pair<>(25, "twenty-five");
     assertEquals(pair, pair.swap().swap());
-    Pair<String,Integer> pair2 = new Pair<String,Integer>("twenty-five", 25);
+    Pair<String,Integer> pair2 = new Pair<>("twenty-five", 25);
     assertEquals(pair, pair2.swap());
     assertEquals(pair2, pair.swap());
   }
@@ -100,7 +100,7 @@
    */
   @Test
   public void testFromEntry() {
-    Entry<Integer,String> entry = new SimpleImmutableEntry<Integer,String>(10, "IO");
+    Entry<Integer,String> entry = new SimpleImmutableEntry<>(10, "IO");
 
     Pair<Integer,String> pair0 = Pair.fromEntry(entry);
     assertEquals(entry.getKey(), pair0.getFirst());
diff --git a/core/src/test/java/org/apache/accumulo/core/util/PartitionerTest.java b/core/src/test/java/org/apache/accumulo/core/util/PartitionerTest.java
index 8ab2beb..7568aba 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/PartitionerTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/PartitionerTest.java
@@ -112,7 +112,7 @@
   }
 
   private Set<Key> toKeySet(Mutation... expected) {
-    HashSet<Key> ret = new HashSet<Key>();
+    HashSet<Key> ret = new HashSet<>();
     for (Mutation mutation : expected)
       for (ColumnUpdate cu : mutation.getUpdates())
         ret.add(new Key(mutation.getRow(), cu.getColumnFamily(), cu.getColumnQualifier(), cu.getColumnVisibility(), cu.getTimestamp()));
diff --git a/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java b/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java
index 2be05f0..01bad35 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/ValidatorTest.java
@@ -32,7 +32,7 @@
     }
 
     @Override
-    public boolean isValid(String argument) {
+    public boolean apply(String argument) {
       return s.equals(argument);
     }
   }
@@ -45,7 +45,7 @@
     }
 
     @Override
-    public boolean isValid(String argument) {
+    public boolean apply(String argument) {
       return (argument != null && argument.matches(ps));
     }
   }
@@ -77,24 +77,24 @@
   @Test
   public void testAnd() {
     Validator<String> vand = v3.and(v);
-    assertTrue(vand.isValid("correct"));
-    assertFalse(vand.isValid("righto"));
-    assertFalse(vand.isValid("coriander"));
+    assertTrue(vand.apply("correct"));
+    assertFalse(vand.apply("righto"));
+    assertFalse(vand.apply("coriander"));
   }
 
   @Test
   public void testOr() {
     Validator<String> vor = v.or(v2);
-    assertTrue(vor.isValid("correct"));
-    assertTrue(vor.isValid("righto"));
-    assertFalse(vor.isValid("coriander"));
+    assertTrue(vor.apply("correct"));
+    assertTrue(vor.apply("righto"));
+    assertFalse(vor.apply("coriander"));
   }
 
   @Test
   public void testNot() {
     Validator<String> vnot = v3.not();
-    assertFalse(vnot.isValid("correct"));
-    assertFalse(vnot.isValid("coriander"));
-    assertTrue(vnot.isValid("righto"));
+    assertFalse(vnot.apply("correct"));
+    assertFalse(vnot.apply("coriander"));
+    assertTrue(vnot.apply("righto"));
   }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/DateFormatSupplierTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/DateFormatSupplierTest.java
new file mode 100644
index 0000000..b095b04
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/DateFormatSupplierTest.java
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util.format;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotSame;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+
+import java.text.DateFormat;
+import java.util.Date;
+import java.util.TimeZone;
+import org.junit.Test;
+
+public class DateFormatSupplierTest {
+
+  /** Asserts that two supplier instances create independent objects */
+  private void assertSuppliersIndependent(ThreadLocal<DateFormat> supplierA, ThreadLocal<DateFormat> supplierB) {
+    DateFormat getA1 = supplierA.get();
+    DateFormat getA2 = supplierA.get();
+    assertSame(getA1, getA2);
+
+    DateFormat getB1 = supplierB.get();
+    DateFormat getB2 = supplierB.get();
+
+    assertSame(getB1, getB2);
+    assertNotSame(getA1, getB1);
+  }
+
+  @Test
+  public void testCreateDefaultFormatSupplier() throws Exception {
+    ThreadLocal<DateFormat> supplierA = DateFormatSupplier.createDefaultFormatSupplier();
+    ThreadLocal<DateFormat> supplierB = DateFormatSupplier.createDefaultFormatSupplier();
+    assertSuppliersIndependent(supplierA, supplierB);
+  }
+
+  @Test
+  public void testCreateSimpleFormatSupplier() throws Exception {
+    final String format = DateFormatSupplier.HUMAN_READABLE_FORMAT;
+    DateFormatSupplier supplierA = DateFormatSupplier.createSimpleFormatSupplier(format);
+    DateFormatSupplier supplierB = DateFormatSupplier.createSimpleFormatSupplier(format);
+    assertSuppliersIndependent(supplierA, supplierB);
+
+    // since dfA and dfB come from different suppliers, altering the TimeZone on one does not affect the other
+    supplierA.setTimeZone(TimeZone.getTimeZone("UTC"));
+    final DateFormat dfA = supplierA.get();
+
+    supplierB.setTimeZone(TimeZone.getTimeZone("EST"));
+    final DateFormat dfB = supplierB.get();
+
+    final String resultA = dfA.format(new Date(0));
+    assertEquals("1970/01/01 00:00:00.000", resultA);
+
+    final String resultB = dfB.format(new Date(0));
+    assertEquals("1969/12/31 19:00:00.000", resultB);
+
+    assertTrue(!resultA.equals(resultB));
+
+  }
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/DateStringFormatterTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/DateStringFormatterTest.java
index 1b121f3..22af5b0 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/format/DateStringFormatterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/DateStringFormatterTest.java
@@ -22,12 +22,12 @@
 import java.util.Map;
 import java.util.TimeZone;
 import java.util.TreeMap;
-
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.junit.Before;
 import org.junit.Test;
 
+@SuppressWarnings("deprecation")
 public class DateStringFormatterTest {
   DateStringFormatter formatter;
 
@@ -36,17 +36,34 @@
   @Before
   public void setUp() {
     formatter = new DateStringFormatter();
-    data = new TreeMap<Key,Value>();
+    data = new TreeMap<>();
     data.put(new Key("", "", "", 0), new Value());
   }
 
+  private void testFormatterIgnoresConfig(FormatterConfig config, DateStringFormatter formatter) {
+    // ignores config's DateFormatSupplier and substitutes its own
+    formatter.initialize(data.entrySet(), config);
+
+    assertTrue(formatter.hasNext());
+    final String next = formatter.next();
+    assertTrue(next, next.endsWith("1970/01/01 00:00:00.000"));
+  }
+
   @Test
   public void testTimestamps() {
-    formatter.initialize(data.entrySet(), true);
-    formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
+    final TimeZone utc = TimeZone.getTimeZone("UTC");
+    final TimeZone est = TimeZone.getTimeZone("EST");
+    final FormatterConfig config = new FormatterConfig().setPrintTimestamps(true);
+    DateStringFormatter formatter;
 
-    assertTrue(formatter.hasNext());
-    assertTrue(formatter.next().endsWith("1970/01/01 00:00:00.000"));
+    formatter = new DateStringFormatter(utc);
+    testFormatterIgnoresConfig(config, formatter);
+
+    // even though config says to use EST and only print year, the Formatter will override these
+    formatter = new DateStringFormatter(utc);
+    DateFormatSupplier dfSupplier = DateFormatSupplier.createSimpleFormatSupplier("YYYY", est);
+    config.setDateFormatSupplier(dfSupplier);
+    testFormatterIgnoresConfig(config, formatter);
   }
 
   @Test
@@ -55,7 +72,7 @@
 
     assertEquals(2, data.size());
 
-    formatter.initialize(data.entrySet(), false);
+    formatter.initialize(data.entrySet(), new FormatterConfig());
 
     assertEquals(formatter.next(), formatter.next());
   }
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/DefaultFormatterTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/DefaultFormatterTest.java
index 7b654d0..9c688db 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/format/DefaultFormatterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/DefaultFormatterTest.java
@@ -19,8 +19,10 @@
 import static org.junit.Assert.assertEquals;
 
 import java.util.Collections;
+import java.util.Map;
 import java.util.Map.Entry;
-
+import java.util.TimeZone;
+import java.util.TreeMap;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.hadoop.io.Text;
@@ -29,6 +31,8 @@
 
 public class DefaultFormatterTest {
 
+  public static final TimeZone UTC = TimeZone.getTimeZone("UTC");
+  public static final TimeZone EST = TimeZone.getTimeZone("EST");
   DefaultFormatter df;
   Iterable<Entry<Key,Value>> empty = Collections.<Key,Value> emptyMap().entrySet();
 
@@ -39,8 +43,9 @@
 
   @Test(expected = IllegalStateException.class)
   public void testDoubleInitialize() {
-    df.initialize(empty, true);
-    df.initialize(empty, true);
+    final FormatterConfig timestampConfig = new FormatterConfig().setPrintTimestamps(true);
+    df.initialize(empty, timestampConfig);
+    df.initialize(empty, timestampConfig);
   }
 
   @Test(expected = IllegalStateException.class)
@@ -59,4 +64,55 @@
     DefaultFormatter.appendText(sb, new Text(data));
     assertEquals("\\x00\\\\x\\xFF", sb.toString());
   }
+
+  @Test
+  public void testFormatEntry() {
+    final long timestamp = 0;
+    Map<Key,Value> map = new TreeMap<>();
+    map.put(new Key("a", "ab", "abc", timestamp), new Value("abcd".getBytes()));
+
+    FormatterConfig config;
+    String answer;
+
+    // no timestamp, no max
+    config = new FormatterConfig();
+    df = new DefaultFormatter();
+    df.initialize(map.entrySet(), config);
+    answer = df.next();
+    assertEquals("a ab:abc []\tabcd", answer);
+
+    // yes timestamp, no max
+    config.setPrintTimestamps(true);
+    df = new DefaultFormatter();
+    df.initialize(map.entrySet(), config);
+    answer = df.next();
+    assertEquals("a ab:abc [] " + timestamp + "\tabcd", answer);
+
+    // yes timestamp, max of 1
+    config.setPrintTimestamps(true).setShownLength(1);
+    df = new DefaultFormatter();
+    df.initialize(map.entrySet(), config);
+    answer = df.next();
+    assertEquals("a a:a [] " + timestamp + "\ta", answer);
+
+    // yes timestamp, no max, new DateFormat
+    config.setPrintTimestamps(true).doNotLimitShowLength().setDateFormatSupplier(DateFormatSupplier.createSimpleFormatSupplier("YYYY"));
+    df = new DefaultFormatter();
+    df.initialize(map.entrySet(), config);
+    answer = df.next();
+    assertEquals("a ab:abc [] 1970\tabcd", answer);
+
+    // yes timestamp, no max, new DateFormat, different TimeZone
+    config.setPrintTimestamps(true).doNotLimitShowLength().setDateFormatSupplier(DateFormatSupplier.createSimpleFormatSupplier("HH", UTC));
+    df = new DefaultFormatter();
+    df.initialize(map.entrySet(), config);
+    answer = df.next();
+    assertEquals("a ab:abc [] 00\tabcd", answer);
+
+    config.setPrintTimestamps(true).doNotLimitShowLength().setDateFormatSupplier(DateFormatSupplier.createSimpleFormatSupplier("HH", EST));
+    df = new DefaultFormatter();
+    df.initialize(map.entrySet(), config);
+    answer = df.next();
+    assertEquals("a ab:abc [] 19\tabcd", answer);
+  }
 }
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/FormatterConfigTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/FormatterConfigTest.java
new file mode 100644
index 0000000..aa88e03
--- /dev/null
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/FormatterConfigTest.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.util.format;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotSame;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.fail;
+
+import java.text.DateFormat;
+import org.junit.Test;
+
+public class FormatterConfigTest {
+
+  @Test
+  public void testConstructor() {
+    FormatterConfig config = new FormatterConfig();
+    assertEquals(false, config.willLimitShowLength());
+    assertEquals(false, config.willPrintTimestamps());
+  }
+
+  @Test
+  public void testSetShownLength() throws Exception {
+    FormatterConfig config = new FormatterConfig();
+    try {
+      config.setShownLength(-1);
+      fail("Should throw on negative length.");
+    } catch (IllegalArgumentException e) {}
+
+    config.setShownLength(0);
+    assertEquals(0, config.getShownLength());
+    assertEquals(true, config.willLimitShowLength());
+
+    config.setShownLength(1);
+    assertEquals(1, config.getShownLength());
+    assertEquals(true, config.willLimitShowLength());
+  }
+
+  @Test
+  public void testDoNotLimitShowLength() {
+    FormatterConfig config = new FormatterConfig();
+    assertEquals(false, config.willLimitShowLength());
+
+    config.setShownLength(1);
+    assertEquals(true, config.willLimitShowLength());
+
+    config.doNotLimitShowLength();
+    assertEquals(false, config.willLimitShowLength());
+  }
+
+  @Test
+  public void testGetDateFormat() {
+    FormatterConfig config1 = new FormatterConfig();
+    DateFormat df1 = config1.getDateFormatSupplier().get();
+
+    FormatterConfig config2 = new FormatterConfig();
+    assertNotSame(df1, config2.getDateFormatSupplier().get());
+
+    config2.setDateFormatSupplier(config1.getDateFormatSupplier());
+    assertSame(df1, config2.getDateFormatSupplier().get());
+
+    // the copy constructor cannot copy the Supplier itself, so the copy returns the same DateFormat instance
+    FormatterConfig configCopy = new FormatterConfig(config1);
+    assertSame(df1, configCopy.getDateFormatSupplier().get());
+  }
+
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/FormatterFactoryTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/FormatterFactoryTest.java
index d379dee..b6b91d3 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/format/FormatterFactoryTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/FormatterFactoryTest.java
@@ -37,8 +37,9 @@
 
   @Test
   public void testGetDefaultFormatter() {
-    Formatter defaultFormatter = FormatterFactory.getDefaultFormatter(scanner, true);
-    Formatter bogusFormatter = FormatterFactory.getFormatter(Formatter.class, scanner, true);
+    final FormatterConfig timestampConfig = new FormatterConfig().setPrintTimestamps(true);
+    Formatter defaultFormatter = FormatterFactory.getDefaultFormatter(scanner, timestampConfig);
+    Formatter bogusFormatter = FormatterFactory.getFormatter(Formatter.class, scanner, timestampConfig);
     assertEquals(defaultFormatter.getClass(), bogusFormatter.getClass());
   }
 
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/HexFormatterTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/HexFormatterTest.java
index 4745ad3..c267fbe 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/format/HexFormatterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/HexFormatterTest.java
@@ -35,14 +35,14 @@
 
   @Before
   public void setUp() {
-    data = new TreeMap<Key,Value>();
+    data = new TreeMap<>();
     formatter = new HexFormatter();
   }
 
   @Test
   public void testInitialize() {
     data.put(new Key(), new Value());
-    formatter.initialize(data.entrySet(), false);
+    formatter.initialize(data.entrySet(), new FormatterConfig());
 
     assertTrue(formatter.hasNext());
     assertEquals("  " + "  " + " [" + "] ", formatter.next());
@@ -59,7 +59,7 @@
     Text bytes = new Text(new byte[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15});
     data.put(new Key(bytes), new Value());
 
-    formatter.initialize(data.entrySet(), false);
+    formatter.initialize(data.entrySet(), new FormatterConfig());
 
     String row = formatter.next().split(" ")[0];
     assertEquals("0001-0203-0405-0607-0809-0a0b-0c0d-0e0f", row);
@@ -80,7 +80,7 @@
   public void testTimestamps() {
     long now = System.currentTimeMillis();
     data.put(new Key("", "", "", now), new Value());
-    formatter.initialize(data.entrySet(), true);
+    formatter.initialize(data.entrySet(), new FormatterConfig().setPrintTimestamps(true));
     String entry = formatter.next().split("\\s+")[2];
     assertEquals(now, Long.parseLong(entry));
   }
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatterTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatterTest.java
index e8879a5..ce733fe 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/ShardedTableDistributionFormatterTest.java
@@ -39,7 +39,7 @@
 
   @Before
   public void setUp() {
-    data = new TreeMap<Key,Value>();
+    data = new TreeMap<>();
     formatter = new ShardedTableDistributionFormatter();
   }
 
@@ -47,7 +47,7 @@
   public void testInitialize() {
     data.put(new Key(), new Value());
     data.put(new Key("r", "~tab"), new Value());
-    formatter.initialize(data.entrySet(), false);
+    formatter.initialize(data.entrySet(), new FormatterConfig());
 
     assertTrue(formatter.hasNext());
     formatter.next();
@@ -60,7 +60,7 @@
     data.put(new Key("t;19700101", "~tab", "loc", 0), new Value("srv1".getBytes(UTF_8)));
     data.put(new Key("t;19700101", "~tab", "loc", 1), new Value("srv2".getBytes(UTF_8)));
 
-    formatter.initialize(data.entrySet(), false);
+    formatter.initialize(data.entrySet(), new FormatterConfig());
 
     String[] resultLines = formatter.next().split("\n");
     List<String> results = Arrays.asList(resultLines).subList(2, 4);
diff --git a/core/src/test/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatterTest.java b/core/src/test/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatterTest.java
index 93c948c..d559d73 100644
--- a/core/src/test/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatterTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/util/format/StatisticsDisplayFormatterTest.java
@@ -35,14 +35,14 @@
 
   @Before
   public void setUp() {
-    data = new TreeMap<Key,Value>();
+    data = new TreeMap<>();
     formatter = new StatisticsDisplayFormatter();
   }
 
   @Test
   public void testInitialize() {
     data.put(new Key(), new Value());
-    formatter.initialize(data.entrySet(), false);
+    formatter.initialize(data.entrySet(), new FormatterConfig());
 
     assertTrue(formatter.hasNext());
   }
@@ -51,7 +51,7 @@
   public void testAggregate() {
     data.put(new Key("", "", "", 1), new Value());
     data.put(new Key("", "", "", 2), new Value());
-    formatter.initialize(data.entrySet(), false);
+    formatter.initialize(data.entrySet(), new FormatterConfig());
 
     String[] output = formatter.next().split("\n");
     assertTrue(output[2].endsWith(": 1"));
diff --git a/core/src/test/resources/org/apache/accumulo/core/file/rfile/ver_7.rf b/core/src/test/resources/org/apache/accumulo/core/file/rfile/ver_7.rf
new file mode 100644
index 0000000..7d2c9f7
--- /dev/null
+++ b/core/src/test/resources/org/apache/accumulo/core/file/rfile/ver_7.rf
Binary files differ
diff --git a/docs/pom.xml b/docs/pom.xml
index 9c5a333..853474b 100644
--- a/docs/pom.xml
+++ b/docs/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-docs</artifactId>
   <packaging>pom</packaging>
diff --git a/docs/src/main/asciidoc/accumulo_user_manual.asciidoc b/docs/src/main/asciidoc/accumulo_user_manual.asciidoc
index 4bb87cd..288a8bd 100644
--- a/docs/src/main/asciidoc/accumulo_user_manual.asciidoc
+++ b/docs/src/main/asciidoc/accumulo_user_manual.asciidoc
@@ -43,6 +43,8 @@
 
 include::chapters/iterator_design.txt[]
 
+include::chapters/iterator_test_harness.txt[]
+
 include::chapters/table_design.txt[]
 
 include::chapters/high_speed_ingest.txt[]
@@ -59,6 +61,8 @@
 
 include::chapters/kerberos.txt[]
 
+include::chapters/sampling.txt[]
+
 include::chapters/administration.txt[]
 
 include::chapters/multivolume.txt[]
diff --git a/docs/src/main/asciidoc/chapters/administration.txt b/docs/src/main/asciidoc/chapters/administration.txt
index daf6d58..a2dab8e 100644
--- a/docs/src/main/asciidoc/chapters/administration.txt
+++ b/docs/src/main/asciidoc/chapters/administration.txt
@@ -49,12 +49,12 @@
 |Port | Description | Property Name
 |4445 | Shutdown Port (Accumulo MiniCluster) | n/a
 |4560 | Accumulo monitor (for centralized log display) | monitor.port.log4j
+|9995 | Accumulo HTTP monitor | monitor.port.client
 |9997 | Tablet Server | tserver.port.client
+|9998 | Accumulo GC | gc.port.client
 |9999 | Master Server | master.port.client
 |12234 | Accumulo Tracer | trace.port.client
 |42424 | Accumulo Proxy Server | n/a
-|50091 | Accumulo GC | gc.port.client
-|50095 | Accumulo HTTP monitor | monitor.port.client
 |10001 | Master Replication service | master.replication.coordinator.port
 |10002 | TabletServer Replication service | replication.receipt.service.port
 |====
@@ -63,7 +63,10 @@
 ephemeral port is likely to be unique and not already bound. Thus, configuring ports to
 use +0+ instead of an explicit value, should, in most cases, work around any issues of
 running multiple distinct Accumulo instances (or any other process which tries to use the
-same default ports) on the same hardware.
+same default ports) on the same hardware. Finally, the *.port.client properties support a
+port range syntax (M-N), allowing the user to specify a range of ports for the service to
+attempt to bind. The ports in the range are tried in ascending order, starting at the low
+end of the range up to and including the high end of the range.
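The search order described above can be sketched as follows (illustrative only; this is not Accumulo's implementation, and the port numbers are simply defaults from the table above):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.Arrays;

// Sketch of the documented search order for a *.port.client range "M-N":
// candidates are tried one at a time, from the low end of the range up to
// and including the high end.
public class PortRange {

  // Expand "M-N" (or a single port "M") into candidate ports, low to high.
  static int[] candidates(String spec) {
    String[] parts = spec.split("-", 2);
    int low = Integer.parseInt(parts[0].trim());
    int high = parts.length == 2 ? Integer.parseInt(parts[1].trim()) : low;
    if (low > high) {
      throw new IllegalArgumentException("invalid range: " + spec);
    }
    int[] ports = new int[high - low + 1];
    for (int i = 0; i < ports.length; i++) {
      ports[i] = low + i;
    }
    return ports;
  }

  // Bind the first free port in the range, trying candidates in order.
  static int bindFirstFree(String spec) throws IOException {
    for (int port : candidates(spec)) {
      try (ServerSocket socket = new ServerSocket(port)) {
        return socket.getLocalPort();
      } catch (IOException portInUse) {
        // this port is taken; fall through to the next candidate
      }
    }
    throw new IOException("no free port in range " + spec);
  }

  public static void main(String[] args) throws IOException {
    System.out.println(Arrays.toString(candidates("9997-9999")));
  }
}
```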
 
 === Installation
 Choose a directory for the Accumulo installation. This directory will be referenced
@@ -345,6 +348,49 @@
 which leverage the user of custom properties should take these warnings into
 consideration. There is no enforcement of these warnings via the API.
 
+==== Configuring the ClassLoader
+
+Accumulo loads classes from the locations specified in the +general.classpaths+ property. Additionally, Accumulo will load classes
+from the locations specified in the +general.dynamic.classpaths+ property and will monitor and reload them if they change. The reloading 
+feature is useful during the development and testing of iterators as new or modified iterator classes can be deployed to Accumulo without
+having to restart the database.
+
+Accumulo also has an alternate configuration for the classloader which will allow it to load classes from remote locations. This mechanism
+uses Apache Commons VFS which enables locations such as http and hdfs to be used. This alternate configuration also uses the
++general.classpaths+ property in the same manner described above. It differs in that you need to configure the
++general.vfs.classpaths+ property instead of the +general.dynamic.classpaths+ property. As in the default configuration, this alternate
+configuration will also monitor the vfs locations for changes and reload if necessary.
+
+===== ClassLoader Contexts
+
+With the addition of the VFS-based classloader, we introduced the notion of classloader contexts. A context is identified
+by a name and references a set of locations from which to load classes. Contexts can be specified in the accumulo-site.xml file or added
+using the +config+ command in the shell. Below is an example of specifying the app1 context in the accumulo-site.xml file:
+
+[source,xml]
+<property>
+  <name>general.vfs.context.classpath.app1</name>
+  <value>hdfs://localhost:8020/applicationA/classpath/.*.jar,file:///opt/applicationA/lib/.*.jar</value>
+  <description>Application A classpath, loads jars from HDFS and local file system</description>
+</property>
+
+The default behavior follows the Java ClassLoader contract in that classes, if they exist, are loaded from the parent classloader first.
+You can override this behavior by delegating to the parent classloader only after looking in this classloader first. An example of this
+configuration is:
+
+[source,xml]
+<property>
+  <name>general.vfs.context.classpath.app1.delegation=post</name>
+  <value>hdfs://localhost:8020/applicationA/classpath/.*.jar,file:///opt/applicationA/lib/.*.jar</value>
+  <description>Application A classpath, loads jars from HDFS and local file system</description>
+</property>
+
+To use contexts in your application, set the +table.classpath.context+ property on your tables or call the +setClassLoaderContext()+ method on Scanner
+and BatchScanner, passing in the name of the context (app1 in the example above). Setting the property on the table allows your minor compaction, major
+compaction, and scan iterators to load classes from the locations defined by the context. Passing the context name to the scanners allows you to override
+the table setting to load only scan-time iterators from a different location.
+
+
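A sketch of assigning the app1 context to a table from the shell (the table name below is hypothetical):

```
root@myinstance> config -t app1_table -s table.classpath.context=app1
```
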
 === Initialization
 
 Accumulo must be initialized to create the structures it uses internally to locate
@@ -430,45 +476,49 @@
 ensure that the tabletserver is cleanly stopped and recovery will not need to be performed
 when the tablets are re-hosted.
 
+===== A note on rolling restarts
+
+For sufficiently large Accumulo clusters, restarting multiple TabletServers within a short window can place significant 
+load on the Master server.  If slightly lower availability is acceptable, this load can be reduced by globally setting 
++table.suspend.duration+ to a positive value.  
+
+With +table.suspend.duration+ set to, say, +5m+, Accumulo will wait 
+for 5 minutes for any dead TabletServer to return before reassigning that TabletServer's responsibilities to other TabletServers.
+If the TabletServer returns to the cluster before the specified timeout has elapsed, Accumulo will assign the TabletServer 
+its original responsibilities.
+
+It is important not to choose too large a value for +table.suspend.duration+, as during this time, all scans against the 
+data that TabletServer had hosted will block (or time out).
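For example, the suspension window described above could be set globally from the shell (a sketch; 5m is just the example duration used here):

```
root@myinstance> config -s table.suspend.duration=5m
```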
+
 ==== Running multiple TabletServers on a single node
 
 With very powerful nodes, it may be beneficial to run more than one TabletServer on a given
 node. This decision should be made carefully and with much deliberation as Accumulo is designed
 to be able to scale to using 10's of GB of RAM and 10's of CPU cores.
 
-To run multiple TabletServers on a single host, it is necessary to create multiple Accumulo configuration
-directories. Ensuring that these properties are appropriately set (and remain consistent) are an exercise
-for the user.
+To run multiple TabletServers on a single host you will need to change the +NUM_TSERVERS+ property
+in the accumulo-env.sh file from 1 to the number of TabletServers that you want to run. On NUMA
+hardware, with numactl installed, the TabletServer will interleave its memory allocations across
+the NUMA nodes and the processes will be scheduled on all the NUMA cores without restriction. To
+change this behavior you can uncomment the +TSERVER_NUMA_OPTIONS+ example in accumulo-env.sh and
+set the numactl options for each TabletServer.
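As a minimal sketch, assuming you want two TabletServers per host, accumulo-env.sh would contain:

```
NUM_TSERVERS=2
```

The appropriate numactl options for +TSERVER_NUMA_OPTIONS+ depend on your hardware; see the commented example in accumulo-env.sh.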
 
 Accumulo TabletServers bind certain ports on the host to accommodate remote procedure calls to/from
-other nodes. This requires additional configuration values in +accumulo-site.xml+:
+other nodes. Running more than one TabletServer on a host requires that you set the following
+properties in +accumulo-site.xml+:
 
-* +tserver.port.client+
-* +replication.receipt.service.port+
+  <property>
+    <name>tserver.port.client</name>
+    <value>0</value>
+  </property>
+  <property>
+    <name>replication.receipt.service.port</name>
+    <value>0</value>
+  </property>
 
-Normally, setting a value of +0+ for these configuration properties is sufficient. In some
-environment, the ports used by Accumulo must be well-known for security reasons and require a
-separate copy of the configuration files to use a static port for each TabletServer instance.
-
-It is also necessary to update the following exported variables in +accumulo-env.sh+.
-
-* +ACCUMULO_LOG_DIR+
-
-The values for these properties are left up to the user to define; there are no constraints
-other than ensuring that the directory exists and the user running Accumulo has the permission
-to read/write into that directory.
-
-Accumulo's provided scripts for stopping a cluster operate under the assumption that one process
-is running per host. As such, starting and stopping multiple TabletServers on one host requires
-more effort on the user. It is important to ensure that +ACCUMULO_CONF_DIR+ is correctly
-set for the instance of the TabletServer being started.
-
-  $ACCUMULO_CONF_DIR=$ACCUMULO_HOME/conf $ACCUMULO_HOME/bin/accumulo tserver --address <your_server_ip> &
-
-To stop TabletServers, the normal +stop-all.sh+ will stop all instances of TabletServers across all nodes.
-Using the provided +kill+ command by your operation system is an option to stop a single instance on
-a single node. +stop-server.sh+ can be used to stop all TabletServers on a single node.
-
+Accumulo's provided scripts for starting and stopping the cluster should work normally with multiple
+TabletServers on a host. Sanity checks are provided in the scripts and will output an error when there
+is a configuration mismatch.
 
 [[monitoring]]
 === Monitoring
@@ -476,7 +526,7 @@
 ==== Accumulo Monitor
 The Accumulo Monitor provides an interface for monitoring the status and health of
 Accumulo components. The Accumulo Monitor provides a web UI for accessing this information at
-+http://_monitorhost_:50095/+.
++http://_monitorhost_:9995/+.
 
 Things highlighted in yellow may be in need of attention.
 If anything is highlighted in red on the monitor page, it is something that definitely needs attention.
diff --git a/docs/src/main/asciidoc/chapters/design.txt b/docs/src/main/asciidoc/chapters/design.txt
index 34cd459..6c77cb6 100644
--- a/docs/src/main/asciidoc/chapters/design.txt
+++ b/docs/src/main/asciidoc/chapters/design.txt
@@ -175,6 +175,6 @@
 that are destined for the tablets they have now been assigned.
 
 TabletServer failures are noted on the Master's monitor page, accessible via
-+http://master-address:50095/monitor+.
++http://master-address:9995/monitor+.
 
 image::failure_handling.png[width=500]
diff --git a/docs/src/main/asciidoc/chapters/implementation.txt b/docs/src/main/asciidoc/chapters/implementation.txt
index 9ec66ff..520f538 100644
--- a/docs/src/main/asciidoc/chapters/implementation.txt
+++ b/docs/src/main/asciidoc/chapters/implementation.txt
@@ -49,7 +49,7 @@
 operation. Accumulo provides an Accumulo shell command to interact with fate.
 
 The +fate+ shell command accepts a number of arguments for different functionality:
-+list+/+print+, +fail+, +delete+.
++list+/+print+, +fail+, +delete+, +dump+.
 
 ==== List/Print
 
@@ -73,3 +73,14 @@
 holds. Like the fail command, this command should only be used in extreme circumstances
 by an administrator that understands the implications of the command they are about to
 invoke. It is not normal to invoke this command.
+
+==== Dump
+
+This command accepts zero or more transaction IDs.  If given no transaction IDs,
+it will dump all active transactions.  A FATE operation is composed of a
+sequence of REPOs.  To start a FATE transaction, a REPO is pushed onto a
+per-transaction REPO stack.  The top of the stack always contains the next
+REPO the FATE transaction should execute.  When a REPO succeeds, it may
+return another REPO, which is pushed onto the stack.  The +dump+ command
+prints all of the REPOs on each transaction's stack.  The REPOs are
+serialized to JSON in order to make them human readable.
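For example, dumping REPO stacks from the shell might look like this (the transaction ID below is hypothetical):

```
root@myinstance> fate dump
root@myinstance> fate dump 1234567890abcdef
```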
diff --git a/docs/src/main/asciidoc/chapters/iterator_test_harness.txt b/docs/src/main/asciidoc/chapters/iterator_test_harness.txt
new file mode 100644
index 0000000..91ae53a
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/iterator_test_harness.txt
@@ -0,0 +1,110 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License. You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Iterator Testing
+
+Iterators, while extremely powerful, are notoriously difficult to test. While the API defines
+the methods an Iterator must implement and each method's functionality, the actual invocation
+of these methods by Accumulo TabletServers can be surprisingly difficult to mimic in unit tests.
+
+The Apache Accumulo "Iterator Test Harness" is designed to provide a generalized testing framework
+for all Accumulo Iterators to leverage to identify common pitfalls in user-created Iterators.
+
+=== Framework Use
+
+The harness provides an abstract class for use with JUnit4. Users must define the following for this
+abstract class:
+
+  * A `SortedMap` of input data (`Key`-`Value` pairs)
+  * A `Range` to use in tests
+  * A `Map` of options (`String` to `String` pairs)
+  * A `SortedMap` of output data (`Key`-`Value` pairs)
+  * A list of `IteratorTestCase`s (these can be automatically discovered)
+
+The majority of effort a user must make is in creating the input dataset and the expected
+output dataset for the iterator being tested.
+
+=== Normal Test Outline
+
+Most iterator tests will follow the given outline:
+
+[source,java]
+----
+import java.util.List;
+import java.util.Map;
+import java.util.SortedMap;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.iteratortest.IteratorTestCaseFinder;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.junit4.BaseJUnit4IteratorTest;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.junit.runners.Parameterized.Parameters;
+
+public class MyIteratorTest extends BaseJUnit4IteratorTest {
+
+  @Parameters
+  public static Object[][] parameters() {
+    final IteratorTestInput input = createIteratorInput();
+    final IteratorTestOutput output = createIteratorOutput();
+    final List<IteratorTestCase> testCases = IteratorTestCaseFinder.findAllTestCases();
+    return BaseJUnit4IteratorTest.createParameters(input, output, testCases);
+  }
+
+  private static SortedMap<Key,Value> INPUT_DATA = createInputData();
+  private static SortedMap<Key,Value> OUTPUT_DATA = createOutputData();
+
+  private static SortedMap<Key,Value> createInputData() {
+    // TODO -- implement this method
+  }
+
+  private static SortedMap<Key,Value> createOutputData() {
+    // TODO -- implement this method
+  }
+
+  private static IteratorTestInput createIteratorInput() {
+    final Map<String,String> options = createIteratorOptions(); 
+    final Range range = createRange();
+    return new IteratorTestInput(MyIterator.class, options, range, INPUT_DATA);
+  }
+
+  private static Map<String,String> createIteratorOptions() {
+    // TODO -- implement this method
+    // Tip: Use INPUT_DATA if helpful in generating output
+  }
+
+  private static Range createRange() {
+    // TODO -- implement this method
+  }
+
+  private static IteratorTestOutput createIteratorOutput() {
+    return new IteratorTestOutput(OUTPUT_DATA);
+  }
+
+}
+----
+
+=== Limitations
+
+While the provided `IteratorTestCase`s should exercise common edge-cases in user iterators,
+there are still many limitations to the existing test harness. Some of them are:
+
+  * Can only specify a single iterator, not many (a "stack")
+  * No control over provided IteratorEnvironment for tests
+  * Exercising delete keys (especially with major compactions that do not include all files)
+
+These are left as future improvements to the harness.
diff --git a/docs/src/main/asciidoc/chapters/sampling.txt b/docs/src/main/asciidoc/chapters/sampling.txt
new file mode 100644
index 0000000..f035c56
--- /dev/null
+++ b/docs/src/main/asciidoc/chapters/sampling.txt
@@ -0,0 +1,86 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Sampling
+
+=== Overview
+
+Accumulo has the ability to generate and scan a per-table set of sample data.
+This sample data is kept up to date as a table is mutated.  Which key-value
+pairs are placed in the sample data is configurable per table.
+
+This feature can be used for query estimation and optimization.  For an example
+of estimation, assume an Accumulo table is configured to generate a sample
+containing one millionth of a table's data.  If a query is executed against the
+sample and returns one thousand results, then the same query against all the
+data would probably return a billion results.  A nice property of having
+Accumulo generate the sample is that it is always up to date, so estimates
+will be accurate even when querying the most recently written data.
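The scale-up arithmetic described above can be sketched as follows (illustrative only; the method name is not part of any Accumulo API):

```java
// If the sampler keeps roughly 1 of every `modulus` key-value pairs, a
// count observed against the sample scales up to an estimated count over
// the full table.
public class SampleEstimate {

  static long estimateTotal(long sampleCount, long modulus) {
    // multiplyExact throws rather than silently overflowing
    return Math.multiplyExact(sampleCount, modulus);
  }

  public static void main(String[] args) {
    // 1,000 results against a one-millionth sample: about a billion overall
    System.out.println(estimateTotal(1000, 1_000_000));
  }
}
```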
+
+An example of a query optimization is an iterator using sample data to get an
+estimate, and then making decisions based on the estimate.
+
+=== Configuring
+
+In order to use sampling, an Accumulo table must be configured with a class that
+implements +org.apache.accumulo.core.sample.Sampler+ along with options for
+that class.  For guidance on implementing a Sampler see that interface's
+javadoc.  Accumulo provides a few implementations out of the box.   For
+information on how to use the samplers that ship with Accumulo look in the
+package `org.apache.accumulo.core.sample` and consult the javadoc of the
+classes there.  See +README.sample+ and +SampleExample.java+ for examples of
+how to configure a Sampler on a table.
+
+Once a table is configured with a sampler all writes after that point will
+generate sample data.  For data written before sampling was configured sample
+data will not be present.  A compaction can be initiated that only compacts the
+files in the table that do not have sample data.   The example readme shows how
+to do this.
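As a sketch of shell configuration (the table name is hypothetical; the sampler class and option names follow the RowSampler that ships with Accumulo, so verify them against the javadoc and README.sample):

```
root@myinstance> config -t mytable -s table.sampler=org.apache.accumulo.core.client.sample.RowSampler
root@myinstance> config -t mytable -s table.sampler.opt.hasher=murmur3_32
root@myinstance> config -t mytable -s table.sampler.opt.modulus=1009
```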
+
+If the sampling configuration of a table is changed, then Accumulo will start
+generating new sample data with the new configuration.   However old data will
+still have sample data generated with the previous configuration.  A selective
+compaction can also be issued in this case to regenerate the sample data.
+
+=== Scanning sample data
+
+In order to scan sample data, use the +setSamplerConfiguration(...)+ method on
++Scanner+ or +BatchScanner+.  Please consult this method's javadoc for more
+information.
+
+Sample data can also be scanned from within an Accumulo
++SortedKeyValueIterator+.  To see how to do this look at the example iterator
+referenced in README.sample.  Also, consult the javadoc on
++org.apache.accumulo.core.iterators.IteratorEnvironment.cloneWithSamplingEnabled()+.
+
+MapReduce jobs using the +AccumuloInputFormat+ can also read sample data.  See
+the javadoc for the +setSamplerConfiguration()+ method on
++AccumuloInputFormat+.
+
+Scans over sample data will throw a +SampleNotPresentException+ in the following cases:
+
+. sample data is not present
+. sample data is present but was generated with multiple configurations
+. sample data is partially present
+
+So a scan over sample data can only succeed if all data written has sample data
+generated with the same configuration.
+
+=== Bulk import
+
+When generating rfiles to bulk import into Accumulo, those rfiles can contain
+sample data.  To use this feature, look at the javadoc on the
++AccumuloFileOutputFormat.setSampler(...)+ method.
+
diff --git a/docs/src/main/asciidoc/chapters/shell.txt b/docs/src/main/asciidoc/chapters/shell.txt
index 7afcd7d..a1cdd00 100644
--- a/docs/src/main/asciidoc/chapters/shell.txt
+++ b/docs/src/main/asciidoc/chapters/shell.txt
@@ -127,3 +127,32 @@
 
 root@myinstance bobstable> revoke System.CREATE_TABLE -s -u bob
 ----
+
+=== JSR-223 Support in the Shell
+
+The script command can be used to invoke programs written in languages supported by installed JSR-223
+engines. You can get a list of installed engines with the -l argument. Below is an example of the output
+of the command when running the Shell with Java 7.
+
+----
+root@fake> script -l
+    Engine Alias: ECMAScript
+    Engine Alias: JavaScript
+    Engine Alias: ecmascript
+    Engine Alias: javascript
+    Engine Alias: js
+    Engine Alias: rhino
+    Language: ECMAScript (1.8)
+    Script Engine: Mozilla Rhino (1.7 release 3 PRERELEASE)
+ScriptEngineFactory Info
+----
+
+A list of compatible languages can be found at https://en.wikipedia.org/wiki/List_of_JVM_languages. The
+Rhino JavaScript engine is provided with the JVM. Typically, putting a jar on the classpath is all that is
+needed to install a new engine.
+
+When writing scripts to run in the shell, a variable called +connection+ is already available
+to you. This variable is a reference to an Accumulo Connector object, the same connection that the Shell
+is using to communicate with the Accumulo servers. At this point you can use any of the public API methods
+within your script. See the script command's help for all of the execution options. Script and script
+invocation examples can be found in ACCUMULO-1399.
diff --git a/docs/src/main/asciidoc/chapters/troubleshooting.txt b/docs/src/main/asciidoc/chapters/troubleshooting.txt
index cd2923c..667303f 100644
--- a/docs/src/main/asciidoc/chapters/troubleshooting.txt
+++ b/docs/src/main/asciidoc/chapters/troubleshooting.txt
@@ -44,7 +44,7 @@
 components that make up a running Accumulo instance. It will highlight
 unusual or unexpected conditions.
 
-*A*: Point your browser to the monitor (typically the master host, on port 50095).  Is anything red or yellow?
+*A*: Point your browser to the monitor (typically the master host, on port 9995).  Is anything red or yellow?
 
 *Q*: My browser is reporting connection refused, and I cannot get to the monitor
 
@@ -65,7 +65,7 @@
 It is sometimes helpful to use a text-only browser to sanity-check the
 monitor while on the machine running the monitor:
 
-    $ links http://localhost:50095
+    $ links http://localhost:9995
 
 *A*: Verify that you are not firewalled from the monitor if it is running on a remote host.
 
@@ -229,6 +229,61 @@
 
 *A*: Ensure the tablet server JVM is not running low on memory.
 
+*Q*: I'm seeing errors in tablet server logs that include the words "MutationsRejectedException" and "# constraint violations: 1". Moments after that the server died.
+
+The error you are seeing is part of a failing tablet server scenario.
+This is a bit complicated, so let's name two of your tablet servers A and B.
+
+Tablet server A is hosting a tablet, let's call it a-tablet.
+
+Tablet server B is hosting a metadata tablet, let's call it m-tablet.
+
+m-tablet records the information about a-tablet, for example, the names of the files it is using to store data.
+
+When A ingests some data, it eventually flushes the updates from memory to a file.
+
+Tablet server A then writes this new information to m-tablet, on Tablet server B.
+
+Here's a likely failure scenario:
+
+Tablet server A does not have enough memory for all the processes running on it.
+The operating system sees a large chunk of the tablet server being unused, and swaps it out to disk to make room for other processes.
+Tablet server A does a java memory garbage collection, which causes it to start using all the memory allocated to it.
+As the server starts pulling data from swap, it runs very slowly.
+It fails to send the keep-alive messages to zookeeper in a timely fashion, and it loses its zookeeper session.
+
+But it is running so slowly that it takes a moment to realize it should no longer be hosting tablets.
+
+The thread that is flushing a-tablet memory attempts to update m-tablet with the new file information.
+
+Fortunately there's a constraint on m-tablet.
+Mutations to the metadata table must contain a valid zookeeper session.
+This prevents tablet server A from making updates to m-tablet when it no longer has the right to host the tablet.
+
+The "MutationsRejectedException" error is from tablet server A making an update to tablet server B's m-tablet.
+It's getting a constraint violation: tablet server A has lost its zookeeper session, and will fail momentarily.
+
+*A*: Ensure that memory is not over-allocated.  Monitor swap usage, or turn swap off.
+
+*Q*: My accumulo client is getting a MutationsRejectedException. The monitor is displaying "No Such SessionID" errors.
+
+When your client starts sending mutations to Accumulo, it creates a session. Once the session is created,
+mutations are streamed to Accumulo, without acknowledgement, against this session.  Once the client is done,
+it will close the session, and get an acknowledgement.
+
+If the client fails to communicate with Accumulo for too long, the server will release the session, assuming
+that the client has died. If the client then attempts to send more mutations against the session, you will see
+"No Such SessionID" errors on the server, and MutationsRejectedExceptions in the client.
+
+The client library should be either actively using the connection to the tablet servers,
+or closing the connection and sessions. If the session times out, something is causing your client
+to pause.
+
+The most frequent source of these pauses is Java garbage collection, triggered by
+the JVM running out of memory or being swapped out to disk.
+
+*A*: Ensure your client has adequate memory and is not being swapped out to disk.
+
 ### Tools
 
 The accumulo script can be used to run classes from the command line.
@@ -766,3 +821,8 @@
 slows down ingest performance, so knowing there are many files like this tells you that the system
 is struggling to keep up with ingest vs the compaction strategy which reduces the number of files.
 
+### HDFS Decommissioning Issues
+
+*Q*: My Hadoop DataNode is hung for hours trying to decommission.
+
+*A*: Write Ahead Logs stay open until they hit the size threshold, which could take many hours or days in some cases. These open files will prevent a DataNode from finishing its decommissioning process (HDFS-3599) in some versions of Hadoop 2. If you stop the DataNode, the WALog file will not be closed and you could lose data. To work around this issue, WALogs are now also closed after a time period specified by the property +tserver.walog.max.age+, which defaults to 24 hours.
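Assuming the property name given above, the age threshold can be adjusted from the Accumulo shell; the `6h` value here is only illustrative:

```
root@instance> config -s tserver.walog.max.age=6h
```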
diff --git a/docs/src/main/resources/examples/README b/docs/src/main/resources/examples/README
index 4211050..03c2e05 100644
--- a/docs/src/main/resources/examples/README
+++ b/docs/src/main/resources/examples/README
@@ -80,6 +80,8 @@
    README.rowhash:     Using MapReduce to read a table and write to a new
                        column in the same table.
 
+   README.sample:      Building and using sample data in Accumulo.
+
    README.shard:       Using the intersecting iterator with a term index
                        partitioned by document.
 
diff --git a/docs/src/main/resources/examples/README.classpath b/docs/src/main/resources/examples/README.classpath
index 79da239..710560f 100644
--- a/docs/src/main/resources/examples/README.classpath
+++ b/docs/src/main/resources/examples/README.classpath
@@ -29,7 +29,7 @@
 
 Execute following in Accumulo shell to setup classpath context
 
-    root@test15> config -s general.vfs.context.classpath.cx1=hdfs://<namenode host>:<namenode port>/user1/lib
+    root@test15> config -s general.vfs.context.classpath.cx1=hdfs://<namenode host>:<namenode port>/user1/lib/[^.].*.jar
 
 Create a table
 
diff --git a/docs/src/main/resources/examples/README.helloworld b/docs/src/main/resources/examples/README.helloworld
index 7d41ba3..618e301 100644
--- a/docs/src/main/resources/examples/README.helloworld
+++ b/docs/src/main/resources/examples/README.helloworld
@@ -35,7 +35,7 @@
 
 On the accumulo status page at the URL below (where 'master' is replaced with the name or IP of your accumulo master), you should see 50K entries
 
-    http://master:50095/
+    http://master:9995/
 
 To view the entries, use the shell to scan the table:
 
diff --git a/docs/src/main/resources/examples/README.sample b/docs/src/main/resources/examples/README.sample
new file mode 100644
index 0000000..3642cc6
--- /dev/null
+++ b/docs/src/main/resources/examples/README.sample
@@ -0,0 +1,192 @@
+Title: Apache Accumulo Batch Writing and Scanning Example
+Notice:    Licensed to the Apache Software Foundation (ASF) under one
+           or more contributor license agreements.  See the NOTICE file
+           distributed with this work for additional information
+           regarding copyright ownership.  The ASF licenses this file
+           to you under the Apache License, Version 2.0 (the
+           "License"); you may not use this file except in compliance
+           with the License.  You may obtain a copy of the License at
+           .
+             http://www.apache.org/licenses/LICENSE-2.0
+           .
+           Unless required by applicable law or agreed to in writing,
+           software distributed under the License is distributed on an
+           "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+           KIND, either express or implied.  See the License for the
+           specific language governing permissions and limitations
+           under the License.
+
+
+Basic Sampling Example
+----------------------
+
+Accumulo supports building a set of sample data that can be efficiently
+accessed by scanners.  What data is included in the sample set is configurable.
+Below, some data representing documents are inserted.  
+
+    root@instance sampex> createtable sampex
+    root@instance sampex> insert 9255 doc content 'abcde'
+    root@instance sampex> insert 9255 doc url file://foo.txt
+    root@instance sampex> insert 8934 doc content 'accumulo scales'
+    root@instance sampex> insert 8934 doc url file://accumulo_notes.txt
+    root@instance sampex> insert 2317 doc content 'milk, eggs, bread, parmigiano-reggiano'
+    root@instance sampex> insert 2317 doc url file://groceries/9.txt
+    root@instance sampex> insert 3900 doc content 'EC2 ate my homework'
+    root@instance sampex> insert 3900 doc uril file://final_project.txt
+
+Below, the table sampex is configured to build a sample set.  The configuration
+causes Accumulo to include any row where `murmur3_32(row) % 3 == 0` in the
+table's sample data.
+
+    root@instance sampex> config -t sampex -s table.sampler.opt.hasher=murmur3_32
+    root@instance sampex> config -t sampex -s table.sampler.opt.modulus=3
+    root@instance sampex> config -t sampex -s table.sampler=org.apache.accumulo.core.client.sample.RowSampler
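The selection rule behind this configuration can be sketched in plain Java. The hash function below is a stand-in (Accumulo's RowSampler uses murmur3_32, not `String.hashCode`); only the hash-modulus rule is illustrated:

```java
import java.util.ArrayList;
import java.util.List;

public class RowSampleSketch {
  // Stand-in hash for illustration; RowSampler actually hashes the row
  // with murmur3_32.  Masking keeps the value non-negative for the modulus.
  static int hash(String row) {
    return row.hashCode() & 0x7fffffff;
  }

  // A row belongs to the sample when its hash is divisible by the modulus,
  // so roughly 1/modulus of all rows are sampled.
  static boolean inSample(String row, int modulus) {
    return hash(row) % modulus == 0;
  }

  public static void main(String[] args) {
    int modulus = 3;
    List<String> sample = new ArrayList<>();
    for (String row : new String[] {"9255", "8934", "2317", "3900"}) {
      if (inSample(row, modulus))
        sample.add(row);
    }
    System.out.println(sample);
  }
}
```

Because membership depends only on the row and the sampler configuration, every server computes the same sample set without coordination.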
+
+Below, attempting to scan the sample returns an error.  This is because data
+was inserted before the sample set was configured.
+
+    root@instance sampex> scan --sample
+    2015-09-09 12:21:50,643 [shell.Shell] ERROR: org.apache.accumulo.core.client.SampleNotPresentException: Table sampex(ID:2) does not have sampling configured or built
+
+To remedy this problem, the following command will flush in-memory data and
+compact any files that do not contain the correct sample data.
+
+    root@instance sampex> compact -t sampex --sf-no-sample
+
+After the compaction, the sample scan works.  
+
+    root@instance sampex> scan --sample
+    2317 doc:content []    milk, eggs, bread, parmigiano-reggiano
+    2317 doc:url []    file://groceries/9.txt
+
+The commands below show that updates to data in the sample are seen when
+scanning the sample.
+
+    root@instance sampex> insert 2317 doc content 'milk, eggs, bread, parmigiano-reggiano, butter'
+    root@instance sampex> scan --sample
+    2317 doc:content []    milk, eggs, bread, parmigiano-reggiano, butter
+    2317 doc:url []    file://groceries/9.txt
+
+In order to make scanning the sample fast, sample data is partitioned as data is
+written to Accumulo.  This means that if the sample configuration is changed,
+data written previously is partitioned using different criteria.  Accumulo
+will detect this situation and fail sample scans.  The commands below show this
+failure and fixing the problem with a compaction.
+
+    root@instance sampex> config -t sampex -s table.sampler.opt.modulus=2
+    root@instance sampex> scan --sample
+    2015-09-09 12:22:51,058 [shell.Shell] ERROR: org.apache.accumulo.core.client.SampleNotPresentException: Table sampex(ID:2) does not have sampling configured or built
+    root@instance sampex> compact -t sampex --sf-no-sample
+    2015-09-09 12:23:07,242 [shell.Shell] INFO : Compaction of table sampex started for given range
+    root@instance sampex> scan --sample
+    2317 doc:content []    milk, eggs, bread, parmigiano-reggiano
+    2317 doc:url []    file://groceries/9.txt
+    3900 doc:content []    EC2 ate my homework
+    3900 doc:uril []    file://final_project.txt
+    9255 doc:content []    abcde
+    9255 doc:url []    file://foo.txt
+
+The example above is replicated in a Java program using the Accumulo API.
+Below is the program name and the command to run it.
+
+    ./bin/accumulo org.apache.accumulo.examples.simple.sample.SampleExample -i instance -z localhost -u root -p secret
+
+The commands below look under the hood to give some insight into how this
+feature works.  The commands determine what files the sampex table is using.
+
+    root@instance sampex> tables -l
+    accumulo.metadata    =>        !0
+    accumulo.replication =>      +rep
+    accumulo.root        =>        +r
+    sampex               =>         2
+    trace                =>         1
+    root@instance sampex> scan -t accumulo.metadata -c file -b 2 -e 2<
+    2< file:hdfs://localhost:10000/accumulo/tables/2/default_tablet/A000000s.rf []    702,8
+
+Below shows running `accumulo rfile-info` on the file above.  The output shows the
+RFile has a normal default locality group and a sample default locality group.
+It also shows the configuration used to create the sample locality
+group.  The sample configuration within an RFile must match the table's sample
+configuration for sample scans to work.
+
+    $ ./bin/accumulo rfile-info hdfs://localhost:10000/accumulo/tables/2/default_tablet/A000000s.rf
+    Reading file: hdfs://localhost:10000/accumulo/tables/2/default_tablet/A000000s.rf
+    RFile Version            : 8
+    
+    Locality group           : <DEFAULT>
+    	Start block            : 0
+    	Num   blocks           : 1
+    	Index level 0          : 35 bytes  1 blocks
+    	First key              : 2317 doc:content [] 1437672014986 false
+    	Last key               : 9255 doc:url [] 1437672014875 false
+    	Num entries            : 8
+    	Column families        : [doc]
+    
+    Sample Configuration     :
+    	Sampler class          : org.apache.accumulo.core.client.sample.RowSampler
+    	Sampler options        : {hasher=murmur3_32, modulus=2}
+
+    Sample Locality group    : <DEFAULT>
+    	Start block            : 0
+    	Num   blocks           : 1
+    	Index level 0          : 36 bytes  1 blocks
+    	First key              : 2317 doc:content [] 1437672014986 false
+    	Last key               : 9255 doc:url [] 1437672014875 false
+    	Num entries            : 6
+    	Column families        : [doc]
+    
+    Meta block     : BCFile.index
+          Raw size             : 4 bytes
+          Compressed size      : 12 bytes
+          Compression type     : gz
+
+    Meta block     : RFile.index
+          Raw size             : 309 bytes
+          Compressed size      : 176 bytes
+          Compression type     : gz
+
+
+Shard Sampling Example
+-------------------------
+
+`README.shard` shows how to index and search files using Accumulo.  That
+example indexes documents into a table named `shard`.  The indexing scheme used
+in that example places the document name in the column qualifier.  A useful
+sample of this indexing scheme should contain all data for any document in the
+sample.   To accomplish this, the following commands build a sample for the
+shard table based on the column qualifier.
+
+    root@instance shard> config -t shard -s table.sampler.opt.hasher=murmur3_32
+    root@instance shard> config -t shard -s table.sampler.opt.modulus=101
+    root@instance shard> config -t shard -s table.sampler.opt.qualifier=true
+    root@instance shard> config -t shard -s table.sampler=org.apache.accumulo.core.client.sample.RowColumnSampler
+    root@instance shard> compact -t shard --sf-no-sample -w
+    2015-07-23 15:00:09,280 [shell.Shell] INFO : Compacting table ...
+    2015-07-23 15:00:10,134 [shell.Shell] INFO : Compaction of table shard completed for given range
+
+After enabling sampling, the command below counts the number of documents in
+the sample containing the words `import` and `int`.     
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query --sample -i instance16 -z localhost -t shard -u root -p secret import int | fgrep '.java' | wc
+         11      11    1246
+
+The command below counts the total number of documents containing the words
+`import` and `int`.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance16 -z localhost -t shard -u root -p secret import int | fgrep '.java' | wc
+       1085    1085  118175
+
+The count of 11 out of 1085 total is around what would be expected for a modulus
+of 101.  Querying the sample first provides a quick way to estimate how much data
+the real query will bring back.
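The estimate implied by these counts is simply the sample count scaled up by the modulus, since each sampled item stands in for roughly `modulus` items in the full table. A one-method sketch:

```java
public class SampleEstimate {
  // Scale a count from the sample back up to an estimate of the full total.
  static long estimateTotal(long sampleCount, int modulus) {
    return sampleCount * modulus;
  }

  public static void main(String[] args) {
    // 11 sampled matches with a modulus of 101 suggests about 1111 matches
    // in total, close to the actual 1085 seen above.
    System.out.println(estimateTotal(11, 101));
  }
}
```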
+
+Another way sample data could be used with the shard example is with a
+specialized iterator.  In the examples source code there is an iterator named
+CutoffIntersectingIterator.  This iterator first checks how many documents are
+found in the sample data.  If the sample contains too many documents, the
+iterator returns nothing.  Otherwise it proceeds to query the full data set.
+To experiment with this iterator, use the following command.  The
+`--sampleCutoff` option below will cause the query to return nothing if, based
+on the sample, it appears the query would return more than 1000 documents.
+
+    $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query --sampleCutoff 1000 -i instance16 -z localhost -t shard -u root -p secret import int | fgrep '.java' | wc
diff --git a/examples/simple/pom.xml b/examples/simple/pom.xml
index 486c551..b15d774 100644
--- a/examples/simple/pom.xml
+++ b/examples/simple/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-examples-simple</artifactId>
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchScanner.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchScanner.java
index 4bd1e61..e762e7d 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchScanner.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchScanner.java
@@ -116,8 +116,8 @@
    */
   static boolean doRandomQueries(int num, long min, long max, int evs, Random r, BatchScanner tsbr) {
 
-    HashSet<Range> ranges = new HashSet<Range>(num);
-    HashMap<Text,Boolean> expectedRows = new java.util.HashMap<Text,Boolean>();
+    HashSet<Range> ranges = new HashSet<>(num);
+    HashMap<Text,Boolean> expectedRows = new java.util.HashMap<>();
 
     generateRandomQueries(num, min, max, r, ranges, expectedRows);
 
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchWriter.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchWriter.java
index 05a737f..51aee8f 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchWriter.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RandomBatchWriter.java
@@ -143,7 +143,7 @@
     ColumnVisibility cv = opts.visiblity;
 
     // Generate num unique row ids in the given range
-    HashSet<Long> rowids = new HashSet<Long>(opts.num);
+    HashSet<Long> rowids = new HashSet<>(opts.num);
     while (rowids.size() < opts.num) {
       rowids.add((abs(r.nextLong()) % (opts.max - opts.min)) + opts.min);
     }
@@ -156,12 +156,13 @@
       bw.close();
     } catch (MutationsRejectedException e) {
       if (e.getSecurityErrorCodes().size() > 0) {
-        HashMap<String,Set<SecurityErrorCode>> tables = new HashMap<String,Set<SecurityErrorCode>>();
+        HashMap<String,Set<SecurityErrorCode>> tables = new HashMap<>();
         for (Entry<TabletId,Set<SecurityErrorCode>> ke : e.getSecurityErrorCodes().entrySet()) {
-          Set<SecurityErrorCode> secCodes = tables.get(ke.getKey().getTableId().toString());
+          String tableId = ke.getKey().getTableId().toString();
+          Set<SecurityErrorCode> secCodes = tables.get(tableId);
           if (secCodes == null) {
-            secCodes = new HashSet<SecurityErrorCode>();
-            tables.put(ke.getKey().getTableId().toString(), secCodes);
+            secCodes = new HashSet<>();
+            tables.put(tableId, secCodes);
           }
           secCodes.addAll(ke.getValue());
         }
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/ReadWriteExample.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/ReadWriteExample.java
index 70effb1..44d4b6f 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/ReadWriteExample.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/ReadWriteExample.java
@@ -88,7 +88,7 @@
 
     // create table
     if (opts.createtable) {
-      SortedSet<Text> partitionKeys = new TreeSet<Text>();
+      SortedSet<Text> partitionKeys = new TreeSet<>();
       for (int i = Byte.MIN_VALUE; i < Byte.MAX_VALUE; i++)
         partitionKeys.add(new Text(new byte[] {(byte) i}));
       conn.tableOperations().create(opts.getTableName());
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RowOperations.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RowOperations.java
index d0898f0..007619d 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RowOperations.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/client/RowOperations.java
@@ -48,7 +48,7 @@
   private static final Logger log = LoggerFactory.getLogger(RowOperations.class);
 
   private static Connector connector;
-  private static String table = "example";
+  private static String tableName = "example";
   private static BatchWriter bw;
 
   public static void main(String[] args) throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException,
@@ -63,7 +63,7 @@
     connector = opts.getConnector();
 
     // lets create an example table
-    connector.tableOperations().create(table);
+    connector.tableOperations().create(tableName);
 
     // lets create 3 rows of information
     Text row1 = new Text("row1");
@@ -98,7 +98,7 @@
     mut3.put(new Text("column"), col4, System.currentTimeMillis(), new Value("This is the value for this key".getBytes(UTF_8)));
 
     // Now we'll make a Batch Writer
-    bw = connector.createBatchWriter(table, bwOpts.getBatchWriterConfig());
+    bw = connector.createBatchWriter(tableName, bwOpts.getBatchWriterConfig());
 
     // And add the mutations
     bw.addMutation(mut1);
@@ -160,7 +160,7 @@
     bw.close();
 
     // and lets clean up our mess
-    connector.tableOperations().delete(table);
+    connector.tableOperations().delete(tableName);
 
     // fin~
 
@@ -204,7 +204,7 @@
    */
   private static Scanner getRow(ScannerOpts scanOpts, Text row) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     // Create a scanner
-    Scanner scanner = connector.createScanner(table, Authorizations.EMPTY);
+    Scanner scanner = connector.createScanner(tableName, Authorizations.EMPTY);
     scanner.setBatchSize(scanOpts.scanBatchSize);
     // Say start key is the one with key of row
     // and end key is the one that immediately follows the row
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/constraints/AlphaNumKeyConstraint.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/constraints/AlphaNumKeyConstraint.java
index f265e18..14e3c8e 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/constraints/AlphaNumKeyConstraint.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/constraints/AlphaNumKeyConstraint.java
@@ -55,7 +55,7 @@
 
   private Set<Short> addViolation(Set<Short> violations, short violation) {
     if (violations == null) {
-      violations = new LinkedHashSet<Short>();
+      violations = new LinkedHashSet<>();
       violations.add(violation);
     } else if (!violations.contains(violation)) {
       violations.add(violation);
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/FileCount.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/FileCount.java
index dabb4c1..111fae0 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/FileCount.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/FileCount.java
@@ -233,7 +233,7 @@
     }
   }
 
-  FileCount(Opts opts, ScannerOpts scanOpts, BatchWriterOpts bwOpts) throws Exception {
+  public FileCount(Opts opts, ScannerOpts scanOpts, BatchWriterOpts bwOpts) throws Exception {
     this.opts = opts;
     this.scanOpts = scanOpts;
     this.bwOpts = bwOpts;
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/Ingest.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/Ingest.java
index 17c9ee8..c0808fe 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/Ingest.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/Ingest.java
@@ -132,7 +132,7 @@
     @Parameter(names = "--chunkSize", description = "the size of chunks when breaking down files")
     int chunkSize = 100000;
     @Parameter(description = "<dir> { <dir> ... }")
-    List<String> directories = new ArrayList<String>();
+    List<String> directories = new ArrayList<>();
   }
 
   public static void main(String[] args) throws Exception {
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/QueryUtil.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/QueryUtil.java
index a79b9d2..2c76264 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/QueryUtil.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/dirlist/QueryUtil.java
@@ -143,7 +143,7 @@
       path = path.substring(0, path.length() - 1);
     Scanner scanner = conn.createScanner(tableName, auths);
     scanner.setRange(new Range(getRow(path)));
-    Map<String,String> data = new TreeMap<String,String>();
+    Map<String,String> data = new TreeMap<>();
     for (Entry<Key,Value> e : scanner) {
       String type = getType(e.getKey().getColumnFamily());
       data.put("fullname", e.getKey().getRow().toString().substring(3));
@@ -161,7 +161,7 @@
   public Map<String,Map<String,String>> getDirList(String path) throws TableNotFoundException {
     if (!path.endsWith("/"))
       path = path + "/";
-    Map<String,Map<String,String>> fim = new TreeMap<String,Map<String,String>>();
+    Map<String,Map<String,String>> fim = new TreeMap<>();
     Scanner scanner = conn.createScanner(tableName, auths);
     scanner.setRange(Range.prefix(getRow(path)));
     for (Entry<Key,Value> e : scanner) {
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputFormat.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputFormat.java
index f5da4e5..bb7715b 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputFormat.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputFormat.java
@@ -45,8 +45,8 @@
       @Override
       public void initialize(InputSplit inSplit, TaskAttemptContext attempt) throws IOException {
         super.initialize(inSplit, attempt);
-        peekingScannerIterator = new PeekingIterator<Entry<Key,Value>>(scannerIterator);
-        currentK = new ArrayList<Entry<Key,Value>>();
+        peekingScannerIterator = new PeekingIterator<>(scannerIterator);
+        currentK = new ArrayList<>();
         currentV = new ChunkInputStream();
       }
 
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStream.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStream.java
index 0e6e319..1774227 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStream.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStream.java
@@ -59,7 +59,7 @@
     if (source != null)
       throw new IOException("setting new source without closing old one");
     this.source = in;
-    currentVis = new TreeSet<Text>();
+    currentVis = new TreeSet<>();
     count = pos = 0;
     if (!source.hasNext()) {
       log.debug("source has no next");
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataIngest.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataIngest.java
index e899ff5..1a0ec5d 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataIngest.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataIngest.java
@@ -178,7 +178,7 @@
     int chunkSize = 64 * 1024;
 
     @Parameter(description = "<file> { <file> ... }")
-    List<String> files = new ArrayList<String>();
+    List<String> files = new ArrayList<>();
   }
 
   public static void main(String[] args) throws Exception {
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataQuery.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataQuery.java
index 75e32ae..48746d0 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataQuery.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/FileDataQuery.java
@@ -49,7 +49,7 @@
       throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
     ZooKeeperInstance instance = new ZooKeeperInstance(ClientConfiguration.loadDefault().withInstance(instanceName).withZkHosts(zooKeepers));
     conn = instance.getConnector(user, token);
-    lastRefs = new ArrayList<Entry<Key,Value>>();
+    lastRefs = new ArrayList<>();
     cis = new ChunkInputStream();
     scanner = conn.createScanner(tableName, auths);
   }
@@ -62,7 +62,7 @@
     scanner.setRange(new Range(hash));
     scanner.setBatchSize(1);
     lastRefs.clear();
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(scanner.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(scanner.iterator());
     if (pi.hasNext()) {
       while (!pi.peek().getKey().getColumnFamily().equals(FileDataIngest.CHUNK_CF)) {
         lastRefs.add(pi.peek());
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/KeyUtil.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/KeyUtil.java
index 2f09785..f9c52ba 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/KeyUtil.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/KeyUtil.java
@@ -50,7 +50,7 @@
    * @return an array of strings
    */
   public static String[] splitNullSepText(Text t) {
-    ArrayList<String> s = new ArrayList<String>();
+    ArrayList<String> s = new ArrayList<>();
     byte[] b = t.getBytes();
     int lastindex = 0;
     for (int i = 0; i < t.getLength(); i++) {
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/VisibilityCombiner.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/VisibilityCombiner.java
index ab2e7fc..b205ec1 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/VisibilityCombiner.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/VisibilityCombiner.java
@@ -25,7 +25,7 @@
  */
 public class VisibilityCombiner {
 
-  private TreeSet<String> visibilities = new TreeSet<String>();
+  private TreeSet<String> visibilities = new TreeSet<>();
 
   void add(ByteSequence cv) {
     if (cv.length() == 0)
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/isolation/InterferenceTest.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/isolation/InterferenceTest.java
index 9fe6857..a2afcdf 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/isolation/InterferenceTest.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/isolation/InterferenceTest.java
@@ -108,7 +108,7 @@
 
         // all columns in a row should have the same value,
         // use this hash set to track that
-        HashSet<String> values = new HashSet<String>();
+        HashSet<String> values = new HashSet<>();
 
         for (Entry<Key,Value> entry : scanner) {
           if (row == null)
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/NGramIngest.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/NGramIngest.java
index 441f6ad..3355454 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/NGramIngest.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/NGramIngest.java
@@ -87,7 +87,7 @@
     if (!opts.getConnector().tableOperations().exists(opts.getTableName())) {
       log.info("Creating table " + opts.getTableName());
       opts.getConnector().tableOperations().create(opts.getTableName());
-      SortedSet<Text> splits = new TreeSet<Text>();
+      SortedSet<Text> splits = new TreeSet<>();
       String numbers[] = "1 2 3 4 5 6 7 8 9".split("\\s");
       String lower[] = "a b c d e f g h i j k l m n o p q r s t u v w x y z".split("\\s");
       String upper[] = "A B C D E F G H I J K L M N O P Q R S T U V W X Y Z".split("\\s");
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
index 9af2563..d27758e 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/RowHash.java
@@ -75,7 +75,7 @@
     Text cf = new Text(idx < 0 ? col : col.substring(0, idx));
     Text cq = idx < 0 ? null : new Text(col.substring(idx + 1));
     if (cf.getLength() > 0)
-      AccumuloInputFormat.fetchColumns(job, Collections.singleton(new Pair<Text,Text>(cf, cq)));
+      AccumuloInputFormat.fetchColumns(job, Collections.singleton(new Pair<>(cf, cq)));
 
     job.setMapperClass(HashDataMapper.class);
     job.setMapOutputKeyClass(Text.class);
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TableToFile.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TableToFile.java
index 7eb6b42..96603ad 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TableToFile.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TableToFile.java
@@ -60,7 +60,7 @@
   public static class TTFMapper extends Mapper<Key,Value,NullWritable,Text> {
     @Override
     public void map(Key row, Value data, Context context) throws IOException, InterruptedException {
-      Map.Entry<Key,Value> entry = new SimpleImmutableEntry<Key,Value>(row, data);
+      Map.Entry<Key,Value> entry = new SimpleImmutableEntry<>(row, data);
       context.write(NullWritable.get(), new Text(DefaultFormatter.formatEntry(entry, false)));
       context.setStatus("Outputed Value");
     }
@@ -77,13 +77,13 @@
     job.setInputFormatClass(AccumuloInputFormat.class);
     opts.setAccumuloConfigs(job);
 
-    HashSet<Pair<Text,Text>> columnsToFetch = new HashSet<Pair<Text,Text>>();
+    HashSet<Pair<Text,Text>> columnsToFetch = new HashSet<>();
     for (String col : opts.columns.split(",")) {
       int idx = col.indexOf(":");
       Text cf = new Text(idx < 0 ? col : col.substring(0, idx));
       Text cq = idx < 0 ? null : new Text(col.substring(idx + 1));
       if (cf.getLength() > 0)
-        columnsToFetch.add(new Pair<Text,Text>(cf, cq));
+        columnsToFetch.add(new Pair<>(cf, cq));
     }
     if (!columnsToFetch.isEmpty())
       AccumuloInputFormat.fetchColumns(job, columnsToFetch);
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TeraSortIngest.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TeraSortIngest.java
index b535513..b0b5177 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TeraSortIngest.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/TeraSortIngest.java
@@ -167,7 +167,7 @@
       int numSplits = job.getConfiguration().getInt(NUMSPLITS, 1);
       long rowsPerSplit = totalRows / numSplits;
       System.out.println("Generating " + totalRows + " using " + numSplits + " maps with step of " + rowsPerSplit);
-      ArrayList<InputSplit> splits = new ArrayList<InputSplit>(numSplits);
+      ArrayList<InputSplit> splits = new ArrayList<>(numSplits);
       long currentRow = 0;
       for (int split = 0; split < numSplits - 1; ++split) {
         splits.add(new RangeInputSplit(currentRow, rowsPerSplit));
@@ -225,7 +225,7 @@
    * The Mapper class that given a row number, will generate the appropriate output line.
    */
   public static class SortGenMapper extends Mapper<LongWritable,NullWritable,Text,Mutation> {
-    private Text table = null;
+    private Text tableName = null;
     private int minkeylength = 0;
     private int maxkeylength = 0;
     private int minvaluelength = 0;
@@ -329,7 +329,7 @@
           new Value(value.toString().getBytes())); // data
 
       context.setStatus("About to add to accumulo");
-      context.write(table, m);
+      context.write(tableName, m);
       context.setStatus("Added to accumulo " + key.toString());
     }
 
@@ -339,7 +339,7 @@
       maxkeylength = job.getConfiguration().getInt("cloudgen.maxkeylength", 0);
       minvaluelength = job.getConfiguration().getInt("cloudgen.minvaluelength", 0);
       maxvaluelength = job.getConfiguration().getInt("cloudgen.maxvaluelength", 0);
-      table = new Text(job.getConfiguration().get("cloudgen.tablename"));
+      tableName = new Text(job.getConfiguration().get("cloudgen.tablename"));
     }
   }
 
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/SetupTable.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/SetupTable.java
index 8651c39..0fc3110 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/SetupTable.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/mapreduce/bulk/SetupTable.java
@@ -30,7 +30,7 @@
 
   static class Opts extends ClientOnRequiredTable {
     @Parameter(description = "<split> { <split> ... } ")
-    List<String> splits = new ArrayList<String>();
+    List<String> splits = new ArrayList<>();
   }
 
   public static void main(String[] args) throws Exception {
@@ -40,7 +40,7 @@
     conn.tableOperations().create(opts.getTableName());
     if (!opts.splits.isEmpty()) {
       // create a table with initial partitions
-      TreeSet<Text> intialPartitions = new TreeSet<Text>();
+      TreeSet<Text> intialPartitions = new TreeSet<>();
       for (String split : opts.splits) {
         intialPartitions.add(new Text(split));
       }
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/reservations/ARS.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/reservations/ARS.java
index b9e1a83..eff8e21 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/reservations/ARS.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/reservations/ARS.java
@@ -20,8 +20,6 @@
 import java.util.List;
 import java.util.Map.Entry;
 
-import jline.console.ConsoleReader;
-
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ConditionalWriter;
 import org.apache.accumulo.core.client.ConditionalWriter.Status;
@@ -41,6 +39,8 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import jline.console.ConsoleReader;
+
 /**
  * Accumulo Reservation System : An example reservation system using Accumulo. Supports atomic reservations of a resource at a date. Wait list are also
  * supported. In order to keep the example simple, no checking is done of the date. Also the code is inefficient, if interested in improving it take a look at
@@ -88,9 +88,9 @@
 
     ReservationResult result = ReservationResult.RESERVED;
 
-    ConditionalWriter cwriter = conn.createConditionalWriter(rTable, new ConditionalWriterConfig());
-
-    try {
+    // it is important to use an isolated scanner so that only whole mutations are seen
+    try (ConditionalWriter cwriter = conn.createConditionalWriter(rTable, new ConditionalWriterConfig());
+        Scanner scanner = new IsolatedScanner(conn.createScanner(rTable, Authorizations.EMPTY))) {
       while (true) {
         Status status = cwriter.write(update).getStatus();
         switch (status) {
@@ -109,8 +109,6 @@
         // that attempted to make a reservation by putting them later in the list. A more complex solution could involve having independent sub-queues within
         // the row that approximately maintain arrival order and use exponential back off to fairly merge the sub-queues into the main queue.
 
-        // it is important to use an isolated scanner so that only whole mutations are seen
-        Scanner scanner = new IsolatedScanner(conn.createScanner(rTable, Authorizations.EMPTY));
         scanner.setRange(new Range(row));
 
         int seq = -1;
@@ -152,10 +150,7 @@
         else
           result = ReservationResult.WAIT_LISTED;
       }
-    } finally {
-      cwriter.close();
     }
-
   }
 
   public void cancel(String what, String when, String who) throws Exception {
@@ -166,13 +161,10 @@
     // will cause any concurrent reservations to retry. If this delete were done using a batch writer, then a concurrent reservation could report WAIT_LISTED
     // when it actually got the reservation.
 
-    ConditionalWriter cwriter = conn.createConditionalWriter(rTable, new ConditionalWriterConfig());
-
-    try {
+    // it's important to use an isolated scanner so that only whole mutations are seen
+    try (ConditionalWriter cwriter = conn.createConditionalWriter(rTable, new ConditionalWriterConfig());
+        Scanner scanner = new IsolatedScanner(conn.createScanner(rTable, Authorizations.EMPTY))) {
       while (true) {
-
-        // its important to use an isolated scanner so that only whole mutations are seen
-        Scanner scanner = new IsolatedScanner(conn.createScanner(rTable, Authorizations.EMPTY));
         scanner.setRange(new Range(row));
 
         int seq = -1;
@@ -217,8 +209,6 @@
         }
 
       }
-    } finally {
-      cwriter.close();
     }
   }
 
@@ -226,18 +216,19 @@
     String row = what + ":" + when;
 
     // its important to use an isolated scanner so that only whole mutations are seen
-    Scanner scanner = new IsolatedScanner(conn.createScanner(rTable, Authorizations.EMPTY));
-    scanner.setRange(new Range(row));
-    scanner.fetchColumnFamily(new Text("res"));
+    try (Scanner scanner = new IsolatedScanner(conn.createScanner(rTable, Authorizations.EMPTY))) {
+      scanner.setRange(new Range(row));
+      scanner.fetchColumnFamily(new Text("res"));
 
-    List<String> reservations = new ArrayList<String>();
+      List<String> reservations = new ArrayList<>();
 
-    for (Entry<Key,Value> entry : scanner) {
-      String val = entry.getValue().toString();
-      reservations.add(val);
+      for (Entry<Key,Value> entry : scanner) {
+        String val = entry.getValue().toString();
+        reservations.add(val);
+      }
+
+      return reservations;
     }
-
-    return reservations;
   }
 
   public static void main(String[] args) throws Exception {
@@ -255,7 +246,7 @@
         // start up multiple threads all trying to reserve the same resource, no more than one should succeed
 
         final ARS fars = ars;
-        ArrayList<Thread> threads = new ArrayList<Thread>();
+        ArrayList<Thread> threads = new ArrayList<>();
         for (int i = 3; i < tokens.length; i++) {
           final int whoIndex = i;
           Runnable reservationTask = new Runnable() {
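The ARS.java hunks above replace manual `finally { cwriter.close(); }` blocks with try-with-resources, which closes every declared `AutoCloseable` (in reverse declaration order) even when an exception is thrown. A minimal sketch of the pattern with a hypothetical tracking resource, not the Accumulo types:

```java
public class TryWithResourcesSketch {

  // hypothetical resource that records whether close() was called
  static class TrackingResource implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
      closed = true;
    }
  }

  // returns true if both resources were closed when the try block exited
  static boolean demo() {
    TrackingResource writer = new TrackingResource();
    TrackingResource scanner = new TrackingResource();
    // both resources are closed automatically on exit, scanner before writer
    try (TrackingResource w = writer; TrackingResource s = scanner) {
      // use the resources here
    }
    return writer.closed && scanner.closed;
  }

  public static void main(String[] args) {
    System.out.println(demo()); // prints "true"
  }
}
```

Compared with the removed `finally` blocks, this also covers the case where constructing the second resource fails: any resources already opened in the header are still closed.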
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/sample/SampleExample.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/sample/SampleExample.java
new file mode 100644
index 0000000..262e63d
--- /dev/null
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/sample/SampleExample.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.examples.simple.sample;
+
+import java.util.Collections;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.cli.BatchWriterOpts;
+import org.apache.accumulo.core.cli.ClientOnDefaultTable;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.admin.CompactionConfig;
+import org.apache.accumulo.core.client.admin.CompactionStrategyConfig;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.examples.simple.client.RandomBatchWriter;
+import org.apache.accumulo.examples.simple.shard.CutoffIntersectingIterator;
+
+import com.google.common.collect.ImmutableMap;
+
+/**
+ * A simple example of using Accumulo's sampling feature. This example does something similar to what README.sample shows using the shell. Also see
+ * {@link CutoffIntersectingIterator} and README.sample for an example of how to use sample data from within an iterator.
+ */
+public class SampleExample {
+
+  // a compaction strategy that only selects files for compaction that have no sample data or sample data created in a different way than the table's current sampler configuration
+  static final CompactionStrategyConfig NO_SAMPLE_STRATEGY = new CompactionStrategyConfig(
+      "org.apache.accumulo.tserver.compaction.strategies.ConfigurableCompactionStrategy").setOptions(Collections.singletonMap("SF_NO_SAMPLE", ""));
+
+  static class Opts extends ClientOnDefaultTable {
+    public Opts() {
+      super("sampex");
+    }
+  }
+
+  public static void main(String[] args) throws Exception {
+    Opts opts = new Opts();
+    BatchWriterOpts bwOpts = new BatchWriterOpts();
+    opts.parseArgs(RandomBatchWriter.class.getName(), args, bwOpts);
+
+    Connector conn = opts.getConnector();
+
+    if (!conn.tableOperations().exists(opts.getTableName())) {
+      conn.tableOperations().create(opts.getTableName());
+    } else {
+      System.out.println("Table exists, not doing anything.");
+      return;
+    }
+
+    // write some data
+    BatchWriter bw = conn.createBatchWriter(opts.getTableName(), bwOpts.getBatchWriterConfig());
+    bw.addMutation(createMutation("9225", "abcde", "file://foo.txt"));
+    bw.addMutation(createMutation("8934", "accumulo scales", "file://accumulo_notes.txt"));
+    bw.addMutation(createMutation("2317", "milk, eggs, bread, parmigiano-reggiano", "file://groceries/9/txt"));
+    bw.addMutation(createMutation("3900", "EC2 ate my homework", "file://final_project.txt"));
+    bw.flush();
+
+    SamplerConfiguration sc1 = new SamplerConfiguration(RowSampler.class.getName());
+    sc1.setOptions(ImmutableMap.of("hasher", "murmur3_32", "modulus", "3"));
+
+    conn.tableOperations().setSamplerConfiguration(opts.getTableName(), sc1);
+
+    Scanner scanner = conn.createScanner(opts.getTableName(), Authorizations.EMPTY);
+    System.out.println("Scanning all data :");
+    print(scanner);
+    System.out.println();
+
+    System.out.println("Scanning with sampler configuration.  Data was written before sampler was set on table, scan should fail.");
+    scanner.setSamplerConfiguration(sc1);
+    try {
+      print(scanner);
+    } catch (SampleNotPresentException e) {
+      System.out.println("  Saw sample not present exception as expected.");
+    }
+    System.out.println();
+
+    // compact table to recreate sample data
+    conn.tableOperations().compact(opts.getTableName(), new CompactionConfig().setCompactionStrategy(NO_SAMPLE_STRATEGY));
+
+    System.out.println("Scanning after compaction (compaction should have created sample data) : ");
+    print(scanner);
+    System.out.println();
+
+    // update a document in the sample data
+    bw.addMutation(createMutation("2317", "milk, eggs, bread, parmigiano-reggiano, butter", "file://groceries/9/txt"));
+    bw.close();
+    System.out.println("Scanning sample after updating content for docId 2317 (should see content change in sample data) : ");
+    print(scanner);
+    System.out.println();
+
+    // change the table's sampling configuration...
+    SamplerConfiguration sc2 = new SamplerConfiguration(RowSampler.class.getName());
+    sc2.setOptions(ImmutableMap.of("hasher", "murmur3_32", "modulus", "2"));
+    conn.tableOperations().setSamplerConfiguration(opts.getTableName(), sc2);
+    // compact table to recreate sample data using new configuration
+    conn.tableOperations().compact(opts.getTableName(), new CompactionConfig().setCompactionStrategy(NO_SAMPLE_STRATEGY));
+
+    System.out.println("Scanning with old sampler configuration.  Sample data was created using new configuration with a compaction.  Scan should fail.");
+    try {
+      // try scanning with old sampler configuration
+      print(scanner);
+    } catch (SampleNotPresentException e) {
+      System.out.println("  Saw sample not present exception as expected.");
+    }
+    System.out.println();
+
+    // update expected sampler configuration on scanner
+    scanner.setSamplerConfiguration(sc2);
+
+    System.out.println("Scanning with new sampler configuration : ");
+    print(scanner);
+    System.out.println();
+
+  }
+
+  private static void print(Scanner scanner) {
+    for (Entry<Key,Value> entry : scanner) {
+      System.out.println("  " + entry.getKey() + " " + entry.getValue());
+    }
+  }
+
+  private static Mutation createMutation(String docId, String content, String url) {
+    Mutation m = new Mutation(docId);
+    m.put("doc", "context", content);
+    m.put("doc", "url", url);
+    return m;
+  }
+}
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/ContinuousQuery.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/ContinuousQuery.java
index 00ec5a3..604c851 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/ContinuousQuery.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/ContinuousQuery.java
@@ -49,7 +49,7 @@
 
   static class Opts extends ClientOpts {
     @Parameter(names = "--shardTable", required = true, description = "name of the shard table")
-    String table = null;
+    String tableName = null;
     @Parameter(names = "--doc2Term", required = true, description = "name of the doc2Term table")
     String doc2Term;
     @Parameter(names = "--terms", required = true, description = "the number of terms in the query")
@@ -69,7 +69,7 @@
 
     Random rand = new Random();
 
-    BatchScanner bs = conn.createBatchScanner(opts.table, opts.auths, bsOpts.scanThreads);
+    BatchScanner bs = conn.createBatchScanner(opts.tableName, opts.auths, bsOpts.scanThreads);
     bs.setTimeout(bsOpts.scanTimeout, TimeUnit.MILLISECONDS);
 
     for (long i = 0; i < opts.iterations; i += 1) {
@@ -98,8 +98,8 @@
 
     Text currentRow = null;
 
-    ArrayList<Text> words = new ArrayList<Text>();
-    ArrayList<Text[]> ret = new ArrayList<Text[]>();
+    ArrayList<Text> words = new ArrayList<>();
+    ArrayList<Text[]> ret = new ArrayList<>();
 
     Random rand = new Random();
 
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/CutoffIntersectingIterator.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/CutoffIntersectingIterator.java
new file mode 100644
index 0000000..f5dce1d
--- /dev/null
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/CutoffIntersectingIterator.java
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.examples.simple.shard;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static java.util.Objects.requireNonNull;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.sample.RowColumnSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.user.IntersectingIterator;
+
+/**
+ * This iterator uses a sample built from the Column Qualifier to quickly avoid intersecting iterator queries that may return too many documents.
+ */
+
+public class CutoffIntersectingIterator extends IntersectingIterator {
+
+  private IntersectingIterator sampleII;
+  private int sampleMax;
+  private boolean hasTop;
+
+  public static void setCutoff(IteratorSetting iterCfg, int cutoff) {
+    checkArgument(cutoff >= 0);
+    iterCfg.addOption("cutoff", cutoff + "");
+  }
+
+  @Override
+  public boolean hasTop() {
+    return hasTop && super.hasTop();
+  }
+
+  @Override
+  public void seek(Range range, Collection<ByteSequence> seekColumnFamilies, boolean inclusive) throws IOException {
+
+    sampleII.seek(range, seekColumnFamilies, inclusive);
+
+    // this check will be redone whenever the iterator stack is torn down and recreated.
+    int count = 0;
+    while (count <= sampleMax && sampleII.hasTop()) {
+      sampleII.next();
+      count++;
+    }
+
+    if (count > sampleMax) {
+      // In a real application one would probably want to return a key/value that indicates too much data. Since this check executes for each tablet, some tablets
+      // may return data. For tablets that did not return data, one would want an indication.
+      hasTop = false;
+    } else {
+      hasTop = true;
+      super.seek(range, seekColumnFamilies, inclusive);
+    }
+  }
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+    super.init(source, options, env);
+
+    IteratorEnvironment sampleEnv = env.cloneWithSamplingEnabled();
+
+    setMax(sampleEnv, options);
+
+    SortedKeyValueIterator<Key,Value> sampleDC = source.deepCopy(sampleEnv);
+    sampleII = new IntersectingIterator();
+    sampleII.init(sampleDC, options, env);
+
+  }
+
+  static void validateSamplerConfig(SamplerConfiguration sampleConfig) {
+    requireNonNull(sampleConfig);
+    checkArgument(sampleConfig.getSamplerClassName().equals(RowColumnSampler.class.getName()), "Unexpected Sampler " + sampleConfig.getSamplerClassName());
+    checkArgument(sampleConfig.getOptions().get("qualifier").equals("true"), "Expected sample on column qualifier");
+    checkArgument(isNullOrFalse(sampleConfig.getOptions(), "row", "family", "visibility"), "Expected sample on column qualifier only");
+  }
+
+  private void setMax(IteratorEnvironment sampleEnv, Map<String,String> options) {
+    String cutoffValue = options.get("cutoff");
+    SamplerConfiguration sampleConfig = sampleEnv.getSamplerConfiguration();
+
+    // Ensure the sample was constructed in an expected way. If the sample was not built as expected, then no conclusions can be drawn from it.
+    requireNonNull(cutoffValue, "Expected cutoff option is missing");
+    validateSamplerConfig(sampleConfig);
+
+    int modulus = Integer.parseInt(sampleConfig.getOptions().get("modulus"));
+
+    sampleMax = Math.round(Float.parseFloat(cutoffValue) / modulus);
+  }
+
+  private static boolean isNullOrFalse(Map<String,String> options, String... keys) {
+    for (String key : keys) {
+      String val = options.get(key);
+      if (val != null && val.equals("true")) {
+        return false;
+      }
+    }
+    return true;
+  }
+}
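The new iterator above estimates whether a query would exceed a cutoff by counting entries in the sample: with a modulus-`m` sampler roughly 1/`m` of the entries land in the sample, so `setMax` scales the cutoff down to `sampleMax = round(cutoff / modulus)` and bails out once the sample count exceeds it. A hedged sketch of just that arithmetic (hypothetical method names, not the Accumulo API):

```java
public class SampleCutoffSketch {

  // With a modulus-m sampler, a sample count of c suggests roughly c * m
  // entries overall, so comparing c against round(cutoff / m) approximates
  // comparing the full count against the cutoff.
  static boolean overCutoff(int sampleCount, int modulus, int cutoff) {
    int sampleMax = Math.round((float) cutoff / modulus);
    return sampleCount > sampleMax;
  }

  public static void main(String[] args) {
    // cutoff 100 with modulus 3 gives sampleMax = 33
    System.out.println(overCutoff(40, 3, 100)); // prints "true"
    System.out.println(overCutoff(10, 3, 100)); // prints "false"
  }
}
```

This is an estimate, not an exact count: the sampler is hash-based, so the real per-tablet result count can land on either side of `sampleMax * modulus`, which is acceptable for a coarse "too many documents" guard.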
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Index.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Index.java
index bc76c03..ba1e32e 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Index.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Index.java
@@ -53,7 +53,7 @@
 
     Mutation m = new Mutation(partition);
 
-    HashSet<String> tokensSeen = new HashSet<String>();
+    HashSet<String> tokensSeen = new HashSet<>();
 
     for (String token : tokens) {
       token = token.toLowerCase();
@@ -98,7 +98,7 @@
     @Parameter(names = "--partitions", required = true, description = "the number of shards to create")
     int partitions;
     @Parameter(required = true, description = "<file> { <file> ... }")
-    List<String> files = new ArrayList<String>();
+    List<String> files = new ArrayList<>();
   }
 
   public static void main(String[] args) throws Exception {
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Query.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Query.java
index 41d5dc7..13adcca 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Query.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shard/Query.java
@@ -27,6 +27,7 @@
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
@@ -45,21 +46,37 @@
 
   static class Opts extends ClientOnRequiredTable {
     @Parameter(description = " term { <term> ... }")
-    List<String> terms = new ArrayList<String>();
+    List<String> terms = new ArrayList<>();
+
+    @Parameter(names = {"--sample"}, description = "Do queries against sample, useful when sample is built using column qualifier")
+    private boolean useSample = false;
+
+    @Parameter(names = {"--sampleCutoff"},
+        description = "Use sample data to determine if a query might return a number of documents over the cutoff.  This check is per tablet.")
+    private Integer sampleCutoff = null;
   }
 
-  public static List<String> query(BatchScanner bs, List<String> terms) {
+  public static List<String> query(BatchScanner bs, List<String> terms, Integer cutoff) {
 
     Text columns[] = new Text[terms.size()];
     int i = 0;
     for (String term : terms) {
       columns[i++] = new Text(term);
     }
-    IteratorSetting ii = new IteratorSetting(20, "ii", IntersectingIterator.class);
+
+    IteratorSetting ii;
+
+    if (cutoff != null) {
+      ii = new IteratorSetting(20, "ii", CutoffIntersectingIterator.class);
+      CutoffIntersectingIterator.setCutoff(ii, cutoff);
+    } else {
+      ii = new IteratorSetting(20, "ii", IntersectingIterator.class);
+    }
+
     IntersectingIterator.setColumnFamilies(ii, columns);
     bs.addScanIterator(ii);
     bs.setRanges(Collections.singleton(new Range()));
-    List<String> result = new ArrayList<String>();
+    List<String> result = new ArrayList<>();
     for (Entry<Key,Value> entry : bs) {
       result.add(entry.getKey().getColumnQualifier().toString());
     }
@@ -73,9 +90,15 @@
     Connector conn = opts.getConnector();
     BatchScanner bs = conn.createBatchScanner(opts.getTableName(), opts.auths, bsOpts.scanThreads);
     bs.setTimeout(bsOpts.scanTimeout, TimeUnit.MILLISECONDS);
-
-    for (String entry : query(bs, opts.terms))
+    if (opts.useSample) {
+      SamplerConfiguration samplerConfig = conn.tableOperations().getSamplerConfiguration(opts.getTableName());
+      CutoffIntersectingIterator.validateSamplerConfig(conn.tableOperations().getSamplerConfiguration(opts.getTableName()));
+      bs.setSamplerConfiguration(samplerConfig);
+    }
+    for (String entry : query(bs, opts.terms, opts.sampleCutoff))
       System.out.println("  " + entry);
+
+    bs.close();
   }
 
 }
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shell/DebugCommand.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shell/DebugCommand.java
index e429f62..4395fe7 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shell/DebugCommand.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/shell/DebugCommand.java
@@ -27,7 +27,7 @@
 
   @Override
   public int execute(String fullCommand, CommandLine cl, Shell shellState) throws Exception {
-    Set<String> lines = new TreeSet<String>();
+    Set<String> lines = new TreeSet<>();
     lines.add("This is a test");
     shellState.printLines(lines.iterator(), true);
     return 0;
diff --git a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/dirlist/CountTest.java b/examples/simple/src/test/java/org/apache/accumulo/examples/simple/dirlist/CountTest.java
deleted file mode 100644
index f089d42..0000000
--- a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/dirlist/CountTest.java
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.examples.simple.dirlist;
-
-import java.util.ArrayList;
-import java.util.Map.Entry;
-
-import junit.framework.TestCase;
-
-import org.apache.accumulo.core.cli.BatchWriterOpts;
-import org.apache.accumulo.core.cli.ClientOpts.Password;
-import org.apache.accumulo.core.cli.ScannerOpts;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.examples.simple.dirlist.FileCount.Opts;
-import org.apache.hadoop.io.Text;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class CountTest extends TestCase {
-
-  private static final Logger log = LoggerFactory.getLogger(CountTest.class);
-
-  {
-    try {
-      Connector conn = new MockInstance("counttest").getConnector("root", new PasswordToken(""));
-      conn.tableOperations().create("dirlisttable");
-      BatchWriter bw = conn.createBatchWriter("dirlisttable", new BatchWriterConfig());
-      ColumnVisibility cv = new ColumnVisibility();
-      // / has 1 dir
-      // /local has 2 dirs 1 file
-      // /local/user1 has 2 files
-      bw.addMutation(Ingest.buildMutation(cv, "/local", true, false, true, 272, 12345, null));
-      bw.addMutation(Ingest.buildMutation(cv, "/local/user1", true, false, true, 272, 12345, null));
-      bw.addMutation(Ingest.buildMutation(cv, "/local/user2", true, false, true, 272, 12345, null));
-      bw.addMutation(Ingest.buildMutation(cv, "/local/file", false, false, false, 1024, 12345, null));
-      bw.addMutation(Ingest.buildMutation(cv, "/local/file", false, false, false, 1024, 23456, null));
-      bw.addMutation(Ingest.buildMutation(cv, "/local/user1/file1", false, false, false, 2024, 12345, null));
-      bw.addMutation(Ingest.buildMutation(cv, "/local/user1/file2", false, false, false, 1028, 23456, null));
-      bw.close();
-    } catch (Exception e) {
-      log.error("Could not add mutations in initializer.", e);
-    }
-  }
-
-  public void test() throws Exception {
-    Scanner scanner = new MockInstance("counttest").getConnector("root", new PasswordToken("")).createScanner("dirlisttable", new Authorizations());
-    scanner.fetchColumn(new Text("dir"), new Text("counts"));
-    assertFalse(scanner.iterator().hasNext());
-
-    Opts opts = new Opts();
-    ScannerOpts scanOpts = new ScannerOpts();
-    BatchWriterOpts bwOpts = new BatchWriterOpts();
-    opts.instance = "counttest";
-    opts.setTableName("dirlisttable");
-    opts.setPassword(new Password("secret"));
-    opts.mock = true;
-    opts.setPassword(new Opts.Password(""));
-    FileCount fc = new FileCount(opts, scanOpts, bwOpts);
-    fc.run();
-
-    ArrayList<Pair<String,String>> expected = new ArrayList<Pair<String,String>>();
-    expected.add(new Pair<String,String>(QueryUtil.getRow("").toString(), "1,0,3,3"));
-    expected.add(new Pair<String,String>(QueryUtil.getRow("/local").toString(), "2,1,2,3"));
-    expected.add(new Pair<String,String>(QueryUtil.getRow("/local/user1").toString(), "0,2,0,2"));
-    expected.add(new Pair<String,String>(QueryUtil.getRow("/local/user2").toString(), "0,0,0,0"));
-
-    int i = 0;
-    for (Entry<Key,Value> e : scanner) {
-      assertEquals(e.getKey().getRow().toString(), expected.get(i).getFirst());
-      assertEquals(e.getValue().toString(), expected.get(i).getSecond());
-      i++;
-    }
-    assertEquals(i, expected.size());
-  }
-}
diff --git a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkCombinerTest.java b/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkCombinerTest.java
index 9efd68b..40f4bb9 100644
--- a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkCombinerTest.java
+++ b/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkCombinerTest.java
@@ -135,22 +135,22 @@
 
   @Override
   protected void setUp() {
-    row1 = new TreeMap<Key,Value>();
-    row2 = new TreeMap<Key,Value>();
-    row3 = new TreeMap<Key,Value>();
-    allRows = new TreeMap<Key,Value>();
+    row1 = new TreeMap<>();
+    row2 = new TreeMap<>();
+    row3 = new TreeMap<>();
+    allRows = new TreeMap<>();
 
-    cRow1 = new TreeMap<Key,Value>();
-    cRow2 = new TreeMap<Key,Value>();
-    cRow3 = new TreeMap<Key,Value>();
-    allCRows = new TreeMap<Key,Value>();
+    cRow1 = new TreeMap<>();
+    cRow2 = new TreeMap<>();
+    cRow3 = new TreeMap<>();
+    allCRows = new TreeMap<>();
 
-    cOnlyRow1 = new TreeMap<Key,Value>();
-    cOnlyRow2 = new TreeMap<Key,Value>();
-    cOnlyRow3 = new TreeMap<Key,Value>();
-    allCOnlyRows = new TreeMap<Key,Value>();
+    cOnlyRow1 = new TreeMap<>();
+    cOnlyRow2 = new TreeMap<>();
+    cOnlyRow3 = new TreeMap<>();
+    allCOnlyRows = new TreeMap<>();
 
-    badrow = new TreeMap<Key,Value>();
+    badrow = new TreeMap<>();
 
     String refs = FileDataIngest.REFS_CF.toString();
     String fileext = FileDataIngest.REFS_FILE_EXT;
@@ -218,7 +218,7 @@
     allCOnlyRows.putAll(cOnlyRow3);
   }
 
-  private static final Collection<ByteSequence> emptyColfs = new HashSet<ByteSequence>();
+  private static final Collection<ByteSequence> emptyColfs = new HashSet<>();
 
   public void test1() throws IOException {
     runTest(false, allRows, allCRows, emptyColfs);
@@ -241,7 +241,7 @@
     iter = iter.deepCopy(null);
     iter.seek(new Range(), cols, true);
 
-    TreeMap<Key,Value> seen = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> seen = new TreeMap<>();
 
     while (iter.hasTop()) {
       assertFalse("already contains " + iter.getTopKey(), seen.containsKey(iter.getTopKey()));
diff --git a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStreamTest.java b/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStreamTest.java
index 3d860ce..2796d47 100644
--- a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStreamTest.java
+++ b/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkInputStreamTest.java
@@ -25,36 +25,25 @@
 import java.util.List;
 import java.util.Map.Entry;
 
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableExistsException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.KeyValue;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.util.PeekingIterator;
 import org.apache.hadoop.io.Text;
+import org.junit.Before;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 public class ChunkInputStreamTest {
   private static final Logger log = LoggerFactory.getLogger(ChunkInputStream.class);
-  List<Entry<Key,Value>> data;
-  List<Entry<Key,Value>> baddata;
-  List<Entry<Key,Value>> multidata;
+  private List<Entry<Key,Value>> data;
+  private List<Entry<Key,Value>> baddata;
+  private List<Entry<Key,Value>> multidata;
 
-  {
-    data = new ArrayList<Entry<Key,Value>>();
+  @Before
+  public void setupData() {
+    data = new ArrayList<>();
     addData(data, "a", "refs", "id\0ext", "A&B", "ext");
     addData(data, "a", "refs", "id\0name", "A&B", "name");
     addData(data, "a", "~chunk", 100, 0, "A&B", "asdfjkl;");
@@ -72,7 +61,7 @@
     addData(data, "d", "~chunk", 100, 0, "A&B", "");
     addData(data, "e", "~chunk", 100, 0, "A&B", "asdfjkl;");
     addData(data, "e", "~chunk", 100, 1, "A&B", "");
-    baddata = new ArrayList<Entry<Key,Value>>();
+    baddata = new ArrayList<>();
     addData(baddata, "a", "~chunk", 100, 0, "A", "asdfjkl;");
     addData(baddata, "b", "~chunk", 100, 0, "B", "asdfjkl;");
     addData(baddata, "b", "~chunk", 100, 2, "C", "");
@@ -86,7 +75,7 @@
     addData(baddata, "e", "~chunk", 100, 2, "I", "asdfjkl;");
     addData(baddata, "f", "~chunk", 100, 2, "K", "asdfjkl;");
     addData(baddata, "g", "~chunk", 100, 0, "L", "");
-    multidata = new ArrayList<Entry<Key,Value>>();
+    multidata = new ArrayList<>();
     addData(multidata, "a", "~chunk", 100, 0, "A&B", "asdfjkl;");
     addData(multidata, "a", "~chunk", 100, 1, "A&B", "");
     addData(multidata, "a", "~chunk", 200, 0, "B&C", "asdfjkl;");
@@ -97,11 +86,11 @@
     addData(multidata, "c", "~chunk", 100, 1, "B&C", "");
   }
 
-  public static void addData(List<Entry<Key,Value>> data, String row, String cf, String cq, String vis, String value) {
+  private static void addData(List<Entry<Key,Value>> data, String row, String cf, String cq, String vis, String value) {
     data.add(new KeyValue(new Key(new Text(row), new Text(cf), new Text(cq), new Text(vis)), value.getBytes()));
   }
 
-  public static void addData(List<Entry<Key,Value>> data, String row, String cf, int chunkSize, int chunkCount, String vis, String value) {
+  private static void addData(List<Entry<Key,Value>> data, String row, String cf, int chunkSize, int chunkCount, String vis, String value) {
     Text chunkCQ = new Text(FileDataIngest.intToBytes(chunkSize));
     chunkCQ.append(FileDataIngest.intToBytes(chunkCount), 0, 4);
     data.add(new KeyValue(new Key(new Text(row), new Text(cf), chunkCQ, new Text(vis)), value.getBytes()));
@@ -110,8 +99,8 @@
   @Test
   public void testExceptionOnMultipleSetSourceWithoutClose() throws IOException {
     ChunkInputStream cis = new ChunkInputStream();
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(data.iterator());
-    pi = new PeekingIterator<Entry<Key,Value>>(data.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(data.iterator());
+    pi = new PeekingIterator<>(data.iterator());
     cis.setSource(pi);
     try {
       cis.setSource(pi);
@@ -125,7 +114,7 @@
   @Test
   public void testExceptionOnGetVisBeforeClose() throws IOException {
     ChunkInputStream cis = new ChunkInputStream();
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(data.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(data.iterator());
 
     cis.setSource(pi);
     try {
@@ -143,7 +132,7 @@
     ChunkInputStream cis = new ChunkInputStream();
     byte[] b = new byte[5];
 
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(data.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(data.iterator());
 
     cis.setSource(pi);
     int read;
@@ -195,60 +184,7 @@
     ChunkInputStream cis = new ChunkInputStream();
     byte[] b = new byte[20];
     int read;
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(data.iterator());
-
-    cis.setSource(pi);
-    assertEquals(read = cis.read(b), 8);
-    assertEquals(new String(b, 0, read), "asdfjkl;");
-    assertEquals(read = cis.read(b), -1);
-
-    cis.setSource(pi);
-    assertEquals(read = cis.read(b), 10);
-    assertEquals(new String(b, 0, read), "qwertyuiop");
-    assertEquals(read = cis.read(b), -1);
-    assertEquals(cis.getVisibilities().toString(), "[A&B, B&C, D]");
-    cis.close();
-
-    cis.setSource(pi);
-    assertEquals(read = cis.read(b), 16);
-    assertEquals(new String(b, 0, read), "asdfjkl;asdfjkl;");
-    assertEquals(read = cis.read(b), -1);
-    assertEquals(cis.getVisibilities().toString(), "[A&B]");
-    cis.close();
-
-    cis.setSource(pi);
-    assertEquals(read = cis.read(b), -1);
-    cis.close();
-
-    cis.setSource(pi);
-    assertEquals(read = cis.read(b), 8);
-    assertEquals(new String(b, 0, read), "asdfjkl;");
-    assertEquals(read = cis.read(b), -1);
-    cis.close();
-
-    assertFalse(pi.hasNext());
-  }
-
-  @Test
-  public void testWithAccumulo() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException, IOException {
-    Connector conn = new MockInstance().getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("test");
-    BatchWriter bw = conn.createBatchWriter("test", new BatchWriterConfig());
-
-    for (Entry<Key,Value> e : data) {
-      Key k = e.getKey();
-      Mutation m = new Mutation(k.getRow());
-      m.put(k.getColumnFamily(), k.getColumnQualifier(), new ColumnVisibility(k.getColumnVisibility()), e.getValue());
-      bw.addMutation(m);
-    }
-    bw.close();
-
-    Scanner scan = conn.createScanner("test", new Authorizations("A", "B", "C", "D"));
-
-    ChunkInputStream cis = new ChunkInputStream();
-    byte[] b = new byte[20];
-    int read;
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(scan.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(data.iterator());
 
     cis.setSource(pi);
     assertEquals(read = cis.read(b), 8);
@@ -307,7 +243,7 @@
     ChunkInputStream cis = new ChunkInputStream();
     byte[] b = new byte[20];
     int read;
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(baddata.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(baddata.iterator());
 
     cis.setSource(pi);
     assumeExceptionOnRead(cis, b);
@@ -353,7 +289,7 @@
 
     assertFalse(pi.hasNext());
 
-    pi = new PeekingIterator<Entry<Key,Value>>(baddata.iterator());
+    pi = new PeekingIterator<>(baddata.iterator());
     cis.setSource(pi);
     assumeExceptionOnClose(cis);
   }
@@ -363,7 +299,7 @@
     ChunkInputStream cis = new ChunkInputStream();
     byte[] b = new byte[20];
     int read;
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(baddata.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(baddata.iterator());
 
     cis.setSource(pi);
     assumeExceptionOnRead(cis, b);
@@ -404,7 +340,7 @@
 
     assertFalse(pi.hasNext());
 
-    pi = new PeekingIterator<Entry<Key,Value>>(baddata.iterator());
+    pi = new PeekingIterator<>(baddata.iterator());
     cis.setSource(pi);
     assumeExceptionOnClose(cis);
   }
@@ -414,7 +350,7 @@
     ChunkInputStream cis = new ChunkInputStream();
     byte[] b = new byte[20];
     int read;
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(multidata.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(multidata.iterator());
 
     b = new byte[20];
 
@@ -441,7 +377,7 @@
   @Test
   public void testSingleByteRead() throws IOException {
     ChunkInputStream cis = new ChunkInputStream();
-    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<Entry<Key,Value>>(data.iterator());
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(data.iterator());
 
     cis.setSource(pi);
     assertEquals((byte) 'a', (byte) cis.read());
diff --git a/fate/pom.xml b/fate/pom.xml
index adca357..059e4f1 100644
--- a/fate/pom.xml
+++ b/fate/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-fate</artifactId>
   <name>Apache Accumulo Fate</name>
diff --git a/fate/src/main/java/org/apache/accumulo/fate/AcceptableException.java b/fate/src/main/java/org/apache/accumulo/fate/AcceptableException.java
new file mode 100644
index 0000000..39683c1
--- /dev/null
+++ b/fate/src/main/java/org/apache/accumulo/fate/AcceptableException.java
@@ -0,0 +1,29 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.fate;
+
+/**
+ * A marker for exceptions thrown by FATE operations that are acceptable and should not trigger warning messages. This exception is intended to wrap an
+ * existing exception from a FATE op implementation so that the FATE runner knows the exception does not warrant a warning.
+ * <p>
+ * Oftentimes, problems that map well into the FATE execution model have states in which it is impossible to know ahead of time whether an exception will be
+ * thrown. For example, with concurrent create table operations, one of the operations will fail because the table already exists, but this is not an error
+ * condition for the system. It is normal and expected.
+ */
+public interface AcceptableException {
+
+}
diff --git a/fate/src/main/java/org/apache/accumulo/fate/AdminUtil.java b/fate/src/main/java/org/apache/accumulo/fate/AdminUtil.java
index 8532e92..f6aa811 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/AdminUtil.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/AdminUtil.java
@@ -68,15 +68,15 @@
 
   public void print(ReadOnlyTStore<T> zs, IZooReaderWriter zk, String lockPath, Formatter fmt, Set<Long> filterTxid, EnumSet<TStatus> filterStatus)
       throws KeeperException, InterruptedException {
-    Map<Long,List<String>> heldLocks = new HashMap<Long,List<String>>();
-    Map<Long,List<String>> waitingLocks = new HashMap<Long,List<String>>();
+    Map<Long,List<String>> heldLocks = new HashMap<>();
+    Map<Long,List<String>> waitingLocks = new HashMap<>();
 
     List<String> lockedIds = zk.getChildren(lockPath);
 
     for (String id : lockedIds) {
       try {
         List<String> lockNodes = zk.getChildren(lockPath + "/" + id);
-        lockNodes = new ArrayList<String>(lockNodes);
+        lockNodes = new ArrayList<>(lockNodes);
         Collections.sort(lockNodes);
 
         int pos = 0;
@@ -104,7 +104,7 @@
 
             List<String> tables = locks.get(Long.parseLong(lda[1], 16));
             if (tables == null) {
-              tables = new ArrayList<String>();
+              tables = new ArrayList<>();
               locks.put(Long.parseLong(lda[1], 16), tables);
             }
 
diff --git a/fate/src/main/java/org/apache/accumulo/fate/AgeOffStore.java b/fate/src/main/java/org/apache/accumulo/fate/AgeOffStore.java
index d023c27..376dad4 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/AgeOffStore.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/AgeOffStore.java
@@ -70,7 +70,7 @@
   }
 
   public void ageOff() {
-    HashSet<Long> oldTxs = new HashSet<Long>();
+    HashSet<Long> oldTxs = new HashSet<>();
 
     synchronized (this) {
       long time = timeSource.currentTimeMillis();
@@ -114,7 +114,7 @@
     this.store = store;
     this.ageOffTime = ageOffTime;
     this.timeSource = timeSource;
-    candidates = new HashMap<Long,Long>();
+    candidates = new HashMap<>();
 
     minTime = Long.MAX_VALUE;
 
@@ -231,4 +231,9 @@
   public List<Long> list() {
     return store.list();
   }
+
+  @Override
+  public List<ReadOnlyRepo<T>> getStack(long tid) {
+    return store.getStack(tid);
+  }
 }
diff --git a/fate/src/main/java/org/apache/accumulo/fate/Fate.java b/fate/src/main/java/org/apache/accumulo/fate/Fate.java
index 7ac573e..4e482ec 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/Fate.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/Fate.java
@@ -111,7 +111,14 @@
 
     private void transitionToFailed(long tid, Repo<T> op, Exception e) {
       String tidStr = String.format("%016x", tid);
-      log.warn("Failed to execute Repo, tid=" + tidStr, e);
+      final String msg = "Failed to execute Repo, tid=" + tidStr;
+      // Exceptions from certain FATE ops don't need to be propagated up to the Monitor
+      // as warnings; they represent a normal, handled failure condition.
+      if (e instanceof AcceptableException) {
+        log.debug(msg, e.getCause());
+      } else {
+        log.warn(msg, e);
+      }
       store.setProperty(tid, EXCEPTION_PROP, e);
       store.setStatus(tid, TStatus.FAILED_IN_PROGRESS);
       log.info("Updated status for Repo with tid=" + tidStr + " to FAILED_IN_PROGRESS");
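The effect of the new `AcceptableException` marker on `transitionToFailed` can be sketched in a small self-contained example. Note the classes below are hypothetical stand-ins for illustration only, not the real Accumulo/FATE types; only the `instanceof` branch mirrors the patch above:

```java
// Minimal stand-in for the AcceptableException marker interface added in this patch.
interface AcceptableException {}

// A hypothetical "acceptable" failure that wraps the real cause, as the javadoc describes.
class AcceptableFateException extends RuntimeException implements AcceptableException {
  AcceptableFateException(Throwable cause) {
    super(cause);
  }
}

public class FateLoggingSketch {
  // Mirrors the branch in Fate.transitionToFailed(): acceptable exceptions
  // are logged at DEBUG; everything else remains a WARN.
  static String logLevelFor(Exception e) {
    return (e instanceof AcceptableException) ? "DEBUG" : "WARN";
  }

  public static void main(String[] args) {
    Exception expected = new AcceptableFateException(new IllegalStateException("table exists"));
    System.out.println(logLevelFor(expected));               // DEBUG
    System.out.println(logLevelFor(new RuntimeException())); // WARN
  }
}
```

Because the marker is an interface rather than a superclass, an op can tag any existing exception type as acceptable without changing its inheritance hierarchy.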
diff --git a/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyStore.java b/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyStore.java
index 0ca59dd..5d5aeab 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyStore.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyStore.java
@@ -86,7 +86,7 @@
 
   @Override
   public ReadOnlyRepo<T> top(long tid) {
-    return new ReadOnlyRepoWrapper<T>(store.top(tid));
+    return new ReadOnlyRepoWrapper<>(store.top(tid));
   }
 
   @Override
@@ -108,4 +108,9 @@
   public List<Long> list() {
     return store.list();
   }
+
+  @Override
+  public List<ReadOnlyRepo<T>> getStack(long tid) {
+    return store.getStack(tid);
+  }
 }
diff --git a/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyTStore.java b/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyTStore.java
index 5c1344a..9039ad2 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyTStore.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/ReadOnlyTStore.java
@@ -87,6 +87,11 @@
   ReadOnlyRepo<T> top(long tid);
 
   /**
+   * Get all operations on a transaction's stack. Element 0 contains the most recently pushed operation, i.e. the top.
+   */
+  List<ReadOnlyRepo<T>> getStack(long tid);
+
+  /**
    * Get the state of a given transaction.
    *
    * Caller must have already reserved tid.
diff --git a/fate/src/main/java/org/apache/accumulo/fate/ZooStore.java b/fate/src/main/java/org/apache/accumulo/fate/ZooStore.java
index 4b4a83f..2a68f44 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/ZooStore.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/ZooStore.java
@@ -16,8 +16,8 @@
  */
 package org.apache.accumulo.fate;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
 import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
@@ -97,8 +97,8 @@
 
     this.path = path;
     this.zk = zk;
-    this.reserved = new HashSet<Long>();
-    this.defered = new HashMap<Long,Long>();
+    this.reserved = new HashSet<>();
+    this.defered = new HashMap<>();
     this.idgenerator = new SecureRandom();
 
     zk.putPersistentData(path, new byte[0], NodeExistsPolicy.SKIP);
@@ -130,7 +130,7 @@
           events = statusChangeEvents;
         }
 
-        List<String> txdirs = new ArrayList<String>(zk.getChildren(path));
+        List<String> txdirs = new ArrayList<>(zk.getChildren(path));
         Collections.sort(txdirs);
 
         synchronized (this) {
@@ -287,7 +287,7 @@
   private String findTop(String txpath) throws KeeperException, InterruptedException {
     List<String> ops = zk.getChildren(txpath);
 
-    ops = new ArrayList<String>(ops);
+    ops = new ArrayList<>(ops);
 
     String max = "";
 
@@ -448,7 +448,7 @@
   @Override
   public List<Long> list() {
     try {
-      ArrayList<Long> l = new ArrayList<Long>();
+      ArrayList<Long> l = new ArrayList<>();
       List<String> transactions = zk.getChildren(path);
       for (String txid : transactions) {
         l.add(parseTid(txid));
@@ -458,4 +458,44 @@
       throw new RuntimeException(e);
     }
   }
+
+  @Override
+  public List<ReadOnlyRepo<T>> getStack(long tid) {
+    String txpath = getTXPath(tid);
+
+    outer: while (true) {
+      List<String> ops;
+      try {
+        ops = zk.getChildren(txpath);
+      } catch (KeeperException.NoNodeException e) {
+        return Collections.emptyList();
+      } catch (KeeperException | InterruptedException e1) {
+        throw new RuntimeException(e1);
+      }
+
+      ops = new ArrayList<>(ops);
+      Collections.sort(ops, Collections.reverseOrder());
+
+      ArrayList<ReadOnlyRepo<T>> dops = new ArrayList<>();
+
+      for (String child : ops) {
+        if (child.startsWith("repo_")) {
+          byte[] ser;
+          try {
+            ser = zk.getData(txpath + "/" + child, null);
+            @SuppressWarnings("unchecked")
+            ReadOnlyRepo<T> repo = (ReadOnlyRepo<T>) deserialize(ser);
+            dops.add(repo);
+          } catch (KeeperException.NoNodeException e) {
+            // children changed so start over
+            continue outer;
+          } catch (KeeperException | InterruptedException e) {
+            throw new RuntimeException(e);
+          }
+        }
+      }
+
+      return dops;
+    }
+  }
 }
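The restart-on-`NoNodeException` pattern in the new `getStack` implementation can be illustrated in isolation. The reader and store below are hypothetical stand-ins for the ZooKeeper calls; only the labeled-loop shape mirrors the code above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Supplier;

public class GetStackSketch {
  // Stand-in for reading one repo_* node from ZooKeeper.
  interface Reader {
    String get(String child);
  }

  // Mirrors the labeled loop in ZooStore.getStack(): re-list the children and
  // restart the whole read whenever a node vanishes partway through.
  static List<String> readAll(Supplier<List<String>> children, Reader r) {
    outer: while (true) {
      List<String> out = new ArrayList<>();
      for (String c : children.get()) {
        try {
          out.add(r.get(c));
        } catch (NoSuchElementException e) {
          continue outer; // children changed so start over
        }
      }
      return out;
    }
  }

  public static void main(String[] args) {
    boolean[] failedOnce = {false};
    // The first read of repo_2 throws, simulating a concurrently deleted node.
    Reader r = c -> {
      if (c.equals("repo_2") && !failedOnce[0]) {
        failedOnce[0] = true;
        throw new NoSuchElementException(c);
      }
      return c.toUpperCase();
    };
    List<String> got = readAll(() -> Arrays.asList("repo_1", "repo_2"), r);
    System.out.println(got.size()); // both entries, after one restart
  }
}
```

Restarting from scratch is simpler than repairing a partially read stack, and it is safe here because the method only reads: a rare concurrent change costs one extra listing, never an inconsistent result.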
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLock.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLock.java
index 624ce5d..fe31011 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLock.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLock.java
@@ -37,7 +37,7 @@
 
   static enum LockType {
     READ, WRITE,
-  };
+  }
 
   // serializer for lock type and user data
   static class ParsedLock {
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/TransactionWatcher.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/TransactionWatcher.java
index dda7db9..b10ddea 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/TransactionWatcher.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/TransactionWatcher.java
@@ -33,7 +33,7 @@
   }
 
   private static final Logger log = LoggerFactory.getLogger(TransactionWatcher.class);
-  final private Map<Long,AtomicInteger> counts = new HashMap<Long,AtomicInteger>();
+  final private Map<Long,AtomicInteger> counts = new HashMap<>();
   final private Arbitrator arbitrator;
 
   public TransactionWatcher(Arbitrator arbitrator) {
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
index 503e56c..66234fb 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
@@ -79,8 +79,9 @@
     @Override
     public void process(WatchedEvent event) {
 
-      if (log.isTraceEnabled())
+      if (log.isTraceEnabled()) {
         log.trace("{}", event);
+      }
 
       switch (event.getType()) {
         case NodeDataChanged:
@@ -155,9 +156,9 @@
    */
   public ZooCache(ZooReader reader, Watcher watcher) {
     this.zReader = reader;
-    this.cache = new HashMap<String,byte[]>();
-    this.statCache = new HashMap<String,Stat>();
-    this.childrenCache = new HashMap<String,List<String>>();
+    this.cache = new HashMap<>();
+    this.statCache = new HashMap<>();
+    this.childrenCache = new HashMap<>();
     this.externalWatcher = watcher;
   }
 
@@ -209,11 +210,12 @@
           log.debug("Wait in retry() was interrupted.", e);
         }
         LockSupport.parkNanos(sleepTime);
-        if (sleepTime < 10000) {
+        if (sleepTime < 10_000) {
           sleepTime = (int) (sleepTime + sleepTime * Math.random());
         }
       }
     }
+
   }
 
   /**
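The `10_000` literal above is purely a readability change, but the retry loop's jittered growth is worth seeing on its own. A hypothetical helper with the same arithmetic as the sleep-time update in `ZooCache.retry()`:

```java
public class BackoffSketch {
  // Same arithmetic as ZooCache.retry(): grow the sleep time by a random
  // factor in [1, 2) per retry, but stop growing once it reaches 10 seconds.
  static int nextSleep(int sleepTime, double rand) {
    if (sleepTime < 10_000) {
      sleepTime = (int) (sleepTime + sleepTime * rand);
    }
    return sleepTime;
  }

  public static void main(String[] args) {
    int s = 100;
    for (int i = 0; i < 20; i++) {
      s = nextSleep(s, 0.5); // fixed "jitter" for a deterministic demo
    }
    System.out.println(s); // stops growing shortly after crossing 10s
  }
}
```

The randomness desynchronizes clients retrying against the same ZooKeeper ensemble, while the cap bounds how long any single wait can get.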
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCacheFactory.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCacheFactory.java
index 1475928..9fecf2e 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCacheFactory.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCacheFactory.java
@@ -26,7 +26,7 @@
  */
 public class ZooCacheFactory {
   // TODO: make this better - LRU, soft references, ...
-  private static Map<String,ZooCache> instances = new HashMap<String,ZooCache>();
+  private static Map<String,ZooCache> instances = new HashMap<>();
 
   /**
    * Gets a {@link ZooCache}. The same object may be returned for multiple calls with the same arguments.
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java
index 992a444..90fb4aa 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java
@@ -404,7 +404,7 @@
       return false;
     }
 
-    children = new ArrayList<String>(children);
+    children = new ArrayList<>(children);
     Collections.sort(children);
 
     String lockNode = children.get(0);
@@ -437,7 +437,7 @@
       return null;
     }
 
-    children = new ArrayList<String>(children);
+    children = new ArrayList<>(children);
     Collections.sort(children);
 
     String lockNode = children.get(0);
@@ -456,7 +456,7 @@
       return 0;
     }
 
-    children = new ArrayList<String>(children);
+    children = new ArrayList<>(children);
     Collections.sort(children);
 
     String lockNode = children.get(0);
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooQueueLock.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooQueueLock.java
index 1b22dc9..25c735b 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooQueueLock.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooQueueLock.java
@@ -68,7 +68,7 @@
 
   @Override
   public SortedMap<Long,byte[]> getEarlierEntries(long entry) {
-    SortedMap<Long,byte[]> result = new TreeMap<Long,byte[]>();
+    SortedMap<Long,byte[]> result = new TreeMap<>();
     try {
       List<String> children = Collections.emptyList();
       try {
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooSession.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooSession.java
index 837785f..b9fedac 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooSession.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooSession.java
@@ -51,7 +51,7 @@
     ZooKeeper zooKeeper;
   }
 
-  private static Map<String,ZooSessionInfo> sessions = new HashMap<String,ZooSessionInfo>();
+  private static Map<String,ZooSessionInfo> sessions = new HashMap<>();
 
   private static String sessionKey(String keepers, int timeout, String scheme, byte[] auth) {
     return keepers + ":" + timeout + ":" + (scheme == null ? "" : scheme) + ":" + (auth == null ? "" : new String(auth, UTF_8));
diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java
index 4c4aa13..6ea10d0 100644
--- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java
+++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooUtil.java
@@ -161,9 +161,9 @@
   private static final RetryFactory RETRY_FACTORY;
 
   static {
-    PRIVATE = new ArrayList<ACL>();
+    PRIVATE = new ArrayList<>();
     PRIVATE.addAll(Ids.CREATOR_ALL_ACL);
-    PUBLIC = new ArrayList<ACL>();
+    PUBLIC = new ArrayList<>();
     PUBLIC.addAll(PRIVATE);
     PUBLIC.add(new ACL(Perms.READ, Ids.ANYONE_ID_UNSAFE));
     RETRY_FACTORY = RetryFactory.DEFAULT_INSTANCE;
@@ -483,7 +483,7 @@
       return null;
     }
 
-    children = new ArrayList<String>(children);
+    children = new ArrayList<>(children);
     Collections.sort(children);
 
     String lockNode = children.get(0);
diff --git a/fate/src/test/java/org/apache/accumulo/fate/AgeOffStoreTest.java b/fate/src/test/java/org/apache/accumulo/fate/AgeOffStoreTest.java
index 2c3b813..9549bd8 100644
--- a/fate/src/test/java/org/apache/accumulo/fate/AgeOffStoreTest.java
+++ b/fate/src/test/java/org/apache/accumulo/fate/AgeOffStoreTest.java
@@ -43,8 +43,8 @@
   public void testBasic() {
 
     TestTimeSource tts = new TestTimeSource();
-    SimpleStore<String> sstore = new SimpleStore<String>();
-    AgeOffStore<String> aoStore = new AgeOffStore<String>(sstore, 10, tts);
+    SimpleStore<String> sstore = new SimpleStore<>();
+    AgeOffStore<String> aoStore = new AgeOffStore<>(sstore, 10, tts);
 
     aoStore.ageOff();
 
@@ -73,22 +73,22 @@
 
     aoStore.ageOff();
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1, txid2, txid3, txid4)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(4, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1, txid2, txid3, txid4)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(4, new HashSet<>(aoStore.list()).size());
 
     tts.time = 15;
 
     aoStore.ageOff();
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1, txid3, txid4)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(3, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1, txid3, txid4)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(3, new HashSet<>(aoStore.list()).size());
 
     tts.time = 30;
 
     aoStore.ageOff();
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(1, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(1, new HashSet<>(aoStore.list()).size());
   }
 
   @Test
@@ -96,7 +96,7 @@
     // test age off when source store starts off non empty
 
     TestTimeSource tts = new TestTimeSource();
-    SimpleStore<String> sstore = new SimpleStore<String>();
+    SimpleStore<String> sstore = new SimpleStore<>();
     Long txid1 = sstore.create();
     sstore.reserve(txid1);
     sstore.setStatus(txid1, TStatus.IN_PROGRESS);
@@ -116,22 +116,22 @@
 
     Long txid4 = sstore.create();
 
-    AgeOffStore<String> aoStore = new AgeOffStore<String>(sstore, 10, tts);
+    AgeOffStore<String> aoStore = new AgeOffStore<>(sstore, 10, tts);
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1, txid2, txid3, txid4)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(4, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1, txid2, txid3, txid4)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(4, new HashSet<>(aoStore.list()).size());
 
     aoStore.ageOff();
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1, txid2, txid3, txid4)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(4, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1, txid2, txid3, txid4)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(4, new HashSet<>(aoStore.list()).size());
 
     tts.time = 15;
 
     aoStore.ageOff();
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(1, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(1, new HashSet<>(aoStore.list()).size());
 
     aoStore.reserve(txid1);
     aoStore.setStatus(txid1, TStatus.FAILED_IN_PROGRESS);
@@ -141,8 +141,8 @@
 
     aoStore.ageOff();
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(1, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(1, new HashSet<>(aoStore.list()).size());
 
     aoStore.reserve(txid1);
     aoStore.setStatus(txid1, TStatus.FAILED);
@@ -150,13 +150,13 @@
 
     aoStore.ageOff();
 
-    Assert.assertEquals(new HashSet<Long>(Arrays.asList(txid1)), new HashSet<Long>(aoStore.list()));
-    Assert.assertEquals(1, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(new HashSet<>(Arrays.asList(txid1)), new HashSet<>(aoStore.list()));
+    Assert.assertEquals(1, new HashSet<>(aoStore.list()).size());
 
     tts.time = 42;
 
     aoStore.ageOff();
 
-    Assert.assertEquals(0, new HashSet<Long>(aoStore.list()).size());
+    Assert.assertEquals(0, new HashSet<>(aoStore.list()).size());
   }
 }
diff --git a/fate/src/test/java/org/apache/accumulo/fate/ReadOnlyStoreTest.java b/fate/src/test/java/org/apache/accumulo/fate/ReadOnlyStoreTest.java
index eea5f1b..e8f0cee 100644
--- a/fate/src/test/java/org/apache/accumulo/fate/ReadOnlyStoreTest.java
+++ b/fate/src/test/java/org/apache/accumulo/fate/ReadOnlyStoreTest.java
@@ -51,7 +51,7 @@
     EasyMock.replay(repo);
     EasyMock.replay(mock);
 
-    ReadOnlyTStore<String> store = new ReadOnlyStore<String>(mock);
+    ReadOnlyTStore<String> store = new ReadOnlyStore<>(mock);
     Assert.assertEquals(0xdeadbeefl, store.reserve());
     store.reserve(0xdeadbeefl);
     ReadOnlyRepo<String> top = store.top(0xdeadbeefl);
diff --git a/fate/src/test/java/org/apache/accumulo/fate/SimpleStore.java b/fate/src/test/java/org/apache/accumulo/fate/SimpleStore.java
index f0bac88..3277270 100644
--- a/fate/src/test/java/org/apache/accumulo/fate/SimpleStore.java
+++ b/fate/src/test/java/org/apache/accumulo/fate/SimpleStore.java
@@ -33,8 +33,8 @@
 public class SimpleStore<T> implements TStore<T> {
 
   private long nextId = 1;
-  private Map<Long,TStatus> statuses = new HashMap<Long,TStore.TStatus>();
-  private Set<Long> reserved = new HashSet<Long>();
+  private Map<Long,TStatus> statuses = new HashMap<>();
+  private Set<Long> reserved = new HashSet<>();
 
   @Override
   public long create() {
@@ -120,7 +120,12 @@
 
   @Override
   public List<Long> list() {
-    return new ArrayList<Long>(statuses.keySet());
+    return new ArrayList<>(statuses.keySet());
+  }
+
+  @Override
+  public List<ReadOnlyRepo<T>> getStack(long tid) {
+    throw new NotImplementedException();
   }
 
 }
diff --git a/fate/src/test/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLockTest.java b/fate/src/test/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLockTest.java
index 0b04bd1..486a4c6 100644
--- a/fate/src/test/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLockTest.java
+++ b/fate/src/test/java/org/apache/accumulo/fate/zookeeper/DistributedReadWriteLockTest.java
@@ -34,11 +34,11 @@
   public static class MockQueueLock implements QueueLock {
 
     long next = 0L;
-    final SortedMap<Long,byte[]> locks = new TreeMap<Long,byte[]>();
+    final SortedMap<Long,byte[]> locks = new TreeMap<>();
 
     @Override
     synchronized public SortedMap<Long,byte[]> getEarlierEntries(long entry) {
-      SortedMap<Long,byte[]> result = new TreeMap<Long,byte[]>();
+      SortedMap<Long,byte[]> result = new TreeMap<>();
       result.putAll(locks.headMap(entry + 1));
       return result;
     }
diff --git a/fate/src/test/java/org/apache/accumulo/fate/zookeeper/TransactionWatcherTest.java b/fate/src/test/java/org/apache/accumulo/fate/zookeeper/TransactionWatcherTest.java
index 0e4e329..6779f6c 100644
--- a/fate/src/test/java/org/apache/accumulo/fate/zookeeper/TransactionWatcherTest.java
+++ b/fate/src/test/java/org/apache/accumulo/fate/zookeeper/TransactionWatcherTest.java
@@ -28,13 +28,13 @@
 public class TransactionWatcherTest {
 
   static class SimpleArbitrator implements TransactionWatcher.Arbitrator {
-    Map<String,List<Long>> started = new HashMap<String,List<Long>>();
-    Map<String,List<Long>> cleanedUp = new HashMap<String,List<Long>>();
+    Map<String,List<Long>> started = new HashMap<>();
+    Map<String,List<Long>> cleanedUp = new HashMap<>();
 
     public synchronized void start(String txType, Long txid) throws Exception {
       List<Long> txids = started.get(txType);
       if (txids == null)
-        txids = new ArrayList<Long>();
+        txids = new ArrayList<>();
       if (txids.contains(txid))
         throw new Exception("transaction already started");
       txids.add(txid);
@@ -42,7 +42,7 @@
 
       txids = cleanedUp.get(txType);
       if (txids == null)
-        txids = new ArrayList<Long>();
+        txids = new ArrayList<>();
       if (txids.contains(txid))
         throw new IllegalStateException("transaction was started but not cleaned up");
       txids.add(txid);
@@ -124,7 +124,7 @@
       });
       Assert.fail("work against stopped transaction should fail");
     } catch (Exception ex) {
-      ;
+
     }
     final long txid2 = 9;
     sa.start(txType, txid2);
diff --git a/fate/src/test/java/org/apache/accumulo/fate/zookeeper/ZooCacheTest.java b/fate/src/test/java/org/apache/accumulo/fate/zookeeper/ZooCacheTest.java
index 5dd6f61..6c35ed1 100644
--- a/fate/src/test/java/org/apache/accumulo/fate/zookeeper/ZooCacheTest.java
+++ b/fate/src/test/java/org/apache/accumulo/fate/zookeeper/ZooCacheTest.java
@@ -40,6 +40,7 @@
 import org.apache.zookeeper.ZooKeeper;
 import org.apache.zookeeper.data.Stat;
 import org.easymock.Capture;
+import org.easymock.EasyMock;
 import org.junit.Before;
 import org.junit.Test;
 
@@ -266,7 +267,7 @@
   }
 
   private Watcher watchData(byte[] initialData) throws Exception {
-    Capture<Watcher> cw = new Capture<Watcher>();
+    Capture<Watcher> cw = EasyMock.newCapture();
     Stat existsStat = new Stat();
     if (initialData != null) {
       expect(zk.exists(eq(ZPATH), capture(cw))).andReturn(existsStat);
@@ -335,7 +336,7 @@
   }
 
   private Watcher watchChildren(List<String> initialChildren) throws Exception {
-    Capture<Watcher> cw = new Capture<Watcher>();
+    Capture<Watcher> cw = EasyMock.newCapture();
     expect(zk.getChildren(eq(ZPATH), capture(cw))).andReturn(initialChildren);
     replay(zk);
     zc.getChildren(ZPATH);
diff --git a/iterator-test-harness/.gitignore b/iterator-test-harness/.gitignore
new file mode 100644
index 0000000..e7d7fb1
--- /dev/null
+++ b/iterator-test-harness/.gitignore
@@ -0,0 +1,25 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Maven ignores
+/target/
+
+# IDE ignores
+/.settings/
+/.project
+/.classpath
+/.pydevproject
+/.idea
+/*.iml
diff --git a/iterator-test-harness/pom.xml b/iterator-test-harness/pom.xml
new file mode 100644
index 0000000..d54a086
--- /dev/null
+++ b/iterator-test-harness/pom.xml
@@ -0,0 +1,51 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.accumulo</groupId>
+    <artifactId>accumulo-project</artifactId>
+    <version>1.8.0-SNAPSHOT</version>
+  </parent>
+  <artifactId>accumulo-iterator-test-harness</artifactId>
+  <name>Apache Accumulo Iterator Test Harness</name>
+  <description>A library for testing Apache Accumulo Iterators.</description>
+  <dependencies>
+    <!--TODO Don't force downstream users to have JUnit -->
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>log4j</groupId>
+      <artifactId>log4j</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.accumulo</groupId>
+      <artifactId>accumulo-core</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-client</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+    </dependency>
+  </dependencies>
+</project>
diff --git a/iterator-test-harness/src/main/findbugs/exclude-filter.xml b/iterator-test-harness/src/main/findbugs/exclude-filter.xml
new file mode 100644
index 0000000..c801230
--- /dev/null
+++ b/iterator-test-harness/src/main/findbugs/exclude-filter.xml
@@ -0,0 +1,18 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<FindBugsFilter>
+</FindBugsFilter>
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestCaseFinder.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestCaseFinder.java
new file mode 100644
index 0000000..7546460
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestCaseFinder.java
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest;
+
+import java.io.IOException;
+import java.lang.reflect.Modifier;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.ImmutableSet;
+import com.google.common.reflect.ClassPath;
+import com.google.common.reflect.ClassPath.ClassInfo;
+
+/**
+ * A class to ease finding published test cases.
+ */
+public class IteratorTestCaseFinder {
+  private static final Logger log = LoggerFactory.getLogger(IteratorTestCaseFinder.class);
+
+  /**
+   * Finds and instantiates all {@link IteratorTestCase} implementations in the test cases package.
+   *
+   * @return A list of {@link IteratorTestCase}s.
+   */
+  public static List<IteratorTestCase> findAllTestCases() {
+    log.info("Searching {}", IteratorTestCase.class.getPackage().getName());
+    ClassPath cp;
+    try {
+      cp = ClassPath.from(IteratorTestCaseFinder.class.getClassLoader());
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+    ImmutableSet<ClassInfo> classes = cp.getTopLevelClasses(IteratorTestCase.class.getPackage().getName());
+
+    final List<IteratorTestCase> testCases = new ArrayList<>();
+    // final Set<Class<? extends IteratorTestCase>> classes = reflections.getSubTypesOf(IteratorTestCase.class);
+    for (ClassInfo classInfo : classes) {
+      Class<?> clz;
+      try {
+        clz = Class.forName(classInfo.getName());
+      } catch (Exception e) {
+        log.warn("Could not get class for " + classInfo.getName(), e);
+        continue;
+      }
+
+      if (clz.isInterface() || Modifier.isAbstract(clz.getModifiers()) || !IteratorTestCase.class.isAssignableFrom(clz)) {
+        log.debug("Skipping " + clz);
+        continue;
+      }
+
+      try {
+        testCases.add((IteratorTestCase) clz.newInstance());
+      } catch (IllegalAccessException | InstantiationException e) {
+        log.warn("Could not instantiate {}", clz, e);
+      }
+    }
+
+    return testCases;
+  }
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestInput.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestInput.java
new file mode 100644
index 0000000..dfffdeb
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestInput.java
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest;
+
+import static java.util.Objects.requireNonNull;
+
+import java.util.Collections;
+import java.util.Map;
+import java.util.SortedMap;
+
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+
+/**
+ * The necessary user-input to invoke a test on a {@link SortedKeyValueIterator}.
+ */
+public class IteratorTestInput {
+
+  private final Class<? extends SortedKeyValueIterator<Key,Value>> iteratorClass;
+  private final Map<String,String> iteratorOptions;
+  private final Range range;
+  private final SortedMap<Key,Value> input;
+
+  /**
+   * Construct an instance of the test input.
+   *
+   * @param iteratorClass
+   *          The class for the iterator to test
+   * @param iteratorOptions
+   *          Options, if any, to provide to the iterator ({@link IteratorSetting}'s Map of properties)
+   * @param range
+   *          The Range of data to query ({@link Scanner#setRange(Range)})
+   * @param input
+   *          A sorted collection of Key-Value pairs acting as the table.
+   */
+  public IteratorTestInput(Class<? extends SortedKeyValueIterator<Key,Value>> iteratorClass, Map<String,String> iteratorOptions, Range range,
+      SortedMap<Key,Value> input) {
+    // Already immutable
+    this.iteratorClass = requireNonNull(iteratorClass);
+    // Make it immutable to the test
+    this.iteratorOptions = Collections.unmodifiableMap(requireNonNull(iteratorOptions));
+    // Already immutable
+    this.range = requireNonNull(range);
+    // Make it immutable to the test
+    this.input = Collections.unmodifiableSortedMap((requireNonNull(input)));
+  }
+
+  public Class<? extends SortedKeyValueIterator<Key,Value>> getIteratorClass() {
+    return iteratorClass;
+  }
+
+  public Map<String,String> getIteratorOptions() {
+    return iteratorOptions;
+  }
+
+  public Range getRange() {
+    return range;
+  }
+
+  public SortedMap<Key,Value> getInput() {
+    return input;
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder(64);
+    sb.append("[iteratorClass=").append(iteratorClass).append(", iteratorOptions=").append(iteratorOptions).append(", range=").append(range)
+        .append(", input='").append(input).append("']");
+    return sb.toString();
+  }
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestOutput.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestOutput.java
new file mode 100644
index 0000000..4b670bb
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestOutput.java
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest;
+
+import static java.util.Objects.requireNonNull;
+
+import java.util.Collections;
+import java.util.SortedMap;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+
+/**
+ * The expected results from invoking a {@link IteratorTestCase} on a {@link IteratorTestInput}. The output will be either a {@link SortedMap} of Keys and
+ * Values or an exception, but never both. An instance constructed directly from a PASSED or FAILED outcome carries neither.
+ */
+public class IteratorTestOutput {
+
+  /**
+   * An outcome about what happened during a test case.
+   */
+  public enum TestOutcome {
+    /** The IteratorTestCase proactively passed. */
+    PASSED,
+
+    /** The IteratorTestCase proactively failed. */
+    FAILED,
+
+    /**
+     * The IteratorTestCase completed, but the pass/fail should be determined by the other context.
+     */
+    COMPLETED
+  }
+
+  private final SortedMap<Key,Value> output;
+  private final Exception exception;
+  private final TestOutcome outcome;
+
+  public IteratorTestOutput(TestOutcome outcome) {
+    this.outcome = outcome;
+    if (outcome == TestOutcome.COMPLETED) {
+      throw new IllegalArgumentException("This constructor is only for use with PASSED and FAILED");
+    }
+    output = null;
+    exception = null;
+  }
+
+  /**
+   * Create an instance of the class.
+   *
+   * @param output
+   *          The sorted collection of Key-Value pairs generated by an Iterator.
+   */
+  public IteratorTestOutput(SortedMap<Key,Value> output) {
+    this.output = Collections.unmodifiableSortedMap(requireNonNull(output));
+    this.exception = null;
+    this.outcome = TestOutcome.COMPLETED;
+  }
+
+  public IteratorTestOutput(Exception e) {
+    this.output = null;
+    this.exception = requireNonNull(e);
+    this.outcome = TestOutcome.FAILED;
+  }
+
+  /**
+   * @return The outcome of the test.
+   */
+  public TestOutcome getTestOutcome() {
+    return outcome;
+  }
+
+  /**
+   * Returns the output from the iterator.
+   *
+   * @return The sorted Key-Value pairs from an iterator, null if an exception was thrown.
+   */
+  public SortedMap<Key,Value> getOutput() {
+    return output;
+  }
+
+  /**
+   * @return True if there is output, false if the output is null.
+   */
+  public boolean hasOutput() {
+    return null != output;
+  }
+
+  /**
+   * Returns the exception thrown by the iterator.
+   *
+   * @return The exception thrown by the iterator, null if no exception was thrown.
+   */
+  public Exception getException() {
+    return exception;
+  }
+
+  /**
+   * @return True if there is an exception, false if the iterator successfully generated Key-Value pairs.
+   */
+  public boolean hasException() {
+    return null != exception;
+  }
+
+  @Override
+  public int hashCode() {
+    final int prime = 31;
+    int result = 1;
+    result = prime * result + ((exception == null) ? 0 : exception.hashCode());
+    result = prime * result + ((outcome == null) ? 0 : outcome.hashCode());
+    result = prime * result + ((output == null) ? 0 : output.hashCode());
+    return result;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (!(o instanceof IteratorTestOutput)) {
+      return false;
+    }
+
+    IteratorTestOutput other = (IteratorTestOutput) o;
+
+    if (outcome != other.outcome) {
+      return false;
+    }
+
+    if (hasOutput()) {
+      if (!other.hasOutput()) {
+        return false;
+      }
+      return output.equals(other.output);
+    }
+
+    if (!other.hasException()) {
+      return false;
+    }
+    return exception.equals(other.getException());
+
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder(64);
+    sb.append("[outcome=").append(outcome).append(", output='").append(output).append("', exception=").append(exception).append("]");
+    return sb.toString();
+  }
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestReport.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestReport.java
new file mode 100644
index 0000000..ea2b264
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestReport.java
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest;
+
+import static java.util.Objects.requireNonNull;
+
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+
+/**
+ * A summary of the invocation of an {@link IteratorTestCase} on an {@link IteratorTestInput}, pairing the actual output with the expected {@link IteratorTestOutput}.
+ */
+public class IteratorTestReport {
+
+  private final IteratorTestInput input;
+  private final IteratorTestOutput expectedOutput;
+  private final IteratorTestCase testCase;
+  private final IteratorTestOutput actualOutput;
+
+  public IteratorTestReport(IteratorTestInput input, IteratorTestOutput expectedOutput, IteratorTestOutput actualOutput, IteratorTestCase testCase) {
+    this.input = requireNonNull(input);
+    this.expectedOutput = requireNonNull(expectedOutput);
+    this.testCase = requireNonNull(testCase);
+    this.actualOutput = requireNonNull(actualOutput);
+  }
+
+  public IteratorTestInput getInput() {
+    return input;
+  }
+
+  public IteratorTestOutput getExpectedOutput() {
+    return expectedOutput;
+  }
+
+  public IteratorTestCase getTestCase() {
+    return testCase;
+  }
+
+  public IteratorTestOutput getActualOutput() {
+    return actualOutput;
+  }
+
+  /**
+   * Evaluate whether the test passed or failed.
+   *
+   * @return True if the actual output matches the expected output, false otherwise.
+   */
+  public boolean didTestSucceed() {
+    return testCase.verify(expectedOutput, actualOutput);
+  }
+
+  public String getSummary() {
+    StringBuilder sb = new StringBuilder(64);
+    // @formatter:off
+    sb.append("IteratorTestReport Summary: \n")
+        .append("\tTest Case = ").append(testCase.getClass().getName()).append('\n')
+        .append("\tInput Data = '").append(input).append("'\n")
+        .append("\tExpected Output = '").append(expectedOutput).append("'\n")
+        .append("\tActual Output = '").append(actualOutput).append("'\n");
+    // @formatter:on
+    return sb.toString();
+  }
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestRunner.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestRunner.java
new file mode 100644
index 0000000..99825a4
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestRunner.java
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A runner that invokes a collection of test cases against some input and collects a report for each, verifying the output.
+ */
+public class IteratorTestRunner {
+  private static final Logger log = LoggerFactory.getLogger(IteratorTestRunner.class);
+
+  private final IteratorTestInput testInput;
+  private final IteratorTestOutput testOutput;
+  private final Collection<IteratorTestCase> testCases;
+
+  /**
+   * Construct an instance of the class.
+   *
+   * @param testInput
+   *          The input to the tests
+   * @param testOutput
+   *          The expected output given the input
+   * @param testCases
+   *          The test cases to invoke
+   */
+  public IteratorTestRunner(IteratorTestInput testInput, IteratorTestOutput testOutput, Collection<IteratorTestCase> testCases) {
+    this.testInput = testInput;
+    this.testOutput = testOutput;
+    this.testCases = testCases;
+  }
+
+  public IteratorTestInput getTestInput() {
+    return testInput;
+  }
+
+  public IteratorTestOutput getTestOutput() {
+    return testOutput;
+  }
+
+  public Collection<IteratorTestCase> getTestCases() {
+    return testCases;
+  }
+
+  /**
+   * Invokes each test case on the input, verifying the output.
+   *
+   * @return A list of test reports, one for each test case run against the input.
+   */
+  public List<IteratorTestReport> runTests() {
+    List<IteratorTestReport> testReports = new ArrayList<>(testCases.size());
+    for (IteratorTestCase testCase : testCases) {
+      log.info("Invoking {} on {}", testCase.getClass().getName(), testInput.getIteratorClass().getName());
+
+      IteratorTestOutput actualOutput = null;
+
+      try {
+        actualOutput = testCase.test(testInput);
+      } catch (Exception e) {
+        log.error("Failed to invoke {} on {}", testCase.getClass().getName(), testInput.getIteratorClass().getName(), e);
+        actualOutput = new IteratorTestOutput(e);
+      }
+
+      // Sanity-check on the IteratorTestCase implementation.
+      if (null == actualOutput) {
+        throw new IllegalStateException("IteratorTestCase implementations should always return a non-null IteratorTestOutput. " + testCase.getClass().getName()
+            + " did not!");
+      }
+
+      testReports.add(new IteratorTestReport(testInput, testOutput, actualOutput, testCase));
+    }
+
+    return testReports;
+  }
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestUtil.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestUtil.java
new file mode 100644
index 0000000..6e3c8e6
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/IteratorTestUtil.java
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest;
+
+import static java.util.Objects.requireNonNull;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.SortedMapIterator;
+import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+
+/**
+ * A collection of methods that are helpful to the development of {@link IteratorTestCase}s.
+ */
+public class IteratorTestUtil {
+
+  public static SortedKeyValueIterator<Key,Value> instantiateIterator(IteratorTestInput input) {
+    try {
+      return requireNonNull(input.getIteratorClass()).newInstance();
+    } catch (InstantiationException | IllegalAccessException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  public static SortedKeyValueIterator<Key,Value> createSource(IteratorTestInput input) {
+    return new SimpleKVReusingIterator(new ColumnFamilySkippingIterator(new SortedMapIterator(requireNonNull(input).getInput())));
+  }
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/SimpleKVReusingIterator.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/SimpleKVReusingIterator.java
new file mode 100644
index 0000000..9174b69
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/SimpleKVReusingIterator.java
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+
+/**
+ * Internally, Accumulo reuses the same instance of Key and Value to reduce the number of objects to be garbage collected. This iterator simulates that.
+ */
+public class SimpleKVReusingIterator implements SortedKeyValueIterator<Key,Value> {
+
+  private final SortedKeyValueIterator<Key,Value> source;
+  private final Key topKey = new Key();
+  private final Value topValue = new Value();
+
+  public SimpleKVReusingIterator(SortedKeyValueIterator<Key,Value> source) {
+    this.source = source;
+  }
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+    this.source.init(source, options, env);
+  }
+
+  @Override
+  public boolean hasTop() {
+    return source.hasTop();
+  }
+
+  @Override
+  public void next() throws IOException {
+    source.next();
+    load();
+  }
+
+  @Override
+  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
+    source.seek(range, columnFamilies, inclusive);
+    load();
+  }
+
+  @Override
+  public Key getTopKey() {
+    return topKey;
+  }
+
+  @Override
+  public Value getTopValue() {
+    return topValue;
+  }
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+    SortedKeyValueIterator<Key,Value> newSource = source.deepCopy(env);
+    return new SimpleKVReusingIterator(newSource);
+  }
+
+  private void load() {
+    if (hasTop()) {
+      topKey.set(source.getTopKey());
+      topValue.set(source.getTopValue().get());
+    }
+  }
+}
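The reuse pattern in `SimpleKVReusingIterator` above is why test cases in this harness copy each Key/Value before storing it. A minimal standalone sketch (hypothetical `ReusingIterator` and `ReuseDemo` names, plain `StringBuilder` in place of Accumulo's Key/Value) of what goes wrong when a consumer retains the reused holder instead of copying its contents:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical illustration (not Accumulo code): the iterator hands back
// the SAME mutable holder on every call, just as the harness simulates.
class ReusingIterator implements Iterator<StringBuilder> {
  private final Iterator<String> source;
  private final StringBuilder top = new StringBuilder(); // reused holder

  ReusingIterator(Iterator<String> source) {
    this.source = source;
  }

  @Override
  public boolean hasNext() {
    return source.hasNext();
  }

  @Override
  public StringBuilder next() {
    top.setLength(0); // clear and refill the same object
    top.append(source.next());
    return top;
  }
}

public class ReuseDemo {
  static List<String> consume(boolean copy) {
    ReusingIterator it = new ReusingIterator(Arrays.asList("a", "b", "c").iterator());
    List<CharSequence> out = new ArrayList<>();
    while (it.hasNext()) {
      StringBuilder top = it.next();
      // Copying preserves each element; keeping the reference aliases the holder.
      out.add(copy ? top.toString() : top);
    }
    List<String> result = new ArrayList<>();
    for (CharSequence cs : out)
      result.add(cs.toString());
    return result;
  }

  public static void main(String[] args) {
    System.out.println(consume(false)); // every retained reference shows the last element
    System.out.println(consume(true));  // copies preserve a, b, c
  }
}
```

This mirrors why `DeepCopyTestCase.consume` wraps each top key and value in `new Key(...)` and `new Value(...)` before putting them in the result map.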
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/environments/SimpleIteratorEnvironment.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/environments/SimpleIteratorEnvironment.java
new file mode 100644
index 0000000..bbe625d
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/environments/SimpleIteratorEnvironment.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.environments;
+
+import java.io.IOException;
+
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.security.Authorizations;
+
+/**
+ * A simple {@link IteratorEnvironment} in which nearly all methods are unsupported and throw {@link UnsupportedOperationException}.
+ */
+public class SimpleIteratorEnvironment implements IteratorEnvironment {
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public AccumuloConfiguration getConfig() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public IteratorScope getIteratorScope() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public boolean isFullMajorCompaction() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public Authorizations getAuthorizations() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public IteratorEnvironment cloneWithSamplingEnabled() {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public boolean isSamplingEnabled() {
+    return false;
+  }
+
+  @Override
+  public SamplerConfiguration getSamplerConfiguration() {
+    throw new UnsupportedOperationException();
+  }
+
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/junit4/BaseJUnit4IteratorTest.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/junit4/BaseJUnit4IteratorTest.java
new file mode 100644
index 0000000..6325ae6
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/junit4/BaseJUnit4IteratorTest.java
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.junit4;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.IteratorTestReport;
+import org.apache.accumulo.iteratortest.IteratorTestRunner;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A base JUnit4 test class for users to leverage with the JUnit Parameterized Runner.
+ * <p>
+ * Users should extend this class and implement a static method using the {@code @Parameters} annotation.
+ *
+ * <pre>
+ * &#064;Parameters
+ * public static Object[][] data() {
+ *   IteratorTestInput input = createIteratorInput();
+ *   IteratorTestOutput expectedOutput = createIteratorOutput();
+ *   List&lt;IteratorTestCase&gt; testCases = createTestCases();
+ *   return BaseJUnit4IteratorTest.createParameters(input, expectedOutput, testCases);
+ * }
+ * </pre>
+ *
+ */
+@RunWith(Parameterized.class)
+public class BaseJUnit4IteratorTest {
+  private static final Logger log = LoggerFactory.getLogger(BaseJUnit4IteratorTest.class);
+
+  public final IteratorTestRunner runner;
+
+  public BaseJUnit4IteratorTest(IteratorTestInput input, IteratorTestOutput expectedOutput, IteratorTestCase testCase) {
+    this.runner = new IteratorTestRunner(input, expectedOutput, Collections.singleton(testCase));
+  }
+
+  /**
+   * A helper function to convert input, output and a list of test cases into a two-dimensional array for JUnit's Parameterized runner.
+   *
+   * @param input
+   *          The input
+   * @param output
+   *          The output
+   * @param testCases
+   *          A list of desired test cases to run.
+   * @return A two-dimensional array suitable to pass as JUnit's parameters.
+   */
+  public static Object[][] createParameters(IteratorTestInput input, IteratorTestOutput output, Collection<IteratorTestCase> testCases) {
+    Object[][] parameters = new Object[testCases.size()][3];
+    Iterator<IteratorTestCase> testCaseIter = testCases.iterator();
+    for (int i = 0; testCaseIter.hasNext(); i++) {
+      final IteratorTestCase testCase = testCaseIter.next();
+      parameters[i] = new Object[] {input, output, testCase};
+    }
+    return parameters;
+  }
+
+  @Test
+  public void testIterator() {
+    List<IteratorTestReport> reports = runner.runTests();
+    assertEquals(1, reports.size());
+
+    IteratorTestReport report = reports.get(0);
+    assertNotNull(report);
+
+    assertTrue(report.getSummary(), report.didTestSucceed());
+
+    // Present for manual verification
+    log.trace("Expected: {}, Actual: {}", report.getExpectedOutput(), report.getActualOutput());
+  }
+}
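The `createParameters` helper above fans a single input/output pair out across N test cases into the `Object[][]` shape JUnit's Parameterized runner expects. A self-contained sketch of that expansion, using plain Strings as hypothetical stand-ins for `IteratorTestInput`, `IteratorTestOutput`, and `IteratorTestCase`:

```java
import java.util.Arrays;
import java.util.Collection;

// Hypothetical stand-in types (Strings) to show only the expansion shape:
// each row pairs the SAME input/output with a different test case.
public class ParameterMatrixDemo {
  static Object[][] createParameters(String input, String output, Collection<String> testCases) {
    Object[][] parameters = new Object[testCases.size()][3];
    int i = 0;
    for (String testCase : testCases) {
      parameters[i++] = new Object[] {input, output, testCase};
    }
    return parameters;
  }

  public static void main(String[] args) {
    Object[][] params = createParameters("in", "out", Arrays.asList("caseA", "caseB"));
    // Two rows, one per test case, each carrying the shared input and output.
    System.out.println(Arrays.deepToString(params));
  }
}
```

JUnit then instantiates the test class once per row, passing the three elements to the constructor, which is why `BaseJUnit4IteratorTest`'s constructor takes exactly an input, an expected output, and a single test case.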
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/DeepCopyTestCase.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/DeepCopyTestCase.java
new file mode 100644
index 0000000..3c3e6da
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/DeepCopyTestCase.java
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.testcases;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.IteratorTestUtil;
+import org.apache.accumulo.iteratortest.environments.SimpleIteratorEnvironment;
+
+/**
+ * Test case that verifies that an iterator can use the generated instance from {@code deepCopy}.
+ */
+public class DeepCopyTestCase extends OutputVerifyingTestCase {
+
+  @Override
+  public IteratorTestOutput test(IteratorTestInput testInput) {
+    final SortedKeyValueIterator<Key,Value> skvi = IteratorTestUtil.instantiateIterator(testInput);
+    final SortedKeyValueIterator<Key,Value> source = IteratorTestUtil.createSource(testInput);
+
+    try {
+      skvi.init(source, testInput.getIteratorOptions(), new SimpleIteratorEnvironment());
+
+      SortedKeyValueIterator<Key,Value> copy = skvi.deepCopy(new SimpleIteratorEnvironment());
+
+      copy.seek(testInput.getRange(), Collections.<ByteSequence> emptySet(), false);
+      return new IteratorTestOutput(consume(copy));
+    } catch (IOException e) {
+      return new IteratorTestOutput(e);
+    }
+  }
+
+  TreeMap<Key,Value> consume(SortedKeyValueIterator<Key,Value> skvi) throws IOException {
+    TreeMap<Key,Value> data = new TreeMap<>();
+    while (skvi.hasTop()) {
+      // Make sure to copy the K-V
+      data.put(new Key(skvi.getTopKey()), new Value(skvi.getTopValue()));
+      skvi.next();
+    }
+    return data;
+  }
+
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/InstantiationTestCase.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/InstantiationTestCase.java
new file mode 100644
index 0000000..3bbfb7f
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/InstantiationTestCase.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.testcases;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput.TestOutcome;
+
+/**
+ * TestCase to assert that an Iterator has a no-args constructor.
+ */
+public class InstantiationTestCase implements IteratorTestCase {
+
+  @Override
+  public IteratorTestOutput test(IteratorTestInput testInput) {
+    Class<? extends SortedKeyValueIterator<Key,Value>> clz = testInput.getIteratorClass();
+
+    try {
+      // We should be able to instantiate the Iterator given the Class
+      @SuppressWarnings("unused")
+      SortedKeyValueIterator<Key,Value> iter = clz.newInstance();
+    } catch (Exception e) {
+      return new IteratorTestOutput(e);
+    }
+
+    return new IteratorTestOutput(TestOutcome.PASSED);
+  }
+
+  @Override
+  public boolean verify(IteratorTestOutput expected, IteratorTestOutput actual) {
+    // Ignore what the user provided as expected output, just check that we instantiated the iterator successfully.
+    return TestOutcome.PASSED == actual.getTestOutcome();
+  }
+
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/IsolatedDeepCopiesTestCase.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/IsolatedDeepCopiesTestCase.java
new file mode 100644
index 0000000..1b8b05f
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/IsolatedDeepCopiesTestCase.java
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.testcases;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Random;
+import java.util.Set;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.IteratorTestUtil;
+import org.apache.accumulo.iteratortest.environments.SimpleIteratorEnvironment;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Test case that verifies that copies do not impact one another.
+ */
+public class IsolatedDeepCopiesTestCase extends OutputVerifyingTestCase {
+  private static final Logger log = LoggerFactory.getLogger(IsolatedDeepCopiesTestCase.class);
+
+  private final Random random = new Random();
+
+  @Override
+  public IteratorTestOutput test(IteratorTestInput testInput) {
+    final SortedKeyValueIterator<Key,Value> skvi = IteratorTestUtil.instantiateIterator(testInput);
+    final SortedKeyValueIterator<Key,Value> source = IteratorTestUtil.createSource(testInput);
+
+    try {
+      skvi.init(source, testInput.getIteratorOptions(), new SimpleIteratorEnvironment());
+
+      SortedKeyValueIterator<Key,Value> copy1 = skvi.deepCopy(new SimpleIteratorEnvironment());
+      SortedKeyValueIterator<Key,Value> copy2 = copy1.deepCopy(new SimpleIteratorEnvironment());
+
+      Range seekRange = testInput.getRange();
+      Set<ByteSequence> seekColumnFamilies = Collections.<ByteSequence> emptySet();
+      boolean seekInclusive = false;
+
+      skvi.seek(seekRange, seekColumnFamilies, seekInclusive);
+      copy1.seek(seekRange, seekColumnFamilies, seekInclusive);
+      copy2.seek(seekRange, seekColumnFamilies, seekInclusive);
+
+      TreeMap<Key,Value> output = consumeMany(new ArrayList<>(Arrays.asList(skvi, copy1, copy2)), seekRange, seekColumnFamilies, seekInclusive);
+
+      return new IteratorTestOutput(output);
+    } catch (IOException e) {
+      return new IteratorTestOutput(e);
+    }
+  }
+
+  TreeMap<Key,Value> consumeMany(Collection<SortedKeyValueIterator<Key,Value>> iterators, Range range, Set<ByteSequence> seekColumnFamilies,
+      boolean seekInclusive) throws IOException {
+    TreeMap<Key,Value> data = new TreeMap<>();
+    // All of the copies should have consistent results from concurrent use
+    while (allHasTop(iterators)) {
+      // occasionally deep copy one of the existing iterators
+      if (random.nextInt(3) == 0) {
+        log.debug("Deep-copying and re-seeking an iterator");
+        SortedKeyValueIterator<Key,Value> newcopy = getRandomElement(iterators).deepCopy(new SimpleIteratorEnvironment());
+        newcopy.seek(new Range(getTopKey(iterators), true, range.getEndKey(), range.isEndKeyInclusive()), seekColumnFamilies, seekInclusive);
+        // keep using the new one too, should act like the others
+        iterators.add(newcopy);
+      }
+
+      data.put(getTopKey(iterators), getTopValue(iterators));
+      next(iterators);
+    }
+
+    // Sanity check: every iterator should now be exhausted. If any copy still has a top, the copies diverged, so return null to fail.
+    for (SortedKeyValueIterator<Key,Value> iter : iterators) {
+      if (iter.hasTop()) {
+        return null;
+      }
+    }
+
+    return data;
+  }
+
+  private <E> E getRandomElement(Collection<E> iterators) {
+    if (iterators == null || iterators.isEmpty())
+      throw new IllegalArgumentException("should not pass an empty collection");
+    int num = random.nextInt(iterators.size());
+    for (E e : iterators) {
+      if (num-- == 0)
+        return e;
+    }
+    throw new AssertionError();
+  }
+
+  boolean allHasTop(Collection<SortedKeyValueIterator<Key,Value>> iterators) {
+    for (SortedKeyValueIterator<Key,Value> iter : iterators) {
+      if (!iter.hasTop()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  Key getTopKey(Collection<SortedKeyValueIterator<Key,Value>> iterators) {
+    boolean first = true;
+    Key topKey = null;
+    for (SortedKeyValueIterator<Key,Value> iter : iterators) {
+      if (first) {
+        topKey = iter.getTopKey();
+        first = false;
+      } else if (!topKey.equals(iter.getTopKey())) {
+        throw new IllegalStateException("Inconsistent keys between two iterators: " + topKey + " " + iter.getTopKey());
+      }
+    }
+
+    // Copy the key
+    return new Key(topKey);
+  }
+
+  Value getTopValue(Collection<SortedKeyValueIterator<Key,Value>> iterators) {
+    boolean first = true;
+    Value topValue = null;
+    for (SortedKeyValueIterator<Key,Value> iter : iterators) {
+      if (first) {
+        topValue = iter.getTopValue();
+        first = false;
+      } else if (!topValue.equals(iter.getTopValue())) {
+        throw new IllegalStateException("Inconsistent values between two iterators: " + topValue + " " + iter.getTopValue());
+      }
+    }
+
+    // Copy the value
+    return new Value(topValue);
+  }
+
+  void next(Collection<SortedKeyValueIterator<Key,Value>> iterators) throws IOException {
+    for (SortedKeyValueIterator<Key,Value> iter : iterators) {
+      iter.next();
+    }
+  }
+}
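The `getRandomElement` helper above uses a countdown walk because `Collection` has no positional access: draw an index in `[0, size)`, then advance the iterator that many steps. A standalone sketch of the same technique (hypothetical `RandomElementDemo` name):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Random;
import java.util.TreeSet;

public class RandomElementDemo {
  // Uniformly pick one element from any Collection without copying it to a List:
  // decrement the drawn index once per element until it hits zero.
  static <E> E getRandomElement(Collection<E> items, Random random) {
    if (items == null || items.isEmpty())
      throw new IllegalArgumentException("should not pass an empty collection");
    int num = random.nextInt(items.size());
    for (E e : items) {
      if (num-- == 0)
        return e;
    }
    throw new AssertionError(); // unreachable: num started below size
  }

  public static void main(String[] args) {
    TreeSet<String> items = new TreeSet<>(Arrays.asList("a", "b", "c"));
    String picked = getRandomElement(items, new Random());
    System.out.println(items.contains(picked)); // the pick is always a member
    System.out.println(getRandomElement(Collections.singleton("x"), new Random()));
  }
}
```

Each element is returned with probability 1/size, since `nextInt(size)` is uniform and the walk is deterministic given the drawn index.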
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/IteratorTestCase.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/IteratorTestCase.java
new file mode 100644
index 0000000..f7495af
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/IteratorTestCase.java
@@ -0,0 +1,49 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.testcases;
+
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+
+/**
+ * A test case which accepts some input for testing a {@link SortedKeyValueIterator}, runs its specific test against that iterator, and returns the output it
+ * observed.
+ */
+public interface IteratorTestCase {
+
+  /**
+   * Run the implementation's test against the given input.
+   *
+   * @param testInput
+   *          The input to test.
+   * @return The output of the test with the input.
+   */
+  IteratorTestOutput test(IteratorTestInput testInput);
+
+  /**
+   * Determine whether the actual output matches the expected output for this test case.
+   *
+   * @param expected
+   *          The expected output from the user.
+   * @param actual
+   *          The actual output from the test
+   * @return True if the test case passes, false if it doesn't.
+   */
+  boolean verify(IteratorTestOutput expected, IteratorTestOutput actual);
+
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/MultipleHasTopCalls.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/MultipleHasTopCalls.java
new file mode 100644
index 0000000..087516d
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/MultipleHasTopCalls.java
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.testcases;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Random;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.IteratorTestUtil;
+import org.apache.accumulo.iteratortest.environments.SimpleIteratorEnvironment;
+
+/**
+ * TestCase which asserts that multiple, sequential calls to {@link SortedKeyValueIterator#hasTop()} neither alter the internal state of the iterator nor
+ * return different values.
+ * <p>
+ * This test case will call {@code hasTop()} multiple times, verifying that each call returns the same value as the first.
+ */
+public class MultipleHasTopCalls extends OutputVerifyingTestCase {
+
+  private final Random random;
+
+  public MultipleHasTopCalls() {
+    this.random = new Random();
+  }
+
+  @Override
+  public IteratorTestOutput test(IteratorTestInput testInput) {
+    final SortedKeyValueIterator<Key,Value> skvi = IteratorTestUtil.instantiateIterator(testInput);
+    final SortedKeyValueIterator<Key,Value> source = IteratorTestUtil.createSource(testInput);
+
+    try {
+      skvi.init(source, testInput.getIteratorOptions(), new SimpleIteratorEnvironment());
+      skvi.seek(testInput.getRange(), Collections.<ByteSequence> emptySet(), false);
+      return new IteratorTestOutput(consume(skvi));
+    } catch (IOException e) {
+      return new IteratorTestOutput(e);
+    }
+  }
+
+  TreeMap<Key,Value> consume(SortedKeyValueIterator<Key,Value> skvi) throws IOException {
+    TreeMap<Key,Value> data = new TreeMap<>();
+    while (skvi.hasTop()) {
+      // Check 1 to 5 times. If hasTop returned true, it should continue to return true.
+      for (int i = 0; i < random.nextInt(5) + 1; i++) {
+        if (!skvi.hasTop()) {
+          throw badStateException(true);
+        }
+      }
+      // Make sure to copy the K-V
+      data.put(new Key(skvi.getTopKey()), new Value(skvi.getTopValue()));
+      skvi.next();
+    }
+
+    // Check 1 to 5 times. Once hasTop returned false, it should continue to return false
+    for (int i = 0; i < random.nextInt(5) + 1; i++) {
+      if (skvi.hasTop()) {
+        throw badStateException(false);
+      }
+    }
+    return data;
+  }
+
+  IllegalStateException badStateException(boolean expectedState) {
+    return new IllegalStateException("Multiple sequential calls to hasTop should not alter the state or return value of the iterator. Expected '"
+        + expectedState + "', but got '" + !expectedState + "'.");
+  }
+}
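The contract MultipleHasTopCalls checks can be illustrated without any Accumulo dependencies. Below is a minimal, self-contained sketch (plain `java.util` only; class and method names are illustrative, not part of the harness) of a cursor whose `hasTop()` is a pure query, hammered the same way the test case hammers the iterator under test:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

public class HasTopContractSketch {

  /** A toy cursor over a sorted map, mirroring the hasTop()/next()/getTopKey() shape of SortedKeyValueIterator. */
  public static class MapCursor {
    private final Iterator<Map.Entry<String,String>> iter;
    private Map.Entry<String,String> top;

    public MapCursor(TreeMap<String,String> data) {
      this.iter = data.entrySet().iterator();
      top = iter.hasNext() ? iter.next() : null;
    }

    public boolean hasTop() {
      return top != null; // a pure query: repeated calls must not change anything
    }

    public void next() {
      top = iter.hasNext() ? iter.next() : null; // state only advances here
    }

    public String getTopKey() {
      return top.getKey();
    }
  }

  /** Consume all keys, calling hasTop() repeatedly the way MultipleHasTopCalls does. */
  public static String collectKeys(TreeMap<String,String> data) {
    MapCursor cursor = new MapCursor(data);
    StringBuilder keys = new StringBuilder();
    while (cursor.hasTop()) {
      for (int i = 0; i < 3; i++) { // extra calls must keep returning true
        if (!cursor.hasTop()) {
          throw new IllegalStateException("hasTop() changed state between calls");
        }
      }
      if (keys.length() > 0) {
        keys.append(' ');
      }
      keys.append(cursor.getTopKey());
      cursor.next();
    }
    for (int i = 0; i < 3; i++) { // once exhausted, it must keep returning false
      if (cursor.hasTop()) {
        throw new IllegalStateException("hasTop() changed state between calls");
      }
    }
    return keys.toString();
  }

  public static void main(String[] args) {
    TreeMap<String,String> data = new TreeMap<>();
    data.put("1a", "v1");
    data.put("2a", "v2");
    System.out.println(collectKeys(data));
  }
}
```

An iterator that lazily computes its top entry inside `hasTop()` would fail this check unless the computation is memoized, which is exactly the class of bug the test case exists to catch.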
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/OutputVerifyingTestCase.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/OutputVerifyingTestCase.java
new file mode 100644
index 0000000..5a46e4e
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/OutputVerifyingTestCase.java
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.testcases;
+
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+
+/**
+ * Base {@link IteratorTestCase} implementation that performs verification on the expected and actual outcome.
+ */
+public abstract class OutputVerifyingTestCase implements IteratorTestCase {
+
+  public boolean verify(IteratorTestOutput expected, IteratorTestOutput actual) {
+    return expected.equals(actual);
+  }
+
+}
diff --git a/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/ReSeekTestCase.java b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/ReSeekTestCase.java
new file mode 100644
index 0000000..512202c
--- /dev/null
+++ b/iterator-test-harness/src/main/java/org/apache/accumulo/iteratortest/testcases/ReSeekTestCase.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.testcases;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Random;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.IteratorTestUtil;
+import org.apache.accumulo.iteratortest.environments.SimpleIteratorEnvironment;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Test case that verifies that an iterator can use the generated instance from {@code deepCopy}.
+ */
+public class ReSeekTestCase extends OutputVerifyingTestCase {
+  private static final Logger log = LoggerFactory.getLogger(ReSeekTestCase.class);
+
+  /**
+   * Let N be a random number in [0, RESEEK_INTERVAL). Recreate and reseek the iterator after each entry whose index is congruent to N modulo RESEEK_INTERVAL.
+   */
+  private static final int RESEEK_INTERVAL = 4;
+
+  private final Random random;
+
+  public ReSeekTestCase() {
+    this.random = new Random();
+  }
+
+  @Override
+  public IteratorTestOutput test(IteratorTestInput testInput) {
+    final SortedKeyValueIterator<Key,Value> skvi = IteratorTestUtil.instantiateIterator(testInput);
+    final SortedKeyValueIterator<Key,Value> source = IteratorTestUtil.createSource(testInput);
+
+    try {
+      skvi.init(source, testInput.getIteratorOptions(), new SimpleIteratorEnvironment());
+      skvi.seek(testInput.getRange(), Collections.<ByteSequence> emptySet(), false);
+      return new IteratorTestOutput(consume(skvi, testInput));
+    } catch (IOException e) {
+      return new IteratorTestOutput(e);
+    }
+  }
+
+  TreeMap<Key,Value> consume(SortedKeyValueIterator<Key,Value> skvi, IteratorTestInput testInput) throws IOException {
+    final TreeMap<Key,Value> data = new TreeMap<>();
+    final Range origRange = testInput.getRange();
+    int reseekCount = random.nextInt(RESEEK_INTERVAL);
+
+    int i = 0;
+    while (skvi.hasTop()) {
+      data.put(new Key(skvi.getTopKey()), new Value(skvi.getTopValue()));
+
+      /*
+       * One of the trickiest cases in writing iterators: after any result is returned from a TabletServer to the client, the Iterator in the TabletServer's
+       * memory may be torn down. To preserve state and guarantee that all records are received, the TabletServer remembers the last Key it returned to
+       * the client. It recreates the Iterator (stack) and seeks it using an updated Range whose start key is the last Key returned,
+       * non-inclusive.
+       */
+      if (i % RESEEK_INTERVAL == reseekCount) {
+        // Last key
+        Key reSeekStartKey = skvi.getTopKey();
+
+        // Make a new instance of the iterator
+        skvi = IteratorTestUtil.instantiateIterator(testInput);
+        final SortedKeyValueIterator<Key,Value> sourceCopy = IteratorTestUtil.createSource(testInput);
+
+        skvi.init(sourceCopy, testInput.getIteratorOptions(), new SimpleIteratorEnvironment());
+
+        // The new range, resume where we left off (non-inclusive)
+        final Range newRange = new Range(reSeekStartKey, false, origRange.getEndKey(), origRange.isEndKeyInclusive());
+        log.debug("Re-seeking to {}", newRange);
+
+        // Seek there
+        skvi.seek(newRange, Collections.<ByteSequence> emptySet(), false);
+      } else {
+        // Every other time, it's a simple call to next()
+        skvi.next();
+      }
+
+      i++;
+    }
+
+    return data;
+  }
+
+}
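The teardown-and-resume pattern described in the comment above can be sketched without Accumulo classes. The following self-contained example (plain `java.util`; names are illustrative) models a scan that is periodically "torn down" and resumed from a range starting just after the last key returned, showing that every entry still arrives exactly once:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class ReSeekSketch {

  /**
   * Consume entries from a sorted "table", but periodically simulate the server
   * tearing the scan down: discard the cursor and resume from a fresh range whose
   * start key is the last key returned, non-inclusive.
   */
  public static TreeMap<String,String> scanWithReseeks(TreeMap<String,String> table, int reseekInterval) {
    TreeMap<String,String> received = new TreeMap<>();
    NavigableMap<String,String> view = table; // the current "range"
    int i = 0;
    while (!view.isEmpty()) {
      String topKey = view.firstKey();
      received.put(topKey, view.get(topKey)); // "return" the entry to the client
      if (i % reseekInterval == 0) {
        // Simulated teardown: rebuild the range from the last returned key, exclusive.
        view = table.tailMap(topKey, false);
      } else {
        view = view.tailMap(topKey, false); // the ordinary next() step
      }
      i++;
    }
    return received;
  }

  public static void main(String[] args) {
    TreeMap<String,String> table = new TreeMap<>();
    for (char row = 'a'; row <= 'f'; row++) {
      table.put(String.valueOf(row), "v" + row);
    }
    // Despite the reseeks, every entry arrives exactly once.
    System.out.println(scanWithReseeks(table, 3).equals(table));
  }
}
```

An iterator that buffers entries across calls without accounting for a reseek would either drop or duplicate entries under this pattern; ReSeekTestCase surfaces exactly that by diffing the collected output against the expected output.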
diff --git a/iterator-test-harness/src/test/java/org/apache/accumulo/iteratortest/framework/JUnitFrameworkTest.java b/iterator-test-harness/src/test/java/org/apache/accumulo/iteratortest/framework/JUnitFrameworkTest.java
new file mode 100644
index 0000000..133db62
--- /dev/null
+++ b/iterator-test-harness/src/test/java/org/apache/accumulo/iteratortest/framework/JUnitFrameworkTest.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.iteratortest.framework;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput.TestOutcome;
+import org.apache.accumulo.iteratortest.junit4.BaseJUnit4IteratorTest;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.junit.runners.Parameterized.Parameters;
+
+/**
+ * A basic test asserting that the framework is functional.
+ */
+public class JUnitFrameworkTest extends BaseJUnit4IteratorTest {
+
+  /**
+   * An IteratorTestCase implementation that performs no action and always reports a passing outcome.
+   */
+  public static class NoopIteratorTestCase implements IteratorTestCase {
+
+    @Override
+    public IteratorTestOutput test(IteratorTestInput testInput) {
+      return new IteratorTestOutput(TestOutcome.PASSED);
+    }
+
+    @Override
+    public boolean verify(IteratorTestOutput expected, IteratorTestOutput actual) {
+      // Always passes
+      return true;
+    }
+
+  }
+
+  @Parameters
+  public static Object[][] parameters() {
+    IteratorTestInput input = getIteratorInput();
+    IteratorTestOutput output = getIteratorOutput();
+    List<IteratorTestCase> tests = Collections.<IteratorTestCase> singletonList(new NoopIteratorTestCase());
+    return BaseJUnit4IteratorTest.createParameters(input, output, tests);
+  }
+
+  private static final TreeMap<Key,Value> DATA = createData();
+
+  private static TreeMap<Key,Value> createData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+    data.put(new Key("1", "a", ""), new Value("1a".getBytes()));
+    data.put(new Key("2", "a", ""), new Value("2a".getBytes()));
+    data.put(new Key("3", "a", ""), new Value("3a".getBytes()));
+    return data;
+  }
+
+  private static IteratorTestInput getIteratorInput() {
+    return new IteratorTestInput(IdentityIterator.class, Collections.<String,String> emptyMap(), new Range(), DATA);
+  }
+
+  private static IteratorTestOutput getIteratorOutput() {
+    return new IteratorTestOutput(DATA);
+  }
+
+  public JUnitFrameworkTest(IteratorTestInput input, IteratorTestOutput expectedOutput, IteratorTestCase testCase) {
+    super(input, expectedOutput, testCase);
+  }
+
+  /**
+   * Noop iterator implementation.
+   */
+  private static class IdentityIterator extends WrappingIterator {
+
+    @Override
+    public IdentityIterator deepCopy(IteratorEnvironment env) {
+      return new IdentityIterator();
+    }
+  }
+}
diff --git a/iterator-test-harness/src/test/resources/log4j.properties b/iterator-test-harness/src/test/resources/log4j.properties
new file mode 100644
index 0000000..3b2c8e7
--- /dev/null
+++ b/iterator-test-harness/src/test/resources/log4j.properties
@@ -0,0 +1,24 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+log4j.rootLogger=INFO, CA
+log4j.appender.CA=org.apache.log4j.ConsoleAppender
+log4j.appender.CA.layout=org.apache.log4j.PatternLayout
+log4j.appender.CA.layout.ConversionPattern=%d{ISO8601} [%-8c{2}] %-5p: %m%n
+
+log4j.logger.org.apache.accumulo.core.client.impl.ServerClient=ERROR
+log4j.logger.org.apache.zookeeper=ERROR
+log4j.logger.org.apache.accumulo.iteratortest=DEBUG
+log4j.logger.org.apache.accumulo.iteratortest.testcases=DEBUG
\ No newline at end of file
diff --git a/maven-plugin/pom.xml b/maven-plugin/pom.xml
index 66ee5bb..26ca6bc 100644
--- a/maven-plugin/pom.xml
+++ b/maven-plugin/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-maven-plugin</artifactId>
   <packaging>maven-plugin</packaging>
diff --git a/maven-plugin/src/it/plugin-test/pom.xml b/maven-plugin/src/it/plugin-test/pom.xml
index 2eb8626..f114fa4 100644
--- a/maven-plugin/src/it/plugin-test/pom.xml
+++ b/maven-plugin/src/it/plugin-test/pom.xml
@@ -114,6 +114,22 @@
         </executions>
       </plugin>
       <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-failsafe-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>run-integration-tests</id>
+            <goals>
+              <goal>integration-test</goal>
+              <goal>verify</goal>
+            </goals>
+            <configuration>
+              <excludedGroups combine.self="override" />
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
         <groupId>org.apache.rat</groupId>
         <artifactId>apache-rat-plugin</artifactId>
         <configuration>
diff --git a/maven-plugin/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java b/maven-plugin/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java
index 2028a2e..37eeb4d 100644
--- a/maven-plugin/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java
+++ b/maven-plugin/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java
@@ -43,7 +43,7 @@
   }
 
   void configureMiniClasspath(MiniAccumuloConfigImpl macConfig, String miniClasspath) throws MalformedURLException {
-    ArrayList<String> classpathItems = new ArrayList<String>();
+    ArrayList<String> classpathItems = new ArrayList<>();
     if (miniClasspath == null && project != null) {
       classpathItems.add(project.getBuild().getOutputDirectory());
       classpathItems.add(project.getBuild().getTestOutputDirectory());
diff --git a/minicluster/pom.xml b/minicluster/pom.xml
index 8822f36..03113a4 100644
--- a/minicluster/pom.xml
+++ b/minicluster/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-minicluster</artifactId>
   <name>Apache Accumulo MiniCluster</name>
@@ -94,6 +94,7 @@
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-minicluster</artifactId>
+      <optional>true</optional>
     </dependency>
     <dependency>
       <groupId>org.apache.zookeeper</groupId>
@@ -105,6 +106,16 @@
       <scope>test</scope>
     </dependency>
     <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-framework</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-test</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
       <groupId>org.easymock</groupId>
       <artifactId>easymock</artifactId>
       <scope>test</scope>
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/ClusterUser.java b/minicluster/src/main/java/org/apache/accumulo/cluster/ClusterUser.java
index 6644d44..0231242 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/ClusterUser.java
+++ b/minicluster/src/main/java/org/apache/accumulo/cluster/ClusterUser.java
@@ -25,6 +25,7 @@
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.hadoop.security.UserGroupInformation;
 
 /**
  * Simple wrapper around a principal and its credentials: a password or a keytab.
@@ -81,7 +82,8 @@
     if (null != password) {
       return new PasswordToken(password);
     } else if (null != keytab) {
-      return new KerberosToken(principal, keytab, true);
+      UserGroupInformation.loginUserFromKeytab(principal, keytab.getAbsolutePath());
+      return new KerberosToken();
     }
 
     throw new IllegalStateException("One of password and keytab must be non-null");
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
index 3edf60d..febc94c 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
@@ -377,7 +377,7 @@
   protected List<String> getHosts(File f) throws IOException {
     BufferedReader reader = new BufferedReader(new FileReader(f));
     try {
-      List<String> hosts = new ArrayList<String>();
+      List<String> hosts = new ArrayList<>();
       String line = null;
       while ((line = reader.readLine()) != null) {
         line = line.trim();
diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
index 87dfff8..ba52f80 100644
--- a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
+++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloCluster.java
@@ -31,7 +31,7 @@
 
 /**
  * A utility class that will create Zookeeper and Accumulo processes that write all of their data to a single local directory. This class makes it easy to test
- * code against a real Accumulo instance. Its much more accurate for testing than {@link org.apache.accumulo.core.client.mock.MockAccumulo}, but much slower.
+ * code against a real Accumulo instance. The use of this utility will yield results which closely match a normal Accumulo instance.
  *
  * @since 1.5.0
  */
diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloRunner.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloRunner.java
index 13a75b5..4745e0a 100644
--- a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloRunner.java
+++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloRunner.java
@@ -75,6 +75,7 @@
   private static final String NUM_T_SERVERS_PROP = "numTServers";
   private static final String DIRECTORY_PROP = "directory";
   private static final String INSTANCE_NAME_PROP = "instanceName";
+  private static final String EXISTING_ZOO_KEEPERS_PROP = "existingZooKeepers";
 
   private static void printProperties() {
     System.out.println("#mini Accumulo cluster runner properties.");
@@ -93,6 +94,7 @@
     System.out.println("#" + TSERVER_MEMORY_PROP + "=128M");
     System.out.println("#" + ZOO_KEEPER_MEMORY_PROP + "=128M");
     System.out.println("#" + JDWP_ENABLED_PROP + "=false");
+    System.out.println("#" + EXISTING_ZOO_KEEPERS_PROP + "=localhost:2181");
 
     System.out.println();
     System.out.println("# Configuration normally placed in accumulo-site.xml can be added using a site. prefix.");
@@ -167,6 +169,8 @@
       config.setZooKeeperPort(Integer.parseInt(opts.prop.getProperty(ZOO_KEEPER_PORT_PROP)));
     if (opts.prop.containsKey(ZOO_KEEPER_STARTUP_TIME_PROP))
       config.setZooKeeperStartupTime(Long.parseLong(opts.prop.getProperty(ZOO_KEEPER_STARTUP_TIME_PROP)));
+    if (opts.prop.containsKey(EXISTING_ZOO_KEEPERS_PROP))
+      config.getImpl().setExistingZooKeepers(opts.prop.getProperty(EXISTING_ZOO_KEEPERS_PROP));
     if (opts.prop.containsKey(JDWP_ENABLED_PROP))
       config.setJDWPEnabled(Boolean.parseBoolean(opts.prop.getProperty(JDWP_ENABLED_PROP)));
     if (opts.prop.containsKey(ZOO_KEEPER_MEMORY_PROP))
@@ -180,7 +184,7 @@
     if (opts.prop.containsKey(SHUTDOWN_PORT_PROP))
       shutdownPort = Integer.parseInt(opts.prop.getProperty(SHUTDOWN_PORT_PROP));
 
-    Map<String,String> siteConfig = new HashMap<String,String>();
+    Map<String,String> siteConfig = new HashMap<>();
     for (Map.Entry<Object,Object> entry : opts.prop.entrySet()) {
       String key = (String) entry.getKey();
       if (key.startsWith("site."))
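With the new property wired in above, a MiniAccumuloRunner properties file can point at an already-running ZooKeeper quorum instead of spawning one. An illustrative fragment follows (values are placeholders; only keys visible in this diff are shown):

```properties
instanceName=miniInstance
directory=/tmp/mini-accumulo-runner
numTServers=2
# New in 1.8: reuse an existing ZooKeeper quorum instead of starting one
existingZooKeepers=localhost:2181
```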
diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterControl.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterControl.java
index 80c4edc..9a433cf 100644
--- a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterControl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterControl.java
@@ -42,6 +42,8 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.Maps;
+import java.util.Collections;
+import java.util.Map;
 
 /**
  *
@@ -56,7 +58,7 @@
   Process gcProcess = null;
   Process monitor = null;
   Process tracer = null;
-  final List<Process> tabletServerProcesses = new ArrayList<Process>();
+  final List<Process> tabletServerProcesses = new ArrayList<>();
 
   public MiniAccumuloClusterControl(MiniAccumuloClusterImpl cluster) {
     requireNonNull(cluster);
@@ -132,37 +134,46 @@
 
   @Override
   public synchronized void start(ServerType server, String hostname) throws IOException {
+    start(server, hostname, Collections.<String,String> emptyMap(), Integer.MAX_VALUE);
+  }
+
+  public synchronized void start(ServerType server, String hostname, Map<String,String> configOverrides, int limit) throws IOException {
+    if (limit <= 0) {
+      return;
+    }
+
     switch (server) {
       case TABLET_SERVER:
         synchronized (tabletServerProcesses) {
-          for (int i = tabletServerProcesses.size(); i < cluster.getConfig().getNumTservers(); i++) {
-            tabletServerProcesses.add(cluster._exec(TabletServer.class, server));
+          int count = 0;
+          for (int i = tabletServerProcesses.size(); count < limit && i < cluster.getConfig().getNumTservers(); i++, ++count) {
+            tabletServerProcesses.add(cluster._exec(TabletServer.class, server, configOverrides));
           }
         }
         break;
       case MASTER:
         if (null == masterProcess) {
-          masterProcess = cluster._exec(Master.class, server);
+          masterProcess = cluster._exec(Master.class, server, configOverrides);
         }
         break;
       case ZOOKEEPER:
         if (null == zooKeeperProcess) {
-          zooKeeperProcess = cluster._exec(ZooKeeperServerMain.class, server, cluster.getZooCfgFile().getAbsolutePath());
+          zooKeeperProcess = cluster._exec(ZooKeeperServerMain.class, server, configOverrides, cluster.getZooCfgFile().getAbsolutePath());
         }
         break;
       case GARBAGE_COLLECTOR:
         if (null == gcProcess) {
-          gcProcess = cluster._exec(SimpleGarbageCollector.class, server);
+          gcProcess = cluster._exec(SimpleGarbageCollector.class, server, configOverrides);
         }
         break;
       case MONITOR:
         if (null == monitor) {
-          monitor = cluster._exec(Monitor.class, server);
+          monitor = cluster._exec(Monitor.class, server, configOverrides);
         }
         break;
       case TRACER:
         if (null == tracer) {
-          tracer = cluster._exec(TraceServer.class, server);
+          tracer = cluster._exec(TraceServer.class, server, configOverrides);
         }
         break;
       default:
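The new `start(server, hostname, configOverrides, limit)` overload caps how many additional tablet server processes are launched in one call, while the old overload passes `Integer.MAX_VALUE` to keep its start-them-all behavior. A self-contained sketch of that counting loop (plain `java.util`; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class StartLimitSketch {

  /** Start up to {@code limit} more "tservers", never exceeding {@code desired} total. */
  public static List<String> startTabletServers(List<String> running, int desired, int limit) {
    if (limit <= 0) {
      return running; // mirrors the early return in MiniAccumuloClusterControl.start
    }
    int count = 0;
    // i indexes into the desired total; count tracks how many we started this call.
    for (int i = running.size(); count < limit && i < desired; i++, count++) {
      running.add("tserver-" + i);
    }
    return running;
  }

  public static void main(String[] args) {
    List<String> procs = new ArrayList<>();
    startTabletServers(procs, 3, Integer.MAX_VALUE); // old behavior: start all three
    startTabletServers(procs, 5, 1);                 // new overload: start just one more
    System.out.println(procs.size());
  }
}
```

This lets a test bring tablet servers up one at a time (e.g. with per-process config overrides) rather than always jumping straight to the configured count.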
diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
index 95ac79a..3e66acf 100644
--- a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
@@ -75,10 +75,10 @@
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.Daemon;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.master.state.SetGoalState;
+import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.server.Accumulo;
 import org.apache.accumulo.server.fs.VolumeManager;
@@ -111,10 +111,11 @@
 import com.google.common.base.Joiner;
 import com.google.common.base.Predicate;
 import com.google.common.collect.Maps;
+import com.google.common.util.concurrent.Uninterruptibles;
 
 /**
- * A utility class that will create Zookeeper and Accumulo processes that write all of their data to a single local directory. This class makes it easy to test
- * code against a real Accumulo instance. Its much more accurate for testing than {@link org.apache.accumulo.core.client.mock.MockAccumulo}, but much slower.
+ * This class provides the backing implementation for {@link MiniAccumuloCluster}, and may contain features for internal testing which have not yet been
+ * promoted to the public API. It's best to use {@link MiniAccumuloCluster} whenever possible. Use of this class risks API breakage between versions.
  *
  * @since 1.6.0
  */
@@ -168,7 +169,7 @@
 
   private boolean initialized = false;
 
-  private Set<Pair<ServerType,Integer>> debugPorts = new HashSet<Pair<ServerType,Integer>>();
+  private Set<Pair<ServerType,Integer>> debugPorts = new HashSet<>();
 
   private File zooCfgFile;
   private String dfsUri;
@@ -177,11 +178,11 @@
     return logWriters;
   }
 
-  private List<LogWriter> logWriters = new ArrayList<MiniAccumuloClusterImpl.LogWriter>();
+  private List<LogWriter> logWriters = new ArrayList<>();
 
   private MiniAccumuloConfigImpl config;
   private MiniDFSCluster miniDFS = null;
-  private List<Process> cleanup = new ArrayList<Process>();
+  private List<Process> cleanup = new ArrayList<>();
 
   private ExecutorService executor;
 
@@ -196,7 +197,7 @@
   }
 
   public Process exec(Class<?> clazz, List<String> jvmArgs, String... args) throws IOException {
-    ArrayList<String> jvmArgs2 = new ArrayList<String>(1 + (jvmArgs == null ? 0 : jvmArgs.size()));
+    ArrayList<String> jvmArgs2 = new ArrayList<>(1 + (jvmArgs == null ? 0 : jvmArgs.size()));
     jvmArgs2.add("-Xmx" + config.getDefaultMemory());
     if (jvmArgs != null)
       jvmArgs2.addAll(jvmArgs);
@@ -230,7 +231,7 @@
   private String getClasspath() throws IOException {
 
     try {
-      ArrayList<ClassLoader> classloaders = new ArrayList<ClassLoader>();
+      ArrayList<ClassLoader> classloaders = new ArrayList<>();
 
       ClassLoader cl = this.getClass().getClassLoader();
 
@@ -288,7 +289,7 @@
 
     String className = clazz.getName();
 
-    ArrayList<String> argList = new ArrayList<String>();
+    ArrayList<String> argList = new ArrayList<>();
     argList.addAll(Arrays.asList(javaBin, "-Dproc=" + clazz.getSimpleName(), "-cp", classpath));
     argList.addAll(extraJvmOpts);
     for (Entry<String,String> sysProp : config.getSystemProperties().entrySet()) {
@@ -341,15 +342,22 @@
     return process;
   }
 
-  Process _exec(Class<?> clazz, ServerType serverType, String... args) throws IOException {
-
-    List<String> jvmOpts = new ArrayList<String>();
+  Process _exec(Class<?> clazz, ServerType serverType, Map<String,String> configOverrides, String... args) throws IOException {
+    List<String> jvmOpts = new ArrayList<>();
     jvmOpts.add("-Xmx" + config.getMemory(serverType));
+    if (configOverrides != null && !configOverrides.isEmpty()) {
+      File siteFile = File.createTempFile("accumulo-site", ".xml", config.getConfDir());
+      Map<String,String> confMap = new HashMap<>();
+      confMap.putAll(config.getSiteConfig());
+      confMap.putAll(configOverrides);
+      writeConfig(siteFile, confMap.entrySet());
+      jvmOpts.add("-Dorg.apache.accumulo.config.file=" + siteFile.getName());
+    }
 
     if (config.isJDWPEnabled()) {
       Integer port = PortUtils.getRandomFreePort();
       jvmOpts.addAll(buildRemoteDebugParams(port));
-      debugPorts.add(new Pair<ServerType,Integer>(serverType, port));
+      debugPorts.add(new Pair<>(serverType, port));
     }
     return _exec(clazz, jvmOpts, args);
   }
@@ -381,7 +389,8 @@
     mkdirs(config.getLibExtDir());
 
     if (!config.useExistingInstance()) {
-      mkdirs(config.getZooKeeperDir());
+      if (!config.useExistingZooKeepers())
+        mkdirs(config.getZooKeeperDir());
       mkdirs(config.getWalogDir());
       mkdirs(config.getAccumuloDir());
     }
@@ -397,6 +406,7 @@
       conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nn.getAbsolutePath());
       conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY, dn.getAbsolutePath());
       conf.set(DFSConfigKeys.DFS_REPLICATION_KEY, "1");
+      conf.set(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY, "1");
       conf.set("dfs.support.append", "true");
       conf.set("dfs.datanode.synconclose", "true");
       conf.set("dfs.datanode.data.dir.perm", MiniDFSUtil.computeDatanodeDirectoryPermission());
@@ -436,7 +446,7 @@
     File siteFile = new File(config.getConfDir(), "accumulo-site.xml");
     writeConfig(siteFile, config.getSiteConfig().entrySet());
 
-    if (!config.useExistingInstance()) {
+    if (!config.useExistingInstance() && !config.useExistingZooKeepers()) {
       zooCfgFile = new File(config.getConfDir(), "zoo.cfg");
       FileWriter fileWriter = new FileWriter(zooCfgFile);
 
@@ -555,32 +565,35 @@
         });
       }
 
-      control.start(ServerType.ZOOKEEPER);
+      if (!config.useExistingZooKeepers())
+        control.start(ServerType.ZOOKEEPER);
 
       if (!initialized) {
-        // sleep a little bit to let zookeeper come up before calling init, seems to work better
-        long startTime = System.currentTimeMillis();
-        while (true) {
-          Socket s = null;
-          try {
-            s = new Socket("localhost", config.getZooKeeperPort());
-            s.setReuseAddress(true);
-            s.getOutputStream().write("ruok\n".getBytes());
-            s.getOutputStream().flush();
-            byte buffer[] = new byte[100];
-            int n = s.getInputStream().read(buffer);
-            if (n >= 4 && new String(buffer, 0, 4).equals("imok"))
-              break;
-          } catch (Exception e) {
-            if (System.currentTimeMillis() - startTime >= config.getZooKeeperStartupTime()) {
-              throw new ZooKeeperBindException("Zookeeper did not start within " + (config.getZooKeeperStartupTime() / 1000) + " seconds. Check the logs in "
-                  + config.getLogDir() + " for errors.  Last exception: " + e);
+        if (!config.useExistingZooKeepers()) {
+          // sleep a little bit to let zookeeper come up before calling init, seems to work better
+          long startTime = System.currentTimeMillis();
+          while (true) {
+            Socket s = null;
+            try {
+              s = new Socket("localhost", config.getZooKeeperPort());
+              s.setReuseAddress(true);
+              s.getOutputStream().write("ruok\n".getBytes());
+              s.getOutputStream().flush();
+              byte buffer[] = new byte[100];
+              int n = s.getInputStream().read(buffer);
+              if (n >= 4 && new String(buffer, 0, 4).equals("imok"))
+                break;
+            } catch (Exception e) {
+              if (System.currentTimeMillis() - startTime >= config.getZooKeeperStartupTime()) {
+                throw new ZooKeeperBindException("Zookeeper did not start within " + (config.getZooKeeperStartupTime() / 1000) + " seconds. Check the logs in "
+                    + config.getLogDir() + " for errors.  Last exception: " + e);
+              }
+              // Don't spin absurdly fast
+              Thread.sleep(250);
+            } finally {
+              if (s != null)
+                s.close();
             }
-            // Don't spin absurdly fast
-            Thread.sleep(250);
-          } finally {
-            if (s != null)
-              s.close();
           }
         }
 
@@ -589,6 +602,7 @@
         args.add(config.getInstanceName());
         args.add("--user");
         args.add(config.getRootUserName());
+        args.add("--clear-instance-name");
 
         // If we aren't using SASL, add in the root password
         final String saslEnabled = config.getSiteConfig().get(Property.INSTANCE_RPC_SASL_ENABLED.getKey());
@@ -615,7 +629,7 @@
       ret = exec(Main.class, SetGoalState.class.getName(), MasterGoalState.NORMAL.toString()).waitFor();
       if (ret == 0)
         break;
-      UtilWaitThread.sleep(1000);
+      Uninterruptibles.sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
     if (ret != 0) {
       throw new RuntimeException("Could not set master goal state, process returned " + ret + ". Check the logs in " + config.getLogDir() + " for errors.");
@@ -642,7 +656,7 @@
   }
 
   List<ProcessReference> references(Process... procs) {
-    List<ProcessReference> result = new ArrayList<ProcessReference>();
+    List<ProcessReference> result = new ArrayList<>();
     for (Process proc : procs) {
       result.add(new ProcessReference(proc));
     }
@@ -650,11 +664,13 @@
   }
 
   public Map<ServerType,Collection<ProcessReference>> getProcesses() {
-    Map<ServerType,Collection<ProcessReference>> result = new HashMap<ServerType,Collection<ProcessReference>>();
+    Map<ServerType,Collection<ProcessReference>> result = new HashMap<>();
     MiniAccumuloClusterControl control = getClusterControl();
     result.put(ServerType.MASTER, references(control.masterProcess));
     result.put(ServerType.TABLET_SERVER, references(control.tabletServerProcesses.toArray(new Process[0])));
-    result.put(ServerType.ZOOKEEPER, references(control.zooKeeperProcess));
+    if (null != control.zooKeeperProcess) {
+      result.put(ServerType.ZOOKEEPER, references(control.zooKeeperProcess));
+    }
     if (null != control.gcProcess) {
       result.put(ServerType.GARBAGE_COLLECTOR, references(control.gcProcess));
     }
@@ -762,7 +778,7 @@
   }
 
   int stopProcessWithTimeout(final Process proc, long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
-    FutureTask<Integer> future = new FutureTask<Integer>(new Callable<Integer>() {
+    FutureTask<Integer> future = new FutureTask<>(new Callable<Integer>() {
       @Override
       public Integer call() throws InterruptedException {
         proc.destroy();
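The startup wait that the patch wraps in `useExistingZooKeepers()` is ZooKeeper's standard `ruok` four-letter-word health check: open a socket, send `ruok`, and expect `imok` back. A standalone sketch of that probe, assuming a reachable ZooKeeper host and port (both placeholders here):

```python
import socket

def zookeeper_is_ok(host, port, timeout=5.0):
    """Probe ZooKeeper with the 'ruok' four-letter command.

    A sketch of the health check the mini cluster performs before
    calling init; returns True iff the server answers 'imok'.
    """
    s = socket.create_connection((host, port), timeout=timeout)
    try:
        s.sendall(b"ruok\n")
        reply = s.recv(100)
        return reply[:4] == b"imok"
    finally:
        s.close()
```

The cluster code retries this in a loop with a 250 ms sleep until `getZooKeeperStartupTime()` elapses; the sketch shows only a single attempt.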
diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
index c8f65d2..8e35705 100644
--- a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImpl.java
@@ -47,12 +47,12 @@
 
   private File dir = null;
   private String rootPassword = null;
-  private Map<String,String> siteConfig = new HashMap<String,String>();
-  private Map<String,String> configuredSiteConig = new HashMap<String,String>();
+  private Map<String,String> siteConfig = new HashMap<>();
+  private Map<String,String> configuredSiteConig = new HashMap<>();
   private int numTservers = 2;
-  private Map<ServerType,Long> memoryConfig = new HashMap<ServerType,Long>();
+  private Map<ServerType,Long> memoryConfig = new HashMap<>();
   private boolean jdwpEnabled = false;
-  private Map<String,String> systemProperties = new HashMap<String,String>();
+  private Map<String,String> systemProperties = new HashMap<>();
 
   private String instanceName = "miniInstance";
   private String rootUserName = "root";
@@ -69,6 +69,7 @@
   private int zooKeeperPort = 0;
   private int configuredZooKeeperPort = 0;
   private long zooKeeperStartupTime = 20 * 1000;
+  private String existingZooKeepers;
 
   private long defaultMemorySize = 128 * 1024 * 1024;
 
@@ -163,10 +164,17 @@
 
       if (existingInstance == null || !existingInstance) {
         existingInstance = false;
-        // zookeeper port should be set explicitly in this class, not just on the site config
-        if (zooKeeperPort == 0)
-          zooKeeperPort = PortUtils.getRandomFreePort();
-        siteConfig.put(Property.INSTANCE_ZK_HOST.getKey(), "localhost:" + zooKeeperPort);
+        String zkHost;
+        if (useExistingZooKeepers()) {
+          zkHost = existingZooKeepers;
+        } else {
+          // zookeeper port should be set explicitly in this class, not just on the site config
+          if (zooKeeperPort == 0)
+            zooKeeperPort = PortUtils.getRandomFreePort();
+
+          zkHost = "localhost:" + zooKeeperPort;
+        }
+        siteConfig.put(Property.INSTANCE_ZK_HOST.getKey(), zkHost);
       }
       initialized = true;
     }
@@ -276,8 +284,8 @@
   }
 
   private MiniAccumuloConfigImpl _setSiteConfig(Map<String,String> siteConfig) {
-    this.siteConfig = new HashMap<String,String>(siteConfig);
-    this.configuredSiteConig = new HashMap<String,String>(siteConfig);
+    this.siteConfig = new HashMap<>(siteConfig);
+    this.configuredSiteConig = new HashMap<>(siteConfig);
     return this;
   }
 
@@ -319,6 +327,19 @@
   }
 
   /**
+   * Configure an existing ZooKeeper instance to use. Calling this method is optional. If not set, a new ZooKeeper instance is created.
+   *
+   * @param existingZooKeepers
+   *          Connection string for an already-running ZooKeeper instance. A null value will turn off this feature.
+   *
+   * @since 1.8.0
+   */
+  public MiniAccumuloConfigImpl setExistingZooKeepers(String existingZooKeepers) {
+    this.existingZooKeepers = existingZooKeepers;
+    return this;
+  }
+
+  /**
    * Sets the amount of memory to use in the master process. Calling this method is optional. Default memory is 128M
    *
    * @param serverType
@@ -357,11 +378,11 @@
    * @return a copy of the site config
    */
   public Map<String,String> getSiteConfig() {
-    return new HashMap<String,String>(siteConfig);
+    return new HashMap<>(siteConfig);
   }
 
   public Map<String,String> getConfiguredSiteConfig() {
-    return new HashMap<String,String>(configuredSiteConig);
+    return new HashMap<>(configuredSiteConig);
   }
 
   /**
@@ -390,6 +411,14 @@
     return zooKeeperStartupTime;
   }
 
+  public String getExistingZooKeepers() {
+    return existingZooKeepers;
+  }
+
+  public boolean useExistingZooKeepers() {
+    return existingZooKeepers != null && !existingZooKeepers.isEmpty();
+  }
+
   File getLibDir() {
     return libDir;
   }
@@ -518,7 +547,7 @@
    * @since 1.6.0
    */
   public void setSystemProperties(Map<String,String> systemProperties) {
-    this.systemProperties = new HashMap<String,String>(systemProperties);
+    this.systemProperties = new HashMap<>(systemProperties);
   }
 
   /**
@@ -527,7 +556,7 @@
    * @since 1.6.0
    */
   public Map<String,String> getSystemProperties() {
-    return new HashMap<String,String>(systemProperties);
+    return new HashMap<>(systemProperties);
   }
 
   /**
@@ -633,7 +662,7 @@
       throw e1;
     }
 
-    Map<String,String> siteConfigMap = new HashMap<String,String>();
+    Map<String,String> siteConfigMap = new HashMap<>();
     for (Entry<String,String> e : accumuloConf) {
       siteConfigMap.put(e.getKey(), e.getValue());
     }
diff --git a/minicluster/src/test/java/org/apache/accumulo/minicluster/MiniAccumuloClusterExistingZooKeepersTest.java b/minicluster/src/test/java/org/apache/accumulo/minicluster/MiniAccumuloClusterExistingZooKeepersTest.java
new file mode 100644
index 0000000..8c21874
--- /dev/null
+++ b/minicluster/src/test/java/org/apache/accumulo/minicluster/MiniAccumuloClusterExistingZooKeepersTest.java
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.minicluster;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.commons.io.FileUtils;
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.RetryOneTime;
+import org.apache.curator.test.TestingServer;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class MiniAccumuloClusterExistingZooKeepersTest {
+  private static final File BASE_DIR = new File(System.getProperty("user.dir") + "/target/mini-tests/"
+      + MiniAccumuloClusterExistingZooKeepersTest.class.getName());
+
+  private static final String SECRET = "superSecret";
+
+  private static final Logger log = LoggerFactory.getLogger(MiniAccumuloClusterExistingZooKeepersTest.class);
+  private TestingServer zooKeeper;
+  private MiniAccumuloCluster accumulo;
+
+  @Rule
+  public TestName testName = new TestName();
+
+  @Before
+  public void setupTestCluster() throws Exception {
+    assertTrue(BASE_DIR.mkdirs() || BASE_DIR.isDirectory());
+    File testDir = new File(BASE_DIR, testName.getMethodName());
+    FileUtils.deleteQuietly(testDir);
+    assertTrue(testDir.mkdir());
+
+    zooKeeper = new TestingServer();
+
+    MiniAccumuloConfig config = new MiniAccumuloConfig(testDir, SECRET);
+    config.getImpl().setExistingZooKeepers(zooKeeper.getConnectString());
+    accumulo = new MiniAccumuloCluster(config);
+    accumulo.start();
+  }
+
+  @After
+  public void teardownTestCluster() {
+    if (accumulo != null) {
+      try {
+        accumulo.stop();
+      } catch (IOException | InterruptedException e) {
+        log.warn("Failure during tear down", e);
+      }
+    }
+
+    if (zooKeeper != null) {
+      try {
+        zooKeeper.close();
+      } catch (IOException e) {
+        log.warn("Failure stopping test ZooKeeper server");
+      }
+    }
+  }
+
+  @Test
+  public void canConnectViaExistingZooKeeper() throws Exception {
+    Connector conn = accumulo.getConnector("root", SECRET);
+    Instance instance = conn.getInstance();
+    assertEquals(zooKeeper.getConnectString(), instance.getZooKeepers());
+
+    String tableName = "foo";
+    conn.tableOperations().create(tableName);
+    Map<String,String> tableIds = conn.tableOperations().tableIdMap();
+    assertTrue(tableIds.containsKey(tableName));
+
+    String zkTablePath = String.format("/accumulo/%s/tables/%s/name", instance.getInstanceID(), tableIds.get(tableName));
+    try (CuratorFramework client = CuratorFrameworkFactory.newClient(zooKeeper.getConnectString(), new RetryOneTime(1))) {
+      client.start();
+      assertNotNull(client.checkExists().forPath(zkTablePath));
+      assertEquals(tableName, new String(client.getData().forPath(zkTablePath)));
+    }
+  }
+}
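The new test verifies the table name by reading it straight out of ZooKeeper at `/accumulo/<instanceId>/tables/<tableId>/name`. A tiny sketch of that path construction (both arguments are placeholders for whatever the running instance reports):

```python
def table_name_zk_path(instance_id, table_id):
    """Build the ZooKeeper node path that stores a table's name.

    Mirrors the layout the test above asserts against:
    /accumulo/<instanceId>/tables/<tableId>/name
    """
    return "/accumulo/%s/tables/%s/name" % (instance_id, table_id)
```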
diff --git a/minicluster/src/test/java/org/apache/accumulo/minicluster/MiniAccumuloClusterTest.java b/minicluster/src/test/java/org/apache/accumulo/minicluster/MiniAccumuloClusterTest.java
index 7c62384..f691bf6 100644
--- a/minicluster/src/test/java/org/apache/accumulo/minicluster/MiniAccumuloClusterTest.java
+++ b/minicluster/src/test/java/org/apache/accumulo/minicluster/MiniAccumuloClusterTest.java
@@ -69,7 +69,7 @@
 
     MiniAccumuloConfig config = new MiniAccumuloConfig(testDir, "superSecret").setJDWPEnabled(true);
     config.setZooKeeperPort(0);
-    HashMap<String,String> site = new HashMap<String,String>();
+    HashMap<String,String> site = new HashMap<>();
     site.put(Property.TSERV_WORKQ_THREADS.getKey(), "2");
     config.setSiteConfig(site);
     accumulo = new MiniAccumuloCluster(config);
@@ -216,7 +216,7 @@
   public void testConfig() {
     // ensure what user passed in is what comes back
     Assert.assertEquals(0, accumulo.getConfig().getZooKeeperPort());
-    HashMap<String,String> site = new HashMap<String,String>();
+    HashMap<String,String> site = new HashMap<>();
     site.put(Property.TSERV_WORKQ_THREADS.getKey(), "2");
     Assert.assertEquals(site, accumulo.getConfig().getSiteConfig());
   }
diff --git a/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java b/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java
index dc616df..ba12f53 100644
--- a/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java
+++ b/minicluster/src/test/java/org/apache/accumulo/minicluster/impl/MiniAccumuloConfigImplTest.java
@@ -66,7 +66,7 @@
   public void testSiteConfig() {
 
     // constructor site config overrides default props
-    Map<String,String> siteConfig = new HashMap<String,String>();
+    Map<String,String> siteConfig = new HashMap<>();
     siteConfig.put(Property.INSTANCE_DFS_URI.getKey(), "hdfs://");
     MiniAccumuloConfigImpl config = new MiniAccumuloConfigImpl(tempFolder.getRoot(), "password").setSiteConfig(siteConfig).initialize();
     assertEquals("hdfs://", config.getSiteConfig().get(Property.INSTANCE_DFS_URI.getKey()));
diff --git a/pom.xml b/pom.xml
index 0f57f62..77e5597 100644
--- a/pom.xml
+++ b/pom.xml
@@ -24,7 +24,7 @@
   </parent>
   <groupId>org.apache.accumulo</groupId>
   <artifactId>accumulo-project</artifactId>
-  <version>1.7.3-SNAPSHOT</version>
+  <version>1.8.0-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Apache Accumulo Project</name>
   <description>Apache Accumulo is a sorted, distributed key/value store based
@@ -80,18 +80,15 @@
     <maven>${maven.min-version}</maven>
   </prerequisites>
   <modules>
-    <module>trace</module>
-    <module>core</module>
-    <module>shell</module>
-    <module>fate</module>
-    <module>start</module>
-    <module>examples/simple</module>
     <module>assemble</module>
-    <module>proxy</module>
-    <module>test</module>
-    <module>minicluster</module>
+    <module>core</module>
     <module>docs</module>
+    <module>examples/simple</module>
+    <module>fate</module>
+    <module>iterator-test-harness</module>
     <module>maven-plugin</module>
+    <module>minicluster</module>
+    <module>proxy</module>
     <module>server/base</module>
     <module>server/gc</module>
     <module>server/master</module>
@@ -99,6 +96,10 @@
     <module>server/native</module>
     <module>server/tracer</module>
     <module>server/tserver</module>
+    <module>shell</module>
+    <module>start</module>
+    <module>test</module>
+    <module>trace</module>
   </modules>
   <scm>
     <connection>scm:git:git://git.apache.org/accumulo.git</connection>
@@ -115,11 +116,15 @@
     <url>https://builds.apache.org/view/A-D/view/Accumulo/</url>
   </ciManagement>
   <properties>
+    <!-- Interface used to separate tests with JUnit category -->
+    <accumulo.performanceTests>org.apache.accumulo.test.PerformanceTest</accumulo.performanceTests>
     <!-- used for filtering the java source with the current version -->
     <accumulo.release.version>${project.version}</accumulo.release.version>
     <assembly.tarLongFileMode>posix</assembly.tarLongFileMode>
     <!-- bouncycastle version for test dependencies -->
-    <bouncycastle.version>1.50</bouncycastle.version>
+    <bouncycastle.version>1.54</bouncycastle.version>
+    <!-- Curator version -->
+    <curator.version>2.11.0</curator.version>
     <!-- relative path for Eclipse format; should override in child modules if necessary -->
     <eclipseFormatterStyle>${project.parent.basedir}/contrib/Eclipse-Accumulo-Codestyle.xml</eclipseFormatterStyle>
     <!-- extra release args for testing -->
@@ -128,28 +133,26 @@
     <findbugs.version>3.0.3</findbugs.version>
     <!-- surefire/failsafe plugin option -->
     <forkCount>1</forkCount>
-    <!-- overwritten in hadoop profiles -->
-    <hadoop.version>2.2.0</hadoop.version>
+    <hadoop.version>2.6.4</hadoop.version>
     <htrace.version>3.1.0-incubating</htrace.version>
     <httpclient.version>3.1</httpclient.version>
     <it.failIfNoSpecifiedTests>false</it.failIfNoSpecifiedTests>
-    <jetty.version>9.1.5.v20140505</jetty.version>
+    <!-- Jetty 9.2 is the last version to support JDKs older than 1.8 -->
+    <jetty.version>9.2.17.v20160517</jetty.version>
     <maven.compiler.source>1.7</maven.compiler.source>
     <maven.compiler.target>1.7</maven.compiler.target>
     <!-- the maven-release-plugin makes this recommendation, due to plugin bugs -->
     <maven.min-version>3.0.5</maven.min-version>
     <!-- surefire/failsafe plugin option -->
     <maven.test.redirectTestOutputToFile>true</maven.test.redirectTestOutputToFile>
-    <powermock.version>1.6.4</powermock.version>
+    <powermock.version>1.6.5</powermock.version>
     <!-- surefire/failsafe plugin option -->
     <reuseForks>false</reuseForks>
-    <sealJars>false</sealJars>
-    <!-- overwritten in hadoop profiles -->
-    <slf4j.version>1.7.5</slf4j.version>
+    <slf4j.version>1.7.21</slf4j.version>
     <sourceReleaseAssemblyDescriptor>source-release-tar</sourceReleaseAssemblyDescriptor>
     <surefire.failIfNoSpecifiedTests>false</surefire.failIfNoSpecifiedTests>
     <!-- Thrift version -->
-    <thrift.version>0.9.1</thrift.version>
+    <thrift.version>0.9.3</thrift.version>
     <!-- ZooKeeper version -->
     <zookeeper.version>3.4.6</zookeeper.version>
   </properties>
@@ -158,7 +161,7 @@
       <dependency>
         <groupId>com.beust</groupId>
         <artifactId>jcommander</artifactId>
-        <version>1.32</version>
+        <version>1.48</version>
       </dependency>
       <dependency>
         <groupId>com.google.auto.service</groupId>
@@ -168,7 +171,7 @@
       <dependency>
         <groupId>com.google.code.gson</groupId>
         <artifactId>gson</artifactId>
-        <version>2.2.4</version>
+        <version>2.7</version>
       </dependency>
       <!-- Hadoop-2.4.0 MiniDFSCluster uses classes dropped in Guava 15 -->
       <dependency>
@@ -240,12 +243,12 @@
       <dependency>
         <groupId>junit</groupId>
         <artifactId>junit</artifactId>
-        <version>4.11</version>
+        <version>4.12</version>
       </dependency>
       <dependency>
         <groupId>log4j</groupId>
         <artifactId>log4j</artifactId>
-        <version>1.2.16</version>
+        <version>1.2.17</version>
       </dependency>
       <dependency>
         <groupId>org.apache.accumulo</groupId>
@@ -276,6 +279,11 @@
       </dependency>
       <dependency>
         <groupId>org.apache.accumulo</groupId>
+        <artifactId>accumulo-iterator-test-harness</artifactId>
+        <version>${project.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.accumulo</groupId>
         <artifactId>accumulo-master</artifactId>
         <version>${project.version}</version>
       </dependency>
@@ -324,6 +332,12 @@
         <groupId>org.apache.accumulo</groupId>
         <artifactId>accumulo-test</artifactId>
         <version>${project.version}</version>
+        <classifier>mrit</classifier>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.accumulo</groupId>
+        <artifactId>accumulo-test</artifactId>
+        <version>${project.version}</version>
       </dependency>
       <dependency>
         <groupId>org.apache.accumulo</groupId>
@@ -352,8 +366,13 @@
       </dependency>
       <dependency>
         <groupId>org.apache.commons</groupId>
-        <artifactId>commons-math</artifactId>
-        <version>2.1</version>
+        <artifactId>commons-lang3</artifactId>
+        <version>3.1</version>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.commons</groupId>
+        <artifactId>commons-math3</artifactId>
+        <version>3.6.1</version>
       </dependency>
       <dependency>
         <groupId>org.apache.commons</groupId>
@@ -361,6 +380,16 @@
         <version>2.1</version>
       </dependency>
       <dependency>
+        <groupId>org.apache.curator</groupId>
+        <artifactId>curator-framework</artifactId>
+        <version>${curator.version}</version>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.curator</groupId>
+        <artifactId>curator-test</artifactId>
+        <version>${curator.version}</version>
+      </dependency>
+      <dependency>
         <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-client</artifactId>
         <version>${hadoop.version}</version>
@@ -458,7 +487,7 @@
       <dependency>
         <groupId>org.easymock</groupId>
         <artifactId>easymock</artifactId>
-        <version>3.1</version>
+        <version>3.4</version>
       </dependency>
       <dependency>
         <groupId>org.eclipse.jetty</groupId>
@@ -592,6 +621,11 @@
         </plugin>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-shade-plugin</artifactId>
+          <version>2.3</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-changes-plugin</artifactId>
           <version>2.12</version>
           <configuration>
@@ -640,7 +674,7 @@
           <configuration>
             <archive>
               <manifestEntries>
-                <Sealed>${sealJars}</Sealed>
+                <Sealed>true</Sealed>
                 <Implementation-Build>${mvngit.commit.id}</Implementation-Build>
               </manifestEntries>
             </archive>
@@ -739,18 +773,6 @@
               <requireMavenVersion>
                 <version>[${maven.min-version},)</version>
               </requireMavenVersion>
-              <requireProperty>
-                <property>hadoop.profile</property>
-                <regex>(2)</regex>
-                <regexMessage>You should specify the Hadoop profile by major Hadoop generation, i.e. 1 or 2, not by a version number.
-  Use hadoop.version to use a particular Hadoop version within that generation. See README for more details.</regexMessage>
-              </requireProperty>
-              <requireProperty>
-                <property>thrift.version</property>
-                <regex>0[.]9[.]1</regex>
-                <regexMessage>Thrift version must be 0.9.1; Any alteration requires a review of ACCUMULO-1691
-                  (See server/base/src/main/java/org/apache/accumulo/server/util/CustomNonBlockingServer.java)</regexMessage>
-              </requireProperty>
             </rules>
           </configuration>
           <dependencies>
@@ -902,6 +924,10 @@
                   <property name="format" value="jline[.]internal[.]Preconditions" />
                   <property name="message" value="Please use Guava Preconditions not JLine" />
                 </module>
+                <module name="RegexpSinglelineJava">
+                  <property name="format" value="org[.]apache[.]commons[.]math[.]" />
+                  <property name="message" value="Use commons-math3 (org.apache.commons.math3.*)" />
+                </module>
                 <module name="OuterTypeFilename" />
                 <module name="LineLength">
                   <!-- needs extra, because Eclipse formatter ignores the ending left brace -->
@@ -1006,6 +1032,9 @@
               <goal>integration-test</goal>
               <goal>verify</goal>
             </goals>
+            <configuration>
+              <excludedGroups>${accumulo.performanceTests}</excludedGroups>
+            </configuration>
           </execution>
         </executions>
       </plugin>
@@ -1250,7 +1279,6 @@
         <!-- some properties to make the release build a bit faster -->
         <checkstyle.skip>true</checkstyle.skip>
         <findbugs.skip>true</findbugs.skip>
-        <sealJars>true</sealJars>
         <skipITs>true</skipITs>
         <skipTests>true</skipTests>
       </properties>
@@ -1315,42 +1343,6 @@
         <it.test>ReadWriteIT,SimpleProxyIT,ExamplesIT,ShellServerIT</it.test>
       </properties>
     </profile>
-    <!-- profile for our default Hadoop build
-         unfortunately, has to duplicate one of our
-         specified profiles. see MNG-3328 -->
-    <profile>
-      <id>hadoop-default</id>
-      <activation>
-        <property>
-          <name>!hadoop.profile</name>
-        </property>
-      </activation>
-      <properties>
-        <!-- Denotes intention and allows the enforcer plugin to pass when
-             the user is relying on default behavior; won't work to activate profile -->
-        <hadoop.profile>2</hadoop.profile>
-        <hadoop.version>2.2.0</hadoop.version>
-        <httpclient.version>3.1</httpclient.version>
-        <slf4j.version>1.7.5</slf4j.version>
-      </properties>
-    </profile>
-    <!-- profile for building against Hadoop 2.x
-     XXX Since this is the default, make sure to sync hadoop-default when changing.
-    Activate using: mvn -Dhadoop.profile=2 -->
-    <profile>
-      <id>hadoop-2</id>
-      <activation>
-        <property>
-          <name>hadoop.profile</name>
-          <value>2</value>
-        </property>
-      </activation>
-      <properties>
-        <hadoop.version>2.2.0</hadoop.version>
-        <httpclient.version>3.1</httpclient.version>
-        <slf4j.version>1.7.5</slf4j.version>
-      </properties>
-    </profile>
     <profile>
       <id>jdk8</id>
       <activation>
@@ -1374,6 +1366,33 @@
       </build>
     </profile>
     <profile>
+      <id>performanceTests</id>
+      <build>
+        <pluginManagement>
+          <plugins>
+            <!-- Add an additional execution for performance tests -->
+            <plugin>
+              <groupId>org.apache.maven.plugins</groupId>
+              <artifactId>maven-failsafe-plugin</artifactId>
+              <executions>
+                <execution>
+                  <!-- Run only the performance tests -->
+                  <id>run-performance-tests</id>
+                  <goals>
+                    <goal>integration-test</goal>
+                    <goal>verify</goal>
+                  </goals>
+                  <configuration>
+                    <groups>${accumulo.performanceTests}</groups>
+                  </configuration>
+                </execution>
+              </executions>
+            </plugin>
+          </plugins>
+        </pluginManagement>
+      </build>
+    </profile>
+    <profile>
       <id>aggregate-javadocs</id>
       <build>
         <pluginManagement>
diff --git a/proxy/examples/python/TestNamespace.py b/proxy/examples/python/TestNamespace.py
new file mode 100644
index 0000000..e7d2377
--- /dev/null
+++ b/proxy/examples/python/TestNamespace.py
@@ -0,0 +1,172 @@
+#! /usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from thrift.protocol import TCompactProtocol
+from thrift.transport import TSocket, TTransport
+
+from proxy import AccumuloProxy
+from proxy.ttypes import NamespacePermission, IteratorSetting, IteratorScope, AccumuloException
+
+
+def main():
+    transport = TSocket.TSocket('localhost', 42424)
+    transport = TTransport.TFramedTransport(transport)
+    protocol = TCompactProtocol.TCompactProtocol(transport)
+    client = AccumuloProxy.Client(protocol)
+    transport.open()
+    login = client.login('root', {'password': 'password'})
+
+    client.createLocalUser(login, 'user1', 'password1')
+
+    print client.listNamespaces(login)
+
+    # create a namespace and give user1 all permissions
+    print 'creating namespace testing'
+    client.createNamespace(login, 'testing')
+    assert client.namespaceExists(login, 'testing')
+    print client.listNamespaces(login)
+
+    print 'testing namespace renaming'
+    client.renameNamespace(login, 'testing', 'testing2')
+    assert not client.namespaceExists(login, 'testing')
+    assert client.namespaceExists(login, 'testing2')
+    client.renameNamespace(login, 'testing2', 'testing')
+    assert not client.namespaceExists(login, 'testing2')
+    assert client.namespaceExists(login, 'testing')
+
+    print 'granting all namespace permissions to user1'
+    for k, v in NamespacePermission._VALUES_TO_NAMES.iteritems():
+        client.grantNamespacePermission(login, 'user1', 'testing', k)
+
+    # make sure the last operation worked
+    for k, v in NamespacePermission._VALUES_TO_NAMES.iteritems():
+        assert client.hasNamespacePermission(login, 'user1', 'testing', k), \
+            'user1 does not have namespace permission %s' % v
+
+    print 'default namespace: ' + client.defaultNamespace()
+    print 'system namespace: ' + client.systemNamespace()
+
+    # grab the namespace properties
+    print 'retrieving namespace properties'
+    props = client.getNamespaceProperties(login, 'testing')
+    assert props and props['table.compaction.major.ratio'] == '3'
+
+    # update a property and verify it is good
+    print 'setting namespace property table.compaction.major.ratio = 4'
+    client.setNamespaceProperty(login, 'testing', 'table.compaction.major.ratio', '4')
+    props = client.getNamespaceProperties(login, 'testing')
+    assert props and props['table.compaction.major.ratio'] == '4'
+
+    print 'retrieving namespace ID map'
+    nsids = client.namespaceIdMap(login)
+    assert nsids and 'accumulo' in nsids
+
+    print 'attaching debug iterator to namespace testing'
+    setting = IteratorSetting(priority=40, name='DebugTheThings',
+                              iteratorClass='org.apache.accumulo.core.iterators.DebugIterator', properties={})
+    client.attachNamespaceIterator(login, 'testing', setting, [IteratorScope.SCAN])
+    setting = client.getNamespaceIteratorSetting(login, 'testing', 'DebugTheThings', IteratorScope.SCAN)
+    assert setting and setting.name == 'DebugTheThings'
+
+    # make sure the iterator is in the list
+    iters = client.listNamespaceIterators(login, 'testing')
+    found = False
+    for name, scopes in iters.iteritems():
+        if name == 'DebugTheThings':
+            found = True
+            break
+    assert found
+
+    print 'checking for iterator conflicts'
+
+    # this next statement should be fine since we are on a different scope
+    client.checkNamespaceIteratorConflicts(login, 'testing', setting, [IteratorScope.MINC])
+
+    # this time it should throw an exception since we have already added the iterator with this scope
+    try:
+        client.checkNamespaceIteratorConflicts(login, 'testing', setting, [IteratorScope.SCAN, IteratorScope.MINC])
+    except AccumuloException:
+        pass
+    else:
+        assert False, 'There should have been a namespace iterator conflict!'
+
+    print 'removing debug iterator from namespace testing'
+    client.removeNamespaceIterator(login, 'testing', 'DebugTheThings', [IteratorScope.SCAN])
+
+    # make sure the iterator is NOT in the list anymore
+    iters = client.listNamespaceIterators(login, 'testing')
+    found = False
+    for name, scopes in iters.iteritems():
+        if name == 'DebugTheThings':
+            found = True
+            break
+    assert not found
+
+    print 'adding max mutation size namespace constraint'
+    constraintid = client.addNamespaceConstraint(login, 'testing',
+                                                 'org.apache.accumulo.examples.simple.constraints.MaxMutationSize')
+
+    print 'make sure constraint was added'
+    constraints = client.listNamespaceConstraints(login, 'testing')
+    found = False
+    for name, cid in constraints.iteritems():
+        if cid == constraintid and name == 'org.apache.accumulo.examples.simple.constraints.MaxMutationSize':
+            found = True
+            break
+    assert found
+
+    print 'removing max mutation size namespace constraint'
+    client.removeNamespaceConstraint(login, 'testing', constraintid)
+
+    print 'make sure constraint was removed'
+    constraints = client.listNamespaceConstraints(login, 'testing')
+    found = False
+    for name, cid in constraints.iteritems():
+        if cid == constraintid and name == 'org.apache.accumulo.examples.simple.constraints.MaxMutationSize':
+            found = True
+            break
+    assert not found
+
+    print 'test a namespace class load of the VersioningIterator'
+    res = client.testNamespaceClassLoad(login, 'testing', 'org.apache.accumulo.core.iterators.user.VersioningIterator',
+                                        'org.apache.accumulo.core.iterators.SortedKeyValueIterator')
+    assert res
+
+    print 'test a bad namespace class load of the VersioningIterator'
+    res = client.testNamespaceClassLoad(login, 'testing', 'org.apache.accumulo.core.iterators.user.VersioningIterator',
+                                        'dummy')
+    assert not res
+
+    # revoke the permissions
+    print 'revoking namespace permissions for user1'
+    for k, v in NamespacePermission._VALUES_TO_NAMES.iteritems():
+        client.revokeNamespacePermission(login, 'user1', 'testing', k)
+
+    # make sure the last operation worked
+    for k, v in NamespacePermission._VALUES_TO_NAMES.iteritems():
+        assert not client.hasNamespacePermission(login, 'user1', 'testing', k), \
+            'user1 still has namespace permission %s' % v
+
+    print 'deleting namespace testing'
+    client.deleteNamespace(login, 'testing')
+    assert not client.namespaceExists(login, 'testing')
+
+    print 'deleting user1'
+    client.dropLocalUser(login, 'user1')
+
+if __name__ == "__main__":
+    main()
diff --git a/proxy/pom.xml b/proxy/pom.xml
index 3a2c862..2aee90b 100644
--- a/proxy/pom.xml
+++ b/proxy/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-proxy</artifactId>
   <name>Apache Accumulo Proxy</name>
diff --git a/proxy/src/main/cpp/AccumuloProxy.cpp b/proxy/src/main/cpp/AccumuloProxy.cpp
index b220dcb..d0add35 100644
--- a/proxy/src/main/cpp/AccumuloProxy.cpp
+++ b/proxy/src/main/cpp/AccumuloProxy.cpp
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,8 +24,14 @@
 
 namespace accumulo {
 
+
+AccumuloProxy_login_args::~AccumuloProxy_login_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_login_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -56,17 +62,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->loginProperties.clear();
-            uint32_t _size133;
-            ::apache::thrift::protocol::TType _ktype134;
-            ::apache::thrift::protocol::TType _vtype135;
-            xfer += iprot->readMapBegin(_ktype134, _vtype135, _size133);
-            uint32_t _i137;
-            for (_i137 = 0; _i137 < _size133; ++_i137)
+            uint32_t _size195;
+            ::apache::thrift::protocol::TType _ktype196;
+            ::apache::thrift::protocol::TType _vtype197;
+            xfer += iprot->readMapBegin(_ktype196, _vtype197, _size195);
+            uint32_t _i199;
+            for (_i199 = 0; _i199 < _size195; ++_i199)
             {
-              std::string _key138;
-              xfer += iprot->readString(_key138);
-              std::string& _val139 = this->loginProperties[_key138];
-              xfer += iprot->readString(_val139);
+              std::string _key200;
+              xfer += iprot->readString(_key200);
+              std::string& _val201 = this->loginProperties[_key200];
+              xfer += iprot->readString(_val201);
             }
             xfer += iprot->readMapEnd();
           }
@@ -89,6 +95,7 @@
 
 uint32_t AccumuloProxy_login_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_login_args");
 
   xfer += oprot->writeFieldBegin("principal", ::apache::thrift::protocol::T_STRING, 1);
@@ -98,11 +105,11 @@
   xfer += oprot->writeFieldBegin("loginProperties", ::apache::thrift::protocol::T_MAP, 2);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->loginProperties.size()));
-    std::map<std::string, std::string> ::const_iterator _iter140;
-    for (_iter140 = this->loginProperties.begin(); _iter140 != this->loginProperties.end(); ++_iter140)
+    std::map<std::string, std::string> ::const_iterator _iter202;
+    for (_iter202 = this->loginProperties.begin(); _iter202 != this->loginProperties.end(); ++_iter202)
     {
-      xfer += oprot->writeString(_iter140->first);
-      xfer += oprot->writeString(_iter140->second);
+      xfer += oprot->writeString(_iter202->first);
+      xfer += oprot->writeString(_iter202->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -113,8 +120,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_login_pargs::~AccumuloProxy_login_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_login_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_login_pargs");
 
   xfer += oprot->writeFieldBegin("principal", ::apache::thrift::protocol::T_STRING, 1);
@@ -124,11 +137,11 @@
   xfer += oprot->writeFieldBegin("loginProperties", ::apache::thrift::protocol::T_MAP, 2);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->loginProperties)).size()));
-    std::map<std::string, std::string> ::const_iterator _iter141;
-    for (_iter141 = (*(this->loginProperties)).begin(); _iter141 != (*(this->loginProperties)).end(); ++_iter141)
+    std::map<std::string, std::string> ::const_iterator _iter203;
+    for (_iter203 = (*(this->loginProperties)).begin(); _iter203 != (*(this->loginProperties)).end(); ++_iter203)
     {
-      xfer += oprot->writeString(_iter141->first);
-      xfer += oprot->writeString(_iter141->second);
+      xfer += oprot->writeString(_iter203->first);
+      xfer += oprot->writeString(_iter203->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -139,8 +152,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_login_result::~AccumuloProxy_login_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_login_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -207,8 +226,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_login_presult::~AccumuloProxy_login_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_login_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -255,8 +280,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addConstraint_args::~AccumuloProxy_addConstraint_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_addConstraint_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -313,6 +344,7 @@
 
 uint32_t AccumuloProxy_addConstraint_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_addConstraint_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -332,8 +364,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addConstraint_pargs::~AccumuloProxy_addConstraint_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_addConstraint_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_addConstraint_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -353,8 +391,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addConstraint_result::~AccumuloProxy_addConstraint_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_addConstraint_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -445,8 +489,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addConstraint_presult::~AccumuloProxy_addConstraint_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_addConstraint_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -509,8 +559,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addSplits_args::~AccumuloProxy_addSplits_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_addSplits_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -549,15 +605,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->splits.clear();
-            uint32_t _size142;
-            ::apache::thrift::protocol::TType _etype145;
-            xfer += iprot->readSetBegin(_etype145, _size142);
-            uint32_t _i146;
-            for (_i146 = 0; _i146 < _size142; ++_i146)
+            uint32_t _size204;
+            ::apache::thrift::protocol::TType _etype207;
+            xfer += iprot->readSetBegin(_etype207, _size204);
+            uint32_t _i208;
+            for (_i208 = 0; _i208 < _size204; ++_i208)
             {
-              std::string _elem147;
-              xfer += iprot->readBinary(_elem147);
-              this->splits.insert(_elem147);
+              std::string _elem209;
+              xfer += iprot->readBinary(_elem209);
+              this->splits.insert(_elem209);
             }
             xfer += iprot->readSetEnd();
           }
@@ -580,6 +636,7 @@
 
 uint32_t AccumuloProxy_addSplits_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_addSplits_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -593,10 +650,10 @@
   xfer += oprot->writeFieldBegin("splits", ::apache::thrift::protocol::T_SET, 3);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->splits.size()));
-    std::set<std::string> ::const_iterator _iter148;
-    for (_iter148 = this->splits.begin(); _iter148 != this->splits.end(); ++_iter148)
+    std::set<std::string> ::const_iterator _iter210;
+    for (_iter210 = this->splits.begin(); _iter210 != this->splits.end(); ++_iter210)
     {
-      xfer += oprot->writeBinary((*_iter148));
+      xfer += oprot->writeBinary((*_iter210));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -607,8 +664,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addSplits_pargs::~AccumuloProxy_addSplits_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_addSplits_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_addSplits_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -622,10 +685,10 @@
   xfer += oprot->writeFieldBegin("splits", ::apache::thrift::protocol::T_SET, 3);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->splits)).size()));
-    std::set<std::string> ::const_iterator _iter149;
-    for (_iter149 = (*(this->splits)).begin(); _iter149 != (*(this->splits)).end(); ++_iter149)
+    std::set<std::string> ::const_iterator _iter211;
+    for (_iter211 = (*(this->splits)).begin(); _iter211 != (*(this->splits)).end(); ++_iter211)
     {
-      xfer += oprot->writeBinary((*_iter149));
+      xfer += oprot->writeBinary((*_iter211));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -636,8 +699,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addSplits_result::~AccumuloProxy_addSplits_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_addSplits_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -716,8 +785,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_addSplits_presult::~AccumuloProxy_addSplits_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_addSplits_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -772,8 +847,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_attachIterator_args::~AccumuloProxy_attachIterator_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_attachIterator_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -820,17 +901,17 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->scopes.clear();
-            uint32_t _size150;
-            ::apache::thrift::protocol::TType _etype153;
-            xfer += iprot->readSetBegin(_etype153, _size150);
-            uint32_t _i154;
-            for (_i154 = 0; _i154 < _size150; ++_i154)
+            uint32_t _size212;
+            ::apache::thrift::protocol::TType _etype215;
+            xfer += iprot->readSetBegin(_etype215, _size212);
+            uint32_t _i216;
+            for (_i216 = 0; _i216 < _size212; ++_i216)
             {
-              IteratorScope::type _elem155;
-              int32_t ecast156;
-              xfer += iprot->readI32(ecast156);
-              _elem155 = (IteratorScope::type)ecast156;
-              this->scopes.insert(_elem155);
+              IteratorScope::type _elem217;
+              int32_t ecast218;
+              xfer += iprot->readI32(ecast218);
+              _elem217 = (IteratorScope::type)ecast218;
+              this->scopes.insert(_elem217);
             }
             xfer += iprot->readSetEnd();
           }
@@ -853,6 +934,7 @@
 
 uint32_t AccumuloProxy_attachIterator_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_attachIterator_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -870,10 +952,10 @@
   xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->scopes.size()));
-    std::set<IteratorScope::type> ::const_iterator _iter157;
-    for (_iter157 = this->scopes.begin(); _iter157 != this->scopes.end(); ++_iter157)
+    std::set<IteratorScope::type> ::const_iterator _iter219;
+    for (_iter219 = this->scopes.begin(); _iter219 != this->scopes.end(); ++_iter219)
     {
-      xfer += oprot->writeI32((int32_t)(*_iter157));
+      xfer += oprot->writeI32((int32_t)(*_iter219));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -884,8 +966,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_attachIterator_pargs::~AccumuloProxy_attachIterator_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_attachIterator_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_attachIterator_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -903,10 +991,10 @@
   xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>((*(this->scopes)).size()));
-    std::set<IteratorScope::type> ::const_iterator _iter158;
-    for (_iter158 = (*(this->scopes)).begin(); _iter158 != (*(this->scopes)).end(); ++_iter158)
+    std::set<IteratorScope::type> ::const_iterator _iter220;
+    for (_iter220 = (*(this->scopes)).begin(); _iter220 != (*(this->scopes)).end(); ++_iter220)
     {
-      xfer += oprot->writeI32((int32_t)(*_iter158));
+      xfer += oprot->writeI32((int32_t)(*_iter220));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -917,8 +1005,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_attachIterator_result::~AccumuloProxy_attachIterator_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_attachIterator_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -997,8 +1091,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_attachIterator_presult::~AccumuloProxy_attachIterator_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_attachIterator_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1053,8 +1153,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_checkIteratorConflicts_args::~AccumuloProxy_checkIteratorConflicts_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_checkIteratorConflicts_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1101,17 +1207,17 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->scopes.clear();
-            uint32_t _size159;
-            ::apache::thrift::protocol::TType _etype162;
-            xfer += iprot->readSetBegin(_etype162, _size159);
-            uint32_t _i163;
-            for (_i163 = 0; _i163 < _size159; ++_i163)
+            uint32_t _size221;
+            ::apache::thrift::protocol::TType _etype224;
+            xfer += iprot->readSetBegin(_etype224, _size221);
+            uint32_t _i225;
+            for (_i225 = 0; _i225 < _size221; ++_i225)
             {
-              IteratorScope::type _elem164;
-              int32_t ecast165;
-              xfer += iprot->readI32(ecast165);
-              _elem164 = (IteratorScope::type)ecast165;
-              this->scopes.insert(_elem164);
+              IteratorScope::type _elem226;
+              int32_t ecast227;
+              xfer += iprot->readI32(ecast227);
+              _elem226 = (IteratorScope::type)ecast227;
+              this->scopes.insert(_elem226);
             }
             xfer += iprot->readSetEnd();
           }
@@ -1134,6 +1240,7 @@
 
 uint32_t AccumuloProxy_checkIteratorConflicts_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_checkIteratorConflicts_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -1151,10 +1258,10 @@
   xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->scopes.size()));
-    std::set<IteratorScope::type> ::const_iterator _iter166;
-    for (_iter166 = this->scopes.begin(); _iter166 != this->scopes.end(); ++_iter166)
+    std::set<IteratorScope::type> ::const_iterator _iter228;
+    for (_iter228 = this->scopes.begin(); _iter228 != this->scopes.end(); ++_iter228)
     {
-      xfer += oprot->writeI32((int32_t)(*_iter166));
+      xfer += oprot->writeI32((int32_t)(*_iter228));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -1165,8 +1272,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_checkIteratorConflicts_pargs::~AccumuloProxy_checkIteratorConflicts_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_checkIteratorConflicts_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_checkIteratorConflicts_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -1184,10 +1297,10 @@
   xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>((*(this->scopes)).size()));
-    std::set<IteratorScope::type> ::const_iterator _iter167;
-    for (_iter167 = (*(this->scopes)).begin(); _iter167 != (*(this->scopes)).end(); ++_iter167)
+    std::set<IteratorScope::type> ::const_iterator _iter229;
+    for (_iter229 = (*(this->scopes)).begin(); _iter229 != (*(this->scopes)).end(); ++_iter229)
     {
-      xfer += oprot->writeI32((int32_t)(*_iter167));
+      xfer += oprot->writeI32((int32_t)(*_iter229));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -1198,8 +1311,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_checkIteratorConflicts_result::~AccumuloProxy_checkIteratorConflicts_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_checkIteratorConflicts_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1278,8 +1397,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_checkIteratorConflicts_presult::~AccumuloProxy_checkIteratorConflicts_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_checkIteratorConflicts_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1334,8 +1459,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_clearLocatorCache_args::~AccumuloProxy_clearLocatorCache_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_clearLocatorCache_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1384,6 +1515,7 @@
 
 uint32_t AccumuloProxy_clearLocatorCache_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_clearLocatorCache_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -1399,8 +1531,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_clearLocatorCache_pargs::~AccumuloProxy_clearLocatorCache_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_clearLocatorCache_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_clearLocatorCache_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -1416,8 +1554,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_clearLocatorCache_result::~AccumuloProxy_clearLocatorCache_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_clearLocatorCache_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1472,8 +1616,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_clearLocatorCache_presult::~AccumuloProxy_clearLocatorCache_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_clearLocatorCache_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1512,8 +1662,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cloneTable_args::~AccumuloProxy_cloneTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_cloneTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1568,17 +1724,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->propertiesToSet.clear();
-            uint32_t _size168;
-            ::apache::thrift::protocol::TType _ktype169;
-            ::apache::thrift::protocol::TType _vtype170;
-            xfer += iprot->readMapBegin(_ktype169, _vtype170, _size168);
-            uint32_t _i172;
-            for (_i172 = 0; _i172 < _size168; ++_i172)
+            uint32_t _size230;
+            ::apache::thrift::protocol::TType _ktype231;
+            ::apache::thrift::protocol::TType _vtype232;
+            xfer += iprot->readMapBegin(_ktype231, _vtype232, _size230);
+            uint32_t _i234;
+            for (_i234 = 0; _i234 < _size230; ++_i234)
             {
-              std::string _key173;
-              xfer += iprot->readString(_key173);
-              std::string& _val174 = this->propertiesToSet[_key173];
-              xfer += iprot->readString(_val174);
+              std::string _key235;
+              xfer += iprot->readString(_key235);
+              std::string& _val236 = this->propertiesToSet[_key235];
+              xfer += iprot->readString(_val236);
             }
             xfer += iprot->readMapEnd();
           }
@@ -1591,15 +1747,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->propertiesToExclude.clear();
-            uint32_t _size175;
-            ::apache::thrift::protocol::TType _etype178;
-            xfer += iprot->readSetBegin(_etype178, _size175);
-            uint32_t _i179;
-            for (_i179 = 0; _i179 < _size175; ++_i179)
+            uint32_t _size237;
+            ::apache::thrift::protocol::TType _etype240;
+            xfer += iprot->readSetBegin(_etype240, _size237);
+            uint32_t _i241;
+            for (_i241 = 0; _i241 < _size237; ++_i241)
             {
-              std::string _elem180;
-              xfer += iprot->readString(_elem180);
-              this->propertiesToExclude.insert(_elem180);
+              std::string _elem242;
+              xfer += iprot->readString(_elem242);
+              this->propertiesToExclude.insert(_elem242);
             }
             xfer += iprot->readSetEnd();
           }
@@ -1622,6 +1778,7 @@
 
 uint32_t AccumuloProxy_cloneTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_cloneTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -1643,11 +1800,11 @@
   xfer += oprot->writeFieldBegin("propertiesToSet", ::apache::thrift::protocol::T_MAP, 5);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->propertiesToSet.size()));
-    std::map<std::string, std::string> ::const_iterator _iter181;
-    for (_iter181 = this->propertiesToSet.begin(); _iter181 != this->propertiesToSet.end(); ++_iter181)
+    std::map<std::string, std::string> ::const_iterator _iter243;
+    for (_iter243 = this->propertiesToSet.begin(); _iter243 != this->propertiesToSet.end(); ++_iter243)
     {
-      xfer += oprot->writeString(_iter181->first);
-      xfer += oprot->writeString(_iter181->second);
+      xfer += oprot->writeString(_iter243->first);
+      xfer += oprot->writeString(_iter243->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -1656,10 +1813,10 @@
   xfer += oprot->writeFieldBegin("propertiesToExclude", ::apache::thrift::protocol::T_SET, 6);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->propertiesToExclude.size()));
-    std::set<std::string> ::const_iterator _iter182;
-    for (_iter182 = this->propertiesToExclude.begin(); _iter182 != this->propertiesToExclude.end(); ++_iter182)
+    std::set<std::string> ::const_iterator _iter244;
+    for (_iter244 = this->propertiesToExclude.begin(); _iter244 != this->propertiesToExclude.end(); ++_iter244)
     {
-      xfer += oprot->writeString((*_iter182));
+      xfer += oprot->writeString((*_iter244));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -1670,8 +1827,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cloneTable_pargs::~AccumuloProxy_cloneTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_cloneTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_cloneTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -1693,11 +1856,11 @@
   xfer += oprot->writeFieldBegin("propertiesToSet", ::apache::thrift::protocol::T_MAP, 5);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->propertiesToSet)).size()));
-    std::map<std::string, std::string> ::const_iterator _iter183;
-    for (_iter183 = (*(this->propertiesToSet)).begin(); _iter183 != (*(this->propertiesToSet)).end(); ++_iter183)
+    std::map<std::string, std::string> ::const_iterator _iter245;
+    for (_iter245 = (*(this->propertiesToSet)).begin(); _iter245 != (*(this->propertiesToSet)).end(); ++_iter245)
     {
-      xfer += oprot->writeString(_iter183->first);
-      xfer += oprot->writeString(_iter183->second);
+      xfer += oprot->writeString(_iter245->first);
+      xfer += oprot->writeString(_iter245->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -1706,10 +1869,10 @@
   xfer += oprot->writeFieldBegin("propertiesToExclude", ::apache::thrift::protocol::T_SET, 6);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->propertiesToExclude)).size()));
-    std::set<std::string> ::const_iterator _iter184;
-    for (_iter184 = (*(this->propertiesToExclude)).begin(); _iter184 != (*(this->propertiesToExclude)).end(); ++_iter184)
+    std::set<std::string> ::const_iterator _iter246;
+    for (_iter246 = (*(this->propertiesToExclude)).begin(); _iter246 != (*(this->propertiesToExclude)).end(); ++_iter246)
     {
-      xfer += oprot->writeString((*_iter184));
+      xfer += oprot->writeString((*_iter246));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -1720,8 +1883,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cloneTable_result::~AccumuloProxy_cloneTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_cloneTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1812,8 +1981,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cloneTable_presult::~AccumuloProxy_cloneTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_cloneTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1876,8 +2051,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_compactTable_args::~AccumuloProxy_compactTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_compactTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1932,14 +2113,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->iterators.clear();
-            uint32_t _size185;
-            ::apache::thrift::protocol::TType _etype188;
-            xfer += iprot->readListBegin(_etype188, _size185);
-            this->iterators.resize(_size185);
-            uint32_t _i189;
-            for (_i189 = 0; _i189 < _size185; ++_i189)
+            uint32_t _size247;
+            ::apache::thrift::protocol::TType _etype250;
+            xfer += iprot->readListBegin(_etype250, _size247);
+            this->iterators.resize(_size247);
+            uint32_t _i251;
+            for (_i251 = 0; _i251 < _size247; ++_i251)
             {
-              xfer += this->iterators[_i189].read(iprot);
+              xfer += this->iterators[_i251].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1986,6 +2167,7 @@
 
 uint32_t AccumuloProxy_compactTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_compactTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2007,10 +2189,10 @@
   xfer += oprot->writeFieldBegin("iterators", ::apache::thrift::protocol::T_LIST, 5);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->iterators.size()));
-    std::vector<IteratorSetting> ::const_iterator _iter190;
-    for (_iter190 = this->iterators.begin(); _iter190 != this->iterators.end(); ++_iter190)
+    std::vector<IteratorSetting> ::const_iterator _iter252;
+    for (_iter252 = this->iterators.begin(); _iter252 != this->iterators.end(); ++_iter252)
     {
-      xfer += (*_iter190).write(oprot);
+      xfer += (*_iter252).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -2033,8 +2215,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_compactTable_pargs::~AccumuloProxy_compactTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_compactTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_compactTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2056,10 +2244,10 @@
   xfer += oprot->writeFieldBegin("iterators", ::apache::thrift::protocol::T_LIST, 5);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>((*(this->iterators)).size()));
-    std::vector<IteratorSetting> ::const_iterator _iter191;
-    for (_iter191 = (*(this->iterators)).begin(); _iter191 != (*(this->iterators)).end(); ++_iter191)
+    std::vector<IteratorSetting> ::const_iterator _iter253;
+    for (_iter253 = (*(this->iterators)).begin(); _iter253 != (*(this->iterators)).end(); ++_iter253)
     {
-      xfer += (*_iter191).write(oprot);
+      xfer += (*_iter253).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -2082,8 +2270,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_compactTable_result::~AccumuloProxy_compactTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_compactTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2162,8 +2356,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_compactTable_presult::~AccumuloProxy_compactTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_compactTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2218,8 +2418,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cancelCompaction_args::~AccumuloProxy_cancelCompaction_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_cancelCompaction_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2268,6 +2474,7 @@
 
 uint32_t AccumuloProxy_cancelCompaction_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_cancelCompaction_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2283,8 +2490,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cancelCompaction_pargs::~AccumuloProxy_cancelCompaction_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_cancelCompaction_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_cancelCompaction_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2300,8 +2513,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cancelCompaction_result::~AccumuloProxy_cancelCompaction_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_cancelCompaction_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2380,8 +2599,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_cancelCompaction_presult::~AccumuloProxy_cancelCompaction_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_cancelCompaction_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2436,8 +2661,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createTable_args::~AccumuloProxy_createTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_createTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2482,9 +2713,9 @@
         break;
       case 4:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast192;
-          xfer += iprot->readI32(ecast192);
-          this->type = (TimeType::type)ecast192;
+          int32_t ecast254;
+          xfer += iprot->readI32(ecast254);
+          this->type = (TimeType::type)ecast254;
           this->__isset.type = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -2504,6 +2735,7 @@
 
 uint32_t AccumuloProxy_createTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2527,8 +2759,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createTable_pargs::~AccumuloProxy_createTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_createTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2552,8 +2790,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createTable_result::~AccumuloProxy_createTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_createTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2632,8 +2876,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createTable_presult::~AccumuloProxy_createTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_createTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2688,8 +2938,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteTable_args::~AccumuloProxy_deleteTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2738,6 +2994,7 @@
 
 uint32_t AccumuloProxy_deleteTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_deleteTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2753,8 +3010,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteTable_pargs::~AccumuloProxy_deleteTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_deleteTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2770,8 +3033,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteTable_result::~AccumuloProxy_deleteTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2850,8 +3119,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteTable_presult::~AccumuloProxy_deleteTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2906,8 +3181,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteRows_args::~AccumuloProxy_deleteRows_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteRows_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2972,6 +3253,7 @@
 
 uint32_t AccumuloProxy_deleteRows_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_deleteRows_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -2995,8 +3277,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteRows_pargs::~AccumuloProxy_deleteRows_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteRows_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_deleteRows_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -3020,8 +3308,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteRows_result::~AccumuloProxy_deleteRows_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteRows_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3100,8 +3394,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_deleteRows_presult::~AccumuloProxy_deleteRows_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_deleteRows_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3156,8 +3456,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_exportTable_args::~AccumuloProxy_exportTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_exportTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3214,6 +3520,7 @@
 
 uint32_t AccumuloProxy_exportTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_exportTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -3233,8 +3540,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_exportTable_pargs::~AccumuloProxy_exportTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_exportTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_exportTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -3254,8 +3567,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_exportTable_result::~AccumuloProxy_exportTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_exportTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3334,8 +3653,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_exportTable_presult::~AccumuloProxy_exportTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_exportTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3390,8 +3715,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flushTable_args::~AccumuloProxy_flushTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_flushTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3464,6 +3795,7 @@
 
 uint32_t AccumuloProxy_flushTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_flushTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -3491,8 +3823,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flushTable_pargs::~AccumuloProxy_flushTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_flushTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_flushTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -3520,8 +3858,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flushTable_result::~AccumuloProxy_flushTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_flushTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3600,8 +3944,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flushTable_presult::~AccumuloProxy_flushTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_flushTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3656,8 +4006,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getDiskUsage_args::~AccumuloProxy_getDiskUsage_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getDiskUsage_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3688,15 +4044,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->tables.clear();
-            uint32_t _size193;
-            ::apache::thrift::protocol::TType _etype196;
-            xfer += iprot->readSetBegin(_etype196, _size193);
-            uint32_t _i197;
-            for (_i197 = 0; _i197 < _size193; ++_i197)
+            uint32_t _size255;
+            ::apache::thrift::protocol::TType _etype258;
+            xfer += iprot->readSetBegin(_etype258, _size255);
+            uint32_t _i259;
+            for (_i259 = 0; _i259 < _size255; ++_i259)
             {
-              std::string _elem198;
-              xfer += iprot->readString(_elem198);
-              this->tables.insert(_elem198);
+              std::string _elem260;
+              xfer += iprot->readString(_elem260);
+              this->tables.insert(_elem260);
             }
             xfer += iprot->readSetEnd();
           }
@@ -3719,6 +4075,7 @@
 
 uint32_t AccumuloProxy_getDiskUsage_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getDiskUsage_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -3728,10 +4085,10 @@
   xfer += oprot->writeFieldBegin("tables", ::apache::thrift::protocol::T_SET, 2);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->tables.size()));
-    std::set<std::string> ::const_iterator _iter199;
-    for (_iter199 = this->tables.begin(); _iter199 != this->tables.end(); ++_iter199)
+    std::set<std::string> ::const_iterator _iter261;
+    for (_iter261 = this->tables.begin(); _iter261 != this->tables.end(); ++_iter261)
     {
-      xfer += oprot->writeString((*_iter199));
+      xfer += oprot->writeString((*_iter261));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -3742,8 +4099,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getDiskUsage_pargs::~AccumuloProxy_getDiskUsage_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getDiskUsage_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getDiskUsage_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -3753,10 +4116,10 @@
   xfer += oprot->writeFieldBegin("tables", ::apache::thrift::protocol::T_SET, 2);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->tables)).size()));
-    std::set<std::string> ::const_iterator _iter200;
-    for (_iter200 = (*(this->tables)).begin(); _iter200 != (*(this->tables)).end(); ++_iter200)
+    std::set<std::string> ::const_iterator _iter262;
+    for (_iter262 = (*(this->tables)).begin(); _iter262 != (*(this->tables)).end(); ++_iter262)
     {
-      xfer += oprot->writeString((*_iter200));
+      xfer += oprot->writeString((*_iter262));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -3767,8 +4130,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getDiskUsage_result::~AccumuloProxy_getDiskUsage_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getDiskUsage_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3791,14 +4160,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->success.clear();
-            uint32_t _size201;
-            ::apache::thrift::protocol::TType _etype204;
-            xfer += iprot->readListBegin(_etype204, _size201);
-            this->success.resize(_size201);
-            uint32_t _i205;
-            for (_i205 = 0; _i205 < _size201; ++_i205)
+            uint32_t _size263;
+            ::apache::thrift::protocol::TType _etype266;
+            xfer += iprot->readListBegin(_etype266, _size263);
+            this->success.resize(_size263);
+            uint32_t _i267;
+            for (_i267 = 0; _i267 < _size263; ++_i267)
             {
-              xfer += this->success[_i205].read(iprot);
+              xfer += this->success[_i267].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -3853,10 +4222,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_LIST, 0);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->success.size()));
-      std::vector<DiskUsage> ::const_iterator _iter206;
-      for (_iter206 = this->success.begin(); _iter206 != this->success.end(); ++_iter206)
+      std::vector<DiskUsage> ::const_iterator _iter268;
+      for (_iter268 = this->success.begin(); _iter268 != this->success.end(); ++_iter268)
       {
-        xfer += (*_iter206).write(oprot);
+        xfer += (*_iter268).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -3879,8 +4248,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getDiskUsage_presult::~AccumuloProxy_getDiskUsage_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getDiskUsage_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3903,14 +4278,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             (*(this->success)).clear();
-            uint32_t _size207;
-            ::apache::thrift::protocol::TType _etype210;
-            xfer += iprot->readListBegin(_etype210, _size207);
-            (*(this->success)).resize(_size207);
-            uint32_t _i211;
-            for (_i211 = 0; _i211 < _size207; ++_i211)
+            uint32_t _size269;
+            ::apache::thrift::protocol::TType _etype272;
+            xfer += iprot->readListBegin(_etype272, _size269);
+            (*(this->success)).resize(_size269);
+            uint32_t _i273;
+            for (_i273 = 0; _i273 < _size269; ++_i273)
             {
-              xfer += (*(this->success))[_i211].read(iprot);
+              xfer += (*(this->success))[_i273].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -3955,8 +4330,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getLocalityGroups_args::~AccumuloProxy_getLocalityGroups_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getLocalityGroups_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4005,6 +4386,7 @@
 
 uint32_t AccumuloProxy_getLocalityGroups_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getLocalityGroups_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4020,8 +4402,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getLocalityGroups_pargs::~AccumuloProxy_getLocalityGroups_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getLocalityGroups_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getLocalityGroups_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4037,8 +4425,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getLocalityGroups_result::~AccumuloProxy_getLocalityGroups_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getLocalityGroups_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4061,27 +4455,27 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size212;
-            ::apache::thrift::protocol::TType _ktype213;
-            ::apache::thrift::protocol::TType _vtype214;
-            xfer += iprot->readMapBegin(_ktype213, _vtype214, _size212);
-            uint32_t _i216;
-            for (_i216 = 0; _i216 < _size212; ++_i216)
+            uint32_t _size274;
+            ::apache::thrift::protocol::TType _ktype275;
+            ::apache::thrift::protocol::TType _vtype276;
+            xfer += iprot->readMapBegin(_ktype275, _vtype276, _size274);
+            uint32_t _i278;
+            for (_i278 = 0; _i278 < _size274; ++_i278)
             {
-              std::string _key217;
-              xfer += iprot->readString(_key217);
-              std::set<std::string> & _val218 = this->success[_key217];
+              std::string _key279;
+              xfer += iprot->readString(_key279);
+              std::set<std::string> & _val280 = this->success[_key279];
               {
-                _val218.clear();
-                uint32_t _size219;
-                ::apache::thrift::protocol::TType _etype222;
-                xfer += iprot->readSetBegin(_etype222, _size219);
-                uint32_t _i223;
-                for (_i223 = 0; _i223 < _size219; ++_i223)
+                _val280.clear();
+                uint32_t _size281;
+                ::apache::thrift::protocol::TType _etype284;
+                xfer += iprot->readSetBegin(_etype284, _size281);
+                uint32_t _i285;
+                for (_i285 = 0; _i285 < _size281; ++_i285)
                 {
-                  std::string _elem224;
-                  xfer += iprot->readString(_elem224);
-                  _val218.insert(_elem224);
+                  std::string _elem286;
+                  xfer += iprot->readString(_elem286);
+                  _val280.insert(_elem286);
                 }
                 xfer += iprot->readSetEnd();
               }
@@ -4139,16 +4533,16 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_SET, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, std::set<std::string> > ::const_iterator _iter225;
-      for (_iter225 = this->success.begin(); _iter225 != this->success.end(); ++_iter225)
+      std::map<std::string, std::set<std::string> > ::const_iterator _iter287;
+      for (_iter287 = this->success.begin(); _iter287 != this->success.end(); ++_iter287)
       {
-        xfer += oprot->writeString(_iter225->first);
+        xfer += oprot->writeString(_iter287->first);
         {
-          xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(_iter225->second.size()));
-          std::set<std::string> ::const_iterator _iter226;
-          for (_iter226 = _iter225->second.begin(); _iter226 != _iter225->second.end(); ++_iter226)
+          xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(_iter287->second.size()));
+          std::set<std::string> ::const_iterator _iter288;
+          for (_iter288 = _iter287->second.begin(); _iter288 != _iter287->second.end(); ++_iter288)
           {
-            xfer += oprot->writeString((*_iter226));
+            xfer += oprot->writeString((*_iter288));
           }
           xfer += oprot->writeSetEnd();
         }
@@ -4174,8 +4568,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getLocalityGroups_presult::~AccumuloProxy_getLocalityGroups_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getLocalityGroups_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4198,27 +4598,27 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size227;
-            ::apache::thrift::protocol::TType _ktype228;
-            ::apache::thrift::protocol::TType _vtype229;
-            xfer += iprot->readMapBegin(_ktype228, _vtype229, _size227);
-            uint32_t _i231;
-            for (_i231 = 0; _i231 < _size227; ++_i231)
+            uint32_t _size289;
+            ::apache::thrift::protocol::TType _ktype290;
+            ::apache::thrift::protocol::TType _vtype291;
+            xfer += iprot->readMapBegin(_ktype290, _vtype291, _size289);
+            uint32_t _i293;
+            for (_i293 = 0; _i293 < _size289; ++_i293)
             {
-              std::string _key232;
-              xfer += iprot->readString(_key232);
-              std::set<std::string> & _val233 = (*(this->success))[_key232];
+              std::string _key294;
+              xfer += iprot->readString(_key294);
+              std::set<std::string> & _val295 = (*(this->success))[_key294];
               {
-                _val233.clear();
-                uint32_t _size234;
-                ::apache::thrift::protocol::TType _etype237;
-                xfer += iprot->readSetBegin(_etype237, _size234);
-                uint32_t _i238;
-                for (_i238 = 0; _i238 < _size234; ++_i238)
+                _val295.clear();
+                uint32_t _size296;
+                ::apache::thrift::protocol::TType _etype299;
+                xfer += iprot->readSetBegin(_etype299, _size296);
+                uint32_t _i300;
+                for (_i300 = 0; _i300 < _size296; ++_i300)
                 {
-                  std::string _elem239;
-                  xfer += iprot->readString(_elem239);
-                  _val233.insert(_elem239);
+                  std::string _elem301;
+                  xfer += iprot->readString(_elem301);
+                  _val295.insert(_elem301);
                 }
                 xfer += iprot->readSetEnd();
               }
@@ -4266,8 +4666,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getIteratorSetting_args::~AccumuloProxy_getIteratorSetting_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getIteratorSetting_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4312,9 +4718,9 @@
         break;
       case 4:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast240;
-          xfer += iprot->readI32(ecast240);
-          this->scope = (IteratorScope::type)ecast240;
+          int32_t ecast302;
+          xfer += iprot->readI32(ecast302);
+          this->scope = (IteratorScope::type)ecast302;
           this->__isset.scope = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -4334,6 +4740,7 @@
 
 uint32_t AccumuloProxy_getIteratorSetting_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getIteratorSetting_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4357,8 +4764,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getIteratorSetting_pargs::~AccumuloProxy_getIteratorSetting_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getIteratorSetting_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getIteratorSetting_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4382,8 +4795,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getIteratorSetting_result::~AccumuloProxy_getIteratorSetting_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getIteratorSetting_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4474,8 +4893,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getIteratorSetting_presult::~AccumuloProxy_getIteratorSetting_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getIteratorSetting_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4538,8 +4963,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getMaxRow_args::~AccumuloProxy_getMaxRow_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getMaxRow_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4578,15 +5009,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->auths.clear();
-            uint32_t _size241;
-            ::apache::thrift::protocol::TType _etype244;
-            xfer += iprot->readSetBegin(_etype244, _size241);
-            uint32_t _i245;
-            for (_i245 = 0; _i245 < _size241; ++_i245)
+            uint32_t _size303;
+            ::apache::thrift::protocol::TType _etype306;
+            xfer += iprot->readSetBegin(_etype306, _size303);
+            uint32_t _i307;
+            for (_i307 = 0; _i307 < _size303; ++_i307)
             {
-              std::string _elem246;
-              xfer += iprot->readBinary(_elem246);
-              this->auths.insert(_elem246);
+              std::string _elem308;
+              xfer += iprot->readBinary(_elem308);
+              this->auths.insert(_elem308);
             }
             xfer += iprot->readSetEnd();
           }
@@ -4641,6 +5072,7 @@
 
 uint32_t AccumuloProxy_getMaxRow_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getMaxRow_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4654,10 +5086,10 @@
   xfer += oprot->writeFieldBegin("auths", ::apache::thrift::protocol::T_SET, 3);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->auths.size()));
-    std::set<std::string> ::const_iterator _iter247;
-    for (_iter247 = this->auths.begin(); _iter247 != this->auths.end(); ++_iter247)
+    std::set<std::string> ::const_iterator _iter309;
+    for (_iter309 = this->auths.begin(); _iter309 != this->auths.end(); ++_iter309)
     {
-      xfer += oprot->writeBinary((*_iter247));
+      xfer += oprot->writeBinary((*_iter309));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -4684,8 +5116,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getMaxRow_pargs::~AccumuloProxy_getMaxRow_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getMaxRow_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getMaxRow_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4699,10 +5137,10 @@
   xfer += oprot->writeFieldBegin("auths", ::apache::thrift::protocol::T_SET, 3);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->auths)).size()));
-    std::set<std::string> ::const_iterator _iter248;
-    for (_iter248 = (*(this->auths)).begin(); _iter248 != (*(this->auths)).end(); ++_iter248)
+    std::set<std::string> ::const_iterator _iter310;
+    for (_iter310 = (*(this->auths)).begin(); _iter310 != (*(this->auths)).end(); ++_iter310)
     {
-      xfer += oprot->writeBinary((*_iter248));
+      xfer += oprot->writeBinary((*_iter310));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -4729,8 +5167,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getMaxRow_result::~AccumuloProxy_getMaxRow_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getMaxRow_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4821,8 +5265,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getMaxRow_presult::~AccumuloProxy_getMaxRow_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getMaxRow_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4885,8 +5335,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTableProperties_args::~AccumuloProxy_getTableProperties_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTableProperties_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4935,6 +5391,7 @@
 
 uint32_t AccumuloProxy_getTableProperties_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getTableProperties_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4950,8 +5407,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTableProperties_pargs::~AccumuloProxy_getTableProperties_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTableProperties_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getTableProperties_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -4967,8 +5430,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTableProperties_result::~AccumuloProxy_getTableProperties_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTableProperties_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -4991,17 +5460,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size249;
-            ::apache::thrift::protocol::TType _ktype250;
-            ::apache::thrift::protocol::TType _vtype251;
-            xfer += iprot->readMapBegin(_ktype250, _vtype251, _size249);
-            uint32_t _i253;
-            for (_i253 = 0; _i253 < _size249; ++_i253)
+            uint32_t _size311;
+            ::apache::thrift::protocol::TType _ktype312;
+            ::apache::thrift::protocol::TType _vtype313;
+            xfer += iprot->readMapBegin(_ktype312, _vtype313, _size311);
+            uint32_t _i315;
+            for (_i315 = 0; _i315 < _size311; ++_i315)
             {
-              std::string _key254;
-              xfer += iprot->readString(_key254);
-              std::string& _val255 = this->success[_key254];
-              xfer += iprot->readString(_val255);
+              std::string _key316;
+              xfer += iprot->readString(_key316);
+              std::string& _val317 = this->success[_key316];
+              xfer += iprot->readString(_val317);
             }
             xfer += iprot->readMapEnd();
           }
@@ -5056,11 +5525,11 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, std::string> ::const_iterator _iter256;
-      for (_iter256 = this->success.begin(); _iter256 != this->success.end(); ++_iter256)
+      std::map<std::string, std::string> ::const_iterator _iter318;
+      for (_iter318 = this->success.begin(); _iter318 != this->success.end(); ++_iter318)
       {
-        xfer += oprot->writeString(_iter256->first);
-        xfer += oprot->writeString(_iter256->second);
+        xfer += oprot->writeString(_iter318->first);
+        xfer += oprot->writeString(_iter318->second);
       }
       xfer += oprot->writeMapEnd();
     }
@@ -5083,8 +5552,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTableProperties_presult::~AccumuloProxy_getTableProperties_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTableProperties_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5107,17 +5582,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size257;
-            ::apache::thrift::protocol::TType _ktype258;
-            ::apache::thrift::protocol::TType _vtype259;
-            xfer += iprot->readMapBegin(_ktype258, _vtype259, _size257);
-            uint32_t _i261;
-            for (_i261 = 0; _i261 < _size257; ++_i261)
+            uint32_t _size319;
+            ::apache::thrift::protocol::TType _ktype320;
+            ::apache::thrift::protocol::TType _vtype321;
+            xfer += iprot->readMapBegin(_ktype320, _vtype321, _size319);
+            uint32_t _i323;
+            for (_i323 = 0; _i323 < _size319; ++_i323)
             {
-              std::string _key262;
-              xfer += iprot->readString(_key262);
-              std::string& _val263 = (*(this->success))[_key262];
-              xfer += iprot->readString(_val263);
+              std::string _key324;
+              xfer += iprot->readString(_key324);
+              std::string& _val325 = (*(this->success))[_key324];
+              xfer += iprot->readString(_val325);
             }
             xfer += iprot->readMapEnd();
           }
@@ -5162,8 +5637,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importDirectory_args::~AccumuloProxy_importDirectory_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_importDirectory_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5236,6 +5717,7 @@
 
 uint32_t AccumuloProxy_importDirectory_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_importDirectory_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -5263,8 +5745,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importDirectory_pargs::~AccumuloProxy_importDirectory_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_importDirectory_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_importDirectory_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -5292,8 +5780,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importDirectory_result::~AccumuloProxy_importDirectory_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_importDirectory_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5372,8 +5866,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importDirectory_presult::~AccumuloProxy_importDirectory_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_importDirectory_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5428,8 +5928,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importTable_args::~AccumuloProxy_importTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_importTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5486,6 +5992,7 @@
 
 uint32_t AccumuloProxy_importTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_importTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -5505,8 +6012,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importTable_pargs::~AccumuloProxy_importTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_importTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_importTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -5526,8 +6039,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importTable_result::~AccumuloProxy_importTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_importTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5606,8 +6125,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_importTable_presult::~AccumuloProxy_importTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_importTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5662,8 +6187,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listSplits_args::~AccumuloProxy_listSplits_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_listSplits_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5720,6 +6251,7 @@
 
 uint32_t AccumuloProxy_listSplits_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listSplits_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -5739,8 +6271,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listSplits_pargs::~AccumuloProxy_listSplits_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_listSplits_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listSplits_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -5760,8 +6298,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listSplits_result::~AccumuloProxy_listSplits_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_listSplits_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5784,14 +6328,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->success.clear();
-            uint32_t _size264;
-            ::apache::thrift::protocol::TType _etype267;
-            xfer += iprot->readListBegin(_etype267, _size264);
-            this->success.resize(_size264);
-            uint32_t _i268;
-            for (_i268 = 0; _i268 < _size264; ++_i268)
+            uint32_t _size326;
+            ::apache::thrift::protocol::TType _etype329;
+            xfer += iprot->readListBegin(_etype329, _size326);
+            this->success.resize(_size326);
+            uint32_t _i330;
+            for (_i330 = 0; _i330 < _size326; ++_i330)
             {
-              xfer += iprot->readBinary(this->success[_i268]);
+              xfer += iprot->readBinary(this->success[_i330]);
             }
             xfer += iprot->readListEnd();
           }
@@ -5846,10 +6390,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_LIST, 0);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::vector<std::string> ::const_iterator _iter269;
-      for (_iter269 = this->success.begin(); _iter269 != this->success.end(); ++_iter269)
+      std::vector<std::string> ::const_iterator _iter331;
+      for (_iter331 = this->success.begin(); _iter331 != this->success.end(); ++_iter331)
       {
-        xfer += oprot->writeBinary((*_iter269));
+        xfer += oprot->writeBinary((*_iter331));
       }
       xfer += oprot->writeListEnd();
     }
@@ -5872,8 +6416,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listSplits_presult::~AccumuloProxy_listSplits_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_listSplits_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5896,14 +6446,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             (*(this->success)).clear();
-            uint32_t _size270;
-            ::apache::thrift::protocol::TType _etype273;
-            xfer += iprot->readListBegin(_etype273, _size270);
-            (*(this->success)).resize(_size270);
-            uint32_t _i274;
-            for (_i274 = 0; _i274 < _size270; ++_i274)
+            uint32_t _size332;
+            ::apache::thrift::protocol::TType _etype335;
+            xfer += iprot->readListBegin(_etype335, _size332);
+            (*(this->success)).resize(_size332);
+            uint32_t _i336;
+            for (_i336 = 0; _i336 < _size332; ++_i336)
             {
-              xfer += iprot->readBinary((*(this->success))[_i274]);
+              xfer += iprot->readBinary((*(this->success))[_i336]);
             }
             xfer += iprot->readListEnd();
           }
@@ -5948,8 +6498,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listTables_args::~AccumuloProxy_listTables_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_listTables_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -5990,6 +6546,7 @@
 
 uint32_t AccumuloProxy_listTables_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listTables_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6001,8 +6558,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listTables_pargs::~AccumuloProxy_listTables_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_listTables_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listTables_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6014,8 +6577,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listTables_result::~AccumuloProxy_listTables_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_listTables_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6038,15 +6607,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->success.clear();
-            uint32_t _size275;
-            ::apache::thrift::protocol::TType _etype278;
-            xfer += iprot->readSetBegin(_etype278, _size275);
-            uint32_t _i279;
-            for (_i279 = 0; _i279 < _size275; ++_i279)
+            uint32_t _size337;
+            ::apache::thrift::protocol::TType _etype340;
+            xfer += iprot->readSetBegin(_etype340, _size337);
+            uint32_t _i341;
+            for (_i341 = 0; _i341 < _size337; ++_i341)
             {
-              std::string _elem280;
-              xfer += iprot->readString(_elem280);
-              this->success.insert(_elem280);
+              std::string _elem342;
+              xfer += iprot->readString(_elem342);
+              this->success.insert(_elem342);
             }
             xfer += iprot->readSetEnd();
           }
@@ -6077,10 +6646,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_SET, 0);
     {
       xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::set<std::string> ::const_iterator _iter281;
-      for (_iter281 = this->success.begin(); _iter281 != this->success.end(); ++_iter281)
+      std::set<std::string> ::const_iterator _iter343;
+      for (_iter343 = this->success.begin(); _iter343 != this->success.end(); ++_iter343)
       {
-        xfer += oprot->writeString((*_iter281));
+        xfer += oprot->writeString((*_iter343));
       }
       xfer += oprot->writeSetEnd();
     }
@@ -6091,8 +6660,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listTables_presult::~AccumuloProxy_listTables_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_listTables_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6115,15 +6690,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             (*(this->success)).clear();
-            uint32_t _size282;
-            ::apache::thrift::protocol::TType _etype285;
-            xfer += iprot->readSetBegin(_etype285, _size282);
-            uint32_t _i286;
-            for (_i286 = 0; _i286 < _size282; ++_i286)
+            uint32_t _size344;
+            ::apache::thrift::protocol::TType _etype347;
+            xfer += iprot->readSetBegin(_etype347, _size344);
+            uint32_t _i348;
+            for (_i348 = 0; _i348 < _size344; ++_i348)
             {
-              std::string _elem287;
-              xfer += iprot->readString(_elem287);
-              (*(this->success)).insert(_elem287);
+              std::string _elem349;
+              xfer += iprot->readString(_elem349);
+              (*(this->success)).insert(_elem349);
             }
             xfer += iprot->readSetEnd();
           }
@@ -6144,8 +6719,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listIterators_args::~AccumuloProxy_listIterators_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_listIterators_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6194,6 +6775,7 @@
 
 uint32_t AccumuloProxy_listIterators_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listIterators_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6209,8 +6791,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listIterators_pargs::~AccumuloProxy_listIterators_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_listIterators_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listIterators_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6226,8 +6814,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listIterators_result::~AccumuloProxy_listIterators_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_listIterators_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6250,29 +6844,29 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size288;
-            ::apache::thrift::protocol::TType _ktype289;
-            ::apache::thrift::protocol::TType _vtype290;
-            xfer += iprot->readMapBegin(_ktype289, _vtype290, _size288);
-            uint32_t _i292;
-            for (_i292 = 0; _i292 < _size288; ++_i292)
+            uint32_t _size350;
+            ::apache::thrift::protocol::TType _ktype351;
+            ::apache::thrift::protocol::TType _vtype352;
+            xfer += iprot->readMapBegin(_ktype351, _vtype352, _size350);
+            uint32_t _i354;
+            for (_i354 = 0; _i354 < _size350; ++_i354)
             {
-              std::string _key293;
-              xfer += iprot->readString(_key293);
-              std::set<IteratorScope::type> & _val294 = this->success[_key293];
+              std::string _key355;
+              xfer += iprot->readString(_key355);
+              std::set<IteratorScope::type> & _val356 = this->success[_key355];
               {
-                _val294.clear();
-                uint32_t _size295;
-                ::apache::thrift::protocol::TType _etype298;
-                xfer += iprot->readSetBegin(_etype298, _size295);
-                uint32_t _i299;
-                for (_i299 = 0; _i299 < _size295; ++_i299)
+                _val356.clear();
+                uint32_t _size357;
+                ::apache::thrift::protocol::TType _etype360;
+                xfer += iprot->readSetBegin(_etype360, _size357);
+                uint32_t _i361;
+                for (_i361 = 0; _i361 < _size357; ++_i361)
                 {
-                  IteratorScope::type _elem300;
-                  int32_t ecast301;
-                  xfer += iprot->readI32(ecast301);
-                  _elem300 = (IteratorScope::type)ecast301;
-                  _val294.insert(_elem300);
+                  IteratorScope::type _elem362;
+                  int32_t ecast363;
+                  xfer += iprot->readI32(ecast363);
+                  _elem362 = (IteratorScope::type)ecast363;
+                  _val356.insert(_elem362);
                 }
                 xfer += iprot->readSetEnd();
               }
@@ -6330,16 +6924,16 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_SET, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, std::set<IteratorScope::type> > ::const_iterator _iter302;
-      for (_iter302 = this->success.begin(); _iter302 != this->success.end(); ++_iter302)
+      std::map<std::string, std::set<IteratorScope::type> > ::const_iterator _iter364;
+      for (_iter364 = this->success.begin(); _iter364 != this->success.end(); ++_iter364)
       {
-        xfer += oprot->writeString(_iter302->first);
+        xfer += oprot->writeString(_iter364->first);
         {
-          xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(_iter302->second.size()));
-          std::set<IteratorScope::type> ::const_iterator _iter303;
-          for (_iter303 = _iter302->second.begin(); _iter303 != _iter302->second.end(); ++_iter303)
+          xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(_iter364->second.size()));
+          std::set<IteratorScope::type> ::const_iterator _iter365;
+          for (_iter365 = _iter364->second.begin(); _iter365 != _iter364->second.end(); ++_iter365)
           {
-            xfer += oprot->writeI32((int32_t)(*_iter303));
+            xfer += oprot->writeI32((int32_t)(*_iter365));
           }
           xfer += oprot->writeSetEnd();
         }
@@ -6365,8 +6959,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listIterators_presult::~AccumuloProxy_listIterators_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_listIterators_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6389,29 +6989,29 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size304;
-            ::apache::thrift::protocol::TType _ktype305;
-            ::apache::thrift::protocol::TType _vtype306;
-            xfer += iprot->readMapBegin(_ktype305, _vtype306, _size304);
-            uint32_t _i308;
-            for (_i308 = 0; _i308 < _size304; ++_i308)
+            uint32_t _size366;
+            ::apache::thrift::protocol::TType _ktype367;
+            ::apache::thrift::protocol::TType _vtype368;
+            xfer += iprot->readMapBegin(_ktype367, _vtype368, _size366);
+            uint32_t _i370;
+            for (_i370 = 0; _i370 < _size366; ++_i370)
             {
-              std::string _key309;
-              xfer += iprot->readString(_key309);
-              std::set<IteratorScope::type> & _val310 = (*(this->success))[_key309];
+              std::string _key371;
+              xfer += iprot->readString(_key371);
+              std::set<IteratorScope::type> & _val372 = (*(this->success))[_key371];
               {
-                _val310.clear();
-                uint32_t _size311;
-                ::apache::thrift::protocol::TType _etype314;
-                xfer += iprot->readSetBegin(_etype314, _size311);
-                uint32_t _i315;
-                for (_i315 = 0; _i315 < _size311; ++_i315)
+                _val372.clear();
+                uint32_t _size373;
+                ::apache::thrift::protocol::TType _etype376;
+                xfer += iprot->readSetBegin(_etype376, _size373);
+                uint32_t _i377;
+                for (_i377 = 0; _i377 < _size373; ++_i377)
                 {
-                  IteratorScope::type _elem316;
-                  int32_t ecast317;
-                  xfer += iprot->readI32(ecast317);
-                  _elem316 = (IteratorScope::type)ecast317;
-                  _val310.insert(_elem316);
+                  IteratorScope::type _elem378;
+                  int32_t ecast379;
+                  xfer += iprot->readI32(ecast379);
+                  _elem378 = (IteratorScope::type)ecast379;
+                  _val372.insert(_elem378);
                 }
                 xfer += iprot->readSetEnd();
               }
@@ -6459,8 +7059,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listConstraints_args::~AccumuloProxy_listConstraints_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_listConstraints_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6509,6 +7115,7 @@
 
 uint32_t AccumuloProxy_listConstraints_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listConstraints_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6524,8 +7131,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listConstraints_pargs::~AccumuloProxy_listConstraints_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_listConstraints_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listConstraints_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6541,8 +7154,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listConstraints_result::~AccumuloProxy_listConstraints_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_listConstraints_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6565,17 +7184,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size318;
-            ::apache::thrift::protocol::TType _ktype319;
-            ::apache::thrift::protocol::TType _vtype320;
-            xfer += iprot->readMapBegin(_ktype319, _vtype320, _size318);
-            uint32_t _i322;
-            for (_i322 = 0; _i322 < _size318; ++_i322)
+            uint32_t _size380;
+            ::apache::thrift::protocol::TType _ktype381;
+            ::apache::thrift::protocol::TType _vtype382;
+            xfer += iprot->readMapBegin(_ktype381, _vtype382, _size380);
+            uint32_t _i384;
+            for (_i384 = 0; _i384 < _size380; ++_i384)
             {
-              std::string _key323;
-              xfer += iprot->readString(_key323);
-              int32_t& _val324 = this->success[_key323];
-              xfer += iprot->readI32(_val324);
+              std::string _key385;
+              xfer += iprot->readString(_key385);
+              int32_t& _val386 = this->success[_key385];
+              xfer += iprot->readI32(_val386);
             }
             xfer += iprot->readMapEnd();
           }
@@ -6630,11 +7249,11 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, int32_t> ::const_iterator _iter325;
-      for (_iter325 = this->success.begin(); _iter325 != this->success.end(); ++_iter325)
+      std::map<std::string, int32_t> ::const_iterator _iter387;
+      for (_iter387 = this->success.begin(); _iter387 != this->success.end(); ++_iter387)
       {
-        xfer += oprot->writeString(_iter325->first);
-        xfer += oprot->writeI32(_iter325->second);
+        xfer += oprot->writeString(_iter387->first);
+        xfer += oprot->writeI32(_iter387->second);
       }
       xfer += oprot->writeMapEnd();
     }
@@ -6657,8 +7276,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listConstraints_presult::~AccumuloProxy_listConstraints_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_listConstraints_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6681,17 +7306,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size326;
-            ::apache::thrift::protocol::TType _ktype327;
-            ::apache::thrift::protocol::TType _vtype328;
-            xfer += iprot->readMapBegin(_ktype327, _vtype328, _size326);
-            uint32_t _i330;
-            for (_i330 = 0; _i330 < _size326; ++_i330)
+            uint32_t _size388;
+            ::apache::thrift::protocol::TType _ktype389;
+            ::apache::thrift::protocol::TType _vtype390;
+            xfer += iprot->readMapBegin(_ktype389, _vtype390, _size388);
+            uint32_t _i392;
+            for (_i392 = 0; _i392 < _size388; ++_i392)
             {
-              std::string _key331;
-              xfer += iprot->readString(_key331);
-              int32_t& _val332 = (*(this->success))[_key331];
-              xfer += iprot->readI32(_val332);
+              std::string _key393;
+              xfer += iprot->readString(_key393);
+              int32_t& _val394 = (*(this->success))[_key393];
+              xfer += iprot->readI32(_val394);
             }
             xfer += iprot->readMapEnd();
           }
@@ -6736,8 +7361,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_mergeTablets_args::~AccumuloProxy_mergeTablets_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_mergeTablets_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6802,6 +7433,7 @@
 
 uint32_t AccumuloProxy_mergeTablets_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_mergeTablets_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6825,8 +7457,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_mergeTablets_pargs::~AccumuloProxy_mergeTablets_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_mergeTablets_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_mergeTablets_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -6850,8 +7488,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_mergeTablets_result::~AccumuloProxy_mergeTablets_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_mergeTablets_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6930,8 +7574,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_mergeTablets_presult::~AccumuloProxy_mergeTablets_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_mergeTablets_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -6986,8 +7636,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_offlineTable_args::~AccumuloProxy_offlineTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_offlineTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7044,6 +7700,7 @@
 
 uint32_t AccumuloProxy_offlineTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_offlineTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7063,8 +7720,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_offlineTable_pargs::~AccumuloProxy_offlineTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_offlineTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_offlineTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7084,8 +7747,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_offlineTable_result::~AccumuloProxy_offlineTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_offlineTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7164,8 +7833,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_offlineTable_presult::~AccumuloProxy_offlineTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_offlineTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7220,8 +7895,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_onlineTable_args::~AccumuloProxy_onlineTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_onlineTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7278,6 +7959,7 @@
 
 uint32_t AccumuloProxy_onlineTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_onlineTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7297,8 +7979,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_onlineTable_pargs::~AccumuloProxy_onlineTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_onlineTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_onlineTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7318,8 +8006,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_onlineTable_result::~AccumuloProxy_onlineTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_onlineTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7398,8 +8092,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_onlineTable_presult::~AccumuloProxy_onlineTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_onlineTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7454,8 +8154,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeConstraint_args::~AccumuloProxy_removeConstraint_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeConstraint_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7512,6 +8218,7 @@
 
 uint32_t AccumuloProxy_removeConstraint_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeConstraint_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7531,8 +8238,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeConstraint_pargs::~AccumuloProxy_removeConstraint_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeConstraint_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeConstraint_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7552,8 +8265,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeConstraint_result::~AccumuloProxy_removeConstraint_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeConstraint_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7632,8 +8351,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeConstraint_presult::~AccumuloProxy_removeConstraint_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeConstraint_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7688,8 +8413,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeIterator_args::~AccumuloProxy_removeIterator_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeIterator_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7736,17 +8467,17 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->scopes.clear();
-            uint32_t _size333;
-            ::apache::thrift::protocol::TType _etype336;
-            xfer += iprot->readSetBegin(_etype336, _size333);
-            uint32_t _i337;
-            for (_i337 = 0; _i337 < _size333; ++_i337)
+            uint32_t _size395;
+            ::apache::thrift::protocol::TType _etype398;
+            xfer += iprot->readSetBegin(_etype398, _size395);
+            uint32_t _i399;
+            for (_i399 = 0; _i399 < _size395; ++_i399)
             {
-              IteratorScope::type _elem338;
-              int32_t ecast339;
-              xfer += iprot->readI32(ecast339);
-              _elem338 = (IteratorScope::type)ecast339;
-              this->scopes.insert(_elem338);
+              IteratorScope::type _elem400;
+              int32_t ecast401;
+              xfer += iprot->readI32(ecast401);
+              _elem400 = (IteratorScope::type)ecast401;
+              this->scopes.insert(_elem400);
             }
             xfer += iprot->readSetEnd();
           }
@@ -7769,6 +8500,7 @@
 
 uint32_t AccumuloProxy_removeIterator_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeIterator_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7786,10 +8518,10 @@
   xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->scopes.size()));
-    std::set<IteratorScope::type> ::const_iterator _iter340;
-    for (_iter340 = this->scopes.begin(); _iter340 != this->scopes.end(); ++_iter340)
+    std::set<IteratorScope::type> ::const_iterator _iter402;
+    for (_iter402 = this->scopes.begin(); _iter402 != this->scopes.end(); ++_iter402)
     {
-      xfer += oprot->writeI32((int32_t)(*_iter340));
+      xfer += oprot->writeI32((int32_t)(*_iter402));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -7800,8 +8532,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeIterator_pargs::~AccumuloProxy_removeIterator_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeIterator_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeIterator_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -7819,10 +8557,10 @@
   xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>((*(this->scopes)).size()));
-    std::set<IteratorScope::type> ::const_iterator _iter341;
-    for (_iter341 = (*(this->scopes)).begin(); _iter341 != (*(this->scopes)).end(); ++_iter341)
+    std::set<IteratorScope::type> ::const_iterator _iter403;
+    for (_iter403 = (*(this->scopes)).begin(); _iter403 != (*(this->scopes)).end(); ++_iter403)
     {
-      xfer += oprot->writeI32((int32_t)(*_iter341));
+      xfer += oprot->writeI32((int32_t)(*_iter403));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -7833,8 +8571,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeIterator_result::~AccumuloProxy_removeIterator_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeIterator_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7913,8 +8657,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeIterator_presult::~AccumuloProxy_removeIterator_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeIterator_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -7969,8 +8719,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeTableProperty_args::~AccumuloProxy_removeTableProperty_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeTableProperty_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8027,6 +8783,7 @@
 
 uint32_t AccumuloProxy_removeTableProperty_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeTableProperty_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8046,8 +8803,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeTableProperty_pargs::~AccumuloProxy_removeTableProperty_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeTableProperty_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeTableProperty_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8067,8 +8830,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeTableProperty_result::~AccumuloProxy_removeTableProperty_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeTableProperty_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8147,8 +8916,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeTableProperty_presult::~AccumuloProxy_removeTableProperty_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeTableProperty_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8203,8 +8978,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_renameTable_args::~AccumuloProxy_renameTable_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_renameTable_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8261,6 +9042,7 @@
 
 uint32_t AccumuloProxy_renameTable_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_renameTable_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8280,8 +9062,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_renameTable_pargs::~AccumuloProxy_renameTable_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_renameTable_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_renameTable_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8301,8 +9089,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_renameTable_result::~AccumuloProxy_renameTable_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_renameTable_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8393,8 +9187,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_renameTable_presult::~AccumuloProxy_renameTable_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_renameTable_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8457,8 +9257,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setLocalityGroups_args::~AccumuloProxy_setLocalityGroups_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_setLocalityGroups_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8497,27 +9303,27 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->groups.clear();
-            uint32_t _size342;
-            ::apache::thrift::protocol::TType _ktype343;
-            ::apache::thrift::protocol::TType _vtype344;
-            xfer += iprot->readMapBegin(_ktype343, _vtype344, _size342);
-            uint32_t _i346;
-            for (_i346 = 0; _i346 < _size342; ++_i346)
+            uint32_t _size404;
+            ::apache::thrift::protocol::TType _ktype405;
+            ::apache::thrift::protocol::TType _vtype406;
+            xfer += iprot->readMapBegin(_ktype405, _vtype406, _size404);
+            uint32_t _i408;
+            for (_i408 = 0; _i408 < _size404; ++_i408)
             {
-              std::string _key347;
-              xfer += iprot->readString(_key347);
-              std::set<std::string> & _val348 = this->groups[_key347];
+              std::string _key409;
+              xfer += iprot->readString(_key409);
+              std::set<std::string> & _val410 = this->groups[_key409];
               {
-                _val348.clear();
-                uint32_t _size349;
-                ::apache::thrift::protocol::TType _etype352;
-                xfer += iprot->readSetBegin(_etype352, _size349);
-                uint32_t _i353;
-                for (_i353 = 0; _i353 < _size349; ++_i353)
+                _val410.clear();
+                uint32_t _size411;
+                ::apache::thrift::protocol::TType _etype414;
+                xfer += iprot->readSetBegin(_etype414, _size411);
+                uint32_t _i415;
+                for (_i415 = 0; _i415 < _size411; ++_i415)
                 {
-                  std::string _elem354;
-                  xfer += iprot->readString(_elem354);
-                  _val348.insert(_elem354);
+                  std::string _elem416;
+                  xfer += iprot->readString(_elem416);
+                  _val410.insert(_elem416);
                 }
                 xfer += iprot->readSetEnd();
               }
@@ -8543,6 +9349,7 @@
 
 uint32_t AccumuloProxy_setLocalityGroups_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_setLocalityGroups_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8556,16 +9363,16 @@
   xfer += oprot->writeFieldBegin("groups", ::apache::thrift::protocol::T_MAP, 3);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_SET, static_cast<uint32_t>(this->groups.size()));
-    std::map<std::string, std::set<std::string> > ::const_iterator _iter355;
-    for (_iter355 = this->groups.begin(); _iter355 != this->groups.end(); ++_iter355)
+    std::map<std::string, std::set<std::string> > ::const_iterator _iter417;
+    for (_iter417 = this->groups.begin(); _iter417 != this->groups.end(); ++_iter417)
     {
-      xfer += oprot->writeString(_iter355->first);
+      xfer += oprot->writeString(_iter417->first);
       {
-        xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(_iter355->second.size()));
-        std::set<std::string> ::const_iterator _iter356;
-        for (_iter356 = _iter355->second.begin(); _iter356 != _iter355->second.end(); ++_iter356)
+        xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(_iter417->second.size()));
+        std::set<std::string> ::const_iterator _iter418;
+        for (_iter418 = _iter417->second.begin(); _iter418 != _iter417->second.end(); ++_iter418)
         {
-          xfer += oprot->writeString((*_iter356));
+          xfer += oprot->writeString((*_iter418));
         }
         xfer += oprot->writeSetEnd();
       }
@@ -8579,8 +9386,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setLocalityGroups_pargs::~AccumuloProxy_setLocalityGroups_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_setLocalityGroups_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_setLocalityGroups_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8594,16 +9407,16 @@
   xfer += oprot->writeFieldBegin("groups", ::apache::thrift::protocol::T_MAP, 3);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_SET, static_cast<uint32_t>((*(this->groups)).size()));
-    std::map<std::string, std::set<std::string> > ::const_iterator _iter357;
-    for (_iter357 = (*(this->groups)).begin(); _iter357 != (*(this->groups)).end(); ++_iter357)
+    std::map<std::string, std::set<std::string> > ::const_iterator _iter419;
+    for (_iter419 = (*(this->groups)).begin(); _iter419 != (*(this->groups)).end(); ++_iter419)
     {
-      xfer += oprot->writeString(_iter357->first);
+      xfer += oprot->writeString(_iter419->first);
       {
-        xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(_iter357->second.size()));
-        std::set<std::string> ::const_iterator _iter358;
-        for (_iter358 = _iter357->second.begin(); _iter358 != _iter357->second.end(); ++_iter358)
+        xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(_iter419->second.size()));
+        std::set<std::string> ::const_iterator _iter420;
+        for (_iter420 = _iter419->second.begin(); _iter420 != _iter419->second.end(); ++_iter420)
         {
-          xfer += oprot->writeString((*_iter358));
+          xfer += oprot->writeString((*_iter420));
         }
         xfer += oprot->writeSetEnd();
       }
@@ -8617,8 +9430,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setLocalityGroups_result::~AccumuloProxy_setLocalityGroups_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_setLocalityGroups_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8697,8 +9516,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setLocalityGroups_presult::~AccumuloProxy_setLocalityGroups_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_setLocalityGroups_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8753,8 +9578,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setTableProperty_args::~AccumuloProxy_setTableProperty_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_setTableProperty_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8819,6 +9650,7 @@
 
 uint32_t AccumuloProxy_setTableProperty_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_setTableProperty_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8842,8 +9674,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setTableProperty_pargs::~AccumuloProxy_setTableProperty_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_setTableProperty_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_setTableProperty_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -8867,8 +9705,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setTableProperty_result::~AccumuloProxy_setTableProperty_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_setTableProperty_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -8947,8 +9791,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setTableProperty_presult::~AccumuloProxy_setTableProperty_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_setTableProperty_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9003,8 +9853,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_splitRangeByTablets_args::~AccumuloProxy_splitRangeByTablets_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_splitRangeByTablets_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9069,6 +9925,7 @@
 
 uint32_t AccumuloProxy_splitRangeByTablets_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_splitRangeByTablets_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9092,8 +9949,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_splitRangeByTablets_pargs::~AccumuloProxy_splitRangeByTablets_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_splitRangeByTablets_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_splitRangeByTablets_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9117,8 +9980,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_splitRangeByTablets_result::~AccumuloProxy_splitRangeByTablets_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_splitRangeByTablets_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9141,15 +10010,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->success.clear();
-            uint32_t _size359;
-            ::apache::thrift::protocol::TType _etype362;
-            xfer += iprot->readSetBegin(_etype362, _size359);
-            uint32_t _i363;
-            for (_i363 = 0; _i363 < _size359; ++_i363)
+            uint32_t _size421;
+            ::apache::thrift::protocol::TType _etype424;
+            xfer += iprot->readSetBegin(_etype424, _size421);
+            uint32_t _i425;
+            for (_i425 = 0; _i425 < _size421; ++_i425)
             {
-              Range _elem364;
-              xfer += _elem364.read(iprot);
-              this->success.insert(_elem364);
+              Range _elem426;
+              xfer += _elem426.read(iprot);
+              this->success.insert(_elem426);
             }
             xfer += iprot->readSetEnd();
           }
@@ -9204,10 +10073,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_SET, 0);
     {
       xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->success.size()));
-      std::set<Range> ::const_iterator _iter365;
-      for (_iter365 = this->success.begin(); _iter365 != this->success.end(); ++_iter365)
+      std::set<Range> ::const_iterator _iter427;
+      for (_iter427 = this->success.begin(); _iter427 != this->success.end(); ++_iter427)
       {
-        xfer += (*_iter365).write(oprot);
+        xfer += (*_iter427).write(oprot);
       }
       xfer += oprot->writeSetEnd();
     }
@@ -9230,8 +10099,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_splitRangeByTablets_presult::~AccumuloProxy_splitRangeByTablets_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_splitRangeByTablets_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9254,15 +10129,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             (*(this->success)).clear();
-            uint32_t _size366;
-            ::apache::thrift::protocol::TType _etype369;
-            xfer += iprot->readSetBegin(_etype369, _size366);
-            uint32_t _i370;
-            for (_i370 = 0; _i370 < _size366; ++_i370)
+            uint32_t _size428;
+            ::apache::thrift::protocol::TType _etype431;
+            xfer += iprot->readSetBegin(_etype431, _size428);
+            uint32_t _i432;
+            for (_i432 = 0; _i432 < _size428; ++_i432)
             {
-              Range _elem371;
-              xfer += _elem371.read(iprot);
-              (*(this->success)).insert(_elem371);
+              Range _elem433;
+              xfer += _elem433.read(iprot);
+              (*(this->success)).insert(_elem433);
             }
             xfer += iprot->readSetEnd();
           }
@@ -9307,8 +10182,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableExists_args::~AccumuloProxy_tableExists_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableExists_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9357,6 +10238,7 @@
 
 uint32_t AccumuloProxy_tableExists_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_tableExists_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9372,8 +10254,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableExists_pargs::~AccumuloProxy_tableExists_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableExists_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_tableExists_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9389,8 +10277,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableExists_result::~AccumuloProxy_tableExists_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableExists_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9445,8 +10339,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableExists_presult::~AccumuloProxy_tableExists_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableExists_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9485,8 +10385,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableIdMap_args::~AccumuloProxy_tableIdMap_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableIdMap_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9527,6 +10433,7 @@
 
 uint32_t AccumuloProxy_tableIdMap_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_tableIdMap_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9538,8 +10445,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableIdMap_pargs::~AccumuloProxy_tableIdMap_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableIdMap_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_tableIdMap_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9551,8 +10464,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableIdMap_result::~AccumuloProxy_tableIdMap_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableIdMap_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9575,17 +10494,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size372;
-            ::apache::thrift::protocol::TType _ktype373;
-            ::apache::thrift::protocol::TType _vtype374;
-            xfer += iprot->readMapBegin(_ktype373, _vtype374, _size372);
-            uint32_t _i376;
-            for (_i376 = 0; _i376 < _size372; ++_i376)
+            uint32_t _size434;
+            ::apache::thrift::protocol::TType _ktype435;
+            ::apache::thrift::protocol::TType _vtype436;
+            xfer += iprot->readMapBegin(_ktype435, _vtype436, _size434);
+            uint32_t _i438;
+            for (_i438 = 0; _i438 < _size434; ++_i438)
             {
-              std::string _key377;
-              xfer += iprot->readString(_key377);
-              std::string& _val378 = this->success[_key377];
-              xfer += iprot->readString(_val378);
+              std::string _key439;
+              xfer += iprot->readString(_key439);
+              std::string& _val440 = this->success[_key439];
+              xfer += iprot->readString(_val440);
             }
             xfer += iprot->readMapEnd();
           }
@@ -9616,11 +10535,11 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, std::string> ::const_iterator _iter379;
-      for (_iter379 = this->success.begin(); _iter379 != this->success.end(); ++_iter379)
+      std::map<std::string, std::string> ::const_iterator _iter441;
+      for (_iter441 = this->success.begin(); _iter441 != this->success.end(); ++_iter441)
       {
-        xfer += oprot->writeString(_iter379->first);
-        xfer += oprot->writeString(_iter379->second);
+        xfer += oprot->writeString(_iter441->first);
+        xfer += oprot->writeString(_iter441->second);
       }
       xfer += oprot->writeMapEnd();
     }
@@ -9631,8 +10550,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_tableIdMap_presult::~AccumuloProxy_tableIdMap_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_tableIdMap_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9655,17 +10580,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size380;
-            ::apache::thrift::protocol::TType _ktype381;
-            ::apache::thrift::protocol::TType _vtype382;
-            xfer += iprot->readMapBegin(_ktype381, _vtype382, _size380);
-            uint32_t _i384;
-            for (_i384 = 0; _i384 < _size380; ++_i384)
+            uint32_t _size442;
+            ::apache::thrift::protocol::TType _ktype443;
+            ::apache::thrift::protocol::TType _vtype444;
+            xfer += iprot->readMapBegin(_ktype443, _vtype444, _size442);
+            uint32_t _i446;
+            for (_i446 = 0; _i446 < _size442; ++_i446)
             {
-              std::string _key385;
-              xfer += iprot->readString(_key385);
-              std::string& _val386 = (*(this->success))[_key385];
-              xfer += iprot->readString(_val386);
+              std::string _key447;
+              xfer += iprot->readString(_key447);
+              std::string& _val448 = (*(this->success))[_key447];
+              xfer += iprot->readString(_val448);
             }
             xfer += iprot->readMapEnd();
           }
@@ -9686,8 +10611,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testTableClassLoad_args::~AccumuloProxy_testTableClassLoad_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_testTableClassLoad_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9752,6 +10683,7 @@
 
 uint32_t AccumuloProxy_testTableClassLoad_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_testTableClassLoad_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9775,8 +10707,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testTableClassLoad_pargs::~AccumuloProxy_testTableClassLoad_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_testTableClassLoad_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_testTableClassLoad_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -9800,8 +10738,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testTableClassLoad_result::~AccumuloProxy_testTableClassLoad_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_testTableClassLoad_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9892,8 +10836,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testTableClassLoad_presult::~AccumuloProxy_testTableClassLoad_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_testTableClassLoad_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -9956,8 +10906,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_pingTabletServer_args::~AccumuloProxy_pingTabletServer_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_pingTabletServer_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10006,6 +10962,7 @@
 
 uint32_t AccumuloProxy_pingTabletServer_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_pingTabletServer_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10021,8 +10978,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_pingTabletServer_pargs::~AccumuloProxy_pingTabletServer_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_pingTabletServer_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_pingTabletServer_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10038,8 +11001,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_pingTabletServer_result::~AccumuloProxy_pingTabletServer_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_pingTabletServer_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10106,8 +11075,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_pingTabletServer_presult::~AccumuloProxy_pingTabletServer_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_pingTabletServer_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10154,8 +11129,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveScans_args::~AccumuloProxy_getActiveScans_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveScans_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10204,6 +11185,7 @@
 
 uint32_t AccumuloProxy_getActiveScans_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getActiveScans_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10219,8 +11201,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveScans_pargs::~AccumuloProxy_getActiveScans_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveScans_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getActiveScans_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10236,8 +11224,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveScans_result::~AccumuloProxy_getActiveScans_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveScans_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10260,14 +11254,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->success.clear();
-            uint32_t _size387;
-            ::apache::thrift::protocol::TType _etype390;
-            xfer += iprot->readListBegin(_etype390, _size387);
-            this->success.resize(_size387);
-            uint32_t _i391;
-            for (_i391 = 0; _i391 < _size387; ++_i391)
+            uint32_t _size449;
+            ::apache::thrift::protocol::TType _etype452;
+            xfer += iprot->readListBegin(_etype452, _size449);
+            this->success.resize(_size449);
+            uint32_t _i453;
+            for (_i453 = 0; _i453 < _size449; ++_i453)
             {
-              xfer += this->success[_i391].read(iprot);
+              xfer += this->success[_i453].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -10314,10 +11308,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_LIST, 0);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->success.size()));
-      std::vector<ActiveScan> ::const_iterator _iter392;
-      for (_iter392 = this->success.begin(); _iter392 != this->success.end(); ++_iter392)
+      std::vector<ActiveScan> ::const_iterator _iter454;
+      for (_iter454 = this->success.begin(); _iter454 != this->success.end(); ++_iter454)
       {
-        xfer += (*_iter392).write(oprot);
+        xfer += (*_iter454).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -10336,8 +11330,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveScans_presult::~AccumuloProxy_getActiveScans_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveScans_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10360,14 +11360,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             (*(this->success)).clear();
-            uint32_t _size393;
-            ::apache::thrift::protocol::TType _etype396;
-            xfer += iprot->readListBegin(_etype396, _size393);
-            (*(this->success)).resize(_size393);
-            uint32_t _i397;
-            for (_i397 = 0; _i397 < _size393; ++_i397)
+            uint32_t _size455;
+            ::apache::thrift::protocol::TType _etype458;
+            xfer += iprot->readListBegin(_etype458, _size455);
+            (*(this->success)).resize(_size455);
+            uint32_t _i459;
+            for (_i459 = 0; _i459 < _size455; ++_i459)
             {
-              xfer += (*(this->success))[_i397].read(iprot);
+              xfer += (*(this->success))[_i459].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -10404,8 +11404,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveCompactions_args::~AccumuloProxy_getActiveCompactions_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveCompactions_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10454,6 +11460,7 @@
 
 uint32_t AccumuloProxy_getActiveCompactions_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getActiveCompactions_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10469,8 +11476,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveCompactions_pargs::~AccumuloProxy_getActiveCompactions_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveCompactions_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getActiveCompactions_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10486,8 +11499,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveCompactions_result::~AccumuloProxy_getActiveCompactions_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveCompactions_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10510,14 +11529,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->success.clear();
-            uint32_t _size398;
-            ::apache::thrift::protocol::TType _etype401;
-            xfer += iprot->readListBegin(_etype401, _size398);
-            this->success.resize(_size398);
-            uint32_t _i402;
-            for (_i402 = 0; _i402 < _size398; ++_i402)
+            uint32_t _size460;
+            ::apache::thrift::protocol::TType _etype463;
+            xfer += iprot->readListBegin(_etype463, _size460);
+            this->success.resize(_size460);
+            uint32_t _i464;
+            for (_i464 = 0; _i464 < _size460; ++_i464)
             {
-              xfer += this->success[_i402].read(iprot);
+              xfer += this->success[_i464].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -10564,10 +11583,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_LIST, 0);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->success.size()));
-      std::vector<ActiveCompaction> ::const_iterator _iter403;
-      for (_iter403 = this->success.begin(); _iter403 != this->success.end(); ++_iter403)
+      std::vector<ActiveCompaction> ::const_iterator _iter465;
+      for (_iter465 = this->success.begin(); _iter465 != this->success.end(); ++_iter465)
       {
-        xfer += (*_iter403).write(oprot);
+        xfer += (*_iter465).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -10586,8 +11605,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getActiveCompactions_presult::~AccumuloProxy_getActiveCompactions_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getActiveCompactions_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10610,14 +11635,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             (*(this->success)).clear();
-            uint32_t _size404;
-            ::apache::thrift::protocol::TType _etype407;
-            xfer += iprot->readListBegin(_etype407, _size404);
-            (*(this->success)).resize(_size404);
-            uint32_t _i408;
-            for (_i408 = 0; _i408 < _size404; ++_i408)
+            uint32_t _size466;
+            ::apache::thrift::protocol::TType _etype469;
+            xfer += iprot->readListBegin(_etype469, _size466);
+            (*(this->success)).resize(_size466);
+            uint32_t _i470;
+            for (_i470 = 0; _i470 < _size466; ++_i470)
             {
-              xfer += (*(this->success))[_i408].read(iprot);
+              xfer += (*(this->success))[_i470].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -10654,8 +11679,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSiteConfiguration_args::~AccumuloProxy_getSiteConfiguration_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSiteConfiguration_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10696,6 +11727,7 @@
 
 uint32_t AccumuloProxy_getSiteConfiguration_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getSiteConfiguration_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10707,8 +11739,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSiteConfiguration_pargs::~AccumuloProxy_getSiteConfiguration_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSiteConfiguration_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getSiteConfiguration_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10720,8 +11758,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSiteConfiguration_result::~AccumuloProxy_getSiteConfiguration_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSiteConfiguration_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10744,17 +11788,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size409;
-            ::apache::thrift::protocol::TType _ktype410;
-            ::apache::thrift::protocol::TType _vtype411;
-            xfer += iprot->readMapBegin(_ktype410, _vtype411, _size409);
-            uint32_t _i413;
-            for (_i413 = 0; _i413 < _size409; ++_i413)
+            uint32_t _size471;
+            ::apache::thrift::protocol::TType _ktype472;
+            ::apache::thrift::protocol::TType _vtype473;
+            xfer += iprot->readMapBegin(_ktype472, _vtype473, _size471);
+            uint32_t _i475;
+            for (_i475 = 0; _i475 < _size471; ++_i475)
             {
-              std::string _key414;
-              xfer += iprot->readString(_key414);
-              std::string& _val415 = this->success[_key414];
-              xfer += iprot->readString(_val415);
+              std::string _key476;
+              xfer += iprot->readString(_key476);
+              std::string& _val477 = this->success[_key476];
+              xfer += iprot->readString(_val477);
             }
             xfer += iprot->readMapEnd();
           }
@@ -10801,11 +11845,11 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, std::string> ::const_iterator _iter416;
-      for (_iter416 = this->success.begin(); _iter416 != this->success.end(); ++_iter416)
+      std::map<std::string, std::string> ::const_iterator _iter478;
+      for (_iter478 = this->success.begin(); _iter478 != this->success.end(); ++_iter478)
       {
-        xfer += oprot->writeString(_iter416->first);
-        xfer += oprot->writeString(_iter416->second);
+        xfer += oprot->writeString(_iter478->first);
+        xfer += oprot->writeString(_iter478->second);
       }
       xfer += oprot->writeMapEnd();
     }
@@ -10824,8 +11868,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSiteConfiguration_presult::~AccumuloProxy_getSiteConfiguration_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSiteConfiguration_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10848,17 +11898,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size417;
-            ::apache::thrift::protocol::TType _ktype418;
-            ::apache::thrift::protocol::TType _vtype419;
-            xfer += iprot->readMapBegin(_ktype418, _vtype419, _size417);
-            uint32_t _i421;
-            for (_i421 = 0; _i421 < _size417; ++_i421)
+            uint32_t _size479;
+            ::apache::thrift::protocol::TType _ktype480;
+            ::apache::thrift::protocol::TType _vtype481;
+            xfer += iprot->readMapBegin(_ktype480, _vtype481, _size479);
+            uint32_t _i483;
+            for (_i483 = 0; _i483 < _size479; ++_i483)
             {
-              std::string _key422;
-              xfer += iprot->readString(_key422);
-              std::string& _val423 = (*(this->success))[_key422];
-              xfer += iprot->readString(_val423);
+              std::string _key484;
+              xfer += iprot->readString(_key484);
+              std::string& _val485 = (*(this->success))[_key484];
+              xfer += iprot->readString(_val485);
             }
             xfer += iprot->readMapEnd();
           }
@@ -10895,8 +11945,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSystemConfiguration_args::~AccumuloProxy_getSystemConfiguration_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSystemConfiguration_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10937,6 +11993,7 @@
 
 uint32_t AccumuloProxy_getSystemConfiguration_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getSystemConfiguration_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10948,8 +12005,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSystemConfiguration_pargs::~AccumuloProxy_getSystemConfiguration_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSystemConfiguration_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getSystemConfiguration_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -10961,8 +12024,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSystemConfiguration_result::~AccumuloProxy_getSystemConfiguration_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSystemConfiguration_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -10985,17 +12054,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size424;
-            ::apache::thrift::protocol::TType _ktype425;
-            ::apache::thrift::protocol::TType _vtype426;
-            xfer += iprot->readMapBegin(_ktype425, _vtype426, _size424);
-            uint32_t _i428;
-            for (_i428 = 0; _i428 < _size424; ++_i428)
+            uint32_t _size486;
+            ::apache::thrift::protocol::TType _ktype487;
+            ::apache::thrift::protocol::TType _vtype488;
+            xfer += iprot->readMapBegin(_ktype487, _vtype488, _size486);
+            uint32_t _i490;
+            for (_i490 = 0; _i490 < _size486; ++_i490)
             {
-              std::string _key429;
-              xfer += iprot->readString(_key429);
-              std::string& _val430 = this->success[_key429];
-              xfer += iprot->readString(_val430);
+              std::string _key491;
+              xfer += iprot->readString(_key491);
+              std::string& _val492 = this->success[_key491];
+              xfer += iprot->readString(_val492);
             }
             xfer += iprot->readMapEnd();
           }
@@ -11042,11 +12111,11 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, std::string> ::const_iterator _iter431;
-      for (_iter431 = this->success.begin(); _iter431 != this->success.end(); ++_iter431)
+      std::map<std::string, std::string> ::const_iterator _iter493;
+      for (_iter493 = this->success.begin(); _iter493 != this->success.end(); ++_iter493)
       {
-        xfer += oprot->writeString(_iter431->first);
-        xfer += oprot->writeString(_iter431->second);
+        xfer += oprot->writeString(_iter493->first);
+        xfer += oprot->writeString(_iter493->second);
       }
       xfer += oprot->writeMapEnd();
     }
@@ -11065,8 +12134,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getSystemConfiguration_presult::~AccumuloProxy_getSystemConfiguration_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getSystemConfiguration_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11089,17 +12164,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size432;
-            ::apache::thrift::protocol::TType _ktype433;
-            ::apache::thrift::protocol::TType _vtype434;
-            xfer += iprot->readMapBegin(_ktype433, _vtype434, _size432);
-            uint32_t _i436;
-            for (_i436 = 0; _i436 < _size432; ++_i436)
+            uint32_t _size494;
+            ::apache::thrift::protocol::TType _ktype495;
+            ::apache::thrift::protocol::TType _vtype496;
+            xfer += iprot->readMapBegin(_ktype495, _vtype496, _size494);
+            uint32_t _i498;
+            for (_i498 = 0; _i498 < _size494; ++_i498)
             {
-              std::string _key437;
-              xfer += iprot->readString(_key437);
-              std::string& _val438 = (*(this->success))[_key437];
-              xfer += iprot->readString(_val438);
+              std::string _key499;
+              xfer += iprot->readString(_key499);
+              std::string& _val500 = (*(this->success))[_key499];
+              xfer += iprot->readString(_val500);
             }
             xfer += iprot->readMapEnd();
           }
@@ -11136,8 +12211,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTabletServers_args::~AccumuloProxy_getTabletServers_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTabletServers_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11178,6 +12259,7 @@
 
 uint32_t AccumuloProxy_getTabletServers_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getTabletServers_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11189,8 +12271,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTabletServers_pargs::~AccumuloProxy_getTabletServers_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTabletServers_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getTabletServers_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11202,8 +12290,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTabletServers_result::~AccumuloProxy_getTabletServers_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTabletServers_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11226,14 +12320,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->success.clear();
-            uint32_t _size439;
-            ::apache::thrift::protocol::TType _etype442;
-            xfer += iprot->readListBegin(_etype442, _size439);
-            this->success.resize(_size439);
-            uint32_t _i443;
-            for (_i443 = 0; _i443 < _size439; ++_i443)
+            uint32_t _size501;
+            ::apache::thrift::protocol::TType _etype504;
+            xfer += iprot->readListBegin(_etype504, _size501);
+            this->success.resize(_size501);
+            uint32_t _i505;
+            for (_i505 = 0; _i505 < _size501; ++_i505)
             {
-              xfer += iprot->readString(this->success[_i443]);
+              xfer += iprot->readString(this->success[_i505]);
             }
             xfer += iprot->readListEnd();
           }
@@ -11264,10 +12358,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_LIST, 0);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::vector<std::string> ::const_iterator _iter444;
-      for (_iter444 = this->success.begin(); _iter444 != this->success.end(); ++_iter444)
+      std::vector<std::string> ::const_iterator _iter506;
+      for (_iter506 = this->success.begin(); _iter506 != this->success.end(); ++_iter506)
       {
-        xfer += oprot->writeString((*_iter444));
+        xfer += oprot->writeString((*_iter506));
       }
       xfer += oprot->writeListEnd();
     }
@@ -11278,8 +12372,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getTabletServers_presult::~AccumuloProxy_getTabletServers_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getTabletServers_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11302,14 +12402,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             (*(this->success)).clear();
-            uint32_t _size445;
-            ::apache::thrift::protocol::TType _etype448;
-            xfer += iprot->readListBegin(_etype448, _size445);
-            (*(this->success)).resize(_size445);
-            uint32_t _i449;
-            for (_i449 = 0; _i449 < _size445; ++_i449)
+            uint32_t _size507;
+            ::apache::thrift::protocol::TType _etype510;
+            xfer += iprot->readListBegin(_etype510, _size507);
+            (*(this->success)).resize(_size507);
+            uint32_t _i511;
+            for (_i511 = 0; _i511 < _size507; ++_i511)
             {
-              xfer += iprot->readString((*(this->success))[_i449]);
+              xfer += iprot->readString((*(this->success))[_i511]);
             }
             xfer += iprot->readListEnd();
           }
@@ -11330,8 +12430,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeProperty_args::~AccumuloProxy_removeProperty_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeProperty_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11380,6 +12486,7 @@
 
 uint32_t AccumuloProxy_removeProperty_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeProperty_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11395,8 +12502,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeProperty_pargs::~AccumuloProxy_removeProperty_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeProperty_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_removeProperty_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11412,8 +12525,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeProperty_result::~AccumuloProxy_removeProperty_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeProperty_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11480,8 +12599,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_removeProperty_presult::~AccumuloProxy_removeProperty_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_removeProperty_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11528,8 +12653,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setProperty_args::~AccumuloProxy_setProperty_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_setProperty_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11586,6 +12717,7 @@
 
 uint32_t AccumuloProxy_setProperty_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_setProperty_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11605,8 +12737,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setProperty_pargs::~AccumuloProxy_setProperty_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_setProperty_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_setProperty_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11626,8 +12764,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setProperty_result::~AccumuloProxy_setProperty_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_setProperty_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11694,8 +12838,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_setProperty_presult::~AccumuloProxy_setProperty_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_setProperty_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11742,8 +12892,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testClassLoad_args::~AccumuloProxy_testClassLoad_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_testClassLoad_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11800,6 +12956,7 @@
 
 uint32_t AccumuloProxy_testClassLoad_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_testClassLoad_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11819,8 +12976,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testClassLoad_pargs::~AccumuloProxy_testClassLoad_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_testClassLoad_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_testClassLoad_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -11840,8 +13003,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testClassLoad_result::~AccumuloProxy_testClassLoad_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_testClassLoad_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11920,8 +13089,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_testClassLoad_presult::~AccumuloProxy_testClassLoad_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_testClassLoad_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -11976,8 +13151,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_authenticateUser_args::~AccumuloProxy_authenticateUser_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_authenticateUser_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12016,17 +13197,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->properties.clear();
-            uint32_t _size450;
-            ::apache::thrift::protocol::TType _ktype451;
-            ::apache::thrift::protocol::TType _vtype452;
-            xfer += iprot->readMapBegin(_ktype451, _vtype452, _size450);
-            uint32_t _i454;
-            for (_i454 = 0; _i454 < _size450; ++_i454)
+            uint32_t _size512;
+            ::apache::thrift::protocol::TType _ktype513;
+            ::apache::thrift::protocol::TType _vtype514;
+            xfer += iprot->readMapBegin(_ktype513, _vtype514, _size512);
+            uint32_t _i516;
+            for (_i516 = 0; _i516 < _size512; ++_i516)
             {
-              std::string _key455;
-              xfer += iprot->readString(_key455);
-              std::string& _val456 = this->properties[_key455];
-              xfer += iprot->readString(_val456);
+              std::string _key517;
+              xfer += iprot->readString(_key517);
+              std::string& _val518 = this->properties[_key517];
+              xfer += iprot->readString(_val518);
             }
             xfer += iprot->readMapEnd();
           }
@@ -12049,6 +13230,7 @@
 
 uint32_t AccumuloProxy_authenticateUser_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_authenticateUser_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12062,11 +13244,11 @@
   xfer += oprot->writeFieldBegin("properties", ::apache::thrift::protocol::T_MAP, 3);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->properties.size()));
-    std::map<std::string, std::string> ::const_iterator _iter457;
-    for (_iter457 = this->properties.begin(); _iter457 != this->properties.end(); ++_iter457)
+    std::map<std::string, std::string> ::const_iterator _iter519;
+    for (_iter519 = this->properties.begin(); _iter519 != this->properties.end(); ++_iter519)
     {
-      xfer += oprot->writeString(_iter457->first);
-      xfer += oprot->writeString(_iter457->second);
+      xfer += oprot->writeString(_iter519->first);
+      xfer += oprot->writeString(_iter519->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -12077,8 +13259,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_authenticateUser_pargs::~AccumuloProxy_authenticateUser_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_authenticateUser_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_authenticateUser_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12092,11 +13280,11 @@
   xfer += oprot->writeFieldBegin("properties", ::apache::thrift::protocol::T_MAP, 3);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->properties)).size()));
-    std::map<std::string, std::string> ::const_iterator _iter458;
-    for (_iter458 = (*(this->properties)).begin(); _iter458 != (*(this->properties)).end(); ++_iter458)
+    std::map<std::string, std::string> ::const_iterator _iter520;
+    for (_iter520 = (*(this->properties)).begin(); _iter520 != (*(this->properties)).end(); ++_iter520)
     {
-      xfer += oprot->writeString(_iter458->first);
-      xfer += oprot->writeString(_iter458->second);
+      xfer += oprot->writeString(_iter520->first);
+      xfer += oprot->writeString(_iter520->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -12107,8 +13295,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_authenticateUser_result::~AccumuloProxy_authenticateUser_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_authenticateUser_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12187,8 +13381,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_authenticateUser_presult::~AccumuloProxy_authenticateUser_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_authenticateUser_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12243,8 +13443,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeUserAuthorizations_args::~AccumuloProxy_changeUserAuthorizations_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeUserAuthorizations_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12283,15 +13489,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->authorizations.clear();
-            uint32_t _size459;
-            ::apache::thrift::protocol::TType _etype462;
-            xfer += iprot->readSetBegin(_etype462, _size459);
-            uint32_t _i463;
-            for (_i463 = 0; _i463 < _size459; ++_i463)
+            uint32_t _size521;
+            ::apache::thrift::protocol::TType _etype524;
+            xfer += iprot->readSetBegin(_etype524, _size521);
+            uint32_t _i525;
+            for (_i525 = 0; _i525 < _size521; ++_i525)
             {
-              std::string _elem464;
-              xfer += iprot->readBinary(_elem464);
-              this->authorizations.insert(_elem464);
+              std::string _elem526;
+              xfer += iprot->readBinary(_elem526);
+              this->authorizations.insert(_elem526);
             }
             xfer += iprot->readSetEnd();
           }
@@ -12314,6 +13520,7 @@
 
 uint32_t AccumuloProxy_changeUserAuthorizations_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_changeUserAuthorizations_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12327,10 +13534,10 @@
   xfer += oprot->writeFieldBegin("authorizations", ::apache::thrift::protocol::T_SET, 3);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->authorizations.size()));
-    std::set<std::string> ::const_iterator _iter465;
-    for (_iter465 = this->authorizations.begin(); _iter465 != this->authorizations.end(); ++_iter465)
+    std::set<std::string> ::const_iterator _iter527;
+    for (_iter527 = this->authorizations.begin(); _iter527 != this->authorizations.end(); ++_iter527)
     {
-      xfer += oprot->writeBinary((*_iter465));
+      xfer += oprot->writeBinary((*_iter527));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -12341,8 +13548,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeUserAuthorizations_pargs::~AccumuloProxy_changeUserAuthorizations_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeUserAuthorizations_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_changeUserAuthorizations_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12356,10 +13569,10 @@
   xfer += oprot->writeFieldBegin("authorizations", ::apache::thrift::protocol::T_SET, 3);
   {
     xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>((*(this->authorizations)).size()));
-    std::set<std::string> ::const_iterator _iter466;
-    for (_iter466 = (*(this->authorizations)).begin(); _iter466 != (*(this->authorizations)).end(); ++_iter466)
+    std::set<std::string> ::const_iterator _iter528;
+    for (_iter528 = (*(this->authorizations)).begin(); _iter528 != (*(this->authorizations)).end(); ++_iter528)
     {
-      xfer += oprot->writeBinary((*_iter466));
+      xfer += oprot->writeBinary((*_iter528));
     }
     xfer += oprot->writeSetEnd();
   }
@@ -12370,8 +13583,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeUserAuthorizations_result::~AccumuloProxy_changeUserAuthorizations_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeUserAuthorizations_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12438,8 +13657,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeUserAuthorizations_presult::~AccumuloProxy_changeUserAuthorizations_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeUserAuthorizations_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12486,8 +13711,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeLocalUserPassword_args::~AccumuloProxy_changeLocalUserPassword_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeLocalUserPassword_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12544,6 +13775,7 @@
 
 uint32_t AccumuloProxy_changeLocalUserPassword_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_changeLocalUserPassword_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12563,8 +13795,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeLocalUserPassword_pargs::~AccumuloProxy_changeLocalUserPassword_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeLocalUserPassword_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_changeLocalUserPassword_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12584,8 +13822,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeLocalUserPassword_result::~AccumuloProxy_changeLocalUserPassword_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeLocalUserPassword_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12652,8 +13896,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_changeLocalUserPassword_presult::~AccumuloProxy_changeLocalUserPassword_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_changeLocalUserPassword_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12700,8 +13950,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createLocalUser_args::~AccumuloProxy_createLocalUser_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_createLocalUser_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12758,6 +14014,7 @@
 
 uint32_t AccumuloProxy_createLocalUser_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createLocalUser_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12777,8 +14034,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createLocalUser_pargs::~AccumuloProxy_createLocalUser_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_createLocalUser_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createLocalUser_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12798,8 +14061,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createLocalUser_result::~AccumuloProxy_createLocalUser_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_createLocalUser_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12866,8 +14135,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createLocalUser_presult::~AccumuloProxy_createLocalUser_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_createLocalUser_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12914,8 +14189,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_dropLocalUser_args::~AccumuloProxy_dropLocalUser_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_dropLocalUser_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -12964,6 +14245,7 @@
 
 uint32_t AccumuloProxy_dropLocalUser_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_dropLocalUser_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12979,8 +14261,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_dropLocalUser_pargs::~AccumuloProxy_dropLocalUser_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_dropLocalUser_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_dropLocalUser_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -12996,8 +14284,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_dropLocalUser_result::~AccumuloProxy_dropLocalUser_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_dropLocalUser_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13064,8 +14358,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_dropLocalUser_presult::~AccumuloProxy_dropLocalUser_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_dropLocalUser_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13112,8 +14412,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getUserAuthorizations_args::~AccumuloProxy_getUserAuthorizations_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getUserAuthorizations_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13162,6 +14468,7 @@
 
 uint32_t AccumuloProxy_getUserAuthorizations_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getUserAuthorizations_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13177,8 +14484,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getUserAuthorizations_pargs::~AccumuloProxy_getUserAuthorizations_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getUserAuthorizations_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getUserAuthorizations_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13194,8 +14507,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getUserAuthorizations_result::~AccumuloProxy_getUserAuthorizations_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getUserAuthorizations_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13218,14 +14537,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->success.clear();
-            uint32_t _size467;
-            ::apache::thrift::protocol::TType _etype470;
-            xfer += iprot->readListBegin(_etype470, _size467);
-            this->success.resize(_size467);
-            uint32_t _i471;
-            for (_i471 = 0; _i471 < _size467; ++_i471)
+            uint32_t _size529;
+            ::apache::thrift::protocol::TType _etype532;
+            xfer += iprot->readListBegin(_etype532, _size529);
+            this->success.resize(_size529);
+            uint32_t _i533;
+            for (_i533 = 0; _i533 < _size529; ++_i533)
             {
-              xfer += iprot->readBinary(this->success[_i471]);
+              xfer += iprot->readBinary(this->success[_i533]);
             }
             xfer += iprot->readListEnd();
           }
@@ -13272,10 +14591,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_LIST, 0);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::vector<std::string> ::const_iterator _iter472;
-      for (_iter472 = this->success.begin(); _iter472 != this->success.end(); ++_iter472)
+      std::vector<std::string> ::const_iterator _iter534;
+      for (_iter534 = this->success.begin(); _iter534 != this->success.end(); ++_iter534)
       {
-        xfer += oprot->writeBinary((*_iter472));
+        xfer += oprot->writeBinary((*_iter534));
       }
       xfer += oprot->writeListEnd();
     }
@@ -13294,8 +14613,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getUserAuthorizations_presult::~AccumuloProxy_getUserAuthorizations_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getUserAuthorizations_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13318,14 +14643,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             (*(this->success)).clear();
-            uint32_t _size473;
-            ::apache::thrift::protocol::TType _etype476;
-            xfer += iprot->readListBegin(_etype476, _size473);
-            (*(this->success)).resize(_size473);
-            uint32_t _i477;
-            for (_i477 = 0; _i477 < _size473; ++_i477)
+            uint32_t _size535;
+            ::apache::thrift::protocol::TType _etype538;
+            xfer += iprot->readListBegin(_etype538, _size535);
+            (*(this->success)).resize(_size535);
+            uint32_t _i539;
+            for (_i539 = 0; _i539 < _size535; ++_i539)
             {
-              xfer += iprot->readBinary((*(this->success))[_i477]);
+              xfer += iprot->readBinary((*(this->success))[_i539]);
             }
             xfer += iprot->readListEnd();
           }
@@ -13362,8 +14687,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantSystemPermission_args::~AccumuloProxy_grantSystemPermission_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantSystemPermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13400,9 +14731,9 @@
         break;
       case 3:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast478;
-          xfer += iprot->readI32(ecast478);
-          this->perm = (SystemPermission::type)ecast478;
+          int32_t ecast540;
+          xfer += iprot->readI32(ecast540);
+          this->perm = (SystemPermission::type)ecast540;
           this->__isset.perm = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -13422,6 +14753,7 @@
 
 uint32_t AccumuloProxy_grantSystemPermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_grantSystemPermission_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13441,8 +14773,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantSystemPermission_pargs::~AccumuloProxy_grantSystemPermission_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantSystemPermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_grantSystemPermission_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13462,8 +14800,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantSystemPermission_result::~AccumuloProxy_grantSystemPermission_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantSystemPermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13530,8 +14874,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantSystemPermission_presult::~AccumuloProxy_grantSystemPermission_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantSystemPermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13578,8 +14928,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantTablePermission_args::~AccumuloProxy_grantTablePermission_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantTablePermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13624,9 +14980,9 @@
         break;
       case 4:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast479;
-          xfer += iprot->readI32(ecast479);
-          this->perm = (TablePermission::type)ecast479;
+          int32_t ecast541;
+          xfer += iprot->readI32(ecast541);
+          this->perm = (TablePermission::type)ecast541;
           this->__isset.perm = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -13646,6 +15002,7 @@
 
 uint32_t AccumuloProxy_grantTablePermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_grantTablePermission_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13669,8 +15026,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantTablePermission_pargs::~AccumuloProxy_grantTablePermission_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantTablePermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_grantTablePermission_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13694,8 +15057,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantTablePermission_result::~AccumuloProxy_grantTablePermission_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantTablePermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13774,8 +15143,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantTablePermission_presult::~AccumuloProxy_grantTablePermission_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_grantTablePermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13830,8 +15205,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasSystemPermission_args::~AccumuloProxy_hasSystemPermission_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasSystemPermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -13868,9 +15249,9 @@
         break;
       case 3:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast480;
-          xfer += iprot->readI32(ecast480);
-          this->perm = (SystemPermission::type)ecast480;
+          int32_t ecast542;
+          xfer += iprot->readI32(ecast542);
+          this->perm = (SystemPermission::type)ecast542;
           this->__isset.perm = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -13890,6 +15271,7 @@
 
 uint32_t AccumuloProxy_hasSystemPermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_hasSystemPermission_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13909,8 +15291,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasSystemPermission_pargs::~AccumuloProxy_hasSystemPermission_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasSystemPermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_hasSystemPermission_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -13930,8 +15318,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasSystemPermission_result::~AccumuloProxy_hasSystemPermission_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasSystemPermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14010,8 +15404,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasSystemPermission_presult::~AccumuloProxy_hasSystemPermission_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasSystemPermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14066,8 +15466,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasTablePermission_args::~AccumuloProxy_hasTablePermission_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasTablePermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14112,9 +15518,9 @@
         break;
       case 4:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast481;
-          xfer += iprot->readI32(ecast481);
-          this->perm = (TablePermission::type)ecast481;
+          int32_t ecast543;
+          xfer += iprot->readI32(ecast543);
+          this->perm = (TablePermission::type)ecast543;
           this->__isset.perm = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -14134,6 +15540,7 @@
 
 uint32_t AccumuloProxy_hasTablePermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_hasTablePermission_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14157,8 +15564,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasTablePermission_pargs::~AccumuloProxy_hasTablePermission_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasTablePermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_hasTablePermission_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14182,8 +15595,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasTablePermission_result::~AccumuloProxy_hasTablePermission_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasTablePermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14274,8 +15693,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasTablePermission_presult::~AccumuloProxy_hasTablePermission_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasTablePermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14338,8 +15763,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listLocalUsers_args::~AccumuloProxy_listLocalUsers_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_listLocalUsers_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14380,6 +15811,7 @@
 
 uint32_t AccumuloProxy_listLocalUsers_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listLocalUsers_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14391,8 +15823,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listLocalUsers_pargs::~AccumuloProxy_listLocalUsers_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_listLocalUsers_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_listLocalUsers_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14404,8 +15842,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listLocalUsers_result::~AccumuloProxy_listLocalUsers_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_listLocalUsers_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14428,15 +15872,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->success.clear();
-            uint32_t _size482;
-            ::apache::thrift::protocol::TType _etype485;
-            xfer += iprot->readSetBegin(_etype485, _size482);
-            uint32_t _i486;
-            for (_i486 = 0; _i486 < _size482; ++_i486)
+            uint32_t _size544;
+            ::apache::thrift::protocol::TType _etype547;
+            xfer += iprot->readSetBegin(_etype547, _size544);
+            uint32_t _i548;
+            for (_i548 = 0; _i548 < _size544; ++_i548)
             {
-              std::string _elem487;
-              xfer += iprot->readString(_elem487);
-              this->success.insert(_elem487);
+              std::string _elem549;
+              xfer += iprot->readString(_elem549);
+              this->success.insert(_elem549);
             }
             xfer += iprot->readSetEnd();
           }
@@ -14491,10 +15935,10 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_SET, 0);
     {
       xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
-      std::set<std::string> ::const_iterator _iter488;
-      for (_iter488 = this->success.begin(); _iter488 != this->success.end(); ++_iter488)
+      std::set<std::string> ::const_iterator _iter550;
+      for (_iter550 = this->success.begin(); _iter550 != this->success.end(); ++_iter550)
       {
-        xfer += oprot->writeString((*_iter488));
+        xfer += oprot->writeString((*_iter550));
       }
       xfer += oprot->writeSetEnd();
     }
@@ -14517,8 +15961,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_listLocalUsers_presult::~AccumuloProxy_listLocalUsers_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_listLocalUsers_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14541,15 +15991,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             (*(this->success)).clear();
-            uint32_t _size489;
-            ::apache::thrift::protocol::TType _etype492;
-            xfer += iprot->readSetBegin(_etype492, _size489);
-            uint32_t _i493;
-            for (_i493 = 0; _i493 < _size489; ++_i493)
+            uint32_t _size551;
+            ::apache::thrift::protocol::TType _etype554;
+            xfer += iprot->readSetBegin(_etype554, _size551);
+            uint32_t _i555;
+            for (_i555 = 0; _i555 < _size551; ++_i555)
             {
-              std::string _elem494;
-              xfer += iprot->readString(_elem494);
-              (*(this->success)).insert(_elem494);
+              std::string _elem556;
+              xfer += iprot->readString(_elem556);
+              (*(this->success)).insert(_elem556);
             }
             xfer += iprot->readSetEnd();
           }
@@ -14594,8 +16044,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeSystemPermission_args::~AccumuloProxy_revokeSystemPermission_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeSystemPermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14632,9 +16088,9 @@
         break;
       case 3:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast495;
-          xfer += iprot->readI32(ecast495);
-          this->perm = (SystemPermission::type)ecast495;
+          int32_t ecast557;
+          xfer += iprot->readI32(ecast557);
+          this->perm = (SystemPermission::type)ecast557;
           this->__isset.perm = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -14654,6 +16110,7 @@
 
 uint32_t AccumuloProxy_revokeSystemPermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_revokeSystemPermission_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14673,8 +16130,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeSystemPermission_pargs::~AccumuloProxy_revokeSystemPermission_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeSystemPermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_revokeSystemPermission_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14694,8 +16157,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeSystemPermission_result::~AccumuloProxy_revokeSystemPermission_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeSystemPermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14762,8 +16231,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeSystemPermission_presult::~AccumuloProxy_revokeSystemPermission_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeSystemPermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14810,8 +16285,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeTablePermission_args::~AccumuloProxy_revokeTablePermission_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeTablePermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -14856,9 +16337,9 @@
         break;
       case 4:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast496;
-          xfer += iprot->readI32(ecast496);
-          this->perm = (TablePermission::type)ecast496;
+          int32_t ecast558;
+          xfer += iprot->readI32(ecast558);
+          this->perm = (TablePermission::type)ecast558;
           this->__isset.perm = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -14878,6 +16359,7 @@
 
 uint32_t AccumuloProxy_revokeTablePermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_revokeTablePermission_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14901,8 +16383,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeTablePermission_pargs::~AccumuloProxy_revokeTablePermission_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeTablePermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_revokeTablePermission_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -14926,8 +16414,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeTablePermission_result::~AccumuloProxy_revokeTablePermission_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeTablePermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15006,8 +16500,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_revokeTablePermission_presult::~AccumuloProxy_revokeTablePermission_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_revokeTablePermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15062,8 +16562,805 @@
   return xfer;
 }
 
+
+AccumuloProxy_grantNamespacePermission_args::~AccumuloProxy_grantNamespacePermission_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_grantNamespacePermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->user);
+          this->__isset.user = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_I32) {
+          int32_t ecast559;
+          xfer += iprot->readI32(ecast559);
+          this->perm = (NamespacePermission::type)ecast559;
+          this->__isset.perm = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_grantNamespacePermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_grantNamespacePermission_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("user", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->user);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("perm", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)this->perm);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_grantNamespacePermission_pargs::~AccumuloProxy_grantNamespacePermission_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_grantNamespacePermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_grantNamespacePermission_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("user", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->user)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("perm", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)(*(this->perm)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_grantNamespacePermission_result::~AccumuloProxy_grantNamespacePermission_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_grantNamespacePermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_grantNamespacePermission_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_grantNamespacePermission_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_grantNamespacePermission_presult::~AccumuloProxy_grantNamespacePermission_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_grantNamespacePermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_hasNamespacePermission_args::~AccumuloProxy_hasNamespacePermission_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_hasNamespacePermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->user);
+          this->__isset.user = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_I32) {
+          int32_t ecast560;
+          xfer += iprot->readI32(ecast560);
+          this->perm = (NamespacePermission::type)ecast560;
+          this->__isset.perm = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_hasNamespacePermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_hasNamespacePermission_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("user", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->user);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("perm", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)this->perm);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_hasNamespacePermission_pargs::~AccumuloProxy_hasNamespacePermission_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_hasNamespacePermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_hasNamespacePermission_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("user", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->user)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("perm", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)(*(this->perm)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_hasNamespacePermission_result::~AccumuloProxy_hasNamespacePermission_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_hasNamespacePermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_BOOL) {
+          xfer += iprot->readBool(this->success);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_hasNamespacePermission_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_hasNamespacePermission_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_BOOL, 0);
+    xfer += oprot->writeBool(this->success);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_hasNamespacePermission_presult::~AccumuloProxy_hasNamespacePermission_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_hasNamespacePermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_BOOL) {
+          xfer += iprot->readBool((*(this->success)));
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_revokeNamespacePermission_args::~AccumuloProxy_revokeNamespacePermission_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_revokeNamespacePermission_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->user);
+          this->__isset.user = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_I32) {
+          int32_t ecast561;
+          xfer += iprot->readI32(ecast561);
+          this->perm = (NamespacePermission::type)ecast561;
+          this->__isset.perm = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_revokeNamespacePermission_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_revokeNamespacePermission_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("user", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->user);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("perm", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)this->perm);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_revokeNamespacePermission_pargs::~AccumuloProxy_revokeNamespacePermission_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_revokeNamespacePermission_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_revokeNamespacePermission_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("user", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->user)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("perm", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)(*(this->perm)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_revokeNamespacePermission_result::~AccumuloProxy_revokeNamespacePermission_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_revokeNamespacePermission_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_revokeNamespacePermission_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_revokeNamespacePermission_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_revokeNamespacePermission_presult::~AccumuloProxy_revokeNamespacePermission_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_revokeNamespacePermission_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_createBatchScanner_args::~AccumuloProxy_createBatchScanner_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_createBatchScanner_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15120,6 +17417,7 @@
 
 uint32_t AccumuloProxy_createBatchScanner_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createBatchScanner_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -15139,8 +17437,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createBatchScanner_pargs::~AccumuloProxy_createBatchScanner_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_createBatchScanner_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createBatchScanner_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -15160,8 +17464,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createBatchScanner_result::~AccumuloProxy_createBatchScanner_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_createBatchScanner_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15252,8 +17562,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createBatchScanner_presult::~AccumuloProxy_createBatchScanner_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_createBatchScanner_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15316,8 +17632,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createScanner_args::~AccumuloProxy_createScanner_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_createScanner_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15374,6 +17696,7 @@
 
 uint32_t AccumuloProxy_createScanner_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createScanner_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -15393,8 +17716,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createScanner_pargs::~AccumuloProxy_createScanner_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_createScanner_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createScanner_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -15414,8 +17743,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createScanner_result::~AccumuloProxy_createScanner_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_createScanner_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15506,8 +17841,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createScanner_presult::~AccumuloProxy_createScanner_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_createScanner_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15570,8 +17911,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasNext_args::~AccumuloProxy_hasNext_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasNext_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15612,6 +17959,7 @@
 
 uint32_t AccumuloProxy_hasNext_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_hasNext_args");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -15623,8 +17971,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasNext_pargs::~AccumuloProxy_hasNext_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasNext_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_hasNext_pargs");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -15636,8 +17990,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasNext_result::~AccumuloProxy_hasNext_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasNext_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15704,8 +18064,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_hasNext_presult::~AccumuloProxy_hasNext_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_hasNext_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15752,8 +18118,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextEntry_args::~AccumuloProxy_nextEntry_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextEntry_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15794,6 +18166,7 @@
 
 uint32_t AccumuloProxy_nextEntry_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_nextEntry_args");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -15805,8 +18178,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextEntry_pargs::~AccumuloProxy_nextEntry_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextEntry_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_nextEntry_pargs");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -15818,8 +18197,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextEntry_result::~AccumuloProxy_nextEntry_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextEntry_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15910,8 +18295,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextEntry_presult::~AccumuloProxy_nextEntry_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextEntry_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -15974,8 +18365,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextK_args::~AccumuloProxy_nextK_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextK_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16024,6 +18421,7 @@
 
 uint32_t AccumuloProxy_nextK_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_nextK_args");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -16039,8 +18437,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextK_pargs::~AccumuloProxy_nextK_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextK_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_nextK_pargs");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -16056,8 +18460,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextK_result::~AccumuloProxy_nextK_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextK_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16148,8 +18558,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_nextK_presult::~AccumuloProxy_nextK_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_nextK_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16212,8 +18628,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeScanner_args::~AccumuloProxy_closeScanner_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeScanner_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16254,6 +18676,7 @@
 
 uint32_t AccumuloProxy_closeScanner_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_closeScanner_args");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -16265,8 +18688,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeScanner_pargs::~AccumuloProxy_closeScanner_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeScanner_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_closeScanner_pargs");
 
   xfer += oprot->writeFieldBegin("scanner", ::apache::thrift::protocol::T_STRING, 1);
@@ -16278,8 +18707,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeScanner_result::~AccumuloProxy_closeScanner_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeScanner_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16334,8 +18769,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeScanner_presult::~AccumuloProxy_closeScanner_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeScanner_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16374,8 +18815,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateAndFlush_args::~AccumuloProxy_updateAndFlush_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateAndFlush_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16414,26 +18861,26 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->cells.clear();
-            uint32_t _size497;
-            ::apache::thrift::protocol::TType _ktype498;
-            ::apache::thrift::protocol::TType _vtype499;
-            xfer += iprot->readMapBegin(_ktype498, _vtype499, _size497);
-            uint32_t _i501;
-            for (_i501 = 0; _i501 < _size497; ++_i501)
+            uint32_t _size562;
+            ::apache::thrift::protocol::TType _ktype563;
+            ::apache::thrift::protocol::TType _vtype564;
+            xfer += iprot->readMapBegin(_ktype563, _vtype564, _size562);
+            uint32_t _i566;
+            for (_i566 = 0; _i566 < _size562; ++_i566)
             {
-              std::string _key502;
-              xfer += iprot->readBinary(_key502);
-              std::vector<ColumnUpdate> & _val503 = this->cells[_key502];
+              std::string _key567;
+              xfer += iprot->readBinary(_key567);
+              std::vector<ColumnUpdate> & _val568 = this->cells[_key567];
               {
-                _val503.clear();
-                uint32_t _size504;
-                ::apache::thrift::protocol::TType _etype507;
-                xfer += iprot->readListBegin(_etype507, _size504);
-                _val503.resize(_size504);
-                uint32_t _i508;
-                for (_i508 = 0; _i508 < _size504; ++_i508)
+                _val568.clear();
+                uint32_t _size569;
+                ::apache::thrift::protocol::TType _etype572;
+                xfer += iprot->readListBegin(_etype572, _size569);
+                _val568.resize(_size569);
+                uint32_t _i573;
+                for (_i573 = 0; _i573 < _size569; ++_i573)
                 {
-                  xfer += _val503[_i508].read(iprot);
+                  xfer += _val568[_i573].read(iprot);
                 }
                 xfer += iprot->readListEnd();
               }
@@ -16459,6 +18906,7 @@
 
 uint32_t AccumuloProxy_updateAndFlush_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_updateAndFlush_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -16472,16 +18920,16 @@
   xfer += oprot->writeFieldBegin("cells", ::apache::thrift::protocol::T_MAP, 3);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_LIST, static_cast<uint32_t>(this->cells.size()));
-    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter509;
-    for (_iter509 = this->cells.begin(); _iter509 != this->cells.end(); ++_iter509)
+    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter574;
+    for (_iter574 = this->cells.begin(); _iter574 != this->cells.end(); ++_iter574)
     {
-      xfer += oprot->writeBinary(_iter509->first);
+      xfer += oprot->writeBinary(_iter574->first);
       {
-        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter509->second.size()));
-        std::vector<ColumnUpdate> ::const_iterator _iter510;
-        for (_iter510 = _iter509->second.begin(); _iter510 != _iter509->second.end(); ++_iter510)
+        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter574->second.size()));
+        std::vector<ColumnUpdate> ::const_iterator _iter575;
+        for (_iter575 = _iter574->second.begin(); _iter575 != _iter574->second.end(); ++_iter575)
         {
-          xfer += (*_iter510).write(oprot);
+          xfer += (*_iter575).write(oprot);
         }
         xfer += oprot->writeListEnd();
       }
@@ -16495,8 +18943,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateAndFlush_pargs::~AccumuloProxy_updateAndFlush_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateAndFlush_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_updateAndFlush_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -16510,16 +18964,16 @@
   xfer += oprot->writeFieldBegin("cells", ::apache::thrift::protocol::T_MAP, 3);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_LIST, static_cast<uint32_t>((*(this->cells)).size()));
-    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter511;
-    for (_iter511 = (*(this->cells)).begin(); _iter511 != (*(this->cells)).end(); ++_iter511)
+    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter576;
+    for (_iter576 = (*(this->cells)).begin(); _iter576 != (*(this->cells)).end(); ++_iter576)
     {
-      xfer += oprot->writeBinary(_iter511->first);
+      xfer += oprot->writeBinary(_iter576->first);
       {
-        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter511->second.size()));
-        std::vector<ColumnUpdate> ::const_iterator _iter512;
-        for (_iter512 = _iter511->second.begin(); _iter512 != _iter511->second.end(); ++_iter512)
+        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter576->second.size()));
+        std::vector<ColumnUpdate> ::const_iterator _iter577;
+        for (_iter577 = _iter576->second.begin(); _iter577 != _iter576->second.end(); ++_iter577)
         {
-          xfer += (*_iter512).write(oprot);
+          xfer += (*_iter577).write(oprot);
         }
         xfer += oprot->writeListEnd();
       }
@@ -16533,8 +18987,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateAndFlush_result::~AccumuloProxy_updateAndFlush_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateAndFlush_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16625,8 +19085,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateAndFlush_presult::~AccumuloProxy_updateAndFlush_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateAndFlush_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16689,8 +19155,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createWriter_args::~AccumuloProxy_createWriter_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_createWriter_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16747,6 +19219,7 @@
 
 uint32_t AccumuloProxy_createWriter_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createWriter_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -16766,8 +19239,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createWriter_pargs::~AccumuloProxy_createWriter_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_createWriter_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createWriter_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -16787,8 +19266,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createWriter_result::~AccumuloProxy_createWriter_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_createWriter_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16879,8 +19364,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createWriter_presult::~AccumuloProxy_createWriter_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_createWriter_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16943,8 +19434,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_update_args::~AccumuloProxy_update_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_update_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -16975,26 +19472,26 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->cells.clear();
-            uint32_t _size513;
-            ::apache::thrift::protocol::TType _ktype514;
-            ::apache::thrift::protocol::TType _vtype515;
-            xfer += iprot->readMapBegin(_ktype514, _vtype515, _size513);
-            uint32_t _i517;
-            for (_i517 = 0; _i517 < _size513; ++_i517)
+            uint32_t _size578;
+            ::apache::thrift::protocol::TType _ktype579;
+            ::apache::thrift::protocol::TType _vtype580;
+            xfer += iprot->readMapBegin(_ktype579, _vtype580, _size578);
+            uint32_t _i582;
+            for (_i582 = 0; _i582 < _size578; ++_i582)
             {
-              std::string _key518;
-              xfer += iprot->readBinary(_key518);
-              std::vector<ColumnUpdate> & _val519 = this->cells[_key518];
+              std::string _key583;
+              xfer += iprot->readBinary(_key583);
+              std::vector<ColumnUpdate> & _val584 = this->cells[_key583];
               {
-                _val519.clear();
-                uint32_t _size520;
-                ::apache::thrift::protocol::TType _etype523;
-                xfer += iprot->readListBegin(_etype523, _size520);
-                _val519.resize(_size520);
-                uint32_t _i524;
-                for (_i524 = 0; _i524 < _size520; ++_i524)
+                _val584.clear();
+                uint32_t _size585;
+                ::apache::thrift::protocol::TType _etype588;
+                xfer += iprot->readListBegin(_etype588, _size585);
+                _val584.resize(_size585);
+                uint32_t _i589;
+                for (_i589 = 0; _i589 < _size585; ++_i589)
                 {
-                  xfer += _val519[_i524].read(iprot);
+                  xfer += _val584[_i589].read(iprot);
                 }
                 xfer += iprot->readListEnd();
               }
@@ -17020,6 +19517,7 @@
 
 uint32_t AccumuloProxy_update_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_update_args");
 
   xfer += oprot->writeFieldBegin("writer", ::apache::thrift::protocol::T_STRING, 1);
@@ -17029,16 +19527,16 @@
   xfer += oprot->writeFieldBegin("cells", ::apache::thrift::protocol::T_MAP, 2);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_LIST, static_cast<uint32_t>(this->cells.size()));
-    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter525;
-    for (_iter525 = this->cells.begin(); _iter525 != this->cells.end(); ++_iter525)
+    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter590;
+    for (_iter590 = this->cells.begin(); _iter590 != this->cells.end(); ++_iter590)
     {
-      xfer += oprot->writeBinary(_iter525->first);
+      xfer += oprot->writeBinary(_iter590->first);
       {
-        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter525->second.size()));
-        std::vector<ColumnUpdate> ::const_iterator _iter526;
-        for (_iter526 = _iter525->second.begin(); _iter526 != _iter525->second.end(); ++_iter526)
+        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter590->second.size()));
+        std::vector<ColumnUpdate> ::const_iterator _iter591;
+        for (_iter591 = _iter590->second.begin(); _iter591 != _iter590->second.end(); ++_iter591)
         {
-          xfer += (*_iter526).write(oprot);
+          xfer += (*_iter591).write(oprot);
         }
         xfer += oprot->writeListEnd();
       }
@@ -17052,8 +19550,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_update_pargs::~AccumuloProxy_update_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_update_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_update_pargs");
 
   xfer += oprot->writeFieldBegin("writer", ::apache::thrift::protocol::T_STRING, 1);
@@ -17063,16 +19567,16 @@
   xfer += oprot->writeFieldBegin("cells", ::apache::thrift::protocol::T_MAP, 2);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_LIST, static_cast<uint32_t>((*(this->cells)).size()));
-    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter527;
-    for (_iter527 = (*(this->cells)).begin(); _iter527 != (*(this->cells)).end(); ++_iter527)
+    std::map<std::string, std::vector<ColumnUpdate> > ::const_iterator _iter592;
+    for (_iter592 = (*(this->cells)).begin(); _iter592 != (*(this->cells)).end(); ++_iter592)
     {
-      xfer += oprot->writeBinary(_iter527->first);
+      xfer += oprot->writeBinary(_iter592->first);
       {
-        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter527->second.size()));
-        std::vector<ColumnUpdate> ::const_iterator _iter528;
-        for (_iter528 = _iter527->second.begin(); _iter528 != _iter527->second.end(); ++_iter528)
+        xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(_iter592->second.size()));
+        std::vector<ColumnUpdate> ::const_iterator _iter593;
+        for (_iter593 = _iter592->second.begin(); _iter593 != _iter592->second.end(); ++_iter593)
         {
-          xfer += (*_iter528).write(oprot);
+          xfer += (*_iter593).write(oprot);
         }
         xfer += oprot->writeListEnd();
       }
@@ -17086,8 +19590,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flush_args::~AccumuloProxy_flush_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_flush_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17128,6 +19638,7 @@
 
 uint32_t AccumuloProxy_flush_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_flush_args");
 
   xfer += oprot->writeFieldBegin("writer", ::apache::thrift::protocol::T_STRING, 1);
@@ -17139,8 +19650,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flush_pargs::~AccumuloProxy_flush_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_flush_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_flush_pargs");
 
   xfer += oprot->writeFieldBegin("writer", ::apache::thrift::protocol::T_STRING, 1);
@@ -17152,8 +19669,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flush_result::~AccumuloProxy_flush_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_flush_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17220,8 +19743,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_flush_presult::~AccumuloProxy_flush_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_flush_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17268,8 +19797,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeWriter_args::~AccumuloProxy_closeWriter_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeWriter_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17310,6 +19845,7 @@
 
 uint32_t AccumuloProxy_closeWriter_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_closeWriter_args");
 
   xfer += oprot->writeFieldBegin("writer", ::apache::thrift::protocol::T_STRING, 1);
@@ -17321,8 +19857,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeWriter_pargs::~AccumuloProxy_closeWriter_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeWriter_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_closeWriter_pargs");
 
   xfer += oprot->writeFieldBegin("writer", ::apache::thrift::protocol::T_STRING, 1);
@@ -17334,8 +19876,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeWriter_result::~AccumuloProxy_closeWriter_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeWriter_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17402,8 +19950,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeWriter_presult::~AccumuloProxy_closeWriter_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeWriter_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17450,8 +20004,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowConditionally_args::~AccumuloProxy_updateRowConditionally_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowConditionally_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17516,6 +20076,7 @@
 
 uint32_t AccumuloProxy_updateRowConditionally_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_updateRowConditionally_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -17539,8 +20100,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowConditionally_pargs::~AccumuloProxy_updateRowConditionally_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowConditionally_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_updateRowConditionally_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -17564,8 +20131,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowConditionally_result::~AccumuloProxy_updateRowConditionally_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowConditionally_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17586,9 +20159,9 @@
     {
       case 0:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast529;
-          xfer += iprot->readI32(ecast529);
-          this->success = (ConditionalStatus::type)ecast529;
+          int32_t ecast594;
+          xfer += iprot->readI32(ecast594);
+          this->success = (ConditionalStatus::type)ecast594;
           this->__isset.success = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -17658,8 +20231,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowConditionally_presult::~AccumuloProxy_updateRowConditionally_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowConditionally_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17680,9 +20259,9 @@
     {
       case 0:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast530;
-          xfer += iprot->readI32(ecast530);
-          (*(this->success)) = (ConditionalStatus::type)ecast530;
+          int32_t ecast595;
+          xfer += iprot->readI32(ecast595);
+          (*(this->success)) = (ConditionalStatus::type)ecast595;
           this->__isset.success = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -17724,8 +20303,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createConditionalWriter_args::~AccumuloProxy_createConditionalWriter_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_createConditionalWriter_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17782,6 +20367,7 @@
 
 uint32_t AccumuloProxy_createConditionalWriter_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createConditionalWriter_args");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -17801,8 +20387,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createConditionalWriter_pargs::~AccumuloProxy_createConditionalWriter_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_createConditionalWriter_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_createConditionalWriter_pargs");
 
   xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
@@ -17822,8 +20414,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createConditionalWriter_result::~AccumuloProxy_createConditionalWriter_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_createConditionalWriter_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17914,8 +20512,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_createConditionalWriter_presult::~AccumuloProxy_createConditionalWriter_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_createConditionalWriter_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -17978,8 +20582,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowsConditionally_args::~AccumuloProxy_updateRowsConditionally_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowsConditionally_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18010,17 +20620,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->updates.clear();
-            uint32_t _size531;
-            ::apache::thrift::protocol::TType _ktype532;
-            ::apache::thrift::protocol::TType _vtype533;
-            xfer += iprot->readMapBegin(_ktype532, _vtype533, _size531);
-            uint32_t _i535;
-            for (_i535 = 0; _i535 < _size531; ++_i535)
+            uint32_t _size596;
+            ::apache::thrift::protocol::TType _ktype597;
+            ::apache::thrift::protocol::TType _vtype598;
+            xfer += iprot->readMapBegin(_ktype597, _vtype598, _size596);
+            uint32_t _i600;
+            for (_i600 = 0; _i600 < _size596; ++_i600)
             {
-              std::string _key536;
-              xfer += iprot->readBinary(_key536);
-              ConditionalUpdates& _val537 = this->updates[_key536];
-              xfer += _val537.read(iprot);
+              std::string _key601;
+              xfer += iprot->readBinary(_key601);
+              ConditionalUpdates& _val602 = this->updates[_key601];
+              xfer += _val602.read(iprot);
             }
             xfer += iprot->readMapEnd();
           }
@@ -18043,6 +20653,7 @@
 
 uint32_t AccumuloProxy_updateRowsConditionally_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_updateRowsConditionally_args");
 
   xfer += oprot->writeFieldBegin("conditionalWriter", ::apache::thrift::protocol::T_STRING, 1);
@@ -18052,11 +20663,11 @@
   xfer += oprot->writeFieldBegin("updates", ::apache::thrift::protocol::T_MAP, 2);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->updates.size()));
-    std::map<std::string, ConditionalUpdates> ::const_iterator _iter538;
-    for (_iter538 = this->updates.begin(); _iter538 != this->updates.end(); ++_iter538)
+    std::map<std::string, ConditionalUpdates> ::const_iterator _iter603;
+    for (_iter603 = this->updates.begin(); _iter603 != this->updates.end(); ++_iter603)
     {
-      xfer += oprot->writeBinary(_iter538->first);
-      xfer += _iter538->second.write(oprot);
+      xfer += oprot->writeBinary(_iter603->first);
+      xfer += _iter603->second.write(oprot);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -18067,8 +20678,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowsConditionally_pargs::~AccumuloProxy_updateRowsConditionally_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowsConditionally_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_updateRowsConditionally_pargs");
 
   xfer += oprot->writeFieldBegin("conditionalWriter", ::apache::thrift::protocol::T_STRING, 1);
@@ -18078,11 +20695,11 @@
   xfer += oprot->writeFieldBegin("updates", ::apache::thrift::protocol::T_MAP, 2);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>((*(this->updates)).size()));
-    std::map<std::string, ConditionalUpdates> ::const_iterator _iter539;
-    for (_iter539 = (*(this->updates)).begin(); _iter539 != (*(this->updates)).end(); ++_iter539)
+    std::map<std::string, ConditionalUpdates> ::const_iterator _iter604;
+    for (_iter604 = (*(this->updates)).begin(); _iter604 != (*(this->updates)).end(); ++_iter604)
     {
-      xfer += oprot->writeBinary(_iter539->first);
-      xfer += _iter539->second.write(oprot);
+      xfer += oprot->writeBinary(_iter604->first);
+      xfer += _iter604->second.write(oprot);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -18093,8 +20710,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowsConditionally_result::~AccumuloProxy_updateRowsConditionally_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowsConditionally_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18117,19 +20740,19 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->success.clear();
-            uint32_t _size540;
-            ::apache::thrift::protocol::TType _ktype541;
-            ::apache::thrift::protocol::TType _vtype542;
-            xfer += iprot->readMapBegin(_ktype541, _vtype542, _size540);
-            uint32_t _i544;
-            for (_i544 = 0; _i544 < _size540; ++_i544)
+            uint32_t _size605;
+            ::apache::thrift::protocol::TType _ktype606;
+            ::apache::thrift::protocol::TType _vtype607;
+            xfer += iprot->readMapBegin(_ktype606, _vtype607, _size605);
+            uint32_t _i609;
+            for (_i609 = 0; _i609 < _size605; ++_i609)
             {
-              std::string _key545;
-              xfer += iprot->readBinary(_key545);
-              ConditionalStatus::type& _val546 = this->success[_key545];
-              int32_t ecast547;
-              xfer += iprot->readI32(ecast547);
-              _val546 = (ConditionalStatus::type)ecast547;
+              std::string _key610;
+              xfer += iprot->readBinary(_key610);
+              ConditionalStatus::type& _val611 = this->success[_key610];
+              int32_t ecast612;
+              xfer += iprot->readI32(ecast612);
+              _val611 = (ConditionalStatus::type)ecast612;
             }
             xfer += iprot->readMapEnd();
           }
@@ -18184,11 +20807,11 @@
     xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
     {
       xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->success.size()));
-      std::map<std::string, ConditionalStatus::type> ::const_iterator _iter548;
-      for (_iter548 = this->success.begin(); _iter548 != this->success.end(); ++_iter548)
+      std::map<std::string, ConditionalStatus::type> ::const_iterator _iter613;
+      for (_iter613 = this->success.begin(); _iter613 != this->success.end(); ++_iter613)
       {
-        xfer += oprot->writeBinary(_iter548->first);
-        xfer += oprot->writeI32((int32_t)_iter548->second);
+        xfer += oprot->writeBinary(_iter613->first);
+        xfer += oprot->writeI32((int32_t)_iter613->second);
       }
       xfer += oprot->writeMapEnd();
     }
@@ -18211,8 +20834,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_updateRowsConditionally_presult::~AccumuloProxy_updateRowsConditionally_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_updateRowsConditionally_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18235,19 +20864,19 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             (*(this->success)).clear();
-            uint32_t _size549;
-            ::apache::thrift::protocol::TType _ktype550;
-            ::apache::thrift::protocol::TType _vtype551;
-            xfer += iprot->readMapBegin(_ktype550, _vtype551, _size549);
-            uint32_t _i553;
-            for (_i553 = 0; _i553 < _size549; ++_i553)
+            uint32_t _size614;
+            ::apache::thrift::protocol::TType _ktype615;
+            ::apache::thrift::protocol::TType _vtype616;
+            xfer += iprot->readMapBegin(_ktype615, _vtype616, _size614);
+            uint32_t _i618;
+            for (_i618 = 0; _i618 < _size614; ++_i618)
             {
-              std::string _key554;
-              xfer += iprot->readBinary(_key554);
-              ConditionalStatus::type& _val555 = (*(this->success))[_key554];
-              int32_t ecast556;
-              xfer += iprot->readI32(ecast556);
-              _val555 = (ConditionalStatus::type)ecast556;
+              std::string _key619;
+              xfer += iprot->readBinary(_key619);
+              ConditionalStatus::type& _val620 = (*(this->success))[_key619];
+              int32_t ecast621;
+              xfer += iprot->readI32(ecast621);
+              _val620 = (ConditionalStatus::type)ecast621;
             }
             xfer += iprot->readMapEnd();
           }
@@ -18292,8 +20921,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeConditionalWriter_args::~AccumuloProxy_closeConditionalWriter_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeConditionalWriter_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18334,6 +20969,7 @@
 
 uint32_t AccumuloProxy_closeConditionalWriter_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_closeConditionalWriter_args");
 
   xfer += oprot->writeFieldBegin("conditionalWriter", ::apache::thrift::protocol::T_STRING, 1);
@@ -18345,8 +20981,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeConditionalWriter_pargs::~AccumuloProxy_closeConditionalWriter_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeConditionalWriter_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_closeConditionalWriter_pargs");
 
   xfer += oprot->writeFieldBegin("conditionalWriter", ::apache::thrift::protocol::T_STRING, 1);
@@ -18358,8 +21000,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeConditionalWriter_result::~AccumuloProxy_closeConditionalWriter_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeConditionalWriter_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18396,8 +21044,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_closeConditionalWriter_presult::~AccumuloProxy_closeConditionalWriter_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_closeConditionalWriter_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18423,8 +21077,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getRowRange_args::~AccumuloProxy_getRowRange_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getRowRange_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18465,6 +21125,7 @@
 
 uint32_t AccumuloProxy_getRowRange_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getRowRange_args");
 
   xfer += oprot->writeFieldBegin("row", ::apache::thrift::protocol::T_STRING, 1);
@@ -18476,8 +21137,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getRowRange_pargs::~AccumuloProxy_getRowRange_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getRowRange_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getRowRange_pargs");
 
   xfer += oprot->writeFieldBegin("row", ::apache::thrift::protocol::T_STRING, 1);
@@ -18489,8 +21156,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getRowRange_result::~AccumuloProxy_getRowRange_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getRowRange_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18545,8 +21218,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getRowRange_presult::~AccumuloProxy_getRowRange_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getRowRange_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18585,8 +21264,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getFollowing_args::~AccumuloProxy_getFollowing_args() throw() {
+}
+
+
 uint32_t AccumuloProxy_getFollowing_args::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18615,9 +21300,9 @@
         break;
       case 2:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast557;
-          xfer += iprot->readI32(ecast557);
-          this->part = (PartialKey::type)ecast557;
+          int32_t ecast622;
+          xfer += iprot->readI32(ecast622);
+          this->part = (PartialKey::type)ecast622;
           this->__isset.part = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -18637,6 +21322,7 @@
 
 uint32_t AccumuloProxy_getFollowing_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getFollowing_args");
 
   xfer += oprot->writeFieldBegin("key", ::apache::thrift::protocol::T_STRUCT, 1);
@@ -18652,8 +21338,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getFollowing_pargs::~AccumuloProxy_getFollowing_pargs() throw() {
+}
+
+
 uint32_t AccumuloProxy_getFollowing_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloProxy_getFollowing_pargs");
 
   xfer += oprot->writeFieldBegin("key", ::apache::thrift::protocol::T_STRUCT, 1);
@@ -18669,8 +21361,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getFollowing_result::~AccumuloProxy_getFollowing_result() throw() {
+}
+
+
 uint32_t AccumuloProxy_getFollowing_result::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18725,8 +21423,14 @@
   return xfer;
 }
 
+
+AccumuloProxy_getFollowing_presult::~AccumuloProxy_getFollowing_presult() throw() {
+}
+
+
 uint32_t AccumuloProxy_getFollowing_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -18765,6 +21469,5417 @@
   return xfer;
 }
 
+
+AccumuloProxy_systemNamespace_args::~AccumuloProxy_systemNamespace_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_systemNamespace_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    xfer += iprot->skip(ftype);
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_systemNamespace_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_systemNamespace_args");
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_systemNamespace_pargs::~AccumuloProxy_systemNamespace_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_systemNamespace_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_systemNamespace_pargs");
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_systemNamespace_result::~AccumuloProxy_systemNamespace_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_systemNamespace_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->success);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_systemNamespace_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_systemNamespace_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_STRING, 0);
+    xfer += oprot->writeString(this->success);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_systemNamespace_presult::~AccumuloProxy_systemNamespace_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_systemNamespace_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString((*(this->success)));
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_defaultNamespace_args::~AccumuloProxy_defaultNamespace_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_defaultNamespace_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    xfer += iprot->skip(ftype);
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_defaultNamespace_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_defaultNamespace_args");
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_defaultNamespace_pargs::~AccumuloProxy_defaultNamespace_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_defaultNamespace_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_defaultNamespace_pargs");
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_defaultNamespace_result::~AccumuloProxy_defaultNamespace_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_defaultNamespace_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->success);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_defaultNamespace_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_defaultNamespace_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_STRING, 0);
+    xfer += oprot->writeString(this->success);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_defaultNamespace_presult::~AccumuloProxy_defaultNamespace_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_defaultNamespace_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString((*(this->success)));
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaces_args::~AccumuloProxy_listNamespaces_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaces_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_listNamespaces_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaces_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaces_pargs::~AccumuloProxy_listNamespaces_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaces_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaces_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaces_result::~AccumuloProxy_listNamespaces_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaces_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_LIST) {
+          {
+            this->success.clear();
+            uint32_t _size623;
+            ::apache::thrift::protocol::TType _etype626;
+            xfer += iprot->readListBegin(_etype626, _size623);
+            this->success.resize(_size623);
+            uint32_t _i627;
+            for (_i627 = 0; _i627 < _size623; ++_i627)
+            {
+              xfer += iprot->readString(this->success[_i627]);
+            }
+            xfer += iprot->readListEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_listNamespaces_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaces_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_LIST, 0);
+    {
+      xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
+      std::vector<std::string> ::const_iterator _iter628;
+      for (_iter628 = this->success.begin(); _iter628 != this->success.end(); ++_iter628)
+      {
+        xfer += oprot->writeString((*_iter628));
+      }
+      xfer += oprot->writeListEnd();
+    }
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaces_presult::~AccumuloProxy_listNamespaces_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaces_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_LIST) {
+          {
+            (*(this->success)).clear();
+            uint32_t _size629;
+            ::apache::thrift::protocol::TType _etype632;
+            xfer += iprot->readListBegin(_etype632, _size629);
+            (*(this->success)).resize(_size629);
+            uint32_t _i633;
+            for (_i633 = 0; _i633 < _size629; ++_i633)
+            {
+              xfer += iprot->readString((*(this->success))[_i633]);
+            }
+            xfer += iprot->readListEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceExists_args::~AccumuloProxy_namespaceExists_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceExists_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_namespaceExists_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_namespaceExists_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceExists_pargs::~AccumuloProxy_namespaceExists_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceExists_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_namespaceExists_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceExists_result::~AccumuloProxy_namespaceExists_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceExists_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_BOOL) {
+          xfer += iprot->readBool(this->success);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_namespaceExists_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_namespaceExists_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_BOOL, 0);
+    xfer += oprot->writeBool(this->success);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceExists_presult::~AccumuloProxy_namespaceExists_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceExists_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_BOOL) {
+          xfer += iprot->readBool((*(this->success)));
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_createNamespace_args::~AccumuloProxy_createNamespace_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_createNamespace_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_createNamespace_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_createNamespace_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_createNamespace_pargs::~AccumuloProxy_createNamespace_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_createNamespace_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_createNamespace_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_createNamespace_result::~AccumuloProxy_createNamespace_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_createNamespace_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_createNamespace_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_createNamespace_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_createNamespace_presult::~AccumuloProxy_createNamespace_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_createNamespace_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_deleteNamespace_args::~AccumuloProxy_deleteNamespace_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_deleteNamespace_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_deleteNamespace_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_deleteNamespace_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_deleteNamespace_pargs::~AccumuloProxy_deleteNamespace_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_deleteNamespace_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_deleteNamespace_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_deleteNamespace_result::~AccumuloProxy_deleteNamespace_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_deleteNamespace_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch4.read(iprot);
+          this->__isset.ouch4 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_deleteNamespace_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_deleteNamespace_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch4) {
+    xfer += oprot->writeFieldBegin("ouch4", ::apache::thrift::protocol::T_STRUCT, 4);
+    xfer += this->ouch4.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_deleteNamespace_presult::~AccumuloProxy_deleteNamespace_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_deleteNamespace_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch4.read(iprot);
+          this->__isset.ouch4 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_renameNamespace_args::~AccumuloProxy_renameNamespace_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_renameNamespace_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->oldNamespaceName);
+          this->__isset.oldNamespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->newNamespaceName);
+          this->__isset.newNamespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_renameNamespace_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_renameNamespace_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("oldNamespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->oldNamespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("newNamespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->newNamespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_renameNamespace_pargs::~AccumuloProxy_renameNamespace_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_renameNamespace_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_renameNamespace_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("oldNamespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->oldNamespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("newNamespaceName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->newNamespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_renameNamespace_result::~AccumuloProxy_renameNamespace_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_renameNamespace_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch4.read(iprot);
+          this->__isset.ouch4 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_renameNamespace_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_renameNamespace_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch4) {
+    xfer += oprot->writeFieldBegin("ouch4", ::apache::thrift::protocol::T_STRUCT, 4);
+    xfer += this->ouch4.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_renameNamespace_presult::~AccumuloProxy_renameNamespace_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_renameNamespace_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch4.read(iprot);
+          this->__isset.ouch4 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_setNamespaceProperty_args::~AccumuloProxy_setNamespaceProperty_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_setNamespaceProperty_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->property);
+          this->__isset.property = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->value);
+          this->__isset.value = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_setNamespaceProperty_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_setNamespaceProperty_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("property", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->property);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("value", ::apache::thrift::protocol::T_STRING, 4);
+  xfer += oprot->writeString(this->value);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_setNamespaceProperty_pargs::~AccumuloProxy_setNamespaceProperty_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_setNamespaceProperty_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_setNamespaceProperty_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("property", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->property)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("value", ::apache::thrift::protocol::T_STRING, 4);
+  xfer += oprot->writeString((*(this->value)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_setNamespaceProperty_result::~AccumuloProxy_setNamespaceProperty_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_setNamespaceProperty_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_setNamespaceProperty_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_setNamespaceProperty_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_setNamespaceProperty_presult::~AccumuloProxy_setNamespaceProperty_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_setNamespaceProperty_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceProperty_args::~AccumuloProxy_removeNamespaceProperty_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceProperty_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->property);
+          this->__isset.property = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_removeNamespaceProperty_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceProperty_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("property", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->property);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceProperty_pargs::~AccumuloProxy_removeNamespaceProperty_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceProperty_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceProperty_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("property", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->property)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceProperty_result::~AccumuloProxy_removeNamespaceProperty_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceProperty_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_removeNamespaceProperty_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceProperty_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceProperty_presult::~AccumuloProxy_removeNamespaceProperty_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceProperty_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceProperties_args::~AccumuloProxy_getNamespaceProperties_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceProperties_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_getNamespaceProperties_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_getNamespaceProperties_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceProperties_pargs::~AccumuloProxy_getNamespaceProperties_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceProperties_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_getNamespaceProperties_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceProperties_result::~AccumuloProxy_getNamespaceProperties_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceProperties_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            this->success.clear();
+            uint32_t _size634;
+            ::apache::thrift::protocol::TType _ktype635;
+            ::apache::thrift::protocol::TType _vtype636;
+            xfer += iprot->readMapBegin(_ktype635, _vtype636, _size634);
+            uint32_t _i638;
+            for (_i638 = 0; _i638 < _size634; ++_i638)
+            {
+              std::string _key639;
+              xfer += iprot->readString(_key639);
+              std::string& _val640 = this->success[_key639];
+              xfer += iprot->readString(_val640);
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_getNamespaceProperties_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_getNamespaceProperties_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
+    {
+      xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
+      std::map<std::string, std::string> ::const_iterator _iter641;
+      for (_iter641 = this->success.begin(); _iter641 != this->success.end(); ++_iter641)
+      {
+        xfer += oprot->writeString(_iter641->first);
+        xfer += oprot->writeString(_iter641->second);
+      }
+      xfer += oprot->writeMapEnd();
+    }
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceProperties_presult::~AccumuloProxy_getNamespaceProperties_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceProperties_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            (*(this->success)).clear();
+            uint32_t _size642;
+            ::apache::thrift::protocol::TType _ktype643;
+            ::apache::thrift::protocol::TType _vtype644;
+            xfer += iprot->readMapBegin(_ktype643, _vtype644, _size642);
+            uint32_t _i646;
+            for (_i646 = 0; _i646 < _size642; ++_i646)
+            {
+              std::string _key647;
+              xfer += iprot->readString(_key647);
+              std::string& _val648 = (*(this->success))[_key647];
+              xfer += iprot->readString(_val648);
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceIdMap_args::~AccumuloProxy_namespaceIdMap_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceIdMap_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_namespaceIdMap_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_namespaceIdMap_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceIdMap_pargs::~AccumuloProxy_namespaceIdMap_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceIdMap_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_namespaceIdMap_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceIdMap_result::~AccumuloProxy_namespaceIdMap_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceIdMap_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            this->success.clear();
+            uint32_t _size649;
+            ::apache::thrift::protocol::TType _ktype650;
+            ::apache::thrift::protocol::TType _vtype651;
+            xfer += iprot->readMapBegin(_ktype650, _vtype651, _size649);
+            uint32_t _i653;
+            for (_i653 = 0; _i653 < _size649; ++_i653)
+            {
+              std::string _key654;
+              xfer += iprot->readString(_key654);
+              std::string& _val655 = this->success[_key654];
+              xfer += iprot->readString(_val655);
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_namespaceIdMap_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_namespaceIdMap_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
+    {
+      xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->success.size()));
+      std::map<std::string, std::string> ::const_iterator _iter656;
+      for (_iter656 = this->success.begin(); _iter656 != this->success.end(); ++_iter656)
+      {
+        xfer += oprot->writeString(_iter656->first);
+        xfer += oprot->writeString(_iter656->second);
+      }
+      xfer += oprot->writeMapEnd();
+    }
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_namespaceIdMap_presult::~AccumuloProxy_namespaceIdMap_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_namespaceIdMap_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            (*(this->success)).clear();
+            uint32_t _size657;
+            ::apache::thrift::protocol::TType _ktype658;
+            ::apache::thrift::protocol::TType _vtype659;
+            xfer += iprot->readMapBegin(_ktype658, _vtype659, _size657);
+            uint32_t _i661;
+            for (_i661 = 0; _i661 < _size657; ++_i661)
+            {
+              std::string _key662;
+              xfer += iprot->readString(_key662);
+              std::string& _val663 = (*(this->success))[_key662];
+              xfer += iprot->readString(_val663);
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_attachNamespaceIterator_args::~AccumuloProxy_attachNamespaceIterator_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_attachNamespaceIterator_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->setting.read(iprot);
+          this->__isset.setting = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_SET) {
+          {
+            this->scopes.clear();
+            uint32_t _size664;
+            ::apache::thrift::protocol::TType _etype667;
+            xfer += iprot->readSetBegin(_etype667, _size664);
+            uint32_t _i668;
+            for (_i668 = 0; _i668 < _size664; ++_i668)
+            {
+              IteratorScope::type _elem669;
+              int32_t ecast670;
+              xfer += iprot->readI32(ecast670);
+              _elem669 = (IteratorScope::type)ecast670;
+              this->scopes.insert(_elem669);
+            }
+            xfer += iprot->readSetEnd();
+          }
+          this->__isset.scopes = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_attachNamespaceIterator_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_attachNamespaceIterator_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("setting", ::apache::thrift::protocol::T_STRUCT, 3);
+  xfer += this->setting.write(oprot);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
+  {
+    xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->scopes.size()));
+    std::set<IteratorScope::type> ::const_iterator _iter671;
+    for (_iter671 = this->scopes.begin(); _iter671 != this->scopes.end(); ++_iter671)
+    {
+      xfer += oprot->writeI32((int32_t)(*_iter671));
+    }
+    xfer += oprot->writeSetEnd();
+  }
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_attachNamespaceIterator_pargs::~AccumuloProxy_attachNamespaceIterator_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_attachNamespaceIterator_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_attachNamespaceIterator_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("setting", ::apache::thrift::protocol::T_STRUCT, 3);
+  xfer += (*(this->setting)).write(oprot);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
+  {
+    xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>((*(this->scopes)).size()));
+    std::set<IteratorScope::type> ::const_iterator _iter672;
+    for (_iter672 = (*(this->scopes)).begin(); _iter672 != (*(this->scopes)).end(); ++_iter672)
+    {
+      xfer += oprot->writeI32((int32_t)(*_iter672));
+    }
+    xfer += oprot->writeSetEnd();
+  }
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_attachNamespaceIterator_result::~AccumuloProxy_attachNamespaceIterator_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_attachNamespaceIterator_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_attachNamespaceIterator_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_attachNamespaceIterator_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_attachNamespaceIterator_presult::~AccumuloProxy_attachNamespaceIterator_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_attachNamespaceIterator_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceIterator_args::~AccumuloProxy_removeNamespaceIterator_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceIterator_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->name);
+          this->__isset.name = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_SET) {
+          {
+            this->scopes.clear();
+            uint32_t _size673;
+            ::apache::thrift::protocol::TType _etype676;
+            xfer += iprot->readSetBegin(_etype676, _size673);
+            uint32_t _i677;
+            for (_i677 = 0; _i677 < _size673; ++_i677)
+            {
+              IteratorScope::type _elem678;
+              int32_t ecast679;
+              xfer += iprot->readI32(ecast679);
+              _elem678 = (IteratorScope::type)ecast679;
+              this->scopes.insert(_elem678);
+            }
+            xfer += iprot->readSetEnd();
+          }
+          this->__isset.scopes = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_removeNamespaceIterator_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceIterator_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("name", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->name);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
+  {
+    xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->scopes.size()));
+    std::set<IteratorScope::type> ::const_iterator _iter680;
+    for (_iter680 = this->scopes.begin(); _iter680 != this->scopes.end(); ++_iter680)
+    {
+      xfer += oprot->writeI32((int32_t)(*_iter680));
+    }
+    xfer += oprot->writeSetEnd();
+  }
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceIterator_pargs::~AccumuloProxy_removeNamespaceIterator_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceIterator_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceIterator_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("name", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->name)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
+  {
+    xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>((*(this->scopes)).size()));
+    std::set<IteratorScope::type> ::const_iterator _iter681;
+    for (_iter681 = (*(this->scopes)).begin(); _iter681 != (*(this->scopes)).end(); ++_iter681)
+    {
+      xfer += oprot->writeI32((int32_t)(*_iter681));
+    }
+    xfer += oprot->writeSetEnd();
+  }
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceIterator_result::~AccumuloProxy_removeNamespaceIterator_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceIterator_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_removeNamespaceIterator_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceIterator_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceIterator_presult::~AccumuloProxy_removeNamespaceIterator_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceIterator_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceIteratorSetting_args::~AccumuloProxy_getNamespaceIteratorSetting_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceIteratorSetting_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->name);
+          this->__isset.name = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_I32) {
+          int32_t ecast682;
+          xfer += iprot->readI32(ecast682);
+          this->scope = (IteratorScope::type)ecast682;
+          this->__isset.scope = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_getNamespaceIteratorSetting_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_getNamespaceIteratorSetting_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("name", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->name);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scope", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)this->scope);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceIteratorSetting_pargs::~AccumuloProxy_getNamespaceIteratorSetting_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceIteratorSetting_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_getNamespaceIteratorSetting_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("name", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->name)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scope", ::apache::thrift::protocol::T_I32, 4);
+  xfer += oprot->writeI32((int32_t)(*(this->scope)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceIteratorSetting_result::~AccumuloProxy_getNamespaceIteratorSetting_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceIteratorSetting_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->success.read(iprot);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_getNamespaceIteratorSetting_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_getNamespaceIteratorSetting_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_STRUCT, 0);
+    xfer += this->success.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_getNamespaceIteratorSetting_presult::~AccumuloProxy_getNamespaceIteratorSetting_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_getNamespaceIteratorSetting_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += (*(this->success)).read(iprot);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceIterators_args::~AccumuloProxy_listNamespaceIterators_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceIterators_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_listNamespaceIterators_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaceIterators_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceIterators_pargs::~AccumuloProxy_listNamespaceIterators_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceIterators_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaceIterators_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceIterators_result::~AccumuloProxy_listNamespaceIterators_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceIterators_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            this->success.clear();
+            uint32_t _size683;
+            ::apache::thrift::protocol::TType _ktype684;
+            ::apache::thrift::protocol::TType _vtype685;
+            xfer += iprot->readMapBegin(_ktype684, _vtype685, _size683);
+            uint32_t _i687;
+            for (_i687 = 0; _i687 < _size683; ++_i687)
+            {
+              std::string _key688;
+              xfer += iprot->readString(_key688);
+              std::set<IteratorScope::type> & _val689 = this->success[_key688];
+              {
+                _val689.clear();
+                uint32_t _size690;
+                ::apache::thrift::protocol::TType _etype693;
+                xfer += iprot->readSetBegin(_etype693, _size690);
+                uint32_t _i694;
+                for (_i694 = 0; _i694 < _size690; ++_i694)
+                {
+                  IteratorScope::type _elem695;
+                  int32_t ecast696;
+                  xfer += iprot->readI32(ecast696);
+                  _elem695 = (IteratorScope::type)ecast696;
+                  _val689.insert(_elem695);
+                }
+                xfer += iprot->readSetEnd();
+              }
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_listNamespaceIterators_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaceIterators_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
+    {
+      xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_SET, static_cast<uint32_t>(this->success.size()));
+      std::map<std::string, std::set<IteratorScope::type> > ::const_iterator _iter697;
+      for (_iter697 = this->success.begin(); _iter697 != this->success.end(); ++_iter697)
+      {
+        xfer += oprot->writeString(_iter697->first);
+        {
+          xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(_iter697->second.size()));
+          std::set<IteratorScope::type> ::const_iterator _iter698;
+          for (_iter698 = _iter697->second.begin(); _iter698 != _iter697->second.end(); ++_iter698)
+          {
+            xfer += oprot->writeI32((int32_t)(*_iter698));
+          }
+          xfer += oprot->writeSetEnd();
+        }
+      }
+      xfer += oprot->writeMapEnd();
+    }
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceIterators_presult::~AccumuloProxy_listNamespaceIterators_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceIterators_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            (*(this->success)).clear();
+            uint32_t _size699;
+            ::apache::thrift::protocol::TType _ktype700;
+            ::apache::thrift::protocol::TType _vtype701;
+            xfer += iprot->readMapBegin(_ktype700, _vtype701, _size699);
+            uint32_t _i703;
+            for (_i703 = 0; _i703 < _size699; ++_i703)
+            {
+              std::string _key704;
+              xfer += iprot->readString(_key704);
+              std::set<IteratorScope::type> & _val705 = (*(this->success))[_key704];
+              {
+                _val705.clear();
+                uint32_t _size706;
+                ::apache::thrift::protocol::TType _etype709;
+                xfer += iprot->readSetBegin(_etype709, _size706);
+                uint32_t _i710;
+                for (_i710 = 0; _i710 < _size706; ++_i710)
+                {
+                  IteratorScope::type _elem711;
+                  int32_t ecast712;
+                  xfer += iprot->readI32(ecast712);
+                  _elem711 = (IteratorScope::type)ecast712;
+                  _val705.insert(_elem711);
+                }
+                xfer += iprot->readSetEnd();
+              }
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_checkNamespaceIteratorConflicts_args::~AccumuloProxy_checkNamespaceIteratorConflicts_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_checkNamespaceIteratorConflicts_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->setting.read(iprot);
+          this->__isset.setting = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_SET) {
+          {
+            this->scopes.clear();
+            uint32_t _size713;
+            ::apache::thrift::protocol::TType _etype716;
+            xfer += iprot->readSetBegin(_etype716, _size713);
+            uint32_t _i717;
+            for (_i717 = 0; _i717 < _size713; ++_i717)
+            {
+              IteratorScope::type _elem718;
+              int32_t ecast719;
+              xfer += iprot->readI32(ecast719);
+              _elem718 = (IteratorScope::type)ecast719;
+              this->scopes.insert(_elem718);
+            }
+            xfer += iprot->readSetEnd();
+          }
+          this->__isset.scopes = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_checkNamespaceIteratorConflicts_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_checkNamespaceIteratorConflicts_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("setting", ::apache::thrift::protocol::T_STRUCT, 3);
+  xfer += this->setting.write(oprot);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
+  {
+    xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->scopes.size()));
+    std::set<IteratorScope::type> ::const_iterator _iter720;
+    for (_iter720 = this->scopes.begin(); _iter720 != this->scopes.end(); ++_iter720)
+    {
+      xfer += oprot->writeI32((int32_t)(*_iter720));
+    }
+    xfer += oprot->writeSetEnd();
+  }
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_checkNamespaceIteratorConflicts_pargs::~AccumuloProxy_checkNamespaceIteratorConflicts_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_checkNamespaceIteratorConflicts_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_checkNamespaceIteratorConflicts_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("setting", ::apache::thrift::protocol::T_STRUCT, 3);
+  xfer += (*(this->setting)).write(oprot);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("scopes", ::apache::thrift::protocol::T_SET, 4);
+  {
+    xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_I32, static_cast<uint32_t>((*(this->scopes)).size()));
+    std::set<IteratorScope::type> ::const_iterator _iter721;
+    for (_iter721 = (*(this->scopes)).begin(); _iter721 != (*(this->scopes)).end(); ++_iter721)
+    {
+      xfer += oprot->writeI32((int32_t)(*_iter721));
+    }
+    xfer += oprot->writeSetEnd();
+  }
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_checkNamespaceIteratorConflicts_result::~AccumuloProxy_checkNamespaceIteratorConflicts_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_checkNamespaceIteratorConflicts_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_checkNamespaceIteratorConflicts_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_checkNamespaceIteratorConflicts_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_checkNamespaceIteratorConflicts_presult::~AccumuloProxy_checkNamespaceIteratorConflicts_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_checkNamespaceIteratorConflicts_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_addNamespaceConstraint_args::~AccumuloProxy_addNamespaceConstraint_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_addNamespaceConstraint_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->constraintClassName);
+          this->__isset.constraintClassName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_addNamespaceConstraint_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_addNamespaceConstraint_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("constraintClassName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->constraintClassName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_addNamespaceConstraint_pargs::~AccumuloProxy_addNamespaceConstraint_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_addNamespaceConstraint_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_addNamespaceConstraint_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("constraintClassName", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->constraintClassName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_addNamespaceConstraint_result::~AccumuloProxy_addNamespaceConstraint_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_addNamespaceConstraint_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_I32) {
+          xfer += iprot->readI32(this->success);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_addNamespaceConstraint_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_addNamespaceConstraint_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_I32, 0);
+    xfer += oprot->writeI32(this->success);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_addNamespaceConstraint_presult::~AccumuloProxy_addNamespaceConstraint_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_addNamespaceConstraint_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_I32) {
+          xfer += iprot->readI32((*(this->success)));
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceConstraint_args::~AccumuloProxy_removeNamespaceConstraint_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceConstraint_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_I32) {
+          xfer += iprot->readI32(this->id);
+          this->__isset.id = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_removeNamespaceConstraint_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceConstraint_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("id", ::apache::thrift::protocol::T_I32, 3);
+  xfer += oprot->writeI32(this->id);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceConstraint_pargs::~AccumuloProxy_removeNamespaceConstraint_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceConstraint_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceConstraint_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("id", ::apache::thrift::protocol::T_I32, 3);
+  xfer += oprot->writeI32((*(this->id)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceConstraint_result::~AccumuloProxy_removeNamespaceConstraint_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceConstraint_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_removeNamespaceConstraint_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_removeNamespaceConstraint_result");
+
+  if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_removeNamespaceConstraint_presult::~AccumuloProxy_removeNamespaceConstraint_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_removeNamespaceConstraint_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceConstraints_args::~AccumuloProxy_listNamespaceConstraints_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceConstraints_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_listNamespaceConstraints_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaceConstraints_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceConstraints_pargs::~AccumuloProxy_listNamespaceConstraints_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceConstraints_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaceConstraints_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceConstraints_result::~AccumuloProxy_listNamespaceConstraints_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceConstraints_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            this->success.clear();
+            uint32_t _size722;
+            ::apache::thrift::protocol::TType _ktype723;
+            ::apache::thrift::protocol::TType _vtype724;
+            xfer += iprot->readMapBegin(_ktype723, _vtype724, _size722);
+            uint32_t _i726;
+            for (_i726 = 0; _i726 < _size722; ++_i726)
+            {
+              std::string _key727;
+              xfer += iprot->readString(_key727);
+              int32_t& _val728 = this->success[_key727];
+              xfer += iprot->readI32(_val728);
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_listNamespaceConstraints_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_listNamespaceConstraints_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_MAP, 0);
+    {
+      xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_I32, static_cast<uint32_t>(this->success.size()));
+      std::map<std::string, int32_t> ::const_iterator _iter729;
+      for (_iter729 = this->success.begin(); _iter729 != this->success.end(); ++_iter729)
+      {
+        xfer += oprot->writeString(_iter729->first);
+        xfer += oprot->writeI32(_iter729->second);
+      }
+      xfer += oprot->writeMapEnd();
+    }
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_listNamespaceConstraints_presult::~AccumuloProxy_listNamespaceConstraints_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_listNamespaceConstraints_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_MAP) {
+          {
+            (*(this->success)).clear();
+            uint32_t _size730;
+            ::apache::thrift::protocol::TType _ktype731;
+            ::apache::thrift::protocol::TType _vtype732;
+            xfer += iprot->readMapBegin(_ktype731, _vtype732, _size730);
+            uint32_t _i734;
+            for (_i734 = 0; _i734 < _size730; ++_i734)
+            {
+              std::string _key735;
+              xfer += iprot->readString(_key735);
+              int32_t& _val736 = (*(this->success))[_key735];
+              xfer += iprot->readI32(_val736);
+            }
+            xfer += iprot->readMapEnd();
+          }
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+
+AccumuloProxy_testNamespaceClassLoad_args::~AccumuloProxy_testNamespaceClassLoad_args() throw() {
+}
+
+
+uint32_t AccumuloProxy_testNamespaceClassLoad_args::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readBinary(this->login);
+          this->__isset.login = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->namespaceName);
+          this->__isset.namespaceName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->className);
+          this->__isset.className = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 4:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->asTypeName);
+          this->__isset.asTypeName = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_testNamespaceClassLoad_args::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_testNamespaceClassLoad_args");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary(this->login);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString(this->namespaceName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("className", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString(this->className);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("asTypeName", ::apache::thrift::protocol::T_STRING, 4);
+  xfer += oprot->writeString(this->asTypeName);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_testNamespaceClassLoad_pargs::~AccumuloProxy_testNamespaceClassLoad_pargs() throw() {
+}
+
+
+uint32_t AccumuloProxy_testNamespaceClassLoad_pargs::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("AccumuloProxy_testNamespaceClassLoad_pargs");
+
+  xfer += oprot->writeFieldBegin("login", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeBinary((*(this->login)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("namespaceName", ::apache::thrift::protocol::T_STRING, 2);
+  xfer += oprot->writeString((*(this->namespaceName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("className", ::apache::thrift::protocol::T_STRING, 3);
+  xfer += oprot->writeString((*(this->className)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldBegin("asTypeName", ::apache::thrift::protocol::T_STRING, 4);
+  xfer += oprot->writeString((*(this->asTypeName)));
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_testNamespaceClassLoad_result::~AccumuloProxy_testNamespaceClassLoad_result() throw() {
+}
+
+
+uint32_t AccumuloProxy_testNamespaceClassLoad_result::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_BOOL) {
+          xfer += iprot->readBool(this->success);
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t AccumuloProxy_testNamespaceClassLoad_result::write(::apache::thrift::protocol::TProtocol* oprot) const {
+
+  uint32_t xfer = 0;
+
+  xfer += oprot->writeStructBegin("AccumuloProxy_testNamespaceClassLoad_result");
+
+  if (this->__isset.success) {
+    xfer += oprot->writeFieldBegin("success", ::apache::thrift::protocol::T_BOOL, 0);
+    xfer += oprot->writeBool(this->success);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch1) {
+    xfer += oprot->writeFieldBegin("ouch1", ::apache::thrift::protocol::T_STRUCT, 1);
+    xfer += this->ouch1.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch2) {
+    xfer += oprot->writeFieldBegin("ouch2", ::apache::thrift::protocol::T_STRUCT, 2);
+    xfer += this->ouch2.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  } else if (this->__isset.ouch3) {
+    xfer += oprot->writeFieldBegin("ouch3", ::apache::thrift::protocol::T_STRUCT, 3);
+    xfer += this->ouch3.write(oprot);
+    xfer += oprot->writeFieldEnd();
+  }
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+
+AccumuloProxy_testNamespaceClassLoad_presult::~AccumuloProxy_testNamespaceClassLoad_presult() throw() {
+}
+
+
+uint32_t AccumuloProxy_testNamespaceClassLoad_presult::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 0:
+        if (ftype == ::apache::thrift::protocol::T_BOOL) {
+          xfer += iprot->readBool((*(this->success)));
+          this->__isset.success = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch1.read(iprot);
+          this->__isset.ouch1 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 2:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch2.read(iprot);
+          this->__isset.ouch2 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      case 3:
+        if (ftype == ::apache::thrift::protocol::T_STRUCT) {
+          xfer += this->ouch3.read(iprot);
+          this->__isset.ouch3 = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
 void AccumuloProxyClient::login(std::string& _return, const std::string& principal, const std::map<std::string, std::string> & loginProperties)
 {
   send_login(principal, loginProperties);
@@ -22643,6 +30758,197 @@
   return;
 }
 
+void AccumuloProxyClient::grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  send_grantNamespacePermission(login, user, namespaceName, perm);
+  recv_grantNamespacePermission();
+}
+
+void AccumuloProxyClient::send_grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("grantNamespacePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_grantNamespacePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.namespaceName = &namespaceName;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_grantNamespacePermission()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("grantNamespacePermission") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_grantNamespacePermission_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  return;
+}
+
+bool AccumuloProxyClient::hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  send_hasNamespacePermission(login, user, namespaceName, perm);
+  return recv_hasNamespacePermission();
+}
+
+void AccumuloProxyClient::send_hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("hasNamespacePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_hasNamespacePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.namespaceName = &namespaceName;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+bool AccumuloProxyClient::recv_hasNamespacePermission()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("hasNamespacePermission") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  bool _return;
+  AccumuloProxy_hasNamespacePermission_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    return _return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "hasNamespacePermission failed: unknown result");
+}
+
+void AccumuloProxyClient::revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  send_revokeNamespacePermission(login, user, namespaceName, perm);
+  recv_revokeNamespacePermission();
+}
+
+void AccumuloProxyClient::send_revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("revokeNamespacePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_revokeNamespacePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.namespaceName = &namespaceName;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_revokeNamespacePermission()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("revokeNamespacePermission") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_revokeNamespacePermission_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  return;
+}
+
 void AccumuloProxyClient::createBatchScanner(std::string& _return, const std::string& login, const std::string& tableName, const BatchScanOptions& options)
 {
   send_createBatchScanner(login, tableName, options);
@@ -23177,7 +31483,7 @@
 void AccumuloProxyClient::send_update(const std::string& writer, const std::map<std::string, std::vector<ColumnUpdate> > & cells)
 {
   int32_t cseqid = 0;
-  oprot_->writeMessageBegin("update", ::apache::thrift::protocol::T_CALL, cseqid);
+  oprot_->writeMessageBegin("update", ::apache::thrift::protocol::T_ONEWAY, cseqid);
 
   AccumuloProxy_update_pargs args;
   args.writer = &writer;
@@ -23684,6 +31990,1310 @@
   throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getFollowing failed: unknown result");
 }
 
+void AccumuloProxyClient::systemNamespace(std::string& _return)
+{
+  send_systemNamespace();
+  recv_systemNamespace(_return);
+}
+
+void AccumuloProxyClient::send_systemNamespace()
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("systemNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_systemNamespace_pargs args;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_systemNamespace(std::string& _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("systemNamespace") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_systemNamespace_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "systemNamespace failed: unknown result");
+}
+
+void AccumuloProxyClient::defaultNamespace(std::string& _return)
+{
+  send_defaultNamespace();
+  recv_defaultNamespace(_return);
+}
+
+void AccumuloProxyClient::send_defaultNamespace()
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("defaultNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_defaultNamespace_pargs args;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_defaultNamespace(std::string& _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("defaultNamespace") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_defaultNamespace_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "defaultNamespace failed: unknown result");
+}
+
+void AccumuloProxyClient::listNamespaces(std::vector<std::string> & _return, const std::string& login)
+{
+  send_listNamespaces(login);
+  recv_listNamespaces(_return);
+}
+
+void AccumuloProxyClient::send_listNamespaces(const std::string& login)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("listNamespaces", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listNamespaces_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_listNamespaces(std::vector<std::string> & _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("listNamespaces") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_listNamespaces_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listNamespaces failed: unknown result");
+}
+
+bool AccumuloProxyClient::namespaceExists(const std::string& login, const std::string& namespaceName)
+{
+  send_namespaceExists(login, namespaceName);
+  return recv_namespaceExists();
+}
+
+void AccumuloProxyClient::send_namespaceExists(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("namespaceExists", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_namespaceExists_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+bool AccumuloProxyClient::recv_namespaceExists()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("namespaceExists") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  bool _return;
+  AccumuloProxy_namespaceExists_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    return _return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "namespaceExists failed: unknown result");
+}
+
+void AccumuloProxyClient::createNamespace(const std::string& login, const std::string& namespaceName)
+{
+  send_createNamespace(login, namespaceName);
+  recv_createNamespace();
+}
+
+void AccumuloProxyClient::send_createNamespace(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("createNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createNamespace_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_createNamespace()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("createNamespace") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_createNamespace_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  return;
+}
+
+void AccumuloProxyClient::deleteNamespace(const std::string& login, const std::string& namespaceName)
+{
+  send_deleteNamespace(login, namespaceName);
+  recv_deleteNamespace();
+}
+
+void AccumuloProxyClient::send_deleteNamespace(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("deleteNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_deleteNamespace_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_deleteNamespace()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("deleteNamespace") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_deleteNamespace_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  if (result.__isset.ouch4) {
+    throw result.ouch4;
+  }
+  return;
+}
+
+void AccumuloProxyClient::renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName)
+{
+  send_renameNamespace(login, oldNamespaceName, newNamespaceName);
+  recv_renameNamespace();
+}
+
+void AccumuloProxyClient::send_renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("renameNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_renameNamespace_pargs args;
+  args.login = &login;
+  args.oldNamespaceName = &oldNamespaceName;
+  args.newNamespaceName = &newNamespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_renameNamespace()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("renameNamespace") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_renameNamespace_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  if (result.__isset.ouch4) {
+    throw result.ouch4;
+  }
+  return;
+}
+
+void AccumuloProxyClient::setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value)
+{
+  send_setNamespaceProperty(login, namespaceName, property, value);
+  recv_setNamespaceProperty();
+}
+
+void AccumuloProxyClient::send_setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("setNamespaceProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_setNamespaceProperty_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.property = &property;
+  args.value = &value;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_setNamespaceProperty()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("setNamespaceProperty") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_setNamespaceProperty_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  return;
+}
+
+void AccumuloProxyClient::removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property)
+{
+  send_removeNamespaceProperty(login, namespaceName, property);
+  recv_removeNamespaceProperty();
+}
+
+void AccumuloProxyClient::send_removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("removeNamespaceProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeNamespaceProperty_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.property = &property;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_removeNamespaceProperty()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("removeNamespaceProperty") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_removeNamespaceProperty_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  return;
+}
+
+void AccumuloProxyClient::getNamespaceProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& namespaceName)
+{
+  send_getNamespaceProperties(login, namespaceName);
+  recv_getNamespaceProperties(_return);
+}
+
+void AccumuloProxyClient::send_getNamespaceProperties(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("getNamespaceProperties", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getNamespaceProperties_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_getNamespaceProperties(std::map<std::string, std::string> & _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("getNamespaceProperties") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_getNamespaceProperties_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getNamespaceProperties failed: unknown result");
+}
+
+void AccumuloProxyClient::namespaceIdMap(std::map<std::string, std::string> & _return, const std::string& login)
+{
+  send_namespaceIdMap(login);
+  recv_namespaceIdMap(_return);
+}
+
+void AccumuloProxyClient::send_namespaceIdMap(const std::string& login)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("namespaceIdMap", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_namespaceIdMap_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_namespaceIdMap(std::map<std::string, std::string> & _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("namespaceIdMap") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_namespaceIdMap_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "namespaceIdMap failed: unknown result");
+}
+
+void AccumuloProxyClient::attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  send_attachNamespaceIterator(login, namespaceName, setting, scopes);
+  recv_attachNamespaceIterator();
+}
+
+void AccumuloProxyClient::send_attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("attachNamespaceIterator", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_attachNamespaceIterator_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.setting = &setting;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_attachNamespaceIterator()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("attachNamespaceIterator") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_attachNamespaceIterator_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  return;
+}
+
+void AccumuloProxyClient::removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes)
+{
+  send_removeNamespaceIterator(login, namespaceName, name, scopes);
+  recv_removeNamespaceIterator();
+}
+
+void AccumuloProxyClient::send_removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("removeNamespaceIterator", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeNamespaceIterator_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.name = &name;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_removeNamespaceIterator()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("removeNamespaceIterator") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_removeNamespaceIterator_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  return;
+}
+
+void AccumuloProxyClient::getNamespaceIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope)
+{
+  send_getNamespaceIteratorSetting(login, namespaceName, name, scope);
+  recv_getNamespaceIteratorSetting(_return);
+}
+
+void AccumuloProxyClient::send_getNamespaceIteratorSetting(const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("getNamespaceIteratorSetting", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getNamespaceIteratorSetting_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.name = &name;
+  args.scope = &scope;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_getNamespaceIteratorSetting(IteratorSetting& _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("getNamespaceIteratorSetting") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_getNamespaceIteratorSetting_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getNamespaceIteratorSetting failed: unknown result");
+}
+
+void AccumuloProxyClient::listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& namespaceName)
+{
+  send_listNamespaceIterators(login, namespaceName);
+  recv_listNamespaceIterators(_return);
+}
+
+void AccumuloProxyClient::send_listNamespaceIterators(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("listNamespaceIterators", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listNamespaceIterators_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("listNamespaceIterators") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_listNamespaceIterators_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listNamespaceIterators failed: unknown result");
+}
+
+void AccumuloProxyClient::checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  send_checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes);
+  recv_checkNamespaceIteratorConflicts();
+}
+
+void AccumuloProxyClient::send_checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("checkNamespaceIteratorConflicts", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_checkNamespaceIteratorConflicts_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.setting = &setting;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_checkNamespaceIteratorConflicts()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("checkNamespaceIteratorConflicts") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_checkNamespaceIteratorConflicts_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  return;
+}
+
+int32_t AccumuloProxyClient::addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName)
+{
+  send_addNamespaceConstraint(login, namespaceName, constraintClassName);
+  return recv_addNamespaceConstraint();
+}
+
+void AccumuloProxyClient::send_addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("addNamespaceConstraint", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_addNamespaceConstraint_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.constraintClassName = &constraintClassName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+int32_t AccumuloProxyClient::recv_addNamespaceConstraint()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("addNamespaceConstraint") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  int32_t _return;
+  AccumuloProxy_addNamespaceConstraint_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    return _return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "addNamespaceConstraint failed: unknown result");
+}
+
+void AccumuloProxyClient::removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id)
+{
+  send_removeNamespaceConstraint(login, namespaceName, id);
+  recv_removeNamespaceConstraint();
+}
+
+void AccumuloProxyClient::send_removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("removeNamespaceConstraint", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeNamespaceConstraint_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.id = &id;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_removeNamespaceConstraint()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("removeNamespaceConstraint") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_removeNamespaceConstraint_presult result;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  return;
+}
+
+void AccumuloProxyClient::listNamespaceConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& namespaceName)
+{
+  send_listNamespaceConstraints(login, namespaceName);
+  recv_listNamespaceConstraints(_return);
+}
+
+void AccumuloProxyClient::send_listNamespaceConstraints(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("listNamespaceConstraints", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listNamespaceConstraints_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+void AccumuloProxyClient::recv_listNamespaceConstraints(std::map<std::string, int32_t> & _return)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("listNamespaceConstraints") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  AccumuloProxy_listNamespaceConstraints_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    // _return pointer has now been filled
+    return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listNamespaceConstraints failed: unknown result");
+}
+
+bool AccumuloProxyClient::testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName)
+{
+  send_testNamespaceClassLoad(login, namespaceName, className, asTypeName);
+  return recv_testNamespaceClassLoad();
+}
+
+void AccumuloProxyClient::send_testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName)
+{
+  int32_t cseqid = 0;
+  oprot_->writeMessageBegin("testNamespaceClassLoad", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_testNamespaceClassLoad_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.className = &className;
+  args.asTypeName = &asTypeName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+}
+
+bool AccumuloProxyClient::recv_testNamespaceClassLoad()
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  iprot_->readMessageBegin(fname, mtype, rseqid);
+  if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+    ::apache::thrift::TApplicationException x;
+    x.read(iprot_);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+    throw x;
+  }
+  if (mtype != ::apache::thrift::protocol::T_REPLY) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  if (fname.compare("testNamespaceClassLoad") != 0) {
+    iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+    iprot_->readMessageEnd();
+    iprot_->getTransport()->readEnd();
+  }
+  bool _return;
+  AccumuloProxy_testNamespaceClassLoad_presult result;
+  result.success = &_return;
+  result.read(iprot_);
+  iprot_->readMessageEnd();
+  iprot_->getTransport()->readEnd();
+
+  if (result.__isset.success) {
+    return _return;
+  }
+  if (result.__isset.ouch1) {
+    throw result.ouch1;
+  }
+  if (result.__isset.ouch2) {
+    throw result.ouch2;
+  }
+  if (result.__isset.ouch3) {
+    throw result.ouch3;
+  }
+  throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "testNamespaceClassLoad failed: unknown result");
+}
+
 bool AccumuloProxyProcessor::dispatchCall(::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, const std::string& fname, int32_t seqid, void* callContext) {
   ProcessMap::iterator pfn;
   pfn = processMap_.find(fname);
@@ -27356,6 +36966,184 @@
   }
 }
 
+void AccumuloProxyProcessor::process_grantNamespacePermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.grantNamespacePermission", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.grantNamespacePermission");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.grantNamespacePermission");
+  }
+
+  AccumuloProxy_grantNamespacePermission_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.grantNamespacePermission", bytes);
+  }
+
+  AccumuloProxy_grantNamespacePermission_result result;
+  try {
+    iface_->grantNamespacePermission(args.login, args.user, args.namespaceName, args.perm);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.grantNamespacePermission");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("grantNamespacePermission", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.grantNamespacePermission");
+  }
+
+  oprot->writeMessageBegin("grantNamespacePermission", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.grantNamespacePermission", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_hasNamespacePermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.hasNamespacePermission", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.hasNamespacePermission");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.hasNamespacePermission");
+  }
+
+  AccumuloProxy_hasNamespacePermission_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.hasNamespacePermission", bytes);
+  }
+
+  AccumuloProxy_hasNamespacePermission_result result;
+  try {
+    result.success = iface_->hasNamespacePermission(args.login, args.user, args.namespaceName, args.perm);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.hasNamespacePermission");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("hasNamespacePermission", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.hasNamespacePermission");
+  }
+
+  oprot->writeMessageBegin("hasNamespacePermission", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.hasNamespacePermission", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_revokeNamespacePermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.revokeNamespacePermission", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.revokeNamespacePermission");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.revokeNamespacePermission");
+  }
+
+  AccumuloProxy_revokeNamespacePermission_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.revokeNamespacePermission", bytes);
+  }
+
+  AccumuloProxy_revokeNamespacePermission_result result;
+  try {
+    iface_->revokeNamespacePermission(args.login, args.user, args.namespaceName, args.perm);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.revokeNamespacePermission");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("revokeNamespacePermission", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.revokeNamespacePermission");
+  }
+
+  oprot->writeMessageBegin("revokeNamespacePermission", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.revokeNamespacePermission", bytes);
+  }
+}
+
 void AccumuloProxyProcessor::process_createBatchScanner(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
 {
   void* ctx = NULL;
@@ -27872,7 +37660,7 @@
 
   try {
     iface_->update(args.writer, args.cells);
-  } catch (const std::exception& e) {
+  } catch (const std::exception&) {
     if (this->eventHandler_.get() != NULL) {
       this->eventHandler_->handlerError(ctx, "AccumuloProxy.update");
     }
@@ -28354,11 +38142,10425 @@
   }
 }
 
+void AccumuloProxyProcessor::process_systemNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.systemNamespace", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.systemNamespace");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.systemNamespace");
+  }
+
+  AccumuloProxy_systemNamespace_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.systemNamespace", bytes);
+  }
+
+  AccumuloProxy_systemNamespace_result result;
+  try {
+    iface_->systemNamespace(result.success);
+    result.__isset.success = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.systemNamespace");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("systemNamespace", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.systemNamespace");
+  }
+
+  oprot->writeMessageBegin("systemNamespace", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.systemNamespace", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_defaultNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.defaultNamespace", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.defaultNamespace");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.defaultNamespace");
+  }
+
+  AccumuloProxy_defaultNamespace_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.defaultNamespace", bytes);
+  }
+
+  AccumuloProxy_defaultNamespace_result result;
+  try {
+    iface_->defaultNamespace(result.success);
+    result.__isset.success = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.defaultNamespace");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("defaultNamespace", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.defaultNamespace");
+  }
+
+  oprot->writeMessageBegin("defaultNamespace", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.defaultNamespace", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_listNamespaces(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.listNamespaces", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.listNamespaces");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.listNamespaces");
+  }
+
+  AccumuloProxy_listNamespaces_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.listNamespaces", bytes);
+  }
+
+  AccumuloProxy_listNamespaces_result result;
+  try {
+    iface_->listNamespaces(result.success, args.login);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.listNamespaces");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("listNamespaces", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.listNamespaces");
+  }
+
+  oprot->writeMessageBegin("listNamespaces", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.listNamespaces", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_namespaceExists(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.namespaceExists", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.namespaceExists");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.namespaceExists");
+  }
+
+  AccumuloProxy_namespaceExists_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.namespaceExists", bytes);
+  }
+
+  AccumuloProxy_namespaceExists_result result;
+  try {
+    result.success = iface_->namespaceExists(args.login, args.namespaceName);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.namespaceExists");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("namespaceExists", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.namespaceExists");
+  }
+
+  oprot->writeMessageBegin("namespaceExists", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.namespaceExists", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_createNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.createNamespace", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.createNamespace");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.createNamespace");
+  }
+
+  AccumuloProxy_createNamespace_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.createNamespace", bytes);
+  }
+
+  AccumuloProxy_createNamespace_result result;
+  try {
+    iface_->createNamespace(args.login, args.namespaceName);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceExistsException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.createNamespace");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("createNamespace", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.createNamespace");
+  }
+
+  oprot->writeMessageBegin("createNamespace", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.createNamespace", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_deleteNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.deleteNamespace", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.deleteNamespace");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.deleteNamespace");
+  }
+
+  AccumuloProxy_deleteNamespace_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.deleteNamespace", bytes);
+  }
+
+  AccumuloProxy_deleteNamespace_result result;
+  try {
+    iface_->deleteNamespace(args.login, args.namespaceName);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (NamespaceNotEmptyException &ouch4) {
+    result.ouch4 = ouch4;
+    result.__isset.ouch4 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.deleteNamespace");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("deleteNamespace", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.deleteNamespace");
+  }
+
+  oprot->writeMessageBegin("deleteNamespace", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.deleteNamespace", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_renameNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.renameNamespace", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.renameNamespace");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.renameNamespace");
+  }
+
+  AccumuloProxy_renameNamespace_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.renameNamespace", bytes);
+  }
+
+  AccumuloProxy_renameNamespace_result result;
+  try {
+    iface_->renameNamespace(args.login, args.oldNamespaceName, args.newNamespaceName);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (NamespaceExistsException &ouch4) {
+    result.ouch4 = ouch4;
+    result.__isset.ouch4 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.renameNamespace");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("renameNamespace", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.renameNamespace");
+  }
+
+  oprot->writeMessageBegin("renameNamespace", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.renameNamespace", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_setNamespaceProperty(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.setNamespaceProperty", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.setNamespaceProperty");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.setNamespaceProperty");
+  }
+
+  AccumuloProxy_setNamespaceProperty_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.setNamespaceProperty", bytes);
+  }
+
+  AccumuloProxy_setNamespaceProperty_result result;
+  try {
+    iface_->setNamespaceProperty(args.login, args.namespaceName, args.property, args.value);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.setNamespaceProperty");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("setNamespaceProperty", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.setNamespaceProperty");
+  }
+
+  oprot->writeMessageBegin("setNamespaceProperty", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.setNamespaceProperty", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_removeNamespaceProperty(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.removeNamespaceProperty", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.removeNamespaceProperty");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.removeNamespaceProperty");
+  }
+
+  AccumuloProxy_removeNamespaceProperty_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.removeNamespaceProperty", bytes);
+  }
+
+  AccumuloProxy_removeNamespaceProperty_result result;
+  try {
+    iface_->removeNamespaceProperty(args.login, args.namespaceName, args.property);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.removeNamespaceProperty");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("removeNamespaceProperty", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.removeNamespaceProperty");
+  }
+
+  oprot->writeMessageBegin("removeNamespaceProperty", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.removeNamespaceProperty", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_getNamespaceProperties(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.getNamespaceProperties", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.getNamespaceProperties");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.getNamespaceProperties");
+  }
+
+  AccumuloProxy_getNamespaceProperties_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.getNamespaceProperties", bytes);
+  }
+
+  AccumuloProxy_getNamespaceProperties_result result;
+  try {
+    iface_->getNamespaceProperties(result.success, args.login, args.namespaceName);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.getNamespaceProperties");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("getNamespaceProperties", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.getNamespaceProperties");
+  }
+
+  oprot->writeMessageBegin("getNamespaceProperties", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.getNamespaceProperties", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_namespaceIdMap(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.namespaceIdMap", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.namespaceIdMap");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.namespaceIdMap");
+  }
+
+  AccumuloProxy_namespaceIdMap_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.namespaceIdMap", bytes);
+  }
+
+  AccumuloProxy_namespaceIdMap_result result;
+  try {
+    iface_->namespaceIdMap(result.success, args.login);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.namespaceIdMap");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("namespaceIdMap", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.namespaceIdMap");
+  }
+
+  oprot->writeMessageBegin("namespaceIdMap", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.namespaceIdMap", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_attachNamespaceIterator(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.attachNamespaceIterator", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.attachNamespaceIterator");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.attachNamespaceIterator");
+  }
+
+  AccumuloProxy_attachNamespaceIterator_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.attachNamespaceIterator", bytes);
+  }
+
+  AccumuloProxy_attachNamespaceIterator_result result;
+  try {
+    iface_->attachNamespaceIterator(args.login, args.namespaceName, args.setting, args.scopes);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.attachNamespaceIterator");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("attachNamespaceIterator", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.attachNamespaceIterator");
+  }
+
+  oprot->writeMessageBegin("attachNamespaceIterator", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.attachNamespaceIterator", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_removeNamespaceIterator(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.removeNamespaceIterator", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.removeNamespaceIterator");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.removeNamespaceIterator");
+  }
+
+  AccumuloProxy_removeNamespaceIterator_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.removeNamespaceIterator", bytes);
+  }
+
+  AccumuloProxy_removeNamespaceIterator_result result;
+  try {
+    iface_->removeNamespaceIterator(args.login, args.namespaceName, args.name, args.scopes);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.removeNamespaceIterator");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("removeNamespaceIterator", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.removeNamespaceIterator");
+  }
+
+  oprot->writeMessageBegin("removeNamespaceIterator", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.removeNamespaceIterator", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_getNamespaceIteratorSetting(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.getNamespaceIteratorSetting", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.getNamespaceIteratorSetting");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.getNamespaceIteratorSetting");
+  }
+
+  AccumuloProxy_getNamespaceIteratorSetting_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.getNamespaceIteratorSetting", bytes);
+  }
+
+  AccumuloProxy_getNamespaceIteratorSetting_result result;
+  try {
+    iface_->getNamespaceIteratorSetting(result.success, args.login, args.namespaceName, args.name, args.scope);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.getNamespaceIteratorSetting");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("getNamespaceIteratorSetting", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.getNamespaceIteratorSetting");
+  }
+
+  oprot->writeMessageBegin("getNamespaceIteratorSetting", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.getNamespaceIteratorSetting", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_listNamespaceIterators(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.listNamespaceIterators", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.listNamespaceIterators");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.listNamespaceIterators");
+  }
+
+  AccumuloProxy_listNamespaceIterators_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.listNamespaceIterators", bytes);
+  }
+
+  AccumuloProxy_listNamespaceIterators_result result;
+  try {
+    iface_->listNamespaceIterators(result.success, args.login, args.namespaceName);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.listNamespaceIterators");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("listNamespaceIterators", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.listNamespaceIterators");
+  }
+
+  oprot->writeMessageBegin("listNamespaceIterators", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.listNamespaceIterators", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_checkNamespaceIteratorConflicts(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.checkNamespaceIteratorConflicts", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.checkNamespaceIteratorConflicts");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.checkNamespaceIteratorConflicts");
+  }
+
+  AccumuloProxy_checkNamespaceIteratorConflicts_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.checkNamespaceIteratorConflicts", bytes);
+  }
+
+  AccumuloProxy_checkNamespaceIteratorConflicts_result result;
+  try {
+    iface_->checkNamespaceIteratorConflicts(args.login, args.namespaceName, args.setting, args.scopes);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.checkNamespaceIteratorConflicts");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("checkNamespaceIteratorConflicts", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.checkNamespaceIteratorConflicts");
+  }
+
+  oprot->writeMessageBegin("checkNamespaceIteratorConflicts", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.checkNamespaceIteratorConflicts", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_addNamespaceConstraint(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.addNamespaceConstraint", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.addNamespaceConstraint");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.addNamespaceConstraint");
+  }
+
+  AccumuloProxy_addNamespaceConstraint_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.addNamespaceConstraint", bytes);
+  }
+
+  AccumuloProxy_addNamespaceConstraint_result result;
+  try {
+    result.success = iface_->addNamespaceConstraint(args.login, args.namespaceName, args.constraintClassName);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.addNamespaceConstraint");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("addNamespaceConstraint", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.addNamespaceConstraint");
+  }
+
+  oprot->writeMessageBegin("addNamespaceConstraint", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.addNamespaceConstraint", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_removeNamespaceConstraint(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.removeNamespaceConstraint", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.removeNamespaceConstraint");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.removeNamespaceConstraint");
+  }
+
+  AccumuloProxy_removeNamespaceConstraint_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.removeNamespaceConstraint", bytes);
+  }
+
+  AccumuloProxy_removeNamespaceConstraint_result result;
+  try {
+    iface_->removeNamespaceConstraint(args.login, args.namespaceName, args.id);
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.removeNamespaceConstraint");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("removeNamespaceConstraint", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.removeNamespaceConstraint");
+  }
+
+  oprot->writeMessageBegin("removeNamespaceConstraint", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.removeNamespaceConstraint", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_listNamespaceConstraints(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.listNamespaceConstraints", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.listNamespaceConstraints");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.listNamespaceConstraints");
+  }
+
+  AccumuloProxy_listNamespaceConstraints_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.listNamespaceConstraints", bytes);
+  }
+
+  AccumuloProxy_listNamespaceConstraints_result result;
+  try {
+    iface_->listNamespaceConstraints(result.success, args.login, args.namespaceName);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.listNamespaceConstraints");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("listNamespaceConstraints", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.listNamespaceConstraints");
+  }
+
+  oprot->writeMessageBegin("listNamespaceConstraints", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.listNamespaceConstraints", bytes);
+  }
+}
+
+void AccumuloProxyProcessor::process_testNamespaceClassLoad(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext)
+{
+  void* ctx = NULL;
+  if (this->eventHandler_.get() != NULL) {
+    ctx = this->eventHandler_->getContext("AccumuloProxy.testNamespaceClassLoad", callContext);
+  }
+  ::apache::thrift::TProcessorContextFreer freer(this->eventHandler_.get(), ctx, "AccumuloProxy.testNamespaceClassLoad");
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preRead(ctx, "AccumuloProxy.testNamespaceClassLoad");
+  }
+
+  AccumuloProxy_testNamespaceClassLoad_args args;
+  args.read(iprot);
+  iprot->readMessageEnd();
+  uint32_t bytes = iprot->getTransport()->readEnd();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postRead(ctx, "AccumuloProxy.testNamespaceClassLoad", bytes);
+  }
+
+  AccumuloProxy_testNamespaceClassLoad_result result;
+  try {
+    result.success = iface_->testNamespaceClassLoad(args.login, args.namespaceName, args.className, args.asTypeName);
+    result.__isset.success = true;
+  } catch (AccumuloException &ouch1) {
+    result.ouch1 = ouch1;
+    result.__isset.ouch1 = true;
+  } catch (AccumuloSecurityException &ouch2) {
+    result.ouch2 = ouch2;
+    result.__isset.ouch2 = true;
+  } catch (NamespaceNotFoundException &ouch3) {
+    result.ouch3 = ouch3;
+    result.__isset.ouch3 = true;
+  } catch (const std::exception& e) {
+    if (this->eventHandler_.get() != NULL) {
+      this->eventHandler_->handlerError(ctx, "AccumuloProxy.testNamespaceClassLoad");
+    }
+
+    ::apache::thrift::TApplicationException x(e.what());
+    oprot->writeMessageBegin("testNamespaceClassLoad", ::apache::thrift::protocol::T_EXCEPTION, seqid);
+    x.write(oprot);
+    oprot->writeMessageEnd();
+    oprot->getTransport()->writeEnd();
+    oprot->getTransport()->flush();
+    return;
+  }
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->preWrite(ctx, "AccumuloProxy.testNamespaceClassLoad");
+  }
+
+  oprot->writeMessageBegin("testNamespaceClassLoad", ::apache::thrift::protocol::T_REPLY, seqid);
+  result.write(oprot);
+  oprot->writeMessageEnd();
+  bytes = oprot->getTransport()->writeEnd();
+  oprot->getTransport()->flush();
+
+  if (this->eventHandler_.get() != NULL) {
+    this->eventHandler_->postWrite(ctx, "AccumuloProxy.testNamespaceClassLoad", bytes);
+  }
+}
+
 ::boost::shared_ptr< ::apache::thrift::TProcessor > AccumuloProxyProcessorFactory::getProcessor(const ::apache::thrift::TConnectionInfo& connInfo) {
   ::apache::thrift::ReleaseHandler< AccumuloProxyIfFactory > cleanup(handlerFactory_);
   ::boost::shared_ptr< AccumuloProxyIf > handler(handlerFactory_->getHandler(connInfo), cleanup);
   ::boost::shared_ptr< ::apache::thrift::TProcessor > processor(new AccumuloProxyProcessor(handler));
   return processor;
 }
+
+void AccumuloProxyConcurrentClient::login(std::string& _return, const std::string& principal, const std::map<std::string, std::string> & loginProperties)
+{
+  int32_t seqid = send_login(principal, loginProperties);
+  recv_login(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_login(const std::string& principal, const std::map<std::string, std::string> & loginProperties)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("login", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_login_pargs args;
+  args.principal = &principal;
+  args.loginProperties = &loginProperties;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_login(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("login") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_login_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "login failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+int32_t AccumuloProxyConcurrentClient::addConstraint(const std::string& login, const std::string& tableName, const std::string& constraintClassName)
+{
+  int32_t seqid = send_addConstraint(login, tableName, constraintClassName);
+  return recv_addConstraint(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_addConstraint(const std::string& login, const std::string& tableName, const std::string& constraintClassName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("addConstraint", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_addConstraint_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.constraintClassName = &constraintClassName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+int32_t AccumuloProxyConcurrentClient::recv_addConstraint(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("addConstraint") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      int32_t _return;
+      AccumuloProxy_addConstraint_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "addConstraint failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::addSplits(const std::string& login, const std::string& tableName, const std::set<std::string> & splits)
+{
+  int32_t seqid = send_addSplits(login, tableName, splits);
+  recv_addSplits(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_addSplits(const std::string& login, const std::string& tableName, const std::set<std::string> & splits)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("addSplits", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_addSplits_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.splits = &splits;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_addSplits(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("addSplits") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_addSplits_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::attachIterator(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t seqid = send_attachIterator(login, tableName, setting, scopes);
+  recv_attachIterator(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_attachIterator(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("attachIterator", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_attachIterator_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.setting = &setting;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_attachIterator(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("attachIterator") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_attachIterator_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::checkIteratorConflicts(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t seqid = send_checkIteratorConflicts(login, tableName, setting, scopes);
+  recv_checkIteratorConflicts(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_checkIteratorConflicts(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("checkIteratorConflicts", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_checkIteratorConflicts_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.setting = &setting;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_checkIteratorConflicts(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("checkIteratorConflicts") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_checkIteratorConflicts_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::clearLocatorCache(const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_clearLocatorCache(login, tableName);
+  recv_clearLocatorCache(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_clearLocatorCache(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("clearLocatorCache", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_clearLocatorCache_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_clearLocatorCache(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("clearLocatorCache") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_clearLocatorCache_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::cloneTable(const std::string& login, const std::string& tableName, const std::string& newTableName, const bool flush, const std::map<std::string, std::string> & propertiesToSet, const std::set<std::string> & propertiesToExclude)
+{
+  int32_t seqid = send_cloneTable(login, tableName, newTableName, flush, propertiesToSet, propertiesToExclude);
+  recv_cloneTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_cloneTable(const std::string& login, const std::string& tableName, const std::string& newTableName, const bool flush, const std::map<std::string, std::string> & propertiesToSet, const std::set<std::string> & propertiesToExclude)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("cloneTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_cloneTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.newTableName = &newTableName;
+  args.flush = &flush;
+  args.propertiesToSet = &propertiesToSet;
+  args.propertiesToExclude = &propertiesToExclude;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_cloneTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("cloneTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_cloneTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      if (result.__isset.ouch4) {
+        sentry.commit();
+        throw result.ouch4;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::compactTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const std::vector<IteratorSetting> & iterators, const bool flush, const bool wait, const CompactionStrategyConfig& compactionStrategy)
+{
+  int32_t seqid = send_compactTable(login, tableName, startRow, endRow, iterators, flush, wait, compactionStrategy);
+  recv_compactTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_compactTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const std::vector<IteratorSetting> & iterators, const bool flush, const bool wait, const CompactionStrategyConfig& compactionStrategy)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("compactTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_compactTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.startRow = &startRow;
+  args.endRow = &endRow;
+  args.iterators = &iterators;
+  args.flush = &flush;
+  args.wait = &wait;
+  args.compactionStrategy = &compactionStrategy;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_compactTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("compactTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_compactTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::cancelCompaction(const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_cancelCompaction(login, tableName);
+  recv_cancelCompaction(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_cancelCompaction(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("cancelCompaction", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_cancelCompaction_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_cancelCompaction(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("cancelCompaction") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_cancelCompaction_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::createTable(const std::string& login, const std::string& tableName, const bool versioningIter, const TimeType::type type)
+{
+  int32_t seqid = send_createTable(login, tableName, versioningIter, type);
+  recv_createTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_createTable(const std::string& login, const std::string& tableName, const bool versioningIter, const TimeType::type type)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("createTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.versioningIter = &versioningIter;
+  args.type = &type;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_createTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("createTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_createTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::deleteTable(const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_deleteTable(login, tableName);
+  recv_deleteTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_deleteTable(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("deleteTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_deleteTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_deleteTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("deleteTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_deleteTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::deleteRows(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow)
+{
+  int32_t seqid = send_deleteRows(login, tableName, startRow, endRow);
+  recv_deleteRows(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_deleteRows(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("deleteRows", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_deleteRows_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.startRow = &startRow;
+  args.endRow = &endRow;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_deleteRows(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("deleteRows") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_deleteRows_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::exportTable(const std::string& login, const std::string& tableName, const std::string& exportDir)
+{
+  int32_t seqid = send_exportTable(login, tableName, exportDir);
+  recv_exportTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_exportTable(const std::string& login, const std::string& tableName, const std::string& exportDir)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("exportTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_exportTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.exportDir = &exportDir;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_exportTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("exportTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_exportTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::flushTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const bool wait)
+{
+  int32_t seqid = send_flushTable(login, tableName, startRow, endRow, wait);
+  recv_flushTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_flushTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const bool wait)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("flushTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_flushTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.startRow = &startRow;
+  args.endRow = &endRow;
+  args.wait = &wait;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_flushTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("flushTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_flushTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getDiskUsage(std::vector<DiskUsage> & _return, const std::string& login, const std::set<std::string> & tables)
+{
+  int32_t seqid = send_getDiskUsage(login, tables);
+  recv_getDiskUsage(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getDiskUsage(const std::string& login, const std::set<std::string> & tables)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getDiskUsage", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getDiskUsage_pargs args;
+  args.login = &login;
+  args.tables = &tables;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getDiskUsage(std::vector<DiskUsage> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getDiskUsage") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getDiskUsage_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getDiskUsage failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getLocalityGroups(std::map<std::string, std::set<std::string> > & _return, const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_getLocalityGroups(login, tableName);
+  recv_getLocalityGroups(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getLocalityGroups(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getLocalityGroups", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getLocalityGroups_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getLocalityGroups(std::map<std::string, std::set<std::string> > & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getLocalityGroups") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getLocalityGroups_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getLocalityGroups failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& tableName, const std::string& iteratorName, const IteratorScope::type scope)
+{
+  int32_t seqid = send_getIteratorSetting(login, tableName, iteratorName, scope);
+  recv_getIteratorSetting(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getIteratorSetting(const std::string& login, const std::string& tableName, const std::string& iteratorName, const IteratorScope::type scope)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getIteratorSetting", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getIteratorSetting_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.iteratorName = &iteratorName;
+  args.scope = &scope;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getIteratorSetting(IteratorSetting& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getIteratorSetting") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getIteratorSetting_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getIteratorSetting failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getMaxRow(std::string& _return, const std::string& login, const std::string& tableName, const std::set<std::string> & auths, const std::string& startRow, const bool startInclusive, const std::string& endRow, const bool endInclusive)
+{
+  int32_t seqid = send_getMaxRow(login, tableName, auths, startRow, startInclusive, endRow, endInclusive);
+  recv_getMaxRow(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getMaxRow(const std::string& login, const std::string& tableName, const std::set<std::string> & auths, const std::string& startRow, const bool startInclusive, const std::string& endRow, const bool endInclusive)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getMaxRow", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getMaxRow_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.auths = &auths;
+  args.startRow = &startRow;
+  args.startInclusive = &startInclusive;
+  args.endRow = &endRow;
+  args.endInclusive = &endInclusive;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getMaxRow(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getMaxRow") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getMaxRow_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getMaxRow failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getTableProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_getTableProperties(login, tableName);
+  recv_getTableProperties(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getTableProperties(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getTableProperties", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getTableProperties_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getTableProperties(std::map<std::string, std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getTableProperties") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getTableProperties_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getTableProperties failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::importDirectory(const std::string& login, const std::string& tableName, const std::string& importDir, const std::string& failureDir, const bool setTime)
+{
+  int32_t seqid = send_importDirectory(login, tableName, importDir, failureDir, setTime);
+  recv_importDirectory(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_importDirectory(const std::string& login, const std::string& tableName, const std::string& importDir, const std::string& failureDir, const bool setTime)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("importDirectory", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_importDirectory_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.importDir = &importDir;
+  args.failureDir = &failureDir;
+  args.setTime = &setTime;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_importDirectory(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("importDirectory") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_importDirectory_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      if (result.__isset.ouch4) {
+        sentry.commit();
+        throw result.ouch4;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::importTable(const std::string& login, const std::string& tableName, const std::string& importDir)
+{
+  int32_t seqid = send_importTable(login, tableName, importDir);
+  recv_importTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_importTable(const std::string& login, const std::string& tableName, const std::string& importDir)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("importTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_importTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.importDir = &importDir;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_importTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("importTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_importTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listSplits(std::vector<std::string> & _return, const std::string& login, const std::string& tableName, const int32_t maxSplits)
+{
+  int32_t seqid = send_listSplits(login, tableName, maxSplits);
+  recv_listSplits(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listSplits(const std::string& login, const std::string& tableName, const int32_t maxSplits)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listSplits", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listSplits_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.maxSplits = &maxSplits;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listSplits(std::vector<std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listSplits") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listSplits_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listSplits failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listTables(std::set<std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_listTables(login);
+  recv_listTables(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listTables(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listTables", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listTables_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listTables(std::set<std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listTables") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listTables_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listTables failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_listIterators(login, tableName);
+  recv_listIterators(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listIterators(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listIterators", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listIterators_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listIterators") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listIterators_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listIterators failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_listConstraints(login, tableName);
+  recv_listConstraints(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listConstraints(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listConstraints", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listConstraints_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listConstraints(std::map<std::string, int32_t> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listConstraints") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listConstraints_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listConstraints failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::mergeTablets(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow)
+{
+  int32_t seqid = send_mergeTablets(login, tableName, startRow, endRow);
+  recv_mergeTablets(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_mergeTablets(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("mergeTablets", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_mergeTablets_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.startRow = &startRow;
+  args.endRow = &endRow;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_mergeTablets(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("mergeTablets") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_mergeTablets_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::offlineTable(const std::string& login, const std::string& tableName, const bool wait)
+{
+  int32_t seqid = send_offlineTable(login, tableName, wait);
+  recv_offlineTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_offlineTable(const std::string& login, const std::string& tableName, const bool wait)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("offlineTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_offlineTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.wait = &wait;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_offlineTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("offlineTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_offlineTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::onlineTable(const std::string& login, const std::string& tableName, const bool wait)
+{
+  int32_t seqid = send_onlineTable(login, tableName, wait);
+  recv_onlineTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_onlineTable(const std::string& login, const std::string& tableName, const bool wait)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("onlineTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_onlineTable_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.wait = &wait;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_onlineTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("onlineTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_onlineTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::removeConstraint(const std::string& login, const std::string& tableName, const int32_t constraint)
+{
+  int32_t seqid = send_removeConstraint(login, tableName, constraint);
+  recv_removeConstraint(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_removeConstraint(const std::string& login, const std::string& tableName, const int32_t constraint)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("removeConstraint", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeConstraint_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.constraint = &constraint;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_removeConstraint(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("removeConstraint") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_removeConstraint_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::removeIterator(const std::string& login, const std::string& tableName, const std::string& iterName, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t seqid = send_removeIterator(login, tableName, iterName, scopes);
+  recv_removeIterator(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_removeIterator(const std::string& login, const std::string& tableName, const std::string& iterName, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("removeIterator", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeIterator_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.iterName = &iterName;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_removeIterator(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("removeIterator") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_removeIterator_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::removeTableProperty(const std::string& login, const std::string& tableName, const std::string& property)
+{
+  int32_t seqid = send_removeTableProperty(login, tableName, property);
+  recv_removeTableProperty(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_removeTableProperty(const std::string& login, const std::string& tableName, const std::string& property)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("removeTableProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeTableProperty_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.property = &property;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_removeTableProperty(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("removeTableProperty") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_removeTableProperty_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::renameTable(const std::string& login, const std::string& oldTableName, const std::string& newTableName)
+{
+  int32_t seqid = send_renameTable(login, oldTableName, newTableName);
+  recv_renameTable(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_renameTable(const std::string& login, const std::string& oldTableName, const std::string& newTableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("renameTable", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_renameTable_pargs args;
+  args.login = &login;
+  args.oldTableName = &oldTableName;
+  args.newTableName = &newTableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_renameTable(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("renameTable") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_renameTable_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      if (result.__isset.ouch4) {
+        sentry.commit();
+        throw result.ouch4;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::setLocalityGroups(const std::string& login, const std::string& tableName, const std::map<std::string, std::set<std::string> > & groups)
+{
+  int32_t seqid = send_setLocalityGroups(login, tableName, groups);
+  recv_setLocalityGroups(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_setLocalityGroups(const std::string& login, const std::string& tableName, const std::map<std::string, std::set<std::string> > & groups)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("setLocalityGroups", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_setLocalityGroups_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.groups = &groups;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_setLocalityGroups(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("setLocalityGroups") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_setLocalityGroups_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::setTableProperty(const std::string& login, const std::string& tableName, const std::string& property, const std::string& value)
+{
+  int32_t seqid = send_setTableProperty(login, tableName, property, value);
+  recv_setTableProperty(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_setTableProperty(const std::string& login, const std::string& tableName, const std::string& property, const std::string& value)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("setTableProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_setTableProperty_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.property = &property;
+  args.value = &value;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_setTableProperty(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("setTableProperty") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_setTableProperty_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::splitRangeByTablets(std::set<Range> & _return, const std::string& login, const std::string& tableName, const Range& range, const int32_t maxSplits)
+{
+  int32_t seqid = send_splitRangeByTablets(login, tableName, range, maxSplits);
+  recv_splitRangeByTablets(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_splitRangeByTablets(const std::string& login, const std::string& tableName, const Range& range, const int32_t maxSplits)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("splitRangeByTablets", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_splitRangeByTablets_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.range = &range;
+  args.maxSplits = &maxSplits;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_splitRangeByTablets(std::set<Range> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("splitRangeByTablets") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_splitRangeByTablets_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "splitRangeByTablets failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::tableExists(const std::string& login, const std::string& tableName)
+{
+  int32_t seqid = send_tableExists(login, tableName);
+  return recv_tableExists(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_tableExists(const std::string& login, const std::string& tableName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("tableExists", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_tableExists_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_tableExists(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("tableExists") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_tableExists_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "tableExists failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::tableIdMap(std::map<std::string, std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_tableIdMap(login);
+  recv_tableIdMap(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_tableIdMap(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("tableIdMap", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_tableIdMap_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_tableIdMap(std::map<std::string, std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("tableIdMap") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_tableIdMap_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "tableIdMap failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::testTableClassLoad(const std::string& login, const std::string& tableName, const std::string& className, const std::string& asTypeName)
+{
+  int32_t seqid = send_testTableClassLoad(login, tableName, className, asTypeName);
+  return recv_testTableClassLoad(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_testTableClassLoad(const std::string& login, const std::string& tableName, const std::string& className, const std::string& asTypeName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("testTableClassLoad", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_testTableClassLoad_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.className = &className;
+  args.asTypeName = &asTypeName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_testTableClassLoad(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("testTableClassLoad") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_testTableClassLoad_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "testTableClassLoad failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::pingTabletServer(const std::string& login, const std::string& tserver)
+{
+  int32_t seqid = send_pingTabletServer(login, tserver);
+  recv_pingTabletServer(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_pingTabletServer(const std::string& login, const std::string& tserver)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("pingTabletServer", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_pingTabletServer_pargs args;
+  args.login = &login;
+  args.tserver = &tserver;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_pingTabletServer(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("pingTabletServer") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_pingTabletServer_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getActiveScans(std::vector<ActiveScan> & _return, const std::string& login, const std::string& tserver)
+{
+  int32_t seqid = send_getActiveScans(login, tserver);
+  recv_getActiveScans(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getActiveScans(const std::string& login, const std::string& tserver)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getActiveScans", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getActiveScans_pargs args;
+  args.login = &login;
+  args.tserver = &tserver;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getActiveScans(std::vector<ActiveScan> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getActiveScans") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getActiveScans_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getActiveScans failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getActiveCompactions(std::vector<ActiveCompaction> & _return, const std::string& login, const std::string& tserver)
+{
+  int32_t seqid = send_getActiveCompactions(login, tserver);
+  recv_getActiveCompactions(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getActiveCompactions(const std::string& login, const std::string& tserver)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getActiveCompactions", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getActiveCompactions_pargs args;
+  args.login = &login;
+  args.tserver = &tserver;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getActiveCompactions(std::vector<ActiveCompaction> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getActiveCompactions") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getActiveCompactions_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getActiveCompactions failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getSiteConfiguration(std::map<std::string, std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_getSiteConfiguration(login);
+  recv_getSiteConfiguration(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getSiteConfiguration(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getSiteConfiguration", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getSiteConfiguration_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getSiteConfiguration(std::map<std::string, std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getSiteConfiguration") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getSiteConfiguration_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getSiteConfiguration failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getSystemConfiguration(std::map<std::string, std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_getSystemConfiguration(login);
+  recv_getSystemConfiguration(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getSystemConfiguration(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getSystemConfiguration", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getSystemConfiguration_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getSystemConfiguration(std::map<std::string, std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getSystemConfiguration") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getSystemConfiguration_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getSystemConfiguration failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getTabletServers(std::vector<std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_getTabletServers(login);
+  recv_getTabletServers(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getTabletServers(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getTabletServers", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getTabletServers_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getTabletServers(std::vector<std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getTabletServers") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getTabletServers_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getTabletServers failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::removeProperty(const std::string& login, const std::string& property)
+{
+  int32_t seqid = send_removeProperty(login, property);
+  recv_removeProperty(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_removeProperty(const std::string& login, const std::string& property)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("removeProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeProperty_pargs args;
+  args.login = &login;
+  args.property = &property;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_removeProperty(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("removeProperty") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_removeProperty_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::setProperty(const std::string& login, const std::string& property, const std::string& value)
+{
+  int32_t seqid = send_setProperty(login, property, value);
+  recv_setProperty(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_setProperty(const std::string& login, const std::string& property, const std::string& value)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("setProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_setProperty_pargs args;
+  args.login = &login;
+  args.property = &property;
+  args.value = &value;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_setProperty(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("setProperty") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_setProperty_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::testClassLoad(const std::string& login, const std::string& className, const std::string& asTypeName)
+{
+  int32_t seqid = send_testClassLoad(login, className, asTypeName);
+  return recv_testClassLoad(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_testClassLoad(const std::string& login, const std::string& className, const std::string& asTypeName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("testClassLoad", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_testClassLoad_pargs args;
+  args.login = &login;
+  args.className = &className;
+  args.asTypeName = &asTypeName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_testClassLoad(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("testClassLoad") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_testClassLoad_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "testClassLoad failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::authenticateUser(const std::string& login, const std::string& user, const std::map<std::string, std::string> & properties)
+{
+  int32_t seqid = send_authenticateUser(login, user, properties);
+  return recv_authenticateUser(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_authenticateUser(const std::string& login, const std::string& user, const std::map<std::string, std::string> & properties)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("authenticateUser", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_authenticateUser_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.properties = &properties;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_authenticateUser(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("authenticateUser") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_authenticateUser_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "authenticateUser failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::changeUserAuthorizations(const std::string& login, const std::string& user, const std::set<std::string> & authorizations)
+{
+  int32_t seqid = send_changeUserAuthorizations(login, user, authorizations);
+  recv_changeUserAuthorizations(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_changeUserAuthorizations(const std::string& login, const std::string& user, const std::set<std::string> & authorizations)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("changeUserAuthorizations", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_changeUserAuthorizations_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.authorizations = &authorizations;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_changeUserAuthorizations(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("changeUserAuthorizations") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_changeUserAuthorizations_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::changeLocalUserPassword(const std::string& login, const std::string& user, const std::string& password)
+{
+  int32_t seqid = send_changeLocalUserPassword(login, user, password);
+  recv_changeLocalUserPassword(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_changeLocalUserPassword(const std::string& login, const std::string& user, const std::string& password)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("changeLocalUserPassword", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_changeLocalUserPassword_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.password = &password;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_changeLocalUserPassword(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("changeLocalUserPassword") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_changeLocalUserPassword_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::createLocalUser(const std::string& login, const std::string& user, const std::string& password)
+{
+  int32_t seqid = send_createLocalUser(login, user, password);
+  recv_createLocalUser(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_createLocalUser(const std::string& login, const std::string& user, const std::string& password)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("createLocalUser", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createLocalUser_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.password = &password;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_createLocalUser(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("createLocalUser") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_createLocalUser_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::dropLocalUser(const std::string& login, const std::string& user)
+{
+  int32_t seqid = send_dropLocalUser(login, user);
+  recv_dropLocalUser(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_dropLocalUser(const std::string& login, const std::string& user)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("dropLocalUser", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_dropLocalUser_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_dropLocalUser(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("dropLocalUser") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_dropLocalUser_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getUserAuthorizations(std::vector<std::string> & _return, const std::string& login, const std::string& user)
+{
+  int32_t seqid = send_getUserAuthorizations(login, user);
+  recv_getUserAuthorizations(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getUserAuthorizations(const std::string& login, const std::string& user)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getUserAuthorizations", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getUserAuthorizations_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getUserAuthorizations(std::vector<std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getUserAuthorizations") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getUserAuthorizations_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getUserAuthorizations failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::grantSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm)
+{
+  int32_t seqid = send_grantSystemPermission(login, user, perm);
+  recv_grantSystemPermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_grantSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("grantSystemPermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_grantSystemPermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_grantSystemPermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("grantSystemPermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_grantSystemPermission_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::grantTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm)
+{
+  int32_t seqid = send_grantTablePermission(login, user, table, perm);
+  recv_grantTablePermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_grantTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("grantTablePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_grantTablePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.table = &table;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_grantTablePermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("grantTablePermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_grantTablePermission_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::hasSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm)
+{
+  int32_t seqid = send_hasSystemPermission(login, user, perm);
+  return recv_hasSystemPermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_hasSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("hasSystemPermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_hasSystemPermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_hasSystemPermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("hasSystemPermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_hasSystemPermission_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "hasSystemPermission failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::hasTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm)
+{
+  int32_t seqid = send_hasTablePermission(login, user, table, perm);
+  return recv_hasTablePermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_hasTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("hasTablePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_hasTablePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.table = &table;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_hasTablePermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("hasTablePermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_hasTablePermission_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "hasTablePermission failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listLocalUsers(std::set<std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_listLocalUsers(login);
+  recv_listLocalUsers(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listLocalUsers(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listLocalUsers", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listLocalUsers_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listLocalUsers(std::set<std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listLocalUsers") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listLocalUsers_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listLocalUsers failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::revokeSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm)
+{
+  int32_t seqid = send_revokeSystemPermission(login, user, perm);
+  recv_revokeSystemPermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_revokeSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("revokeSystemPermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_revokeSystemPermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_revokeSystemPermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("revokeSystemPermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_revokeSystemPermission_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::revokeTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm)
+{
+  int32_t seqid = send_revokeTablePermission(login, user, table, perm);
+  recv_revokeTablePermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_revokeTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("revokeTablePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_revokeTablePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.table = &table;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_revokeTablePermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("revokeTablePermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_revokeTablePermission_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t seqid = send_grantNamespacePermission(login, user, namespaceName, perm);
+  recv_grantNamespacePermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("grantNamespacePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_grantNamespacePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.namespaceName = &namespaceName;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_grantNamespacePermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("grantNamespacePermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_grantNamespacePermission_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t seqid = send_hasNamespacePermission(login, user, namespaceName, perm);
+  return recv_hasNamespacePermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("hasNamespacePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_hasNamespacePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.namespaceName = &namespaceName;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_hasNamespacePermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("hasNamespacePermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_hasNamespacePermission_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "hasNamespacePermission failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t seqid = send_revokeNamespacePermission(login, user, namespaceName, perm);
+  recv_revokeNamespacePermission(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("revokeNamespacePermission", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_revokeNamespacePermission_pargs args;
+  args.login = &login;
+  args.user = &user;
+  args.namespaceName = &namespaceName;
+  args.perm = &perm;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_revokeNamespacePermission(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("revokeNamespacePermission") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_revokeNamespacePermission_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::createBatchScanner(std::string& _return, const std::string& login, const std::string& tableName, const BatchScanOptions& options)
+{
+  int32_t seqid = send_createBatchScanner(login, tableName, options);
+  recv_createBatchScanner(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_createBatchScanner(const std::string& login, const std::string& tableName, const BatchScanOptions& options)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("createBatchScanner", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createBatchScanner_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.options = &options;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_createBatchScanner(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("createBatchScanner") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_createBatchScanner_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "createBatchScanner failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::createScanner(std::string& _return, const std::string& login, const std::string& tableName, const ScanOptions& options)
+{
+  int32_t seqid = send_createScanner(login, tableName, options);
+  recv_createScanner(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_createScanner(const std::string& login, const std::string& tableName, const ScanOptions& options)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("createScanner", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createScanner_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.options = &options;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_createScanner(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("createScanner") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_createScanner_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "createScanner failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::hasNext(const std::string& scanner)
+{
+  int32_t seqid = send_hasNext(scanner);
+  return recv_hasNext(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_hasNext(const std::string& scanner)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("hasNext", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_hasNext_pargs args;
+  args.scanner = &scanner;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_hasNext(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("hasNext") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_hasNext_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "hasNext failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::nextEntry(KeyValueAndPeek& _return, const std::string& scanner)
+{
+  int32_t seqid = send_nextEntry(scanner);
+  recv_nextEntry(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_nextEntry(const std::string& scanner)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("nextEntry", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_nextEntry_pargs args;
+  args.scanner = &scanner;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_nextEntry(KeyValueAndPeek& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("nextEntry") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_nextEntry_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "nextEntry failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::nextK(ScanResult& _return, const std::string& scanner, const int32_t k)
+{
+  int32_t seqid = send_nextK(scanner, k);
+  recv_nextK(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_nextK(const std::string& scanner, const int32_t k)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("nextK", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_nextK_pargs args;
+  args.scanner = &scanner;
+  args.k = &k;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_nextK(ScanResult& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("nextK") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_nextK_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "nextK failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::closeScanner(const std::string& scanner)
+{
+  int32_t seqid = send_closeScanner(scanner);
+  recv_closeScanner(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_closeScanner(const std::string& scanner)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("closeScanner", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_closeScanner_pargs args;
+  args.scanner = &scanner;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_closeScanner(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("closeScanner") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_closeScanner_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::updateAndFlush(const std::string& login, const std::string& tableName, const std::map<std::string, std::vector<ColumnUpdate> > & cells)
+{
+  int32_t seqid = send_updateAndFlush(login, tableName, cells);
+  recv_updateAndFlush(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_updateAndFlush(const std::string& login, const std::string& tableName, const std::map<std::string, std::vector<ColumnUpdate> > & cells)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("updateAndFlush", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_updateAndFlush_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.cells = &cells;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_updateAndFlush(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("updateAndFlush") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_updateAndFlush_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.outch1) {
+        sentry.commit();
+        throw result.outch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      if (result.__isset.ouch4) {
+        sentry.commit();
+        throw result.ouch4;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::createWriter(std::string& _return, const std::string& login, const std::string& tableName, const WriterOptions& opts)
+{
+  int32_t seqid = send_createWriter(login, tableName, opts);
+  recv_createWriter(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_createWriter(const std::string& login, const std::string& tableName, const WriterOptions& opts)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("createWriter", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createWriter_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.opts = &opts;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_createWriter(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("createWriter") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_createWriter_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.outch1) {
+        sentry.commit();
+        throw result.outch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "createWriter failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::update(const std::string& writer, const std::map<std::string, std::vector<ColumnUpdate> > & cells)
+{
+  send_update(writer, cells);
+}
+
+void AccumuloProxyConcurrentClient::send_update(const std::string& writer, const std::map<std::string, std::vector<ColumnUpdate> > & cells)
+{
+  int32_t cseqid = 0;
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("update", ::apache::thrift::protocol::T_ONEWAY, cseqid);
+
+  AccumuloProxy_update_pargs args;
+  args.writer = &writer;
+  args.cells = &cells;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+}
+
+void AccumuloProxyConcurrentClient::flush(const std::string& writer)
+{
+  int32_t seqid = send_flush(writer);
+  recv_flush(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_flush(const std::string& writer)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("flush", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_flush_pargs args;
+  args.writer = &writer;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_flush(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("flush") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_flush_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::closeWriter(const std::string& writer)
+{
+  int32_t seqid = send_closeWriter(writer);
+  recv_closeWriter(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_closeWriter(const std::string& writer)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("closeWriter", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_closeWriter_pargs args;
+  args.writer = &writer;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_closeWriter(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("closeWriter") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_closeWriter_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+ConditionalStatus::type AccumuloProxyConcurrentClient::updateRowConditionally(const std::string& login, const std::string& tableName, const std::string& row, const ConditionalUpdates& updates)
+{
+  int32_t seqid = send_updateRowConditionally(login, tableName, row, updates);
+  return recv_updateRowConditionally(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_updateRowConditionally(const std::string& login, const std::string& tableName, const std::string& row, const ConditionalUpdates& updates)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("updateRowConditionally", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_updateRowConditionally_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.row = &row;
+  args.updates = &updates;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+ConditionalStatus::type AccumuloProxyConcurrentClient::recv_updateRowConditionally(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("updateRowConditionally") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      ConditionalStatus::type _return;
+      AccumuloProxy_updateRowConditionally_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "updateRowConditionally failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::createConditionalWriter(std::string& _return, const std::string& login, const std::string& tableName, const ConditionalWriterOptions& options)
+{
+  int32_t seqid = send_createConditionalWriter(login, tableName, options);
+  recv_createConditionalWriter(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_createConditionalWriter(const std::string& login, const std::string& tableName, const ConditionalWriterOptions& options)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("createConditionalWriter", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createConditionalWriter_pargs args;
+  args.login = &login;
+  args.tableName = &tableName;
+  args.options = &options;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_createConditionalWriter(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("createConditionalWriter") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_createConditionalWriter_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "createConditionalWriter failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::updateRowsConditionally(std::map<std::string, ConditionalStatus::type> & _return, const std::string& conditionalWriter, const std::map<std::string, ConditionalUpdates> & updates)
+{
+  int32_t seqid = send_updateRowsConditionally(conditionalWriter, updates);
+  recv_updateRowsConditionally(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_updateRowsConditionally(const std::string& conditionalWriter, const std::map<std::string, ConditionalUpdates> & updates)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("updateRowsConditionally", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_updateRowsConditionally_pargs args;
+  args.conditionalWriter = &conditionalWriter;
+  args.updates = &updates;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_updateRowsConditionally(std::map<std::string, ConditionalStatus::type> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("updateRowsConditionally") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_updateRowsConditionally_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "updateRowsConditionally failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::closeConditionalWriter(const std::string& conditionalWriter)
+{
+  int32_t seqid = send_closeConditionalWriter(conditionalWriter);
+  recv_closeConditionalWriter(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_closeConditionalWriter(const std::string& conditionalWriter)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("closeConditionalWriter", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_closeConditionalWriter_pargs args;
+  args.conditionalWriter = &conditionalWriter;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_closeConditionalWriter(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("closeConditionalWriter") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_closeConditionalWriter_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getRowRange(Range& _return, const std::string& row)
+{
+  int32_t seqid = send_getRowRange(row);
+  recv_getRowRange(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getRowRange(const std::string& row)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getRowRange", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getRowRange_pargs args;
+  args.row = &row;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getRowRange(Range& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getRowRange") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getRowRange_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getRowRange failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getFollowing(Key& _return, const Key& key, const PartialKey::type part)
+{
+  int32_t seqid = send_getFollowing(key, part);
+  recv_getFollowing(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getFollowing(const Key& key, const PartialKey::type part)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getFollowing", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getFollowing_pargs args;
+  args.key = &key;
+  args.part = &part;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getFollowing(Key& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getFollowing") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getFollowing_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getFollowing failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::systemNamespace(std::string& _return)
+{
+  int32_t seqid = send_systemNamespace();
+  recv_systemNamespace(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_systemNamespace()
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("systemNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_systemNamespace_pargs args;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_systemNamespace(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("systemNamespace") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_systemNamespace_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "systemNamespace failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::defaultNamespace(std::string& _return)
+{
+  int32_t seqid = send_defaultNamespace();
+  recv_defaultNamespace(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_defaultNamespace()
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("defaultNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_defaultNamespace_pargs args;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_defaultNamespace(std::string& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("defaultNamespace") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_defaultNamespace_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "defaultNamespace failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listNamespaces(std::vector<std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_listNamespaces(login);
+  recv_listNamespaces(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listNamespaces(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listNamespaces", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listNamespaces_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listNamespaces(std::vector<std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listNamespaces") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listNamespaces_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listNamespaces failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::namespaceExists(const std::string& login, const std::string& namespaceName)
+{
+  int32_t seqid = send_namespaceExists(login, namespaceName);
+  return recv_namespaceExists(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_namespaceExists(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("namespaceExists", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_namespaceExists_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_namespaceExists(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("namespaceExists") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_namespaceExists_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "namespaceExists failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::createNamespace(const std::string& login, const std::string& namespaceName)
+{
+  int32_t seqid = send_createNamespace(login, namespaceName);
+  recv_createNamespace(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_createNamespace(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("createNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_createNamespace_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_createNamespace(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("createNamespace") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_createNamespace_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::deleteNamespace(const std::string& login, const std::string& namespaceName)
+{
+  int32_t seqid = send_deleteNamespace(login, namespaceName);
+  recv_deleteNamespace(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_deleteNamespace(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("deleteNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_deleteNamespace_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_deleteNamespace(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("deleteNamespace") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_deleteNamespace_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      if (result.__isset.ouch4) {
+        sentry.commit();
+        throw result.ouch4;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName)
+{
+  int32_t seqid = send_renameNamespace(login, oldNamespaceName, newNamespaceName);
+  recv_renameNamespace(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("renameNamespace", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_renameNamespace_pargs args;
+  args.login = &login;
+  args.oldNamespaceName = &oldNamespaceName;
+  args.newNamespaceName = &newNamespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_renameNamespace(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("renameNamespace") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_renameNamespace_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      if (result.__isset.ouch4) {
+        sentry.commit();
+        throw result.ouch4;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value)
+{
+  int32_t seqid = send_setNamespaceProperty(login, namespaceName, property, value);
+  recv_setNamespaceProperty(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("setNamespaceProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_setNamespaceProperty_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.property = &property;
+  args.value = &value;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_setNamespaceProperty(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("setNamespaceProperty") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_setNamespaceProperty_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property)
+{
+  int32_t seqid = send_removeNamespaceProperty(login, namespaceName, property);
+  recv_removeNamespaceProperty(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("removeNamespaceProperty", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeNamespaceProperty_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.property = &property;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_removeNamespaceProperty(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("removeNamespaceProperty") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_removeNamespaceProperty_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getNamespaceProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& namespaceName)
+{
+  int32_t seqid = send_getNamespaceProperties(login, namespaceName);
+  recv_getNamespaceProperties(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getNamespaceProperties(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getNamespaceProperties", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getNamespaceProperties_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getNamespaceProperties(std::map<std::string, std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getNamespaceProperties") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getNamespaceProperties_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getNamespaceProperties failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::namespaceIdMap(std::map<std::string, std::string> & _return, const std::string& login)
+{
+  int32_t seqid = send_namespaceIdMap(login);
+  recv_namespaceIdMap(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_namespaceIdMap(const std::string& login)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("namespaceIdMap", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_namespaceIdMap_pargs args;
+  args.login = &login;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_namespaceIdMap(std::map<std::string, std::string> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("namespaceIdMap") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_namespaceIdMap_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "namespaceIdMap failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t seqid = send_attachNamespaceIterator(login, namespaceName, setting, scopes);
+  recv_attachNamespaceIterator(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("attachNamespaceIterator", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_attachNamespaceIterator_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.setting = &setting;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_attachNamespaceIterator(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("attachNamespaceIterator") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_attachNamespaceIterator_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t seqid = send_removeNamespaceIterator(login, namespaceName, name, scopes);
+  recv_removeNamespaceIterator(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("removeNamespaceIterator", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeNamespaceIterator_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.name = &name;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_removeNamespaceIterator(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("removeNamespaceIterator") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_removeNamespaceIterator_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::getNamespaceIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope)
+{
+  int32_t seqid = send_getNamespaceIteratorSetting(login, namespaceName, name, scope);
+  recv_getNamespaceIteratorSetting(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_getNamespaceIteratorSetting(const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("getNamespaceIteratorSetting", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_getNamespaceIteratorSetting_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.name = &name;
+  args.scope = &scope;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_getNamespaceIteratorSetting(IteratorSetting& _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("getNamespaceIteratorSetting") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_getNamespaceIteratorSetting_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "getNamespaceIteratorSetting failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& namespaceName)
+{
+  int32_t seqid = send_listNamespaceIterators(login, namespaceName);
+  recv_listNamespaceIterators(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listNamespaceIterators(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listNamespaceIterators", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listNamespaceIterators_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listNamespaceIterators") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listNamespaceIterators_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listNamespaceIterators failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t seqid = send_checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes);
+  recv_checkNamespaceIteratorConflicts(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("checkNamespaceIteratorConflicts", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_checkNamespaceIteratorConflicts_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.setting = &setting;
+  args.scopes = &scopes;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_checkNamespaceIteratorConflicts(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("checkNamespaceIteratorConflicts") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_checkNamespaceIteratorConflicts_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+int32_t AccumuloProxyConcurrentClient::addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName)
+{
+  int32_t seqid = send_addNamespaceConstraint(login, namespaceName, constraintClassName);
+  return recv_addNamespaceConstraint(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("addNamespaceConstraint", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_addNamespaceConstraint_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.constraintClassName = &constraintClassName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+int32_t AccumuloProxyConcurrentClient::recv_addNamespaceConstraint(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("addNamespaceConstraint") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      int32_t _return;
+      AccumuloProxy_addNamespaceConstraint_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "addNamespaceConstraint failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id)
+{
+  int32_t seqid = send_removeNamespaceConstraint(login, namespaceName, id);
+  recv_removeNamespaceConstraint(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("removeNamespaceConstraint", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_removeNamespaceConstraint_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.id = &id;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_removeNamespaceConstraint(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("removeNamespaceConstraint") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_removeNamespaceConstraint_presult result;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      sentry.commit();
+      return;
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+void AccumuloProxyConcurrentClient::listNamespaceConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& namespaceName)
+{
+  int32_t seqid = send_listNamespaceConstraints(login, namespaceName);
+  recv_listNamespaceConstraints(_return, seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_listNamespaceConstraints(const std::string& login, const std::string& namespaceName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("listNamespaceConstraints", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_listNamespaceConstraints_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+void AccumuloProxyConcurrentClient::recv_listNamespaceConstraints(std::map<std::string, int32_t> & _return, const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("listNamespaceConstraints") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      AccumuloProxy_listNamespaceConstraints_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        // _return pointer has now been filled
+        sentry.commit();
+        return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "listNamespaceConstraints failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
+bool AccumuloProxyConcurrentClient::testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName)
+{
+  int32_t seqid = send_testNamespaceClassLoad(login, namespaceName, className, asTypeName);
+  return recv_testNamespaceClassLoad(seqid);
+}
+
+int32_t AccumuloProxyConcurrentClient::send_testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName)
+{
+  int32_t cseqid = this->sync_.generateSeqId();
+  ::apache::thrift::async::TConcurrentSendSentry sentry(&this->sync_);
+  oprot_->writeMessageBegin("testNamespaceClassLoad", ::apache::thrift::protocol::T_CALL, cseqid);
+
+  AccumuloProxy_testNamespaceClassLoad_pargs args;
+  args.login = &login;
+  args.namespaceName = &namespaceName;
+  args.className = &className;
+  args.asTypeName = &asTypeName;
+  args.write(oprot_);
+
+  oprot_->writeMessageEnd();
+  oprot_->getTransport()->writeEnd();
+  oprot_->getTransport()->flush();
+
+  sentry.commit();
+  return cseqid;
+}
+
+bool AccumuloProxyConcurrentClient::recv_testNamespaceClassLoad(const int32_t seqid)
+{
+
+  int32_t rseqid = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TMessageType mtype;
+
+  // the read mutex gets dropped and reacquired as part of waitForWork()
+  // The destructor of this sentry wakes up other clients
+  ::apache::thrift::async::TConcurrentRecvSentry sentry(&this->sync_, seqid);
+
+  while(true) {
+    if(!this->sync_.getPending(fname, mtype, rseqid)) {
+      iprot_->readMessageBegin(fname, mtype, rseqid);
+    }
+    if(seqid == rseqid) {
+      if (mtype == ::apache::thrift::protocol::T_EXCEPTION) {
+        ::apache::thrift::TApplicationException x;
+        x.read(iprot_);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+        sentry.commit();
+        throw x;
+      }
+      if (mtype != ::apache::thrift::protocol::T_REPLY) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+      }
+      if (fname.compare("testNamespaceClassLoad") != 0) {
+        iprot_->skip(::apache::thrift::protocol::T_STRUCT);
+        iprot_->readMessageEnd();
+        iprot_->getTransport()->readEnd();
+
+        // in a bad state, don't commit
+        using ::apache::thrift::protocol::TProtocolException;
+        throw TProtocolException(TProtocolException::INVALID_DATA);
+      }
+      bool _return;
+      AccumuloProxy_testNamespaceClassLoad_presult result;
+      result.success = &_return;
+      result.read(iprot_);
+      iprot_->readMessageEnd();
+      iprot_->getTransport()->readEnd();
+
+      if (result.__isset.success) {
+        sentry.commit();
+        return _return;
+      }
+      if (result.__isset.ouch1) {
+        sentry.commit();
+        throw result.ouch1;
+      }
+      if (result.__isset.ouch2) {
+        sentry.commit();
+        throw result.ouch2;
+      }
+      if (result.__isset.ouch3) {
+        sentry.commit();
+        throw result.ouch3;
+      }
+      // in a bad state, don't commit
+      throw ::apache::thrift::TApplicationException(::apache::thrift::TApplicationException::MISSING_RESULT, "testNamespaceClassLoad failed: unknown result");
+    }
+    // seqid != rseqid
+    this->sync_.updatePending(fname, mtype, rseqid);
+
+    // this will temporarily unlock the readMutex, and let other clients get work done
+    this->sync_.waitForWork(seqid);
+  } // end while(true)
+}
+
 } // namespace
 
diff --git a/proxy/src/main/cpp/AccumuloProxy.h b/proxy/src/main/cpp/AccumuloProxy.h
index 269884f..429cf55 100644
--- a/proxy/src/main/cpp/AccumuloProxy.h
+++ b/proxy/src/main/cpp/AccumuloProxy.h
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,10 +24,16 @@
 #define AccumuloProxy_H
 
 #include <thrift/TDispatchProcessor.h>
+#include <thrift/async/TConcurrentClientSyncInfo.h>
 #include "proxy_types.h"
 
 namespace accumulo {
 
+#ifdef _WIN32
+  #pragma warning( push )
+  #pragma warning (disable : 4250 ) //inheriting methods via dominance 
+#endif
+
 class AccumuloProxyIf {
  public:
   virtual ~AccumuloProxyIf() {}
@@ -91,6 +97,9 @@
   virtual void listLocalUsers(std::set<std::string> & _return, const std::string& login) = 0;
   virtual void revokeSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm) = 0;
   virtual void revokeTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm) = 0;
+  virtual void grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) = 0;
+  virtual bool hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) = 0;
+  virtual void revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) = 0;
   virtual void createBatchScanner(std::string& _return, const std::string& login, const std::string& tableName, const BatchScanOptions& options) = 0;
   virtual void createScanner(std::string& _return, const std::string& login, const std::string& tableName, const ScanOptions& options) = 0;
   virtual bool hasNext(const std::string& scanner) = 0;
@@ -108,6 +117,26 @@
   virtual void closeConditionalWriter(const std::string& conditionalWriter) = 0;
   virtual void getRowRange(Range& _return, const std::string& row) = 0;
   virtual void getFollowing(Key& _return, const Key& key, const PartialKey::type part) = 0;
+  virtual void systemNamespace(std::string& _return) = 0;
+  virtual void defaultNamespace(std::string& _return) = 0;
+  virtual void listNamespaces(std::vector<std::string> & _return, const std::string& login) = 0;
+  virtual bool namespaceExists(const std::string& login, const std::string& namespaceName) = 0;
+  virtual void createNamespace(const std::string& login, const std::string& namespaceName) = 0;
+  virtual void deleteNamespace(const std::string& login, const std::string& namespaceName) = 0;
+  virtual void renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName) = 0;
+  virtual void setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value) = 0;
+  virtual void removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property) = 0;
+  virtual void getNamespaceProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& namespaceName) = 0;
+  virtual void namespaceIdMap(std::map<std::string, std::string> & _return, const std::string& login) = 0;
+  virtual void attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes) = 0;
+  virtual void removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes) = 0;
+  virtual void getNamespaceIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope) = 0;
+  virtual void listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& namespaceName) = 0;
+  virtual void checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes) = 0;
+  virtual int32_t addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName) = 0;
+  virtual void removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id) = 0;
+  virtual void listNamespaceConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& namespaceName) = 0;
+  virtual bool testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName) = 0;
 };
 
 class AccumuloProxyIfFactory {
@@ -324,6 +353,16 @@
   void revokeTablePermission(const std::string& /* login */, const std::string& /* user */, const std::string& /* table */, const TablePermission::type /* perm */) {
     return;
   }
+  void grantNamespacePermission(const std::string& /* login */, const std::string& /* user */, const std::string& /* namespaceName */, const NamespacePermission::type /* perm */) {
+    return;
+  }
+  bool hasNamespacePermission(const std::string& /* login */, const std::string& /* user */, const std::string& /* namespaceName */, const NamespacePermission::type /* perm */) {
+    bool _return = false;
+    return _return;
+  }
+  void revokeNamespacePermission(const std::string& /* login */, const std::string& /* user */, const std::string& /* namespaceName */, const NamespacePermission::type /* perm */) {
+    return;
+  }
   void createBatchScanner(std::string& /* _return */, const std::string& /* login */, const std::string& /* tableName */, const BatchScanOptions& /* options */) {
     return;
   }
@@ -377,34 +416,94 @@
   void getFollowing(Key& /* _return */, const Key& /* key */, const PartialKey::type /* part */) {
     return;
   }
+  void systemNamespace(std::string& /* _return */) {
+    return;
+  }
+  void defaultNamespace(std::string& /* _return */) {
+    return;
+  }
+  void listNamespaces(std::vector<std::string> & /* _return */, const std::string& /* login */) {
+    return;
+  }
+  bool namespaceExists(const std::string& /* login */, const std::string& /* namespaceName */) {
+    bool _return = false;
+    return _return;
+  }
+  void createNamespace(const std::string& /* login */, const std::string& /* namespaceName */) {
+    return;
+  }
+  void deleteNamespace(const std::string& /* login */, const std::string& /* namespaceName */) {
+    return;
+  }
+  void renameNamespace(const std::string& /* login */, const std::string& /* oldNamespaceName */, const std::string& /* newNamespaceName */) {
+    return;
+  }
+  void setNamespaceProperty(const std::string& /* login */, const std::string& /* namespaceName */, const std::string& /* property */, const std::string& /* value */) {
+    return;
+  }
+  void removeNamespaceProperty(const std::string& /* login */, const std::string& /* namespaceName */, const std::string& /* property */) {
+    return;
+  }
+  void getNamespaceProperties(std::map<std::string, std::string> & /* _return */, const std::string& /* login */, const std::string& /* namespaceName */) {
+    return;
+  }
+  void namespaceIdMap(std::map<std::string, std::string> & /* _return */, const std::string& /* login */) {
+    return;
+  }
+  void attachNamespaceIterator(const std::string& /* login */, const std::string& /* namespaceName */, const IteratorSetting& /* setting */, const std::set<IteratorScope::type> & /* scopes */) {
+    return;
+  }
+  void removeNamespaceIterator(const std::string& /* login */, const std::string& /* namespaceName */, const std::string& /* name */, const std::set<IteratorScope::type> & /* scopes */) {
+    return;
+  }
+  void getNamespaceIteratorSetting(IteratorSetting& /* _return */, const std::string& /* login */, const std::string& /* namespaceName */, const std::string& /* name */, const IteratorScope::type /* scope */) {
+    return;
+  }
+  void listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & /* _return */, const std::string& /* login */, const std::string& /* namespaceName */) {
+    return;
+  }
+  void checkNamespaceIteratorConflicts(const std::string& /* login */, const std::string& /* namespaceName */, const IteratorSetting& /* setting */, const std::set<IteratorScope::type> & /* scopes */) {
+    return;
+  }
+  int32_t addNamespaceConstraint(const std::string& /* login */, const std::string& /* namespaceName */, const std::string& /* constraintClassName */) {
+    int32_t _return = 0;
+    return _return;
+  }
+  void removeNamespaceConstraint(const std::string& /* login */, const std::string& /* namespaceName */, const int32_t /* id */) {
+    return;
+  }
+  void listNamespaceConstraints(std::map<std::string, int32_t> & /* _return */, const std::string& /* login */, const std::string& /* namespaceName */) {
+    return;
+  }
+  bool testNamespaceClassLoad(const std::string& /* login */, const std::string& /* namespaceName */, const std::string& /* className */, const std::string& /* asTypeName */) {
+    bool _return = false;
+    return _return;
+  }
 };
 
 typedef struct _AccumuloProxy_login_args__isset {
   _AccumuloProxy_login_args__isset() : principal(false), loginProperties(false) {}
-  bool principal;
-  bool loginProperties;
+  bool principal :1;
+  bool loginProperties :1;
 } _AccumuloProxy_login_args__isset;
 
 class AccumuloProxy_login_args {
  public:
 
+  AccumuloProxy_login_args(const AccumuloProxy_login_args&);
+  AccumuloProxy_login_args& operator=(const AccumuloProxy_login_args&);
   AccumuloProxy_login_args() : principal() {
   }
 
-  virtual ~AccumuloProxy_login_args() throw() {}
-
+  virtual ~AccumuloProxy_login_args() throw();
   std::string principal;
   std::map<std::string, std::string>  loginProperties;
 
   _AccumuloProxy_login_args__isset __isset;
 
-  void __set_principal(const std::string& val) {
-    principal = val;
-  }
+  void __set_principal(const std::string& val);
 
-  void __set_loginProperties(const std::map<std::string, std::string> & val) {
-    loginProperties = val;
-  }
+  void __set_loginProperties(const std::map<std::string, std::string> & val);
 
   bool operator == (const AccumuloProxy_login_args & rhs) const
   {
@@ -430,8 +529,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_login_pargs() throw() {}
-
+  virtual ~AccumuloProxy_login_pargs() throw();
   const std::string* principal;
   const std::map<std::string, std::string> * loginProperties;
 
@@ -441,30 +539,27 @@
 
 typedef struct _AccumuloProxy_login_result__isset {
   _AccumuloProxy_login_result__isset() : success(false), ouch2(false) {}
-  bool success;
-  bool ouch2;
+  bool success :1;
+  bool ouch2 :1;
 } _AccumuloProxy_login_result__isset;
 
 class AccumuloProxy_login_result {
  public:
 
+  AccumuloProxy_login_result(const AccumuloProxy_login_result&);
+  AccumuloProxy_login_result& operator=(const AccumuloProxy_login_result&);
   AccumuloProxy_login_result() : success() {
   }
 
-  virtual ~AccumuloProxy_login_result() throw() {}
-
+  virtual ~AccumuloProxy_login_result() throw();
   std::string success;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_login_result__isset __isset;
 
-  void __set_success(const std::string& val) {
-    success = val;
-  }
+  void __set_success(const std::string& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_login_result & rhs) const
   {
@@ -487,16 +582,15 @@
 
 typedef struct _AccumuloProxy_login_presult__isset {
   _AccumuloProxy_login_presult__isset() : success(false), ouch2(false) {}
-  bool success;
-  bool ouch2;
+  bool success :1;
+  bool ouch2 :1;
 } _AccumuloProxy_login_presult__isset;
 
 class AccumuloProxy_login_presult {
  public:
 
 
-  virtual ~AccumuloProxy_login_presult() throw() {}
-
+  virtual ~AccumuloProxy_login_presult() throw();
   std::string* success;
   AccumuloSecurityException ouch2;
 
@@ -508,36 +602,31 @@
 
 typedef struct _AccumuloProxy_addConstraint_args__isset {
   _AccumuloProxy_addConstraint_args__isset() : login(false), tableName(false), constraintClassName(false) {}
-  bool login;
-  bool tableName;
-  bool constraintClassName;
+  bool login :1;
+  bool tableName :1;
+  bool constraintClassName :1;
 } _AccumuloProxy_addConstraint_args__isset;
 
 class AccumuloProxy_addConstraint_args {
  public:
 
+  AccumuloProxy_addConstraint_args(const AccumuloProxy_addConstraint_args&);
+  AccumuloProxy_addConstraint_args& operator=(const AccumuloProxy_addConstraint_args&);
   AccumuloProxy_addConstraint_args() : login(), tableName(), constraintClassName() {
   }
 
-  virtual ~AccumuloProxy_addConstraint_args() throw() {}
-
+  virtual ~AccumuloProxy_addConstraint_args() throw();
   std::string login;
   std::string tableName;
   std::string constraintClassName;
 
   _AccumuloProxy_addConstraint_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_constraintClassName(const std::string& val) {
-    constraintClassName = val;
-  }
+  void __set_constraintClassName(const std::string& val);
 
   bool operator == (const AccumuloProxy_addConstraint_args & rhs) const
   {
@@ -565,8 +654,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_addConstraint_pargs() throw() {}
-
+  virtual ~AccumuloProxy_addConstraint_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* constraintClassName;
@@ -577,20 +665,21 @@
 
 typedef struct _AccumuloProxy_addConstraint_result__isset {
   _AccumuloProxy_addConstraint_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_addConstraint_result__isset;
 
 class AccumuloProxy_addConstraint_result {
  public:
 
+  AccumuloProxy_addConstraint_result(const AccumuloProxy_addConstraint_result&);
+  AccumuloProxy_addConstraint_result& operator=(const AccumuloProxy_addConstraint_result&);
   AccumuloProxy_addConstraint_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_addConstraint_result() throw() {}
-
+  virtual ~AccumuloProxy_addConstraint_result() throw();
   int32_t success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -598,21 +687,13 @@
 
   _AccumuloProxy_addConstraint_result__isset __isset;
 
-  void __set_success(const int32_t val) {
-    success = val;
-  }
+  void __set_success(const int32_t val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_addConstraint_result & rhs) const
   {
@@ -639,18 +720,17 @@
 
 typedef struct _AccumuloProxy_addConstraint_presult__isset {
   _AccumuloProxy_addConstraint_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_addConstraint_presult__isset;
 
 class AccumuloProxy_addConstraint_presult {
  public:
 
 
-  virtual ~AccumuloProxy_addConstraint_presult() throw() {}
-
+  virtual ~AccumuloProxy_addConstraint_presult() throw();
   int32_t* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -664,36 +744,31 @@
 
 typedef struct _AccumuloProxy_addSplits_args__isset {
   _AccumuloProxy_addSplits_args__isset() : login(false), tableName(false), splits(false) {}
-  bool login;
-  bool tableName;
-  bool splits;
+  bool login :1;
+  bool tableName :1;
+  bool splits :1;
 } _AccumuloProxy_addSplits_args__isset;
 
 class AccumuloProxy_addSplits_args {
  public:
 
+  AccumuloProxy_addSplits_args(const AccumuloProxy_addSplits_args&);
+  AccumuloProxy_addSplits_args& operator=(const AccumuloProxy_addSplits_args&);
   AccumuloProxy_addSplits_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_addSplits_args() throw() {}
-
+  virtual ~AccumuloProxy_addSplits_args() throw();
   std::string login;
   std::string tableName;
   std::set<std::string>  splits;
 
   _AccumuloProxy_addSplits_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_splits(const std::set<std::string> & val) {
-    splits = val;
-  }
+  void __set_splits(const std::set<std::string> & val);
 
   bool operator == (const AccumuloProxy_addSplits_args & rhs) const
   {
@@ -721,8 +796,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_addSplits_pargs() throw() {}
-
+  virtual ~AccumuloProxy_addSplits_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::set<std::string> * splits;
@@ -733,36 +807,31 @@
 
 typedef struct _AccumuloProxy_addSplits_result__isset {
   _AccumuloProxy_addSplits_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_addSplits_result__isset;
 
 class AccumuloProxy_addSplits_result {
  public:
 
+  AccumuloProxy_addSplits_result(const AccumuloProxy_addSplits_result&);
+  AccumuloProxy_addSplits_result& operator=(const AccumuloProxy_addSplits_result&);
   AccumuloProxy_addSplits_result() {
   }
 
-  virtual ~AccumuloProxy_addSplits_result() throw() {}
-
+  virtual ~AccumuloProxy_addSplits_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_addSplits_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_addSplits_result & rhs) const
   {
@@ -787,17 +856,16 @@
 
 typedef struct _AccumuloProxy_addSplits_presult__isset {
   _AccumuloProxy_addSplits_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_addSplits_presult__isset;
 
 class AccumuloProxy_addSplits_presult {
  public:
 
 
-  virtual ~AccumuloProxy_addSplits_presult() throw() {}
-
+  virtual ~AccumuloProxy_addSplits_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -810,20 +878,21 @@
 
 typedef struct _AccumuloProxy_attachIterator_args__isset {
   _AccumuloProxy_attachIterator_args__isset() : login(false), tableName(false), setting(false), scopes(false) {}
-  bool login;
-  bool tableName;
-  bool setting;
-  bool scopes;
+  bool login :1;
+  bool tableName :1;
+  bool setting :1;
+  bool scopes :1;
 } _AccumuloProxy_attachIterator_args__isset;
 
 class AccumuloProxy_attachIterator_args {
  public:
 
+  AccumuloProxy_attachIterator_args(const AccumuloProxy_attachIterator_args&);
+  AccumuloProxy_attachIterator_args& operator=(const AccumuloProxy_attachIterator_args&);
   AccumuloProxy_attachIterator_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_attachIterator_args() throw() {}
-
+  virtual ~AccumuloProxy_attachIterator_args() throw();
   std::string login;
   std::string tableName;
   IteratorSetting setting;
@@ -831,21 +900,13 @@
 
   _AccumuloProxy_attachIterator_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_setting(const IteratorSetting& val) {
-    setting = val;
-  }
+  void __set_setting(const IteratorSetting& val);
 
-  void __set_scopes(const std::set<IteratorScope::type> & val) {
-    scopes = val;
-  }
+  void __set_scopes(const std::set<IteratorScope::type> & val);
 
   bool operator == (const AccumuloProxy_attachIterator_args & rhs) const
   {
@@ -875,8 +936,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_attachIterator_pargs() throw() {}
-
+  virtual ~AccumuloProxy_attachIterator_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const IteratorSetting* setting;
@@ -888,36 +948,31 @@
 
 typedef struct _AccumuloProxy_attachIterator_result__isset {
   _AccumuloProxy_attachIterator_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_attachIterator_result__isset;
 
 class AccumuloProxy_attachIterator_result {
  public:
 
+  AccumuloProxy_attachIterator_result(const AccumuloProxy_attachIterator_result&);
+  AccumuloProxy_attachIterator_result& operator=(const AccumuloProxy_attachIterator_result&);
   AccumuloProxy_attachIterator_result() {
   }
 
-  virtual ~AccumuloProxy_attachIterator_result() throw() {}
-
+  virtual ~AccumuloProxy_attachIterator_result() throw();
   AccumuloSecurityException ouch1;
   AccumuloException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_attachIterator_result__isset __isset;
 
-  void __set_ouch1(const AccumuloSecurityException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloSecurityException& val);
 
-  void __set_ouch2(const AccumuloException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_attachIterator_result & rhs) const
   {
@@ -942,17 +997,16 @@
 
 typedef struct _AccumuloProxy_attachIterator_presult__isset {
   _AccumuloProxy_attachIterator_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_attachIterator_presult__isset;
 
 class AccumuloProxy_attachIterator_presult {
  public:
 
 
-  virtual ~AccumuloProxy_attachIterator_presult() throw() {}
-
+  virtual ~AccumuloProxy_attachIterator_presult() throw();
   AccumuloSecurityException ouch1;
   AccumuloException ouch2;
   TableNotFoundException ouch3;
@@ -965,20 +1019,21 @@
 
 typedef struct _AccumuloProxy_checkIteratorConflicts_args__isset {
   _AccumuloProxy_checkIteratorConflicts_args__isset() : login(false), tableName(false), setting(false), scopes(false) {}
-  bool login;
-  bool tableName;
-  bool setting;
-  bool scopes;
+  bool login :1;
+  bool tableName :1;
+  bool setting :1;
+  bool scopes :1;
 } _AccumuloProxy_checkIteratorConflicts_args__isset;
 
 class AccumuloProxy_checkIteratorConflicts_args {
  public:
 
+  AccumuloProxy_checkIteratorConflicts_args(const AccumuloProxy_checkIteratorConflicts_args&);
+  AccumuloProxy_checkIteratorConflicts_args& operator=(const AccumuloProxy_checkIteratorConflicts_args&);
   AccumuloProxy_checkIteratorConflicts_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_checkIteratorConflicts_args() throw() {}
-
+  virtual ~AccumuloProxy_checkIteratorConflicts_args() throw();
   std::string login;
   std::string tableName;
   IteratorSetting setting;
@@ -986,21 +1041,13 @@
 
   _AccumuloProxy_checkIteratorConflicts_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_setting(const IteratorSetting& val) {
-    setting = val;
-  }
+  void __set_setting(const IteratorSetting& val);
 
-  void __set_scopes(const std::set<IteratorScope::type> & val) {
-    scopes = val;
-  }
+  void __set_scopes(const std::set<IteratorScope::type> & val);
 
   bool operator == (const AccumuloProxy_checkIteratorConflicts_args & rhs) const
   {
@@ -1030,8 +1077,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_checkIteratorConflicts_pargs() throw() {}
-
+  virtual ~AccumuloProxy_checkIteratorConflicts_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const IteratorSetting* setting;
@@ -1043,36 +1089,31 @@
 
 typedef struct _AccumuloProxy_checkIteratorConflicts_result__isset {
   _AccumuloProxy_checkIteratorConflicts_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_checkIteratorConflicts_result__isset;
 
 class AccumuloProxy_checkIteratorConflicts_result {
  public:
 
+  AccumuloProxy_checkIteratorConflicts_result(const AccumuloProxy_checkIteratorConflicts_result&);
+  AccumuloProxy_checkIteratorConflicts_result& operator=(const AccumuloProxy_checkIteratorConflicts_result&);
   AccumuloProxy_checkIteratorConflicts_result() {
   }
 
-  virtual ~AccumuloProxy_checkIteratorConflicts_result() throw() {}
-
+  virtual ~AccumuloProxy_checkIteratorConflicts_result() throw();
   AccumuloSecurityException ouch1;
   AccumuloException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_checkIteratorConflicts_result__isset __isset;
 
-  void __set_ouch1(const AccumuloSecurityException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloSecurityException& val);
 
-  void __set_ouch2(const AccumuloException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_checkIteratorConflicts_result & rhs) const
   {
@@ -1097,17 +1138,16 @@
 
 typedef struct _AccumuloProxy_checkIteratorConflicts_presult__isset {
   _AccumuloProxy_checkIteratorConflicts_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_checkIteratorConflicts_presult__isset;
 
 class AccumuloProxy_checkIteratorConflicts_presult {
  public:
 
 
-  virtual ~AccumuloProxy_checkIteratorConflicts_presult() throw() {}
-
+  virtual ~AccumuloProxy_checkIteratorConflicts_presult() throw();
   AccumuloSecurityException ouch1;
   AccumuloException ouch2;
   TableNotFoundException ouch3;
@@ -1120,30 +1160,27 @@
 
 typedef struct _AccumuloProxy_clearLocatorCache_args__isset {
   _AccumuloProxy_clearLocatorCache_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_clearLocatorCache_args__isset;
 
 class AccumuloProxy_clearLocatorCache_args {
  public:
 
+  AccumuloProxy_clearLocatorCache_args(const AccumuloProxy_clearLocatorCache_args&);
+  AccumuloProxy_clearLocatorCache_args& operator=(const AccumuloProxy_clearLocatorCache_args&);
   AccumuloProxy_clearLocatorCache_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_clearLocatorCache_args() throw() {}
-
+  virtual ~AccumuloProxy_clearLocatorCache_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_clearLocatorCache_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_clearLocatorCache_args & rhs) const
   {
@@ -1169,8 +1206,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_clearLocatorCache_pargs() throw() {}
-
+  virtual ~AccumuloProxy_clearLocatorCache_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -1180,24 +1216,23 @@
 
 typedef struct _AccumuloProxy_clearLocatorCache_result__isset {
   _AccumuloProxy_clearLocatorCache_result__isset() : ouch1(false) {}
-  bool ouch1;
+  bool ouch1 :1;
 } _AccumuloProxy_clearLocatorCache_result__isset;
 
 class AccumuloProxy_clearLocatorCache_result {
  public:
 
+  AccumuloProxy_clearLocatorCache_result(const AccumuloProxy_clearLocatorCache_result&);
+  AccumuloProxy_clearLocatorCache_result& operator=(const AccumuloProxy_clearLocatorCache_result&);
   AccumuloProxy_clearLocatorCache_result() {
   }
 
-  virtual ~AccumuloProxy_clearLocatorCache_result() throw() {}
-
+  virtual ~AccumuloProxy_clearLocatorCache_result() throw();
   TableNotFoundException ouch1;
 
   _AccumuloProxy_clearLocatorCache_result__isset __isset;
 
-  void __set_ouch1(const TableNotFoundException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_clearLocatorCache_result & rhs) const
   {
@@ -1218,15 +1253,14 @@
 
 typedef struct _AccumuloProxy_clearLocatorCache_presult__isset {
   _AccumuloProxy_clearLocatorCache_presult__isset() : ouch1(false) {}
-  bool ouch1;
+  bool ouch1 :1;
 } _AccumuloProxy_clearLocatorCache_presult__isset;
 
 class AccumuloProxy_clearLocatorCache_presult {
  public:
 
 
-  virtual ~AccumuloProxy_clearLocatorCache_presult() throw() {}
-
+  virtual ~AccumuloProxy_clearLocatorCache_presult() throw();
   TableNotFoundException ouch1;
 
   _AccumuloProxy_clearLocatorCache_presult__isset __isset;
@@ -1237,22 +1271,23 @@
 
 typedef struct _AccumuloProxy_cloneTable_args__isset {
   _AccumuloProxy_cloneTable_args__isset() : login(false), tableName(false), newTableName(false), flush(false), propertiesToSet(false), propertiesToExclude(false) {}
-  bool login;
-  bool tableName;
-  bool newTableName;
-  bool flush;
-  bool propertiesToSet;
-  bool propertiesToExclude;
+  bool login :1;
+  bool tableName :1;
+  bool newTableName :1;
+  bool flush :1;
+  bool propertiesToSet :1;
+  bool propertiesToExclude :1;
 } _AccumuloProxy_cloneTable_args__isset;
 
 class AccumuloProxy_cloneTable_args {
  public:
 
+  AccumuloProxy_cloneTable_args(const AccumuloProxy_cloneTable_args&);
+  AccumuloProxy_cloneTable_args& operator=(const AccumuloProxy_cloneTable_args&);
   AccumuloProxy_cloneTable_args() : login(), tableName(), newTableName(), flush(0) {
   }
 
-  virtual ~AccumuloProxy_cloneTable_args() throw() {}
-
+  virtual ~AccumuloProxy_cloneTable_args() throw();
   std::string login;
   std::string tableName;
   std::string newTableName;
@@ -1262,29 +1297,17 @@
 
   _AccumuloProxy_cloneTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_newTableName(const std::string& val) {
-    newTableName = val;
-  }
+  void __set_newTableName(const std::string& val);
 
-  void __set_flush(const bool val) {
-    flush = val;
-  }
+  void __set_flush(const bool val);
 
-  void __set_propertiesToSet(const std::map<std::string, std::string> & val) {
-    propertiesToSet = val;
-  }
+  void __set_propertiesToSet(const std::map<std::string, std::string> & val);
 
-  void __set_propertiesToExclude(const std::set<std::string> & val) {
-    propertiesToExclude = val;
-  }
+  void __set_propertiesToExclude(const std::set<std::string> & val);
 
   bool operator == (const AccumuloProxy_cloneTable_args & rhs) const
   {
@@ -1318,8 +1341,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_cloneTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_cloneTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* newTableName;
@@ -1333,20 +1355,21 @@
 
 typedef struct _AccumuloProxy_cloneTable_result__isset {
   _AccumuloProxy_cloneTable_result__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
-  bool ouch4;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_cloneTable_result__isset;
 
 class AccumuloProxy_cloneTable_result {
  public:
 
+  AccumuloProxy_cloneTable_result(const AccumuloProxy_cloneTable_result&);
+  AccumuloProxy_cloneTable_result& operator=(const AccumuloProxy_cloneTable_result&);
   AccumuloProxy_cloneTable_result() {
   }
 
-  virtual ~AccumuloProxy_cloneTable_result() throw() {}
-
+  virtual ~AccumuloProxy_cloneTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -1354,21 +1377,13 @@
 
   _AccumuloProxy_cloneTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
-  void __set_ouch4(const TableExistsException& val) {
-    ouch4 = val;
-  }
+  void __set_ouch4(const TableExistsException& val);
 
   bool operator == (const AccumuloProxy_cloneTable_result & rhs) const
   {
@@ -1395,18 +1410,17 @@
 
 typedef struct _AccumuloProxy_cloneTable_presult__isset {
   _AccumuloProxy_cloneTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
-  bool ouch4;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_cloneTable_presult__isset;
 
 class AccumuloProxy_cloneTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_cloneTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_cloneTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -1420,24 +1434,25 @@
 
 typedef struct _AccumuloProxy_compactTable_args__isset {
   _AccumuloProxy_compactTable_args__isset() : login(false), tableName(false), startRow(false), endRow(false), iterators(false), flush(false), wait(false), compactionStrategy(false) {}
-  bool login;
-  bool tableName;
-  bool startRow;
-  bool endRow;
-  bool iterators;
-  bool flush;
-  bool wait;
-  bool compactionStrategy;
+  bool login :1;
+  bool tableName :1;
+  bool startRow :1;
+  bool endRow :1;
+  bool iterators :1;
+  bool flush :1;
+  bool wait :1;
+  bool compactionStrategy :1;
 } _AccumuloProxy_compactTable_args__isset;
 
 class AccumuloProxy_compactTable_args {
  public:
 
+  AccumuloProxy_compactTable_args(const AccumuloProxy_compactTable_args&);
+  AccumuloProxy_compactTable_args& operator=(const AccumuloProxy_compactTable_args&);
   AccumuloProxy_compactTable_args() : login(), tableName(), startRow(), endRow(), flush(0), wait(0) {
   }
 
-  virtual ~AccumuloProxy_compactTable_args() throw() {}
-
+  virtual ~AccumuloProxy_compactTable_args() throw();
   std::string login;
   std::string tableName;
   std::string startRow;
@@ -1449,37 +1464,21 @@
 
   _AccumuloProxy_compactTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_startRow(const std::string& val) {
-    startRow = val;
-  }
+  void __set_startRow(const std::string& val);
 
-  void __set_endRow(const std::string& val) {
-    endRow = val;
-  }
+  void __set_endRow(const std::string& val);
 
-  void __set_iterators(const std::vector<IteratorSetting> & val) {
-    iterators = val;
-  }
+  void __set_iterators(const std::vector<IteratorSetting> & val);
 
-  void __set_flush(const bool val) {
-    flush = val;
-  }
+  void __set_flush(const bool val);
 
-  void __set_wait(const bool val) {
-    wait = val;
-  }
+  void __set_wait(const bool val);
 
-  void __set_compactionStrategy(const CompactionStrategyConfig& val) {
-    compactionStrategy = val;
-  }
+  void __set_compactionStrategy(const CompactionStrategyConfig& val);
 
   bool operator == (const AccumuloProxy_compactTable_args & rhs) const
   {
@@ -1517,8 +1516,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_compactTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_compactTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* startRow;
@@ -1534,36 +1532,31 @@
 
 typedef struct _AccumuloProxy_compactTable_result__isset {
   _AccumuloProxy_compactTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_compactTable_result__isset;
 
 class AccumuloProxy_compactTable_result {
  public:
 
+  AccumuloProxy_compactTable_result(const AccumuloProxy_compactTable_result&);
+  AccumuloProxy_compactTable_result& operator=(const AccumuloProxy_compactTable_result&);
   AccumuloProxy_compactTable_result() {
   }
 
-  virtual ~AccumuloProxy_compactTable_result() throw() {}
-
+  virtual ~AccumuloProxy_compactTable_result() throw();
   AccumuloSecurityException ouch1;
   TableNotFoundException ouch2;
   AccumuloException ouch3;
 
   _AccumuloProxy_compactTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloSecurityException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloSecurityException& val);
 
-  void __set_ouch2(const TableNotFoundException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const TableNotFoundException& val);
 
-  void __set_ouch3(const AccumuloException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const AccumuloException& val);
 
   bool operator == (const AccumuloProxy_compactTable_result & rhs) const
   {
@@ -1588,17 +1581,16 @@
 
 typedef struct _AccumuloProxy_compactTable_presult__isset {
   _AccumuloProxy_compactTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_compactTable_presult__isset;
 
 class AccumuloProxy_compactTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_compactTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_compactTable_presult() throw();
   AccumuloSecurityException ouch1;
   TableNotFoundException ouch2;
   AccumuloException ouch3;
@@ -1611,30 +1603,27 @@
 
 typedef struct _AccumuloProxy_cancelCompaction_args__isset {
   _AccumuloProxy_cancelCompaction_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_cancelCompaction_args__isset;
 
 class AccumuloProxy_cancelCompaction_args {
  public:
 
+  AccumuloProxy_cancelCompaction_args(const AccumuloProxy_cancelCompaction_args&);
+  AccumuloProxy_cancelCompaction_args& operator=(const AccumuloProxy_cancelCompaction_args&);
   AccumuloProxy_cancelCompaction_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_cancelCompaction_args() throw() {}
-
+  virtual ~AccumuloProxy_cancelCompaction_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_cancelCompaction_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_cancelCompaction_args & rhs) const
   {
@@ -1660,8 +1649,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_cancelCompaction_pargs() throw() {}
-
+  virtual ~AccumuloProxy_cancelCompaction_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -1671,36 +1659,31 @@
 
 typedef struct _AccumuloProxy_cancelCompaction_result__isset {
   _AccumuloProxy_cancelCompaction_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_cancelCompaction_result__isset;
 
 class AccumuloProxy_cancelCompaction_result {
  public:
 
+  AccumuloProxy_cancelCompaction_result(const AccumuloProxy_cancelCompaction_result&);
+  AccumuloProxy_cancelCompaction_result& operator=(const AccumuloProxy_cancelCompaction_result&);
   AccumuloProxy_cancelCompaction_result() {
   }
 
-  virtual ~AccumuloProxy_cancelCompaction_result() throw() {}
-
+  virtual ~AccumuloProxy_cancelCompaction_result() throw();
   AccumuloSecurityException ouch1;
   TableNotFoundException ouch2;
   AccumuloException ouch3;
 
   _AccumuloProxy_cancelCompaction_result__isset __isset;
 
-  void __set_ouch1(const AccumuloSecurityException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloSecurityException& val);
 
-  void __set_ouch2(const TableNotFoundException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const TableNotFoundException& val);
 
-  void __set_ouch3(const AccumuloException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const AccumuloException& val);
 
   bool operator == (const AccumuloProxy_cancelCompaction_result & rhs) const
   {
@@ -1725,17 +1708,16 @@
 
 typedef struct _AccumuloProxy_cancelCompaction_presult__isset {
   _AccumuloProxy_cancelCompaction_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_cancelCompaction_presult__isset;
 
 class AccumuloProxy_cancelCompaction_presult {
  public:
 
 
-  virtual ~AccumuloProxy_cancelCompaction_presult() throw() {}
-
+  virtual ~AccumuloProxy_cancelCompaction_presult() throw();
   AccumuloSecurityException ouch1;
   TableNotFoundException ouch2;
   AccumuloException ouch3;
@@ -1748,20 +1730,21 @@
 
 typedef struct _AccumuloProxy_createTable_args__isset {
   _AccumuloProxy_createTable_args__isset() : login(false), tableName(false), versioningIter(false), type(false) {}
-  bool login;
-  bool tableName;
-  bool versioningIter;
-  bool type;
+  bool login :1;
+  bool tableName :1;
+  bool versioningIter :1;
+  bool type :1;
 } _AccumuloProxy_createTable_args__isset;
 
 class AccumuloProxy_createTable_args {
  public:
 
+  AccumuloProxy_createTable_args(const AccumuloProxy_createTable_args&);
+  AccumuloProxy_createTable_args& operator=(const AccumuloProxy_createTable_args&);
   AccumuloProxy_createTable_args() : login(), tableName(), versioningIter(0), type((TimeType::type)0) {
   }
 
-  virtual ~AccumuloProxy_createTable_args() throw() {}
-
+  virtual ~AccumuloProxy_createTable_args() throw();
   std::string login;
   std::string tableName;
   bool versioningIter;
@@ -1769,21 +1752,13 @@
 
   _AccumuloProxy_createTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_versioningIter(const bool val) {
-    versioningIter = val;
-  }
+  void __set_versioningIter(const bool val);
 
-  void __set_type(const TimeType::type val) {
-    type = val;
-  }
+  void __set_type(const TimeType::type val);
 
   bool operator == (const AccumuloProxy_createTable_args & rhs) const
   {
@@ -1813,8 +1788,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_createTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_createTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const bool* versioningIter;
@@ -1826,36 +1800,31 @@
 
 typedef struct _AccumuloProxy_createTable_result__isset {
   _AccumuloProxy_createTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createTable_result__isset;
 
 class AccumuloProxy_createTable_result {
  public:
 
+  AccumuloProxy_createTable_result(const AccumuloProxy_createTable_result&);
+  AccumuloProxy_createTable_result& operator=(const AccumuloProxy_createTable_result&);
   AccumuloProxy_createTable_result() {
   }
 
-  virtual ~AccumuloProxy_createTable_result() throw() {}
-
+  virtual ~AccumuloProxy_createTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableExistsException ouch3;
 
   _AccumuloProxy_createTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableExistsException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableExistsException& val);
 
   bool operator == (const AccumuloProxy_createTable_result & rhs) const
   {
@@ -1880,17 +1849,16 @@
 
 typedef struct _AccumuloProxy_createTable_presult__isset {
   _AccumuloProxy_createTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createTable_presult__isset;
 
 class AccumuloProxy_createTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_createTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_createTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableExistsException ouch3;
@@ -1903,30 +1871,27 @@
 
 typedef struct _AccumuloProxy_deleteTable_args__isset {
   _AccumuloProxy_deleteTable_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_deleteTable_args__isset;
 
 class AccumuloProxy_deleteTable_args {
  public:
 
+  AccumuloProxy_deleteTable_args(const AccumuloProxy_deleteTable_args&);
+  AccumuloProxy_deleteTable_args& operator=(const AccumuloProxy_deleteTable_args&);
   AccumuloProxy_deleteTable_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_deleteTable_args() throw() {}
-
+  virtual ~AccumuloProxy_deleteTable_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_deleteTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_deleteTable_args & rhs) const
   {
@@ -1952,8 +1917,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_deleteTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_deleteTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -1963,36 +1927,31 @@
 
 typedef struct _AccumuloProxy_deleteTable_result__isset {
   _AccumuloProxy_deleteTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_deleteTable_result__isset;
 
 class AccumuloProxy_deleteTable_result {
  public:
 
+  AccumuloProxy_deleteTable_result(const AccumuloProxy_deleteTable_result&);
+  AccumuloProxy_deleteTable_result& operator=(const AccumuloProxy_deleteTable_result&);
   AccumuloProxy_deleteTable_result() {
   }
 
-  virtual ~AccumuloProxy_deleteTable_result() throw() {}
-
+  virtual ~AccumuloProxy_deleteTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_deleteTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_deleteTable_result & rhs) const
   {
@@ -2017,17 +1976,16 @@
 
 typedef struct _AccumuloProxy_deleteTable_presult__isset {
   _AccumuloProxy_deleteTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_deleteTable_presult__isset;
 
 class AccumuloProxy_deleteTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_deleteTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_deleteTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -2040,20 +1998,21 @@
 
 typedef struct _AccumuloProxy_deleteRows_args__isset {
   _AccumuloProxy_deleteRows_args__isset() : login(false), tableName(false), startRow(false), endRow(false) {}
-  bool login;
-  bool tableName;
-  bool startRow;
-  bool endRow;
+  bool login :1;
+  bool tableName :1;
+  bool startRow :1;
+  bool endRow :1;
 } _AccumuloProxy_deleteRows_args__isset;
 
 class AccumuloProxy_deleteRows_args {
  public:
 
+  AccumuloProxy_deleteRows_args(const AccumuloProxy_deleteRows_args&);
+  AccumuloProxy_deleteRows_args& operator=(const AccumuloProxy_deleteRows_args&);
   AccumuloProxy_deleteRows_args() : login(), tableName(), startRow(), endRow() {
   }
 
-  virtual ~AccumuloProxy_deleteRows_args() throw() {}
-
+  virtual ~AccumuloProxy_deleteRows_args() throw();
   std::string login;
   std::string tableName;
   std::string startRow;
@@ -2061,21 +2020,13 @@
 
   _AccumuloProxy_deleteRows_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_startRow(const std::string& val) {
-    startRow = val;
-  }
+  void __set_startRow(const std::string& val);
 
-  void __set_endRow(const std::string& val) {
-    endRow = val;
-  }
+  void __set_endRow(const std::string& val);
 
   bool operator == (const AccumuloProxy_deleteRows_args & rhs) const
   {
@@ -2105,8 +2056,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_deleteRows_pargs() throw() {}
-
+  virtual ~AccumuloProxy_deleteRows_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* startRow;
@@ -2118,36 +2068,31 @@
 
 typedef struct _AccumuloProxy_deleteRows_result__isset {
   _AccumuloProxy_deleteRows_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_deleteRows_result__isset;
 
 class AccumuloProxy_deleteRows_result {
  public:
 
+  AccumuloProxy_deleteRows_result(const AccumuloProxy_deleteRows_result&);
+  AccumuloProxy_deleteRows_result& operator=(const AccumuloProxy_deleteRows_result&);
   AccumuloProxy_deleteRows_result() {
   }
 
-  virtual ~AccumuloProxy_deleteRows_result() throw() {}
-
+  virtual ~AccumuloProxy_deleteRows_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_deleteRows_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_deleteRows_result & rhs) const
   {
@@ -2172,17 +2117,16 @@
 
 typedef struct _AccumuloProxy_deleteRows_presult__isset {
   _AccumuloProxy_deleteRows_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_deleteRows_presult__isset;
 
 class AccumuloProxy_deleteRows_presult {
  public:
 
 
-  virtual ~AccumuloProxy_deleteRows_presult() throw() {}
-
+  virtual ~AccumuloProxy_deleteRows_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -2195,36 +2139,31 @@
 
 typedef struct _AccumuloProxy_exportTable_args__isset {
   _AccumuloProxy_exportTable_args__isset() : login(false), tableName(false), exportDir(false) {}
-  bool login;
-  bool tableName;
-  bool exportDir;
+  bool login :1;
+  bool tableName :1;
+  bool exportDir :1;
 } _AccumuloProxy_exportTable_args__isset;
 
 class AccumuloProxy_exportTable_args {
  public:
 
+  AccumuloProxy_exportTable_args(const AccumuloProxy_exportTable_args&);
+  AccumuloProxy_exportTable_args& operator=(const AccumuloProxy_exportTable_args&);
   AccumuloProxy_exportTable_args() : login(), tableName(), exportDir() {
   }
 
-  virtual ~AccumuloProxy_exportTable_args() throw() {}
-
+  virtual ~AccumuloProxy_exportTable_args() throw();
   std::string login;
   std::string tableName;
   std::string exportDir;
 
   _AccumuloProxy_exportTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_exportDir(const std::string& val) {
-    exportDir = val;
-  }
+  void __set_exportDir(const std::string& val);
 
   bool operator == (const AccumuloProxy_exportTable_args & rhs) const
   {
@@ -2252,8 +2191,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_exportTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_exportTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* exportDir;
@@ -2264,36 +2202,31 @@
 
 typedef struct _AccumuloProxy_exportTable_result__isset {
   _AccumuloProxy_exportTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_exportTable_result__isset;
 
 class AccumuloProxy_exportTable_result {
  public:
 
+  AccumuloProxy_exportTable_result(const AccumuloProxy_exportTable_result&);
+  AccumuloProxy_exportTable_result& operator=(const AccumuloProxy_exportTable_result&);
   AccumuloProxy_exportTable_result() {
   }
 
-  virtual ~AccumuloProxy_exportTable_result() throw() {}
-
+  virtual ~AccumuloProxy_exportTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_exportTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_exportTable_result & rhs) const
   {
@@ -2318,17 +2251,16 @@
 
 typedef struct _AccumuloProxy_exportTable_presult__isset {
   _AccumuloProxy_exportTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_exportTable_presult__isset;
 
 class AccumuloProxy_exportTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_exportTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_exportTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -2341,21 +2273,22 @@
 
 typedef struct _AccumuloProxy_flushTable_args__isset {
   _AccumuloProxy_flushTable_args__isset() : login(false), tableName(false), startRow(false), endRow(false), wait(false) {}
-  bool login;
-  bool tableName;
-  bool startRow;
-  bool endRow;
-  bool wait;
+  bool login :1;
+  bool tableName :1;
+  bool startRow :1;
+  bool endRow :1;
+  bool wait :1;
 } _AccumuloProxy_flushTable_args__isset;
 
 class AccumuloProxy_flushTable_args {
  public:
 
+  AccumuloProxy_flushTable_args(const AccumuloProxy_flushTable_args&);
+  AccumuloProxy_flushTable_args& operator=(const AccumuloProxy_flushTable_args&);
   AccumuloProxy_flushTable_args() : login(), tableName(), startRow(), endRow(), wait(0) {
   }
 
-  virtual ~AccumuloProxy_flushTable_args() throw() {}
-
+  virtual ~AccumuloProxy_flushTable_args() throw();
   std::string login;
   std::string tableName;
   std::string startRow;
@@ -2364,25 +2297,15 @@
 
   _AccumuloProxy_flushTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_startRow(const std::string& val) {
-    startRow = val;
-  }
+  void __set_startRow(const std::string& val);
 
-  void __set_endRow(const std::string& val) {
-    endRow = val;
-  }
+  void __set_endRow(const std::string& val);
 
-  void __set_wait(const bool val) {
-    wait = val;
-  }
+  void __set_wait(const bool val);
 
   bool operator == (const AccumuloProxy_flushTable_args & rhs) const
   {
@@ -2414,8 +2337,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_flushTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_flushTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* startRow;
@@ -2428,36 +2350,31 @@
 
 typedef struct _AccumuloProxy_flushTable_result__isset {
   _AccumuloProxy_flushTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_flushTable_result__isset;
 
 class AccumuloProxy_flushTable_result {
  public:
 
+  AccumuloProxy_flushTable_result(const AccumuloProxy_flushTable_result&);
+  AccumuloProxy_flushTable_result& operator=(const AccumuloProxy_flushTable_result&);
   AccumuloProxy_flushTable_result() {
   }
 
-  virtual ~AccumuloProxy_flushTable_result() throw() {}
-
+  virtual ~AccumuloProxy_flushTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_flushTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_flushTable_result & rhs) const
   {
@@ -2482,17 +2399,16 @@
 
 typedef struct _AccumuloProxy_flushTable_presult__isset {
   _AccumuloProxy_flushTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_flushTable_presult__isset;
 
 class AccumuloProxy_flushTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_flushTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_flushTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -2505,30 +2421,27 @@
 
 typedef struct _AccumuloProxy_getDiskUsage_args__isset {
   _AccumuloProxy_getDiskUsage_args__isset() : login(false), tables(false) {}
-  bool login;
-  bool tables;
+  bool login :1;
+  bool tables :1;
 } _AccumuloProxy_getDiskUsage_args__isset;
 
 class AccumuloProxy_getDiskUsage_args {
  public:
 
+  AccumuloProxy_getDiskUsage_args(const AccumuloProxy_getDiskUsage_args&);
+  AccumuloProxy_getDiskUsage_args& operator=(const AccumuloProxy_getDiskUsage_args&);
   AccumuloProxy_getDiskUsage_args() : login() {
   }
 
-  virtual ~AccumuloProxy_getDiskUsage_args() throw() {}
-
+  virtual ~AccumuloProxy_getDiskUsage_args() throw();
   std::string login;
   std::set<std::string>  tables;
 
   _AccumuloProxy_getDiskUsage_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tables(const std::set<std::string> & val) {
-    tables = val;
-  }
+  void __set_tables(const std::set<std::string> & val);
 
   bool operator == (const AccumuloProxy_getDiskUsage_args & rhs) const
   {
@@ -2554,8 +2467,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getDiskUsage_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getDiskUsage_pargs() throw();
   const std::string* login;
   const std::set<std::string> * tables;
 
@@ -2565,20 +2477,21 @@
 
 typedef struct _AccumuloProxy_getDiskUsage_result__isset {
   _AccumuloProxy_getDiskUsage_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getDiskUsage_result__isset;
 
 class AccumuloProxy_getDiskUsage_result {
  public:
 
+  AccumuloProxy_getDiskUsage_result(const AccumuloProxy_getDiskUsage_result&);
+  AccumuloProxy_getDiskUsage_result& operator=(const AccumuloProxy_getDiskUsage_result&);
   AccumuloProxy_getDiskUsage_result() {
   }
 
-  virtual ~AccumuloProxy_getDiskUsage_result() throw() {}
-
+  virtual ~AccumuloProxy_getDiskUsage_result() throw();
   std::vector<DiskUsage>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -2586,21 +2499,13 @@
 
   _AccumuloProxy_getDiskUsage_result__isset __isset;
 
-  void __set_success(const std::vector<DiskUsage> & val) {
-    success = val;
-  }
+  void __set_success(const std::vector<DiskUsage> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_getDiskUsage_result & rhs) const
   {
@@ -2627,18 +2532,17 @@
 
 typedef struct _AccumuloProxy_getDiskUsage_presult__isset {
   _AccumuloProxy_getDiskUsage_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getDiskUsage_presult__isset;
 
 class AccumuloProxy_getDiskUsage_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getDiskUsage_presult() throw() {}
-
+  virtual ~AccumuloProxy_getDiskUsage_presult() throw();
   std::vector<DiskUsage> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -2652,30 +2556,27 @@
 
 typedef struct _AccumuloProxy_getLocalityGroups_args__isset {
   _AccumuloProxy_getLocalityGroups_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_getLocalityGroups_args__isset;
 
 class AccumuloProxy_getLocalityGroups_args {
  public:
 
+  AccumuloProxy_getLocalityGroups_args(const AccumuloProxy_getLocalityGroups_args&);
+  AccumuloProxy_getLocalityGroups_args& operator=(const AccumuloProxy_getLocalityGroups_args&);
   AccumuloProxy_getLocalityGroups_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_getLocalityGroups_args() throw() {}
-
+  virtual ~AccumuloProxy_getLocalityGroups_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_getLocalityGroups_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_getLocalityGroups_args & rhs) const
   {
@@ -2701,8 +2602,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getLocalityGroups_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getLocalityGroups_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -2712,20 +2612,21 @@
 
 typedef struct _AccumuloProxy_getLocalityGroups_result__isset {
   _AccumuloProxy_getLocalityGroups_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getLocalityGroups_result__isset;
 
 class AccumuloProxy_getLocalityGroups_result {
  public:
 
+  AccumuloProxy_getLocalityGroups_result(const AccumuloProxy_getLocalityGroups_result&);
+  AccumuloProxy_getLocalityGroups_result& operator=(const AccumuloProxy_getLocalityGroups_result&);
   AccumuloProxy_getLocalityGroups_result() {
   }
 
-  virtual ~AccumuloProxy_getLocalityGroups_result() throw() {}
-
+  virtual ~AccumuloProxy_getLocalityGroups_result() throw();
   std::map<std::string, std::set<std::string> >  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -2733,21 +2634,13 @@
 
   _AccumuloProxy_getLocalityGroups_result__isset __isset;
 
-  void __set_success(const std::map<std::string, std::set<std::string> > & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, std::set<std::string> > & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_getLocalityGroups_result & rhs) const
   {
@@ -2774,18 +2667,17 @@
 
 typedef struct _AccumuloProxy_getLocalityGroups_presult__isset {
   _AccumuloProxy_getLocalityGroups_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getLocalityGroups_presult__isset;
 
 class AccumuloProxy_getLocalityGroups_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getLocalityGroups_presult() throw() {}
-
+  virtual ~AccumuloProxy_getLocalityGroups_presult() throw();
   std::map<std::string, std::set<std::string> > * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -2799,20 +2691,21 @@
 
 typedef struct _AccumuloProxy_getIteratorSetting_args__isset {
   _AccumuloProxy_getIteratorSetting_args__isset() : login(false), tableName(false), iteratorName(false), scope(false) {}
-  bool login;
-  bool tableName;
-  bool iteratorName;
-  bool scope;
+  bool login :1;
+  bool tableName :1;
+  bool iteratorName :1;
+  bool scope :1;
 } _AccumuloProxy_getIteratorSetting_args__isset;
 
 class AccumuloProxy_getIteratorSetting_args {
  public:
 
+  AccumuloProxy_getIteratorSetting_args(const AccumuloProxy_getIteratorSetting_args&);
+  AccumuloProxy_getIteratorSetting_args& operator=(const AccumuloProxy_getIteratorSetting_args&);
   AccumuloProxy_getIteratorSetting_args() : login(), tableName(), iteratorName(), scope((IteratorScope::type)0) {
   }
 
-  virtual ~AccumuloProxy_getIteratorSetting_args() throw() {}
-
+  virtual ~AccumuloProxy_getIteratorSetting_args() throw();
   std::string login;
   std::string tableName;
   std::string iteratorName;
@@ -2820,21 +2713,13 @@
 
   _AccumuloProxy_getIteratorSetting_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_iteratorName(const std::string& val) {
-    iteratorName = val;
-  }
+  void __set_iteratorName(const std::string& val);
 
-  void __set_scope(const IteratorScope::type val) {
-    scope = val;
-  }
+  void __set_scope(const IteratorScope::type val);
 
   bool operator == (const AccumuloProxy_getIteratorSetting_args & rhs) const
   {
@@ -2864,8 +2749,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getIteratorSetting_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getIteratorSetting_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* iteratorName;
@@ -2877,20 +2761,21 @@
 
 typedef struct _AccumuloProxy_getIteratorSetting_result__isset {
   _AccumuloProxy_getIteratorSetting_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getIteratorSetting_result__isset;
 
 class AccumuloProxy_getIteratorSetting_result {
  public:
 
+  AccumuloProxy_getIteratorSetting_result(const AccumuloProxy_getIteratorSetting_result&);
+  AccumuloProxy_getIteratorSetting_result& operator=(const AccumuloProxy_getIteratorSetting_result&);
   AccumuloProxy_getIteratorSetting_result() {
   }
 
-  virtual ~AccumuloProxy_getIteratorSetting_result() throw() {}
-
+  virtual ~AccumuloProxy_getIteratorSetting_result() throw();
   IteratorSetting success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -2898,21 +2783,13 @@
 
   _AccumuloProxy_getIteratorSetting_result__isset __isset;
 
-  void __set_success(const IteratorSetting& val) {
-    success = val;
-  }
+  void __set_success(const IteratorSetting& val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_getIteratorSetting_result & rhs) const
   {
@@ -2939,18 +2816,17 @@
 
 typedef struct _AccumuloProxy_getIteratorSetting_presult__isset {
   _AccumuloProxy_getIteratorSetting_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getIteratorSetting_presult__isset;
 
 class AccumuloProxy_getIteratorSetting_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getIteratorSetting_presult() throw() {}
-
+  virtual ~AccumuloProxy_getIteratorSetting_presult() throw();
   IteratorSetting* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -2964,23 +2840,24 @@
 
 typedef struct _AccumuloProxy_getMaxRow_args__isset {
   _AccumuloProxy_getMaxRow_args__isset() : login(false), tableName(false), auths(false), startRow(false), startInclusive(false), endRow(false), endInclusive(false) {}
-  bool login;
-  bool tableName;
-  bool auths;
-  bool startRow;
-  bool startInclusive;
-  bool endRow;
-  bool endInclusive;
+  bool login :1;
+  bool tableName :1;
+  bool auths :1;
+  bool startRow :1;
+  bool startInclusive :1;
+  bool endRow :1;
+  bool endInclusive :1;
 } _AccumuloProxy_getMaxRow_args__isset;
 
 class AccumuloProxy_getMaxRow_args {
  public:
 
+  AccumuloProxy_getMaxRow_args(const AccumuloProxy_getMaxRow_args&);
+  AccumuloProxy_getMaxRow_args& operator=(const AccumuloProxy_getMaxRow_args&);
   AccumuloProxy_getMaxRow_args() : login(), tableName(), startRow(), startInclusive(0), endRow(), endInclusive(0) {
   }
 
-  virtual ~AccumuloProxy_getMaxRow_args() throw() {}
-
+  virtual ~AccumuloProxy_getMaxRow_args() throw();
   std::string login;
   std::string tableName;
   std::set<std::string>  auths;
@@ -2991,33 +2868,19 @@
 
   _AccumuloProxy_getMaxRow_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_auths(const std::set<std::string> & val) {
-    auths = val;
-  }
+  void __set_auths(const std::set<std::string> & val);
 
-  void __set_startRow(const std::string& val) {
-    startRow = val;
-  }
+  void __set_startRow(const std::string& val);
 
-  void __set_startInclusive(const bool val) {
-    startInclusive = val;
-  }
+  void __set_startInclusive(const bool val);
 
-  void __set_endRow(const std::string& val) {
-    endRow = val;
-  }
+  void __set_endRow(const std::string& val);
 
-  void __set_endInclusive(const bool val) {
-    endInclusive = val;
-  }
+  void __set_endInclusive(const bool val);
 
   bool operator == (const AccumuloProxy_getMaxRow_args & rhs) const
   {
@@ -3053,8 +2916,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getMaxRow_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getMaxRow_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::set<std::string> * auths;
@@ -3069,20 +2931,21 @@
 
 typedef struct _AccumuloProxy_getMaxRow_result__isset {
   _AccumuloProxy_getMaxRow_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getMaxRow_result__isset;
 
 class AccumuloProxy_getMaxRow_result {
  public:
 
+  AccumuloProxy_getMaxRow_result(const AccumuloProxy_getMaxRow_result&);
+  AccumuloProxy_getMaxRow_result& operator=(const AccumuloProxy_getMaxRow_result&);
   AccumuloProxy_getMaxRow_result() : success() {
   }
 
-  virtual ~AccumuloProxy_getMaxRow_result() throw() {}
-
+  virtual ~AccumuloProxy_getMaxRow_result() throw();
   std::string success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -3090,21 +2953,13 @@
 
   _AccumuloProxy_getMaxRow_result__isset __isset;
 
-  void __set_success(const std::string& val) {
-    success = val;
-  }
+  void __set_success(const std::string& val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_getMaxRow_result & rhs) const
   {
@@ -3131,18 +2986,17 @@
 
 typedef struct _AccumuloProxy_getMaxRow_presult__isset {
   _AccumuloProxy_getMaxRow_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getMaxRow_presult__isset;
 
 class AccumuloProxy_getMaxRow_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getMaxRow_presult() throw() {}
-
+  virtual ~AccumuloProxy_getMaxRow_presult() throw();
   std::string* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -3156,30 +3010,27 @@
 
 typedef struct _AccumuloProxy_getTableProperties_args__isset {
   _AccumuloProxy_getTableProperties_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_getTableProperties_args__isset;
 
 class AccumuloProxy_getTableProperties_args {
  public:
 
+  AccumuloProxy_getTableProperties_args(const AccumuloProxy_getTableProperties_args&);
+  AccumuloProxy_getTableProperties_args& operator=(const AccumuloProxy_getTableProperties_args&);
   AccumuloProxy_getTableProperties_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_getTableProperties_args() throw() {}
-
+  virtual ~AccumuloProxy_getTableProperties_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_getTableProperties_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_getTableProperties_args & rhs) const
   {
@@ -3205,8 +3056,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getTableProperties_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getTableProperties_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -3216,20 +3066,21 @@
 
 typedef struct _AccumuloProxy_getTableProperties_result__isset {
   _AccumuloProxy_getTableProperties_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getTableProperties_result__isset;
 
 class AccumuloProxy_getTableProperties_result {
  public:
 
+  AccumuloProxy_getTableProperties_result(const AccumuloProxy_getTableProperties_result&);
+  AccumuloProxy_getTableProperties_result& operator=(const AccumuloProxy_getTableProperties_result&);
   AccumuloProxy_getTableProperties_result() {
   }
 
-  virtual ~AccumuloProxy_getTableProperties_result() throw() {}
-
+  virtual ~AccumuloProxy_getTableProperties_result() throw();
   std::map<std::string, std::string>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -3237,21 +3088,13 @@
 
   _AccumuloProxy_getTableProperties_result__isset __isset;
 
-  void __set_success(const std::map<std::string, std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, std::string> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_getTableProperties_result & rhs) const
   {
@@ -3278,18 +3121,17 @@
 
 typedef struct _AccumuloProxy_getTableProperties_presult__isset {
   _AccumuloProxy_getTableProperties_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_getTableProperties_presult__isset;
 
 class AccumuloProxy_getTableProperties_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getTableProperties_presult() throw() {}
-
+  virtual ~AccumuloProxy_getTableProperties_presult() throw();
   std::map<std::string, std::string> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -3303,21 +3145,22 @@
 
 typedef struct _AccumuloProxy_importDirectory_args__isset {
   _AccumuloProxy_importDirectory_args__isset() : login(false), tableName(false), importDir(false), failureDir(false), setTime(false) {}
-  bool login;
-  bool tableName;
-  bool importDir;
-  bool failureDir;
-  bool setTime;
+  bool login :1;
+  bool tableName :1;
+  bool importDir :1;
+  bool failureDir :1;
+  bool setTime :1;
 } _AccumuloProxy_importDirectory_args__isset;
 
 class AccumuloProxy_importDirectory_args {
  public:
 
+  AccumuloProxy_importDirectory_args(const AccumuloProxy_importDirectory_args&);
+  AccumuloProxy_importDirectory_args& operator=(const AccumuloProxy_importDirectory_args&);
   AccumuloProxy_importDirectory_args() : login(), tableName(), importDir(), failureDir(), setTime(0) {
   }
 
-  virtual ~AccumuloProxy_importDirectory_args() throw() {}
-
+  virtual ~AccumuloProxy_importDirectory_args() throw();
   std::string login;
   std::string tableName;
   std::string importDir;
@@ -3326,25 +3169,15 @@
 
   _AccumuloProxy_importDirectory_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_importDir(const std::string& val) {
-    importDir = val;
-  }
+  void __set_importDir(const std::string& val);
 
-  void __set_failureDir(const std::string& val) {
-    failureDir = val;
-  }
+  void __set_failureDir(const std::string& val);
 
-  void __set_setTime(const bool val) {
-    setTime = val;
-  }
+  void __set_setTime(const bool val);
 
   bool operator == (const AccumuloProxy_importDirectory_args & rhs) const
   {
@@ -3376,8 +3209,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_importDirectory_pargs() throw() {}
-
+  virtual ~AccumuloProxy_importDirectory_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* importDir;
@@ -3390,36 +3222,31 @@
 
 typedef struct _AccumuloProxy_importDirectory_result__isset {
   _AccumuloProxy_importDirectory_result__isset() : ouch1(false), ouch3(false), ouch4(false) {}
-  bool ouch1;
-  bool ouch3;
-  bool ouch4;
+  bool ouch1 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_importDirectory_result__isset;
 
 class AccumuloProxy_importDirectory_result {
  public:
 
+  AccumuloProxy_importDirectory_result(const AccumuloProxy_importDirectory_result&);
+  AccumuloProxy_importDirectory_result& operator=(const AccumuloProxy_importDirectory_result&);
   AccumuloProxy_importDirectory_result() {
   }
 
-  virtual ~AccumuloProxy_importDirectory_result() throw() {}
-
+  virtual ~AccumuloProxy_importDirectory_result() throw();
   TableNotFoundException ouch1;
   AccumuloException ouch3;
   AccumuloSecurityException ouch4;
 
   _AccumuloProxy_importDirectory_result__isset __isset;
 
-  void __set_ouch1(const TableNotFoundException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const TableNotFoundException& val);
 
-  void __set_ouch3(const AccumuloException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const AccumuloException& val);
 
-  void __set_ouch4(const AccumuloSecurityException& val) {
-    ouch4 = val;
-  }
+  void __set_ouch4(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_importDirectory_result & rhs) const
   {
@@ -3444,17 +3271,16 @@
 
 typedef struct _AccumuloProxy_importDirectory_presult__isset {
   _AccumuloProxy_importDirectory_presult__isset() : ouch1(false), ouch3(false), ouch4(false) {}
-  bool ouch1;
-  bool ouch3;
-  bool ouch4;
+  bool ouch1 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_importDirectory_presult__isset;
 
 class AccumuloProxy_importDirectory_presult {
  public:
 
 
-  virtual ~AccumuloProxy_importDirectory_presult() throw() {}
-
+  virtual ~AccumuloProxy_importDirectory_presult() throw();
   TableNotFoundException ouch1;
   AccumuloException ouch3;
   AccumuloSecurityException ouch4;
@@ -3467,36 +3293,31 @@
 
 typedef struct _AccumuloProxy_importTable_args__isset {
   _AccumuloProxy_importTable_args__isset() : login(false), tableName(false), importDir(false) {}
-  bool login;
-  bool tableName;
-  bool importDir;
+  bool login :1;
+  bool tableName :1;
+  bool importDir :1;
 } _AccumuloProxy_importTable_args__isset;
 
 class AccumuloProxy_importTable_args {
  public:
 
+  AccumuloProxy_importTable_args(const AccumuloProxy_importTable_args&);
+  AccumuloProxy_importTable_args& operator=(const AccumuloProxy_importTable_args&);
   AccumuloProxy_importTable_args() : login(), tableName(), importDir() {
   }
 
-  virtual ~AccumuloProxy_importTable_args() throw() {}
-
+  virtual ~AccumuloProxy_importTable_args() throw();
   std::string login;
   std::string tableName;
   std::string importDir;
 
   _AccumuloProxy_importTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_importDir(const std::string& val) {
-    importDir = val;
-  }
+  void __set_importDir(const std::string& val);
 
   bool operator == (const AccumuloProxy_importTable_args & rhs) const
   {
@@ -3524,8 +3345,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_importTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_importTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* importDir;
@@ -3536,36 +3356,31 @@
 
 typedef struct _AccumuloProxy_importTable_result__isset {
   _AccumuloProxy_importTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_importTable_result__isset;
 
 class AccumuloProxy_importTable_result {
  public:
 
+  AccumuloProxy_importTable_result(const AccumuloProxy_importTable_result&);
+  AccumuloProxy_importTable_result& operator=(const AccumuloProxy_importTable_result&);
   AccumuloProxy_importTable_result() {
   }
 
-  virtual ~AccumuloProxy_importTable_result() throw() {}
-
+  virtual ~AccumuloProxy_importTable_result() throw();
   TableExistsException ouch1;
   AccumuloException ouch2;
   AccumuloSecurityException ouch3;
 
   _AccumuloProxy_importTable_result__isset __isset;
 
-  void __set_ouch1(const TableExistsException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const TableExistsException& val);
 
-  void __set_ouch2(const AccumuloException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloException& val);
 
-  void __set_ouch3(const AccumuloSecurityException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_importTable_result & rhs) const
   {
@@ -3590,17 +3405,16 @@
 
 typedef struct _AccumuloProxy_importTable_presult__isset {
   _AccumuloProxy_importTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_importTable_presult__isset;
 
 class AccumuloProxy_importTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_importTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_importTable_presult() throw();
   TableExistsException ouch1;
   AccumuloException ouch2;
   AccumuloSecurityException ouch3;
@@ -3613,36 +3427,31 @@
 
 typedef struct _AccumuloProxy_listSplits_args__isset {
   _AccumuloProxy_listSplits_args__isset() : login(false), tableName(false), maxSplits(false) {}
-  bool login;
-  bool tableName;
-  bool maxSplits;
+  bool login :1;
+  bool tableName :1;
+  bool maxSplits :1;
 } _AccumuloProxy_listSplits_args__isset;
 
 class AccumuloProxy_listSplits_args {
  public:
 
+  AccumuloProxy_listSplits_args(const AccumuloProxy_listSplits_args&);
+  AccumuloProxy_listSplits_args& operator=(const AccumuloProxy_listSplits_args&);
   AccumuloProxy_listSplits_args() : login(), tableName(), maxSplits(0) {
   }
 
-  virtual ~AccumuloProxy_listSplits_args() throw() {}
-
+  virtual ~AccumuloProxy_listSplits_args() throw();
   std::string login;
   std::string tableName;
   int32_t maxSplits;
 
   _AccumuloProxy_listSplits_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_maxSplits(const int32_t val) {
-    maxSplits = val;
-  }
+  void __set_maxSplits(const int32_t val);
 
   bool operator == (const AccumuloProxy_listSplits_args & rhs) const
   {
@@ -3670,8 +3479,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_listSplits_pargs() throw() {}
-
+  virtual ~AccumuloProxy_listSplits_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const int32_t* maxSplits;
@@ -3682,20 +3490,21 @@
 
 typedef struct _AccumuloProxy_listSplits_result__isset {
   _AccumuloProxy_listSplits_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listSplits_result__isset;
 
 class AccumuloProxy_listSplits_result {
  public:
 
+  AccumuloProxy_listSplits_result(const AccumuloProxy_listSplits_result&);
+  AccumuloProxy_listSplits_result& operator=(const AccumuloProxy_listSplits_result&);
   AccumuloProxy_listSplits_result() {
   }
 
-  virtual ~AccumuloProxy_listSplits_result() throw() {}
-
+  virtual ~AccumuloProxy_listSplits_result() throw();
   std::vector<std::string>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -3703,21 +3512,13 @@
 
   _AccumuloProxy_listSplits_result__isset __isset;
 
-  void __set_success(const std::vector<std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::vector<std::string> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_listSplits_result & rhs) const
   {
@@ -3744,18 +3545,17 @@
 
 typedef struct _AccumuloProxy_listSplits_presult__isset {
   _AccumuloProxy_listSplits_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listSplits_presult__isset;
 
 class AccumuloProxy_listSplits_presult {
  public:
 
 
-  virtual ~AccumuloProxy_listSplits_presult() throw() {}
-
+  virtual ~AccumuloProxy_listSplits_presult() throw();
   std::vector<std::string> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -3769,24 +3569,23 @@
 
 typedef struct _AccumuloProxy_listTables_args__isset {
   _AccumuloProxy_listTables_args__isset() : login(false) {}
-  bool login;
+  bool login :1;
 } _AccumuloProxy_listTables_args__isset;
 
 class AccumuloProxy_listTables_args {
  public:
 
+  AccumuloProxy_listTables_args(const AccumuloProxy_listTables_args&);
+  AccumuloProxy_listTables_args& operator=(const AccumuloProxy_listTables_args&);
   AccumuloProxy_listTables_args() : login() {
   }
 
-  virtual ~AccumuloProxy_listTables_args() throw() {}
-
+  virtual ~AccumuloProxy_listTables_args() throw();
   std::string login;
 
   _AccumuloProxy_listTables_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
   bool operator == (const AccumuloProxy_listTables_args & rhs) const
   {
@@ -3810,8 +3609,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_listTables_pargs() throw() {}
-
+  virtual ~AccumuloProxy_listTables_pargs() throw();
   const std::string* login;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -3820,24 +3618,23 @@
 
 typedef struct _AccumuloProxy_listTables_result__isset {
   _AccumuloProxy_listTables_result__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_listTables_result__isset;
 
 class AccumuloProxy_listTables_result {
  public:
 
+  AccumuloProxy_listTables_result(const AccumuloProxy_listTables_result&);
+  AccumuloProxy_listTables_result& operator=(const AccumuloProxy_listTables_result&);
   AccumuloProxy_listTables_result() {
   }
 
-  virtual ~AccumuloProxy_listTables_result() throw() {}
-
+  virtual ~AccumuloProxy_listTables_result() throw();
   std::set<std::string>  success;
 
   _AccumuloProxy_listTables_result__isset __isset;
 
-  void __set_success(const std::set<std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::set<std::string> & val);
 
   bool operator == (const AccumuloProxy_listTables_result & rhs) const
   {
@@ -3858,15 +3655,14 @@
 
 typedef struct _AccumuloProxy_listTables_presult__isset {
   _AccumuloProxy_listTables_presult__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_listTables_presult__isset;
 
 class AccumuloProxy_listTables_presult {
  public:
 
 
-  virtual ~AccumuloProxy_listTables_presult() throw() {}
-
+  virtual ~AccumuloProxy_listTables_presult() throw();
   std::set<std::string> * success;
 
   _AccumuloProxy_listTables_presult__isset __isset;
@@ -3877,30 +3673,27 @@
 
 typedef struct _AccumuloProxy_listIterators_args__isset {
   _AccumuloProxy_listIterators_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_listIterators_args__isset;
 
 class AccumuloProxy_listIterators_args {
  public:
 
+  AccumuloProxy_listIterators_args(const AccumuloProxy_listIterators_args&);
+  AccumuloProxy_listIterators_args& operator=(const AccumuloProxy_listIterators_args&);
   AccumuloProxy_listIterators_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_listIterators_args() throw() {}
-
+  virtual ~AccumuloProxy_listIterators_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_listIterators_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_listIterators_args & rhs) const
   {
@@ -3926,8 +3719,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_listIterators_pargs() throw() {}
-
+  virtual ~AccumuloProxy_listIterators_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -3937,20 +3729,21 @@
 
 typedef struct _AccumuloProxy_listIterators_result__isset {
   _AccumuloProxy_listIterators_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listIterators_result__isset;
 
 class AccumuloProxy_listIterators_result {
  public:
 
+  AccumuloProxy_listIterators_result(const AccumuloProxy_listIterators_result&);
+  AccumuloProxy_listIterators_result& operator=(const AccumuloProxy_listIterators_result&);
   AccumuloProxy_listIterators_result() {
   }
 
-  virtual ~AccumuloProxy_listIterators_result() throw() {}
-
+  virtual ~AccumuloProxy_listIterators_result() throw();
   std::map<std::string, std::set<IteratorScope::type> >  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -3958,21 +3751,13 @@
 
   _AccumuloProxy_listIterators_result__isset __isset;
 
-  void __set_success(const std::map<std::string, std::set<IteratorScope::type> > & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, std::set<IteratorScope::type> > & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_listIterators_result & rhs) const
   {
@@ -3999,18 +3784,17 @@
 
 typedef struct _AccumuloProxy_listIterators_presult__isset {
   _AccumuloProxy_listIterators_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listIterators_presult__isset;
 
 class AccumuloProxy_listIterators_presult {
  public:
 
 
-  virtual ~AccumuloProxy_listIterators_presult() throw() {}
-
+  virtual ~AccumuloProxy_listIterators_presult() throw();
   std::map<std::string, std::set<IteratorScope::type> > * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -4024,30 +3808,27 @@
 
 typedef struct _AccumuloProxy_listConstraints_args__isset {
   _AccumuloProxy_listConstraints_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_listConstraints_args__isset;
 
 class AccumuloProxy_listConstraints_args {
  public:
 
+  AccumuloProxy_listConstraints_args(const AccumuloProxy_listConstraints_args&);
+  AccumuloProxy_listConstraints_args& operator=(const AccumuloProxy_listConstraints_args&);
   AccumuloProxy_listConstraints_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_listConstraints_args() throw() {}
-
+  virtual ~AccumuloProxy_listConstraints_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_listConstraints_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_listConstraints_args & rhs) const
   {
@@ -4073,8 +3854,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_listConstraints_pargs() throw() {}
-
+  virtual ~AccumuloProxy_listConstraints_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -4084,20 +3864,21 @@
 
 typedef struct _AccumuloProxy_listConstraints_result__isset {
   _AccumuloProxy_listConstraints_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listConstraints_result__isset;
 
 class AccumuloProxy_listConstraints_result {
  public:
 
+  AccumuloProxy_listConstraints_result(const AccumuloProxy_listConstraints_result&);
+  AccumuloProxy_listConstraints_result& operator=(const AccumuloProxy_listConstraints_result&);
   AccumuloProxy_listConstraints_result() {
   }
 
-  virtual ~AccumuloProxy_listConstraints_result() throw() {}
-
+  virtual ~AccumuloProxy_listConstraints_result() throw();
   std::map<std::string, int32_t>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -4105,21 +3886,13 @@
 
   _AccumuloProxy_listConstraints_result__isset __isset;
 
-  void __set_success(const std::map<std::string, int32_t> & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, int32_t> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_listConstraints_result & rhs) const
   {
@@ -4146,18 +3919,17 @@
 
 typedef struct _AccumuloProxy_listConstraints_presult__isset {
   _AccumuloProxy_listConstraints_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listConstraints_presult__isset;
 
 class AccumuloProxy_listConstraints_presult {
  public:
 
 
-  virtual ~AccumuloProxy_listConstraints_presult() throw() {}
-
+  virtual ~AccumuloProxy_listConstraints_presult() throw();
   std::map<std::string, int32_t> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -4171,20 +3943,21 @@
 
 typedef struct _AccumuloProxy_mergeTablets_args__isset {
   _AccumuloProxy_mergeTablets_args__isset() : login(false), tableName(false), startRow(false), endRow(false) {}
-  bool login;
-  bool tableName;
-  bool startRow;
-  bool endRow;
+  bool login :1;
+  bool tableName :1;
+  bool startRow :1;
+  bool endRow :1;
 } _AccumuloProxy_mergeTablets_args__isset;
 
 class AccumuloProxy_mergeTablets_args {
  public:
 
+  AccumuloProxy_mergeTablets_args(const AccumuloProxy_mergeTablets_args&);
+  AccumuloProxy_mergeTablets_args& operator=(const AccumuloProxy_mergeTablets_args&);
   AccumuloProxy_mergeTablets_args() : login(), tableName(), startRow(), endRow() {
   }
 
-  virtual ~AccumuloProxy_mergeTablets_args() throw() {}
-
+  virtual ~AccumuloProxy_mergeTablets_args() throw();
   std::string login;
   std::string tableName;
   std::string startRow;
@@ -4192,21 +3965,13 @@
 
   _AccumuloProxy_mergeTablets_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_startRow(const std::string& val) {
-    startRow = val;
-  }
+  void __set_startRow(const std::string& val);
 
-  void __set_endRow(const std::string& val) {
-    endRow = val;
-  }
+  void __set_endRow(const std::string& val);
 
   bool operator == (const AccumuloProxy_mergeTablets_args & rhs) const
   {
@@ -4236,8 +4001,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_mergeTablets_pargs() throw() {}
-
+  virtual ~AccumuloProxy_mergeTablets_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* startRow;
@@ -4249,36 +4013,31 @@
 
 typedef struct _AccumuloProxy_mergeTablets_result__isset {
   _AccumuloProxy_mergeTablets_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_mergeTablets_result__isset;
 
 class AccumuloProxy_mergeTablets_result {
  public:
 
+  AccumuloProxy_mergeTablets_result(const AccumuloProxy_mergeTablets_result&);
+  AccumuloProxy_mergeTablets_result& operator=(const AccumuloProxy_mergeTablets_result&);
   AccumuloProxy_mergeTablets_result() {
   }
 
-  virtual ~AccumuloProxy_mergeTablets_result() throw() {}
-
+  virtual ~AccumuloProxy_mergeTablets_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_mergeTablets_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_mergeTablets_result & rhs) const
   {
@@ -4303,17 +4062,16 @@
 
 typedef struct _AccumuloProxy_mergeTablets_presult__isset {
   _AccumuloProxy_mergeTablets_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_mergeTablets_presult__isset;
 
 class AccumuloProxy_mergeTablets_presult {
  public:
 
 
-  virtual ~AccumuloProxy_mergeTablets_presult() throw() {}
-
+  virtual ~AccumuloProxy_mergeTablets_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -4326,36 +4084,31 @@
 
 typedef struct _AccumuloProxy_offlineTable_args__isset {
   _AccumuloProxy_offlineTable_args__isset() : login(false), tableName(false), wait(true) {}
-  bool login;
-  bool tableName;
-  bool wait;
+  bool login :1;
+  bool tableName :1;
+  bool wait :1;
 } _AccumuloProxy_offlineTable_args__isset;
 
 class AccumuloProxy_offlineTable_args {
  public:
 
+  AccumuloProxy_offlineTable_args(const AccumuloProxy_offlineTable_args&);
+  AccumuloProxy_offlineTable_args& operator=(const AccumuloProxy_offlineTable_args&);
   AccumuloProxy_offlineTable_args() : login(), tableName(), wait(false) {
   }
 
-  virtual ~AccumuloProxy_offlineTable_args() throw() {}
-
+  virtual ~AccumuloProxy_offlineTable_args() throw();
   std::string login;
   std::string tableName;
   bool wait;
 
   _AccumuloProxy_offlineTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_wait(const bool val) {
-    wait = val;
-  }
+  void __set_wait(const bool val);
 
   bool operator == (const AccumuloProxy_offlineTable_args & rhs) const
   {
@@ -4383,8 +4136,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_offlineTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_offlineTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const bool* wait;
@@ -4395,36 +4147,31 @@
 
 typedef struct _AccumuloProxy_offlineTable_result__isset {
   _AccumuloProxy_offlineTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_offlineTable_result__isset;
 
 class AccumuloProxy_offlineTable_result {
  public:
 
+  AccumuloProxy_offlineTable_result(const AccumuloProxy_offlineTable_result&);
+  AccumuloProxy_offlineTable_result& operator=(const AccumuloProxy_offlineTable_result&);
   AccumuloProxy_offlineTable_result() {
   }
 
-  virtual ~AccumuloProxy_offlineTable_result() throw() {}
-
+  virtual ~AccumuloProxy_offlineTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_offlineTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_offlineTable_result & rhs) const
   {
@@ -4449,17 +4196,16 @@
 
 typedef struct _AccumuloProxy_offlineTable_presult__isset {
   _AccumuloProxy_offlineTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_offlineTable_presult__isset;
 
 class AccumuloProxy_offlineTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_offlineTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_offlineTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -4472,36 +4218,31 @@
 
 typedef struct _AccumuloProxy_onlineTable_args__isset {
   _AccumuloProxy_onlineTable_args__isset() : login(false), tableName(false), wait(true) {}
-  bool login;
-  bool tableName;
-  bool wait;
+  bool login :1;
+  bool tableName :1;
+  bool wait :1;
 } _AccumuloProxy_onlineTable_args__isset;
 
 class AccumuloProxy_onlineTable_args {
  public:
 
+  AccumuloProxy_onlineTable_args(const AccumuloProxy_onlineTable_args&);
+  AccumuloProxy_onlineTable_args& operator=(const AccumuloProxy_onlineTable_args&);
   AccumuloProxy_onlineTable_args() : login(), tableName(), wait(false) {
   }
 
-  virtual ~AccumuloProxy_onlineTable_args() throw() {}
-
+  virtual ~AccumuloProxy_onlineTable_args() throw();
   std::string login;
   std::string tableName;
   bool wait;
 
   _AccumuloProxy_onlineTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_wait(const bool val) {
-    wait = val;
-  }
+  void __set_wait(const bool val);
 
   bool operator == (const AccumuloProxy_onlineTable_args & rhs) const
   {
@@ -4529,8 +4270,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_onlineTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_onlineTable_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const bool* wait;
@@ -4541,36 +4281,31 @@
 
 typedef struct _AccumuloProxy_onlineTable_result__isset {
   _AccumuloProxy_onlineTable_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_onlineTable_result__isset;
 
 class AccumuloProxy_onlineTable_result {
  public:
 
+  AccumuloProxy_onlineTable_result(const AccumuloProxy_onlineTable_result&);
+  AccumuloProxy_onlineTable_result& operator=(const AccumuloProxy_onlineTable_result&);
   AccumuloProxy_onlineTable_result() {
   }
 
-  virtual ~AccumuloProxy_onlineTable_result() throw() {}
-
+  virtual ~AccumuloProxy_onlineTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_onlineTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_onlineTable_result & rhs) const
   {
@@ -4595,17 +4330,16 @@
 
 typedef struct _AccumuloProxy_onlineTable_presult__isset {
   _AccumuloProxy_onlineTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_onlineTable_presult__isset;
 
 class AccumuloProxy_onlineTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_onlineTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_onlineTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -4618,36 +4352,31 @@
 
 typedef struct _AccumuloProxy_removeConstraint_args__isset {
   _AccumuloProxy_removeConstraint_args__isset() : login(false), tableName(false), constraint(false) {}
-  bool login;
-  bool tableName;
-  bool constraint;
+  bool login :1;
+  bool tableName :1;
+  bool constraint :1;
 } _AccumuloProxy_removeConstraint_args__isset;
 
 class AccumuloProxy_removeConstraint_args {
  public:
 
+  AccumuloProxy_removeConstraint_args(const AccumuloProxy_removeConstraint_args&);
+  AccumuloProxy_removeConstraint_args& operator=(const AccumuloProxy_removeConstraint_args&);
   AccumuloProxy_removeConstraint_args() : login(), tableName(), constraint(0) {
   }
 
-  virtual ~AccumuloProxy_removeConstraint_args() throw() {}
-
+  virtual ~AccumuloProxy_removeConstraint_args() throw();
   std::string login;
   std::string tableName;
   int32_t constraint;
 
   _AccumuloProxy_removeConstraint_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_constraint(const int32_t val) {
-    constraint = val;
-  }
+  void __set_constraint(const int32_t val);
 
   bool operator == (const AccumuloProxy_removeConstraint_args & rhs) const
   {
@@ -4675,8 +4404,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_removeConstraint_pargs() throw() {}
-
+  virtual ~AccumuloProxy_removeConstraint_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const int32_t* constraint;
@@ -4687,36 +4415,31 @@
 
 typedef struct _AccumuloProxy_removeConstraint_result__isset {
   _AccumuloProxy_removeConstraint_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_removeConstraint_result__isset;
 
 class AccumuloProxy_removeConstraint_result {
  public:
 
+  AccumuloProxy_removeConstraint_result(const AccumuloProxy_removeConstraint_result&);
+  AccumuloProxy_removeConstraint_result& operator=(const AccumuloProxy_removeConstraint_result&);
   AccumuloProxy_removeConstraint_result() {
   }
 
-  virtual ~AccumuloProxy_removeConstraint_result() throw() {}
-
+  virtual ~AccumuloProxy_removeConstraint_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_removeConstraint_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_removeConstraint_result & rhs) const
   {
@@ -4741,17 +4464,16 @@
 
 typedef struct _AccumuloProxy_removeConstraint_presult__isset {
   _AccumuloProxy_removeConstraint_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_removeConstraint_presult__isset;
 
 class AccumuloProxy_removeConstraint_presult {
  public:
 
 
-  virtual ~AccumuloProxy_removeConstraint_presult() throw() {}
-
+  virtual ~AccumuloProxy_removeConstraint_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -4764,20 +4486,21 @@
 
 typedef struct _AccumuloProxy_removeIterator_args__isset {
   _AccumuloProxy_removeIterator_args__isset() : login(false), tableName(false), iterName(false), scopes(false) {}
-  bool login;
-  bool tableName;
-  bool iterName;
-  bool scopes;
+  bool login :1;
+  bool tableName :1;
+  bool iterName :1;
+  bool scopes :1;
 } _AccumuloProxy_removeIterator_args__isset;
 
 class AccumuloProxy_removeIterator_args {
  public:
 
+  AccumuloProxy_removeIterator_args(const AccumuloProxy_removeIterator_args&);
+  AccumuloProxy_removeIterator_args& operator=(const AccumuloProxy_removeIterator_args&);
   AccumuloProxy_removeIterator_args() : login(), tableName(), iterName() {
   }
 
-  virtual ~AccumuloProxy_removeIterator_args() throw() {}
-
+  virtual ~AccumuloProxy_removeIterator_args() throw();
   std::string login;
   std::string tableName;
   std::string iterName;
@@ -4785,21 +4508,13 @@
 
   _AccumuloProxy_removeIterator_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_iterName(const std::string& val) {
-    iterName = val;
-  }
+  void __set_iterName(const std::string& val);
 
-  void __set_scopes(const std::set<IteratorScope::type> & val) {
-    scopes = val;
-  }
+  void __set_scopes(const std::set<IteratorScope::type> & val);
 
   bool operator == (const AccumuloProxy_removeIterator_args & rhs) const
   {
@@ -4829,8 +4544,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_removeIterator_pargs() throw() {}
-
+  virtual ~AccumuloProxy_removeIterator_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* iterName;
@@ -4842,36 +4556,31 @@
 
 typedef struct _AccumuloProxy_removeIterator_result__isset {
   _AccumuloProxy_removeIterator_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_removeIterator_result__isset;
 
 class AccumuloProxy_removeIterator_result {
  public:
 
+  AccumuloProxy_removeIterator_result(const AccumuloProxy_removeIterator_result&);
+  AccumuloProxy_removeIterator_result& operator=(const AccumuloProxy_removeIterator_result&);
   AccumuloProxy_removeIterator_result() {
   }
 
-  virtual ~AccumuloProxy_removeIterator_result() throw() {}
-
+  virtual ~AccumuloProxy_removeIterator_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_removeIterator_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_removeIterator_result & rhs) const
   {
@@ -4896,17 +4605,16 @@
 
 typedef struct _AccumuloProxy_removeIterator_presult__isset {
   _AccumuloProxy_removeIterator_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_removeIterator_presult__isset;
 
 class AccumuloProxy_removeIterator_presult {
  public:
 
 
-  virtual ~AccumuloProxy_removeIterator_presult() throw() {}
-
+  virtual ~AccumuloProxy_removeIterator_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -4919,36 +4627,31 @@
 
 typedef struct _AccumuloProxy_removeTableProperty_args__isset {
   _AccumuloProxy_removeTableProperty_args__isset() : login(false), tableName(false), property(false) {}
-  bool login;
-  bool tableName;
-  bool property;
+  bool login :1;
+  bool tableName :1;
+  bool property :1;
 } _AccumuloProxy_removeTableProperty_args__isset;
 
 class AccumuloProxy_removeTableProperty_args {
  public:
 
+  AccumuloProxy_removeTableProperty_args(const AccumuloProxy_removeTableProperty_args&);
+  AccumuloProxy_removeTableProperty_args& operator=(const AccumuloProxy_removeTableProperty_args&);
   AccumuloProxy_removeTableProperty_args() : login(), tableName(), property() {
   }
 
-  virtual ~AccumuloProxy_removeTableProperty_args() throw() {}
-
+  virtual ~AccumuloProxy_removeTableProperty_args() throw();
   std::string login;
   std::string tableName;
   std::string property;
 
   _AccumuloProxy_removeTableProperty_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_property(const std::string& val) {
-    property = val;
-  }
+  void __set_property(const std::string& val);
 
   bool operator == (const AccumuloProxy_removeTableProperty_args & rhs) const
   {
@@ -4976,8 +4679,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_removeTableProperty_pargs() throw() {}
-
+  virtual ~AccumuloProxy_removeTableProperty_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* property;
@@ -4988,36 +4690,31 @@
 
 typedef struct _AccumuloProxy_removeTableProperty_result__isset {
   _AccumuloProxy_removeTableProperty_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_removeTableProperty_result__isset;
 
 class AccumuloProxy_removeTableProperty_result {
  public:
 
+  AccumuloProxy_removeTableProperty_result(const AccumuloProxy_removeTableProperty_result&);
+  AccumuloProxy_removeTableProperty_result& operator=(const AccumuloProxy_removeTableProperty_result&);
   AccumuloProxy_removeTableProperty_result() {
   }
 
-  virtual ~AccumuloProxy_removeTableProperty_result() throw() {}
-
+  virtual ~AccumuloProxy_removeTableProperty_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_removeTableProperty_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_removeTableProperty_result & rhs) const
   {
@@ -5042,17 +4739,16 @@
 
 typedef struct _AccumuloProxy_removeTableProperty_presult__isset {
   _AccumuloProxy_removeTableProperty_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_removeTableProperty_presult__isset;
 
 class AccumuloProxy_removeTableProperty_presult {
  public:
 
 
-  virtual ~AccumuloProxy_removeTableProperty_presult() throw() {}
-
+  virtual ~AccumuloProxy_removeTableProperty_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -5065,36 +4761,31 @@
 
 typedef struct _AccumuloProxy_renameTable_args__isset {
   _AccumuloProxy_renameTable_args__isset() : login(false), oldTableName(false), newTableName(false) {}
-  bool login;
-  bool oldTableName;
-  bool newTableName;
+  bool login :1;
+  bool oldTableName :1;
+  bool newTableName :1;
 } _AccumuloProxy_renameTable_args__isset;
 
 class AccumuloProxy_renameTable_args {
  public:
 
+  AccumuloProxy_renameTable_args(const AccumuloProxy_renameTable_args&);
+  AccumuloProxy_renameTable_args& operator=(const AccumuloProxy_renameTable_args&);
   AccumuloProxy_renameTable_args() : login(), oldTableName(), newTableName() {
   }
 
-  virtual ~AccumuloProxy_renameTable_args() throw() {}
-
+  virtual ~AccumuloProxy_renameTable_args() throw();
   std::string login;
   std::string oldTableName;
   std::string newTableName;
 
   _AccumuloProxy_renameTable_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_oldTableName(const std::string& val) {
-    oldTableName = val;
-  }
+  void __set_oldTableName(const std::string& val);
 
-  void __set_newTableName(const std::string& val) {
-    newTableName = val;
-  }
+  void __set_newTableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_renameTable_args & rhs) const
   {
@@ -5122,8 +4813,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_renameTable_pargs() throw() {}
-
+  virtual ~AccumuloProxy_renameTable_pargs() throw();
   const std::string* login;
   const std::string* oldTableName;
   const std::string* newTableName;
@@ -5134,20 +4824,21 @@
 
 typedef struct _AccumuloProxy_renameTable_result__isset {
   _AccumuloProxy_renameTable_result__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
-  bool ouch4;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_renameTable_result__isset;
 
 class AccumuloProxy_renameTable_result {
  public:
 
+  AccumuloProxy_renameTable_result(const AccumuloProxy_renameTable_result&);
+  AccumuloProxy_renameTable_result& operator=(const AccumuloProxy_renameTable_result&);
   AccumuloProxy_renameTable_result() {
   }
 
-  virtual ~AccumuloProxy_renameTable_result() throw() {}
-
+  virtual ~AccumuloProxy_renameTable_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -5155,21 +4846,13 @@
 
   _AccumuloProxy_renameTable_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
-  void __set_ouch4(const TableExistsException& val) {
-    ouch4 = val;
-  }
+  void __set_ouch4(const TableExistsException& val);
 
   bool operator == (const AccumuloProxy_renameTable_result & rhs) const
   {
@@ -5196,18 +4879,17 @@
 
 typedef struct _AccumuloProxy_renameTable_presult__isset {
   _AccumuloProxy_renameTable_presult__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
-  bool ouch4;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_renameTable_presult__isset;
 
 class AccumuloProxy_renameTable_presult {
  public:
 
 
-  virtual ~AccumuloProxy_renameTable_presult() throw() {}
-
+  virtual ~AccumuloProxy_renameTable_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -5221,36 +4903,31 @@
 
 typedef struct _AccumuloProxy_setLocalityGroups_args__isset {
   _AccumuloProxy_setLocalityGroups_args__isset() : login(false), tableName(false), groups(false) {}
-  bool login;
-  bool tableName;
-  bool groups;
+  bool login :1;
+  bool tableName :1;
+  bool groups :1;
 } _AccumuloProxy_setLocalityGroups_args__isset;
 
 class AccumuloProxy_setLocalityGroups_args {
  public:
 
+  AccumuloProxy_setLocalityGroups_args(const AccumuloProxy_setLocalityGroups_args&);
+  AccumuloProxy_setLocalityGroups_args& operator=(const AccumuloProxy_setLocalityGroups_args&);
   AccumuloProxy_setLocalityGroups_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_setLocalityGroups_args() throw() {}
-
+  virtual ~AccumuloProxy_setLocalityGroups_args() throw();
   std::string login;
   std::string tableName;
   std::map<std::string, std::set<std::string> >  groups;
 
   _AccumuloProxy_setLocalityGroups_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_groups(const std::map<std::string, std::set<std::string> > & val) {
-    groups = val;
-  }
+  void __set_groups(const std::map<std::string, std::set<std::string> > & val);
 
   bool operator == (const AccumuloProxy_setLocalityGroups_args & rhs) const
   {
@@ -5278,8 +4955,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_setLocalityGroups_pargs() throw() {}
-
+  virtual ~AccumuloProxy_setLocalityGroups_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::map<std::string, std::set<std::string> > * groups;
@@ -5290,36 +4966,31 @@
 
 typedef struct _AccumuloProxy_setLocalityGroups_result__isset {
   _AccumuloProxy_setLocalityGroups_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_setLocalityGroups_result__isset;
 
 class AccumuloProxy_setLocalityGroups_result {
  public:
 
+  AccumuloProxy_setLocalityGroups_result(const AccumuloProxy_setLocalityGroups_result&);
+  AccumuloProxy_setLocalityGroups_result& operator=(const AccumuloProxy_setLocalityGroups_result&);
   AccumuloProxy_setLocalityGroups_result() {
   }
 
-  virtual ~AccumuloProxy_setLocalityGroups_result() throw() {}
-
+  virtual ~AccumuloProxy_setLocalityGroups_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_setLocalityGroups_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_setLocalityGroups_result & rhs) const
   {
@@ -5344,17 +5015,16 @@
 
 typedef struct _AccumuloProxy_setLocalityGroups_presult__isset {
   _AccumuloProxy_setLocalityGroups_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_setLocalityGroups_presult__isset;
 
 class AccumuloProxy_setLocalityGroups_presult {
  public:
 
 
-  virtual ~AccumuloProxy_setLocalityGroups_presult() throw() {}
-
+  virtual ~AccumuloProxy_setLocalityGroups_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -5367,20 +5037,21 @@
 
 typedef struct _AccumuloProxy_setTableProperty_args__isset {
   _AccumuloProxy_setTableProperty_args__isset() : login(false), tableName(false), property(false), value(false) {}
-  bool login;
-  bool tableName;
-  bool property;
-  bool value;
+  bool login :1;
+  bool tableName :1;
+  bool property :1;
+  bool value :1;
 } _AccumuloProxy_setTableProperty_args__isset;
 
 class AccumuloProxy_setTableProperty_args {
  public:
 
+  AccumuloProxy_setTableProperty_args(const AccumuloProxy_setTableProperty_args&);
+  AccumuloProxy_setTableProperty_args& operator=(const AccumuloProxy_setTableProperty_args&);
   AccumuloProxy_setTableProperty_args() : login(), tableName(), property(), value() {
   }
 
-  virtual ~AccumuloProxy_setTableProperty_args() throw() {}
-
+  virtual ~AccumuloProxy_setTableProperty_args() throw();
   std::string login;
   std::string tableName;
   std::string property;
@@ -5388,21 +5059,13 @@
 
   _AccumuloProxy_setTableProperty_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_property(const std::string& val) {
-    property = val;
-  }
+  void __set_property(const std::string& val);
 
-  void __set_value(const std::string& val) {
-    value = val;
-  }
+  void __set_value(const std::string& val);
 
   bool operator == (const AccumuloProxy_setTableProperty_args & rhs) const
   {
@@ -5432,8 +5095,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_setTableProperty_pargs() throw() {}
-
+  virtual ~AccumuloProxy_setTableProperty_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* property;
@@ -5445,36 +5107,31 @@
 
 typedef struct _AccumuloProxy_setTableProperty_result__isset {
   _AccumuloProxy_setTableProperty_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_setTableProperty_result__isset;
 
 class AccumuloProxy_setTableProperty_result {
  public:
 
+  AccumuloProxy_setTableProperty_result(const AccumuloProxy_setTableProperty_result&);
+  AccumuloProxy_setTableProperty_result& operator=(const AccumuloProxy_setTableProperty_result&);
   AccumuloProxy_setTableProperty_result() {
   }
 
-  virtual ~AccumuloProxy_setTableProperty_result() throw() {}
-
+  virtual ~AccumuloProxy_setTableProperty_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_setTableProperty_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_setTableProperty_result & rhs) const
   {
@@ -5499,17 +5156,16 @@
 
 typedef struct _AccumuloProxy_setTableProperty_presult__isset {
   _AccumuloProxy_setTableProperty_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_setTableProperty_presult__isset;
 
 class AccumuloProxy_setTableProperty_presult {
  public:
 
 
-  virtual ~AccumuloProxy_setTableProperty_presult() throw() {}
-
+  virtual ~AccumuloProxy_setTableProperty_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -5522,20 +5178,21 @@
 
 typedef struct _AccumuloProxy_splitRangeByTablets_args__isset {
   _AccumuloProxy_splitRangeByTablets_args__isset() : login(false), tableName(false), range(false), maxSplits(false) {}
-  bool login;
-  bool tableName;
-  bool range;
-  bool maxSplits;
+  bool login :1;
+  bool tableName :1;
+  bool range :1;
+  bool maxSplits :1;
 } _AccumuloProxy_splitRangeByTablets_args__isset;
 
 class AccumuloProxy_splitRangeByTablets_args {
  public:
 
+  AccumuloProxy_splitRangeByTablets_args(const AccumuloProxy_splitRangeByTablets_args&);
+  AccumuloProxy_splitRangeByTablets_args& operator=(const AccumuloProxy_splitRangeByTablets_args&);
   AccumuloProxy_splitRangeByTablets_args() : login(), tableName(), maxSplits(0) {
   }
 
-  virtual ~AccumuloProxy_splitRangeByTablets_args() throw() {}
-
+  virtual ~AccumuloProxy_splitRangeByTablets_args() throw();
   std::string login;
   std::string tableName;
   Range range;
@@ -5543,21 +5200,13 @@
 
   _AccumuloProxy_splitRangeByTablets_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_range(const Range& val) {
-    range = val;
-  }
+  void __set_range(const Range& val);
 
-  void __set_maxSplits(const int32_t val) {
-    maxSplits = val;
-  }
+  void __set_maxSplits(const int32_t val);
 
   bool operator == (const AccumuloProxy_splitRangeByTablets_args & rhs) const
   {
@@ -5587,8 +5236,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_splitRangeByTablets_pargs() throw() {}
-
+  virtual ~AccumuloProxy_splitRangeByTablets_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const Range* range;
@@ -5600,20 +5248,21 @@
 
 typedef struct _AccumuloProxy_splitRangeByTablets_result__isset {
   _AccumuloProxy_splitRangeByTablets_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_splitRangeByTablets_result__isset;
 
 class AccumuloProxy_splitRangeByTablets_result {
  public:
 
+  AccumuloProxy_splitRangeByTablets_result(const AccumuloProxy_splitRangeByTablets_result&);
+  AccumuloProxy_splitRangeByTablets_result& operator=(const AccumuloProxy_splitRangeByTablets_result&);
   AccumuloProxy_splitRangeByTablets_result() {
   }
 
-  virtual ~AccumuloProxy_splitRangeByTablets_result() throw() {}
-
+  virtual ~AccumuloProxy_splitRangeByTablets_result() throw();
   std::set<Range>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -5621,21 +5270,13 @@
 
   _AccumuloProxy_splitRangeByTablets_result__isset __isset;
 
-  void __set_success(const std::set<Range> & val) {
-    success = val;
-  }
+  void __set_success(const std::set<Range> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_splitRangeByTablets_result & rhs) const
   {
@@ -5662,18 +5303,17 @@
 
 typedef struct _AccumuloProxy_splitRangeByTablets_presult__isset {
   _AccumuloProxy_splitRangeByTablets_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_splitRangeByTablets_presult__isset;
 
 class AccumuloProxy_splitRangeByTablets_presult {
  public:
 
 
-  virtual ~AccumuloProxy_splitRangeByTablets_presult() throw() {}
-
+  virtual ~AccumuloProxy_splitRangeByTablets_presult() throw();
   std::set<Range> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -5687,30 +5327,27 @@
 
 typedef struct _AccumuloProxy_tableExists_args__isset {
   _AccumuloProxy_tableExists_args__isset() : login(false), tableName(false) {}
-  bool login;
-  bool tableName;
+  bool login :1;
+  bool tableName :1;
 } _AccumuloProxy_tableExists_args__isset;
 
 class AccumuloProxy_tableExists_args {
  public:
 
+  AccumuloProxy_tableExists_args(const AccumuloProxy_tableExists_args&);
+  AccumuloProxy_tableExists_args& operator=(const AccumuloProxy_tableExists_args&);
   AccumuloProxy_tableExists_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_tableExists_args() throw() {}
-
+  virtual ~AccumuloProxy_tableExists_args() throw();
   std::string login;
   std::string tableName;
 
   _AccumuloProxy_tableExists_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
   bool operator == (const AccumuloProxy_tableExists_args & rhs) const
   {
@@ -5736,8 +5373,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_tableExists_pargs() throw() {}
-
+  virtual ~AccumuloProxy_tableExists_pargs() throw();
   const std::string* login;
   const std::string* tableName;
 
@@ -5747,24 +5383,23 @@
 
 typedef struct _AccumuloProxy_tableExists_result__isset {
   _AccumuloProxy_tableExists_result__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_tableExists_result__isset;
 
 class AccumuloProxy_tableExists_result {
  public:
 
+  AccumuloProxy_tableExists_result(const AccumuloProxy_tableExists_result&);
+  AccumuloProxy_tableExists_result& operator=(const AccumuloProxy_tableExists_result&);
   AccumuloProxy_tableExists_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_tableExists_result() throw() {}
-
+  virtual ~AccumuloProxy_tableExists_result() throw();
   bool success;
 
   _AccumuloProxy_tableExists_result__isset __isset;
 
-  void __set_success(const bool val) {
-    success = val;
-  }
+  void __set_success(const bool val);
 
   bool operator == (const AccumuloProxy_tableExists_result & rhs) const
   {
@@ -5785,15 +5420,14 @@
 
 typedef struct _AccumuloProxy_tableExists_presult__isset {
   _AccumuloProxy_tableExists_presult__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_tableExists_presult__isset;
 
 class AccumuloProxy_tableExists_presult {
  public:
 
 
-  virtual ~AccumuloProxy_tableExists_presult() throw() {}
-
+  virtual ~AccumuloProxy_tableExists_presult() throw();
   bool* success;
 
   _AccumuloProxy_tableExists_presult__isset __isset;
@@ -5804,24 +5438,23 @@
 
 typedef struct _AccumuloProxy_tableIdMap_args__isset {
   _AccumuloProxy_tableIdMap_args__isset() : login(false) {}
-  bool login;
+  bool login :1;
 } _AccumuloProxy_tableIdMap_args__isset;
 
 class AccumuloProxy_tableIdMap_args {
  public:
 
+  AccumuloProxy_tableIdMap_args(const AccumuloProxy_tableIdMap_args&);
+  AccumuloProxy_tableIdMap_args& operator=(const AccumuloProxy_tableIdMap_args&);
   AccumuloProxy_tableIdMap_args() : login() {
   }
 
-  virtual ~AccumuloProxy_tableIdMap_args() throw() {}
-
+  virtual ~AccumuloProxy_tableIdMap_args() throw();
   std::string login;
 
   _AccumuloProxy_tableIdMap_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
   bool operator == (const AccumuloProxy_tableIdMap_args & rhs) const
   {
@@ -5845,8 +5478,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_tableIdMap_pargs() throw() {}
-
+  virtual ~AccumuloProxy_tableIdMap_pargs() throw();
   const std::string* login;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -5855,24 +5487,23 @@
 
 typedef struct _AccumuloProxy_tableIdMap_result__isset {
   _AccumuloProxy_tableIdMap_result__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_tableIdMap_result__isset;
 
 class AccumuloProxy_tableIdMap_result {
  public:
 
+  AccumuloProxy_tableIdMap_result(const AccumuloProxy_tableIdMap_result&);
+  AccumuloProxy_tableIdMap_result& operator=(const AccumuloProxy_tableIdMap_result&);
   AccumuloProxy_tableIdMap_result() {
   }
 
-  virtual ~AccumuloProxy_tableIdMap_result() throw() {}
-
+  virtual ~AccumuloProxy_tableIdMap_result() throw();
   std::map<std::string, std::string>  success;
 
   _AccumuloProxy_tableIdMap_result__isset __isset;
 
-  void __set_success(const std::map<std::string, std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, std::string> & val);
 
   bool operator == (const AccumuloProxy_tableIdMap_result & rhs) const
   {
@@ -5893,15 +5524,14 @@
 
 typedef struct _AccumuloProxy_tableIdMap_presult__isset {
   _AccumuloProxy_tableIdMap_presult__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_tableIdMap_presult__isset;
 
 class AccumuloProxy_tableIdMap_presult {
  public:
 
 
-  virtual ~AccumuloProxy_tableIdMap_presult() throw() {}
-
+  virtual ~AccumuloProxy_tableIdMap_presult() throw();
   std::map<std::string, std::string> * success;
 
   _AccumuloProxy_tableIdMap_presult__isset __isset;
@@ -5912,20 +5542,21 @@
 
 typedef struct _AccumuloProxy_testTableClassLoad_args__isset {
   _AccumuloProxy_testTableClassLoad_args__isset() : login(false), tableName(false), className(false), asTypeName(false) {}
-  bool login;
-  bool tableName;
-  bool className;
-  bool asTypeName;
+  bool login :1;
+  bool tableName :1;
+  bool className :1;
+  bool asTypeName :1;
 } _AccumuloProxy_testTableClassLoad_args__isset;
 
 class AccumuloProxy_testTableClassLoad_args {
  public:
 
+  AccumuloProxy_testTableClassLoad_args(const AccumuloProxy_testTableClassLoad_args&);
+  AccumuloProxy_testTableClassLoad_args& operator=(const AccumuloProxy_testTableClassLoad_args&);
   AccumuloProxy_testTableClassLoad_args() : login(), tableName(), className(), asTypeName() {
   }
 
-  virtual ~AccumuloProxy_testTableClassLoad_args() throw() {}
-
+  virtual ~AccumuloProxy_testTableClassLoad_args() throw();
   std::string login;
   std::string tableName;
   std::string className;
@@ -5933,21 +5564,13 @@
 
   _AccumuloProxy_testTableClassLoad_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_className(const std::string& val) {
-    className = val;
-  }
+  void __set_className(const std::string& val);
 
-  void __set_asTypeName(const std::string& val) {
-    asTypeName = val;
-  }
+  void __set_asTypeName(const std::string& val);
 
   bool operator == (const AccumuloProxy_testTableClassLoad_args & rhs) const
   {
@@ -5977,8 +5600,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_testTableClassLoad_pargs() throw() {}
-
+  virtual ~AccumuloProxy_testTableClassLoad_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* className;
@@ -5990,20 +5612,21 @@
 
 typedef struct _AccumuloProxy_testTableClassLoad_result__isset {
   _AccumuloProxy_testTableClassLoad_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_testTableClassLoad_result__isset;
 
 class AccumuloProxy_testTableClassLoad_result {
  public:
 
+  AccumuloProxy_testTableClassLoad_result(const AccumuloProxy_testTableClassLoad_result&);
+  AccumuloProxy_testTableClassLoad_result& operator=(const AccumuloProxy_testTableClassLoad_result&);
   AccumuloProxy_testTableClassLoad_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_testTableClassLoad_result() throw() {}
-
+  virtual ~AccumuloProxy_testTableClassLoad_result() throw();
   bool success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -6011,21 +5634,13 @@
 
   _AccumuloProxy_testTableClassLoad_result__isset __isset;
 
-  void __set_success(const bool val) {
-    success = val;
-  }
+  void __set_success(const bool val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_testTableClassLoad_result & rhs) const
   {
@@ -6052,18 +5667,17 @@
 
 typedef struct _AccumuloProxy_testTableClassLoad_presult__isset {
   _AccumuloProxy_testTableClassLoad_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_testTableClassLoad_presult__isset;
 
 class AccumuloProxy_testTableClassLoad_presult {
  public:
 
 
-  virtual ~AccumuloProxy_testTableClassLoad_presult() throw() {}
-
+  virtual ~AccumuloProxy_testTableClassLoad_presult() throw();
   bool* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -6077,30 +5691,27 @@
 
 typedef struct _AccumuloProxy_pingTabletServer_args__isset {
   _AccumuloProxy_pingTabletServer_args__isset() : login(false), tserver(false) {}
-  bool login;
-  bool tserver;
+  bool login :1;
+  bool tserver :1;
 } _AccumuloProxy_pingTabletServer_args__isset;
 
 class AccumuloProxy_pingTabletServer_args {
  public:
 
+  AccumuloProxy_pingTabletServer_args(const AccumuloProxy_pingTabletServer_args&);
+  AccumuloProxy_pingTabletServer_args& operator=(const AccumuloProxy_pingTabletServer_args&);
   AccumuloProxy_pingTabletServer_args() : login(), tserver() {
   }
 
-  virtual ~AccumuloProxy_pingTabletServer_args() throw() {}
-
+  virtual ~AccumuloProxy_pingTabletServer_args() throw();
   std::string login;
   std::string tserver;
 
   _AccumuloProxy_pingTabletServer_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tserver(const std::string& val) {
-    tserver = val;
-  }
+  void __set_tserver(const std::string& val);
 
   bool operator == (const AccumuloProxy_pingTabletServer_args & rhs) const
   {
@@ -6126,8 +5737,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_pingTabletServer_pargs() throw() {}
-
+  virtual ~AccumuloProxy_pingTabletServer_pargs() throw();
   const std::string* login;
   const std::string* tserver;
 
@@ -6137,30 +5747,27 @@
 
 typedef struct _AccumuloProxy_pingTabletServer_result__isset {
   _AccumuloProxy_pingTabletServer_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_pingTabletServer_result__isset;
 
 class AccumuloProxy_pingTabletServer_result {
  public:
 
+  AccumuloProxy_pingTabletServer_result(const AccumuloProxy_pingTabletServer_result&);
+  AccumuloProxy_pingTabletServer_result& operator=(const AccumuloProxy_pingTabletServer_result&);
   AccumuloProxy_pingTabletServer_result() {
   }
 
-  virtual ~AccumuloProxy_pingTabletServer_result() throw() {}
-
+  virtual ~AccumuloProxy_pingTabletServer_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_pingTabletServer_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_pingTabletServer_result & rhs) const
   {
@@ -6183,16 +5790,15 @@
 
 typedef struct _AccumuloProxy_pingTabletServer_presult__isset {
   _AccumuloProxy_pingTabletServer_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_pingTabletServer_presult__isset;
 
 class AccumuloProxy_pingTabletServer_presult {
  public:
 
 
-  virtual ~AccumuloProxy_pingTabletServer_presult() throw() {}
-
+  virtual ~AccumuloProxy_pingTabletServer_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -6204,30 +5810,27 @@
 
 typedef struct _AccumuloProxy_getActiveScans_args__isset {
   _AccumuloProxy_getActiveScans_args__isset() : login(false), tserver(false) {}
-  bool login;
-  bool tserver;
+  bool login :1;
+  bool tserver :1;
 } _AccumuloProxy_getActiveScans_args__isset;
 
 class AccumuloProxy_getActiveScans_args {
  public:
 
+  AccumuloProxy_getActiveScans_args(const AccumuloProxy_getActiveScans_args&);
+  AccumuloProxy_getActiveScans_args& operator=(const AccumuloProxy_getActiveScans_args&);
   AccumuloProxy_getActiveScans_args() : login(), tserver() {
   }
 
-  virtual ~AccumuloProxy_getActiveScans_args() throw() {}
-
+  virtual ~AccumuloProxy_getActiveScans_args() throw();
   std::string login;
   std::string tserver;
 
   _AccumuloProxy_getActiveScans_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tserver(const std::string& val) {
-    tserver = val;
-  }
+  void __set_tserver(const std::string& val);
 
   bool operator == (const AccumuloProxy_getActiveScans_args & rhs) const
   {
@@ -6253,8 +5856,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getActiveScans_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getActiveScans_pargs() throw();
   const std::string* login;
   const std::string* tserver;
 
@@ -6264,36 +5866,31 @@
 
 typedef struct _AccumuloProxy_getActiveScans_result__isset {
   _AccumuloProxy_getActiveScans_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getActiveScans_result__isset;
 
 class AccumuloProxy_getActiveScans_result {
  public:
 
+  AccumuloProxy_getActiveScans_result(const AccumuloProxy_getActiveScans_result&);
+  AccumuloProxy_getActiveScans_result& operator=(const AccumuloProxy_getActiveScans_result&);
   AccumuloProxy_getActiveScans_result() {
   }
 
-  virtual ~AccumuloProxy_getActiveScans_result() throw() {}
-
+  virtual ~AccumuloProxy_getActiveScans_result() throw();
   std::vector<ActiveScan>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_getActiveScans_result__isset __isset;
 
-  void __set_success(const std::vector<ActiveScan> & val) {
-    success = val;
-  }
+  void __set_success(const std::vector<ActiveScan> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_getActiveScans_result & rhs) const
   {
@@ -6318,17 +5915,16 @@
 
 typedef struct _AccumuloProxy_getActiveScans_presult__isset {
   _AccumuloProxy_getActiveScans_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getActiveScans_presult__isset;
 
 class AccumuloProxy_getActiveScans_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getActiveScans_presult() throw() {}
-
+  virtual ~AccumuloProxy_getActiveScans_presult() throw();
   std::vector<ActiveScan> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -6341,30 +5937,27 @@
 
 typedef struct _AccumuloProxy_getActiveCompactions_args__isset {
   _AccumuloProxy_getActiveCompactions_args__isset() : login(false), tserver(false) {}
-  bool login;
-  bool tserver;
+  bool login :1;
+  bool tserver :1;
 } _AccumuloProxy_getActiveCompactions_args__isset;
 
 class AccumuloProxy_getActiveCompactions_args {
  public:
 
+  AccumuloProxy_getActiveCompactions_args(const AccumuloProxy_getActiveCompactions_args&);
+  AccumuloProxy_getActiveCompactions_args& operator=(const AccumuloProxy_getActiveCompactions_args&);
   AccumuloProxy_getActiveCompactions_args() : login(), tserver() {
   }
 
-  virtual ~AccumuloProxy_getActiveCompactions_args() throw() {}
-
+  virtual ~AccumuloProxy_getActiveCompactions_args() throw();
   std::string login;
   std::string tserver;
 
   _AccumuloProxy_getActiveCompactions_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tserver(const std::string& val) {
-    tserver = val;
-  }
+  void __set_tserver(const std::string& val);
 
   bool operator == (const AccumuloProxy_getActiveCompactions_args & rhs) const
   {
@@ -6390,8 +5983,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getActiveCompactions_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getActiveCompactions_pargs() throw();
   const std::string* login;
   const std::string* tserver;
 
@@ -6401,36 +5993,31 @@
 
 typedef struct _AccumuloProxy_getActiveCompactions_result__isset {
   _AccumuloProxy_getActiveCompactions_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getActiveCompactions_result__isset;
 
 class AccumuloProxy_getActiveCompactions_result {
  public:
 
+  AccumuloProxy_getActiveCompactions_result(const AccumuloProxy_getActiveCompactions_result&);
+  AccumuloProxy_getActiveCompactions_result& operator=(const AccumuloProxy_getActiveCompactions_result&);
   AccumuloProxy_getActiveCompactions_result() {
   }
 
-  virtual ~AccumuloProxy_getActiveCompactions_result() throw() {}
-
+  virtual ~AccumuloProxy_getActiveCompactions_result() throw();
   std::vector<ActiveCompaction>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_getActiveCompactions_result__isset __isset;
 
-  void __set_success(const std::vector<ActiveCompaction> & val) {
-    success = val;
-  }
+  void __set_success(const std::vector<ActiveCompaction> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_getActiveCompactions_result & rhs) const
   {
@@ -6455,17 +6042,16 @@
 
 typedef struct _AccumuloProxy_getActiveCompactions_presult__isset {
   _AccumuloProxy_getActiveCompactions_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getActiveCompactions_presult__isset;
 
 class AccumuloProxy_getActiveCompactions_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getActiveCompactions_presult() throw() {}
-
+  virtual ~AccumuloProxy_getActiveCompactions_presult() throw();
   std::vector<ActiveCompaction> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -6478,24 +6064,23 @@
 
 typedef struct _AccumuloProxy_getSiteConfiguration_args__isset {
   _AccumuloProxy_getSiteConfiguration_args__isset() : login(false) {}
-  bool login;
+  bool login :1;
 } _AccumuloProxy_getSiteConfiguration_args__isset;
 
 class AccumuloProxy_getSiteConfiguration_args {
  public:
 
+  AccumuloProxy_getSiteConfiguration_args(const AccumuloProxy_getSiteConfiguration_args&);
+  AccumuloProxy_getSiteConfiguration_args& operator=(const AccumuloProxy_getSiteConfiguration_args&);
   AccumuloProxy_getSiteConfiguration_args() : login() {
   }
 
-  virtual ~AccumuloProxy_getSiteConfiguration_args() throw() {}
-
+  virtual ~AccumuloProxy_getSiteConfiguration_args() throw();
   std::string login;
 
   _AccumuloProxy_getSiteConfiguration_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
   bool operator == (const AccumuloProxy_getSiteConfiguration_args & rhs) const
   {
@@ -6519,8 +6104,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getSiteConfiguration_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getSiteConfiguration_pargs() throw();
   const std::string* login;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -6529,36 +6113,31 @@
 
 typedef struct _AccumuloProxy_getSiteConfiguration_result__isset {
   _AccumuloProxy_getSiteConfiguration_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getSiteConfiguration_result__isset;
 
 class AccumuloProxy_getSiteConfiguration_result {
  public:
 
+  AccumuloProxy_getSiteConfiguration_result(const AccumuloProxy_getSiteConfiguration_result&);
+  AccumuloProxy_getSiteConfiguration_result& operator=(const AccumuloProxy_getSiteConfiguration_result&);
   AccumuloProxy_getSiteConfiguration_result() {
   }
 
-  virtual ~AccumuloProxy_getSiteConfiguration_result() throw() {}
-
+  virtual ~AccumuloProxy_getSiteConfiguration_result() throw();
   std::map<std::string, std::string>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_getSiteConfiguration_result__isset __isset;
 
-  void __set_success(const std::map<std::string, std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, std::string> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_getSiteConfiguration_result & rhs) const
   {
@@ -6583,17 +6162,16 @@
 
 typedef struct _AccumuloProxy_getSiteConfiguration_presult__isset {
   _AccumuloProxy_getSiteConfiguration_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getSiteConfiguration_presult__isset;
 
 class AccumuloProxy_getSiteConfiguration_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getSiteConfiguration_presult() throw() {}
-
+  virtual ~AccumuloProxy_getSiteConfiguration_presult() throw();
   std::map<std::string, std::string> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -6606,24 +6184,23 @@
 
 typedef struct _AccumuloProxy_getSystemConfiguration_args__isset {
   _AccumuloProxy_getSystemConfiguration_args__isset() : login(false) {}
-  bool login;
+  bool login :1;
 } _AccumuloProxy_getSystemConfiguration_args__isset;
 
 class AccumuloProxy_getSystemConfiguration_args {
  public:
 
+  AccumuloProxy_getSystemConfiguration_args(const AccumuloProxy_getSystemConfiguration_args&);
+  AccumuloProxy_getSystemConfiguration_args& operator=(const AccumuloProxy_getSystemConfiguration_args&);
   AccumuloProxy_getSystemConfiguration_args() : login() {
   }
 
-  virtual ~AccumuloProxy_getSystemConfiguration_args() throw() {}
-
+  virtual ~AccumuloProxy_getSystemConfiguration_args() throw();
   std::string login;
 
   _AccumuloProxy_getSystemConfiguration_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
   bool operator == (const AccumuloProxy_getSystemConfiguration_args & rhs) const
   {
@@ -6647,8 +6224,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getSystemConfiguration_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getSystemConfiguration_pargs() throw();
   const std::string* login;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -6657,36 +6233,31 @@
 
 typedef struct _AccumuloProxy_getSystemConfiguration_result__isset {
   _AccumuloProxy_getSystemConfiguration_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getSystemConfiguration_result__isset;
 
 class AccumuloProxy_getSystemConfiguration_result {
  public:
 
+  AccumuloProxy_getSystemConfiguration_result(const AccumuloProxy_getSystemConfiguration_result&);
+  AccumuloProxy_getSystemConfiguration_result& operator=(const AccumuloProxy_getSystemConfiguration_result&);
   AccumuloProxy_getSystemConfiguration_result() {
   }
 
-  virtual ~AccumuloProxy_getSystemConfiguration_result() throw() {}
-
+  virtual ~AccumuloProxy_getSystemConfiguration_result() throw();
   std::map<std::string, std::string>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_getSystemConfiguration_result__isset __isset;
 
-  void __set_success(const std::map<std::string, std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, std::string> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_getSystemConfiguration_result & rhs) const
   {
@@ -6711,17 +6282,16 @@
 
 typedef struct _AccumuloProxy_getSystemConfiguration_presult__isset {
   _AccumuloProxy_getSystemConfiguration_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getSystemConfiguration_presult__isset;
 
 class AccumuloProxy_getSystemConfiguration_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getSystemConfiguration_presult() throw() {}
-
+  virtual ~AccumuloProxy_getSystemConfiguration_presult() throw();
   std::map<std::string, std::string> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -6734,24 +6304,23 @@
 
 typedef struct _AccumuloProxy_getTabletServers_args__isset {
   _AccumuloProxy_getTabletServers_args__isset() : login(false) {}
-  bool login;
+  bool login :1;
 } _AccumuloProxy_getTabletServers_args__isset;
 
 class AccumuloProxy_getTabletServers_args {
  public:
 
+  AccumuloProxy_getTabletServers_args(const AccumuloProxy_getTabletServers_args&);
+  AccumuloProxy_getTabletServers_args& operator=(const AccumuloProxy_getTabletServers_args&);
   AccumuloProxy_getTabletServers_args() : login() {
   }
 
-  virtual ~AccumuloProxy_getTabletServers_args() throw() {}
-
+  virtual ~AccumuloProxy_getTabletServers_args() throw();
   std::string login;
 
   _AccumuloProxy_getTabletServers_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
   bool operator == (const AccumuloProxy_getTabletServers_args & rhs) const
   {
@@ -6775,8 +6344,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getTabletServers_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getTabletServers_pargs() throw();
   const std::string* login;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -6785,24 +6353,23 @@
 
 typedef struct _AccumuloProxy_getTabletServers_result__isset {
   _AccumuloProxy_getTabletServers_result__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_getTabletServers_result__isset;
 
 class AccumuloProxy_getTabletServers_result {
  public:
 
+  AccumuloProxy_getTabletServers_result(const AccumuloProxy_getTabletServers_result&);
+  AccumuloProxy_getTabletServers_result& operator=(const AccumuloProxy_getTabletServers_result&);
   AccumuloProxy_getTabletServers_result() {
   }
 
-  virtual ~AccumuloProxy_getTabletServers_result() throw() {}
-
+  virtual ~AccumuloProxy_getTabletServers_result() throw();
   std::vector<std::string>  success;
 
   _AccumuloProxy_getTabletServers_result__isset __isset;
 
-  void __set_success(const std::vector<std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::vector<std::string> & val);
 
   bool operator == (const AccumuloProxy_getTabletServers_result & rhs) const
   {
@@ -6823,15 +6390,14 @@
 
 typedef struct _AccumuloProxy_getTabletServers_presult__isset {
   _AccumuloProxy_getTabletServers_presult__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_getTabletServers_presult__isset;
 
 class AccumuloProxy_getTabletServers_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getTabletServers_presult() throw() {}
-
+  virtual ~AccumuloProxy_getTabletServers_presult() throw();
   std::vector<std::string> * success;
 
   _AccumuloProxy_getTabletServers_presult__isset __isset;
@@ -6842,30 +6408,27 @@
 
 typedef struct _AccumuloProxy_removeProperty_args__isset {
   _AccumuloProxy_removeProperty_args__isset() : login(false), property(false) {}
-  bool login;
-  bool property;
+  bool login :1;
+  bool property :1;
 } _AccumuloProxy_removeProperty_args__isset;
 
 class AccumuloProxy_removeProperty_args {
  public:
 
+  AccumuloProxy_removeProperty_args(const AccumuloProxy_removeProperty_args&);
+  AccumuloProxy_removeProperty_args& operator=(const AccumuloProxy_removeProperty_args&);
   AccumuloProxy_removeProperty_args() : login(), property() {
   }
 
-  virtual ~AccumuloProxy_removeProperty_args() throw() {}
-
+  virtual ~AccumuloProxy_removeProperty_args() throw();
   std::string login;
   std::string property;
 
   _AccumuloProxy_removeProperty_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_property(const std::string& val) {
-    property = val;
-  }
+  void __set_property(const std::string& val);
 
   bool operator == (const AccumuloProxy_removeProperty_args & rhs) const
   {
@@ -6891,8 +6454,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_removeProperty_pargs() throw() {}
-
+  virtual ~AccumuloProxy_removeProperty_pargs() throw();
   const std::string* login;
   const std::string* property;
 
@@ -6902,30 +6464,27 @@
 
 typedef struct _AccumuloProxy_removeProperty_result__isset {
   _AccumuloProxy_removeProperty_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_removeProperty_result__isset;
 
 class AccumuloProxy_removeProperty_result {
  public:
 
+  AccumuloProxy_removeProperty_result(const AccumuloProxy_removeProperty_result&);
+  AccumuloProxy_removeProperty_result& operator=(const AccumuloProxy_removeProperty_result&);
   AccumuloProxy_removeProperty_result() {
   }
 
-  virtual ~AccumuloProxy_removeProperty_result() throw() {}
-
+  virtual ~AccumuloProxy_removeProperty_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_removeProperty_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_removeProperty_result & rhs) const
   {
@@ -6948,16 +6507,15 @@
 
 typedef struct _AccumuloProxy_removeProperty_presult__isset {
   _AccumuloProxy_removeProperty_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_removeProperty_presult__isset;
 
 class AccumuloProxy_removeProperty_presult {
  public:
 
 
-  virtual ~AccumuloProxy_removeProperty_presult() throw() {}
-
+  virtual ~AccumuloProxy_removeProperty_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -6969,36 +6527,31 @@
 
 typedef struct _AccumuloProxy_setProperty_args__isset {
   _AccumuloProxy_setProperty_args__isset() : login(false), property(false), value(false) {}
-  bool login;
-  bool property;
-  bool value;
+  bool login :1;
+  bool property :1;
+  bool value :1;
 } _AccumuloProxy_setProperty_args__isset;
 
 class AccumuloProxy_setProperty_args {
  public:
 
+  AccumuloProxy_setProperty_args(const AccumuloProxy_setProperty_args&);
+  AccumuloProxy_setProperty_args& operator=(const AccumuloProxy_setProperty_args&);
   AccumuloProxy_setProperty_args() : login(), property(), value() {
   }
 
-  virtual ~AccumuloProxy_setProperty_args() throw() {}
-
+  virtual ~AccumuloProxy_setProperty_args() throw();
   std::string login;
   std::string property;
   std::string value;
 
   _AccumuloProxy_setProperty_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_property(const std::string& val) {
-    property = val;
-  }
+  void __set_property(const std::string& val);
 
-  void __set_value(const std::string& val) {
-    value = val;
-  }
+  void __set_value(const std::string& val);
 
   bool operator == (const AccumuloProxy_setProperty_args & rhs) const
   {
@@ -7026,8 +6579,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_setProperty_pargs() throw() {}
-
+  virtual ~AccumuloProxy_setProperty_pargs() throw();
   const std::string* login;
   const std::string* property;
   const std::string* value;
@@ -7038,30 +6590,27 @@
 
 typedef struct _AccumuloProxy_setProperty_result__isset {
   _AccumuloProxy_setProperty_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_setProperty_result__isset;
 
 class AccumuloProxy_setProperty_result {
  public:
 
+  AccumuloProxy_setProperty_result(const AccumuloProxy_setProperty_result&);
+  AccumuloProxy_setProperty_result& operator=(const AccumuloProxy_setProperty_result&);
   AccumuloProxy_setProperty_result() {
   }
 
-  virtual ~AccumuloProxy_setProperty_result() throw() {}
-
+  virtual ~AccumuloProxy_setProperty_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_setProperty_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_setProperty_result & rhs) const
   {
@@ -7084,16 +6633,15 @@
 
 typedef struct _AccumuloProxy_setProperty_presult__isset {
   _AccumuloProxy_setProperty_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_setProperty_presult__isset;
 
 class AccumuloProxy_setProperty_presult {
  public:
 
 
-  virtual ~AccumuloProxy_setProperty_presult() throw() {}
-
+  virtual ~AccumuloProxy_setProperty_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -7105,36 +6653,31 @@
 
 typedef struct _AccumuloProxy_testClassLoad_args__isset {
   _AccumuloProxy_testClassLoad_args__isset() : login(false), className(false), asTypeName(false) {}
-  bool login;
-  bool className;
-  bool asTypeName;
+  bool login :1;
+  bool className :1;
+  bool asTypeName :1;
 } _AccumuloProxy_testClassLoad_args__isset;
 
 class AccumuloProxy_testClassLoad_args {
  public:
 
+  AccumuloProxy_testClassLoad_args(const AccumuloProxy_testClassLoad_args&);
+  AccumuloProxy_testClassLoad_args& operator=(const AccumuloProxy_testClassLoad_args&);
   AccumuloProxy_testClassLoad_args() : login(), className(), asTypeName() {
   }
 
-  virtual ~AccumuloProxy_testClassLoad_args() throw() {}
-
+  virtual ~AccumuloProxy_testClassLoad_args() throw();
   std::string login;
   std::string className;
   std::string asTypeName;
 
   _AccumuloProxy_testClassLoad_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_className(const std::string& val) {
-    className = val;
-  }
+  void __set_className(const std::string& val);
 
-  void __set_asTypeName(const std::string& val) {
-    asTypeName = val;
-  }
+  void __set_asTypeName(const std::string& val);
 
   bool operator == (const AccumuloProxy_testClassLoad_args & rhs) const
   {
@@ -7162,8 +6705,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_testClassLoad_pargs() throw() {}
-
+  virtual ~AccumuloProxy_testClassLoad_pargs() throw();
   const std::string* login;
   const std::string* className;
   const std::string* asTypeName;
@@ -7174,36 +6716,31 @@
 
 typedef struct _AccumuloProxy_testClassLoad_result__isset {
   _AccumuloProxy_testClassLoad_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_testClassLoad_result__isset;
 
 class AccumuloProxy_testClassLoad_result {
  public:
 
+  AccumuloProxy_testClassLoad_result(const AccumuloProxy_testClassLoad_result&);
+  AccumuloProxy_testClassLoad_result& operator=(const AccumuloProxy_testClassLoad_result&);
   AccumuloProxy_testClassLoad_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_testClassLoad_result() throw() {}
-
+  virtual ~AccumuloProxy_testClassLoad_result() throw();
   bool success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_testClassLoad_result__isset __isset;
 
-  void __set_success(const bool val) {
-    success = val;
-  }
+  void __set_success(const bool val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_testClassLoad_result & rhs) const
   {
@@ -7228,17 +6765,16 @@
 
 typedef struct _AccumuloProxy_testClassLoad_presult__isset {
   _AccumuloProxy_testClassLoad_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_testClassLoad_presult__isset;
 
 class AccumuloProxy_testClassLoad_presult {
  public:
 
 
-  virtual ~AccumuloProxy_testClassLoad_presult() throw() {}
-
+  virtual ~AccumuloProxy_testClassLoad_presult() throw();
   bool* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -7251,36 +6787,31 @@
 
 typedef struct _AccumuloProxy_authenticateUser_args__isset {
   _AccumuloProxy_authenticateUser_args__isset() : login(false), user(false), properties(false) {}
-  bool login;
-  bool user;
-  bool properties;
+  bool login :1;
+  bool user :1;
+  bool properties :1;
 } _AccumuloProxy_authenticateUser_args__isset;
 
 class AccumuloProxy_authenticateUser_args {
  public:
 
+  AccumuloProxy_authenticateUser_args(const AccumuloProxy_authenticateUser_args&);
+  AccumuloProxy_authenticateUser_args& operator=(const AccumuloProxy_authenticateUser_args&);
   AccumuloProxy_authenticateUser_args() : login(), user() {
   }
 
-  virtual ~AccumuloProxy_authenticateUser_args() throw() {}
-
+  virtual ~AccumuloProxy_authenticateUser_args() throw();
   std::string login;
   std::string user;
   std::map<std::string, std::string>  properties;
 
   _AccumuloProxy_authenticateUser_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_properties(const std::map<std::string, std::string> & val) {
-    properties = val;
-  }
+  void __set_properties(const std::map<std::string, std::string> & val);
 
   bool operator == (const AccumuloProxy_authenticateUser_args & rhs) const
   {
@@ -7308,8 +6839,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_authenticateUser_pargs() throw() {}
-
+  virtual ~AccumuloProxy_authenticateUser_pargs() throw();
   const std::string* login;
   const std::string* user;
   const std::map<std::string, std::string> * properties;
@@ -7320,36 +6850,31 @@
 
 typedef struct _AccumuloProxy_authenticateUser_result__isset {
   _AccumuloProxy_authenticateUser_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_authenticateUser_result__isset;
 
 class AccumuloProxy_authenticateUser_result {
  public:
 
+  AccumuloProxy_authenticateUser_result(const AccumuloProxy_authenticateUser_result&);
+  AccumuloProxy_authenticateUser_result& operator=(const AccumuloProxy_authenticateUser_result&);
   AccumuloProxy_authenticateUser_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_authenticateUser_result() throw() {}
-
+  virtual ~AccumuloProxy_authenticateUser_result() throw();
   bool success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_authenticateUser_result__isset __isset;
 
-  void __set_success(const bool val) {
-    success = val;
-  }
+  void __set_success(const bool val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_authenticateUser_result & rhs) const
   {
@@ -7374,17 +6899,16 @@
 
 typedef struct _AccumuloProxy_authenticateUser_presult__isset {
   _AccumuloProxy_authenticateUser_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_authenticateUser_presult__isset;
 
 class AccumuloProxy_authenticateUser_presult {
  public:
 
 
-  virtual ~AccumuloProxy_authenticateUser_presult() throw() {}
-
+  virtual ~AccumuloProxy_authenticateUser_presult() throw();
   bool* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -7397,36 +6921,31 @@
 
 typedef struct _AccumuloProxy_changeUserAuthorizations_args__isset {
   _AccumuloProxy_changeUserAuthorizations_args__isset() : login(false), user(false), authorizations(false) {}
-  bool login;
-  bool user;
-  bool authorizations;
+  bool login :1;
+  bool user :1;
+  bool authorizations :1;
 } _AccumuloProxy_changeUserAuthorizations_args__isset;
 
 class AccumuloProxy_changeUserAuthorizations_args {
  public:
 
+  AccumuloProxy_changeUserAuthorizations_args(const AccumuloProxy_changeUserAuthorizations_args&);
+  AccumuloProxy_changeUserAuthorizations_args& operator=(const AccumuloProxy_changeUserAuthorizations_args&);
   AccumuloProxy_changeUserAuthorizations_args() : login(), user() {
   }
 
-  virtual ~AccumuloProxy_changeUserAuthorizations_args() throw() {}
-
+  virtual ~AccumuloProxy_changeUserAuthorizations_args() throw();
   std::string login;
   std::string user;
   std::set<std::string>  authorizations;
 
   _AccumuloProxy_changeUserAuthorizations_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_authorizations(const std::set<std::string> & val) {
-    authorizations = val;
-  }
+  void __set_authorizations(const std::set<std::string> & val);
 
   bool operator == (const AccumuloProxy_changeUserAuthorizations_args & rhs) const
   {
@@ -7454,8 +6973,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_changeUserAuthorizations_pargs() throw() {}
-
+  virtual ~AccumuloProxy_changeUserAuthorizations_pargs() throw();
   const std::string* login;
   const std::string* user;
   const std::set<std::string> * authorizations;
@@ -7466,30 +6984,27 @@
 
 typedef struct _AccumuloProxy_changeUserAuthorizations_result__isset {
   _AccumuloProxy_changeUserAuthorizations_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_changeUserAuthorizations_result__isset;
 
 class AccumuloProxy_changeUserAuthorizations_result {
  public:
 
+  AccumuloProxy_changeUserAuthorizations_result(const AccumuloProxy_changeUserAuthorizations_result&);
+  AccumuloProxy_changeUserAuthorizations_result& operator=(const AccumuloProxy_changeUserAuthorizations_result&);
   AccumuloProxy_changeUserAuthorizations_result() {
   }
 
-  virtual ~AccumuloProxy_changeUserAuthorizations_result() throw() {}
-
+  virtual ~AccumuloProxy_changeUserAuthorizations_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_changeUserAuthorizations_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_changeUserAuthorizations_result & rhs) const
   {
@@ -7512,16 +7027,15 @@
 
 typedef struct _AccumuloProxy_changeUserAuthorizations_presult__isset {
   _AccumuloProxy_changeUserAuthorizations_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_changeUserAuthorizations_presult__isset;
 
 class AccumuloProxy_changeUserAuthorizations_presult {
  public:
 
 
-  virtual ~AccumuloProxy_changeUserAuthorizations_presult() throw() {}
-
+  virtual ~AccumuloProxy_changeUserAuthorizations_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -7533,36 +7047,31 @@
 
 typedef struct _AccumuloProxy_changeLocalUserPassword_args__isset {
   _AccumuloProxy_changeLocalUserPassword_args__isset() : login(false), user(false), password(false) {}
-  bool login;
-  bool user;
-  bool password;
+  bool login :1;
+  bool user :1;
+  bool password :1;
 } _AccumuloProxy_changeLocalUserPassword_args__isset;
 
 class AccumuloProxy_changeLocalUserPassword_args {
  public:
 
+  AccumuloProxy_changeLocalUserPassword_args(const AccumuloProxy_changeLocalUserPassword_args&);
+  AccumuloProxy_changeLocalUserPassword_args& operator=(const AccumuloProxy_changeLocalUserPassword_args&);
   AccumuloProxy_changeLocalUserPassword_args() : login(), user(), password() {
   }
 
-  virtual ~AccumuloProxy_changeLocalUserPassword_args() throw() {}
-
+  virtual ~AccumuloProxy_changeLocalUserPassword_args() throw();
   std::string login;
   std::string user;
   std::string password;
 
   _AccumuloProxy_changeLocalUserPassword_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_password(const std::string& val) {
-    password = val;
-  }
+  void __set_password(const std::string& val);
 
   bool operator == (const AccumuloProxy_changeLocalUserPassword_args & rhs) const
   {
@@ -7590,8 +7099,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_changeLocalUserPassword_pargs() throw() {}
-
+  virtual ~AccumuloProxy_changeLocalUserPassword_pargs() throw();
   const std::string* login;
   const std::string* user;
   const std::string* password;
@@ -7602,30 +7110,27 @@
 
 typedef struct _AccumuloProxy_changeLocalUserPassword_result__isset {
   _AccumuloProxy_changeLocalUserPassword_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_changeLocalUserPassword_result__isset;
 
 class AccumuloProxy_changeLocalUserPassword_result {
  public:
 
+  AccumuloProxy_changeLocalUserPassword_result(const AccumuloProxy_changeLocalUserPassword_result&);
+  AccumuloProxy_changeLocalUserPassword_result& operator=(const AccumuloProxy_changeLocalUserPassword_result&);
   AccumuloProxy_changeLocalUserPassword_result() {
   }
 
-  virtual ~AccumuloProxy_changeLocalUserPassword_result() throw() {}
-
+  virtual ~AccumuloProxy_changeLocalUserPassword_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_changeLocalUserPassword_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_changeLocalUserPassword_result & rhs) const
   {
@@ -7648,16 +7153,15 @@
 
 typedef struct _AccumuloProxy_changeLocalUserPassword_presult__isset {
   _AccumuloProxy_changeLocalUserPassword_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_changeLocalUserPassword_presult__isset;
 
 class AccumuloProxy_changeLocalUserPassword_presult {
  public:
 
 
-  virtual ~AccumuloProxy_changeLocalUserPassword_presult() throw() {}
-
+  virtual ~AccumuloProxy_changeLocalUserPassword_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -7669,36 +7173,31 @@
 
 typedef struct _AccumuloProxy_createLocalUser_args__isset {
   _AccumuloProxy_createLocalUser_args__isset() : login(false), user(false), password(false) {}
-  bool login;
-  bool user;
-  bool password;
+  bool login :1;
+  bool user :1;
+  bool password :1;
 } _AccumuloProxy_createLocalUser_args__isset;
 
 class AccumuloProxy_createLocalUser_args {
  public:
 
+  AccumuloProxy_createLocalUser_args(const AccumuloProxy_createLocalUser_args&);
+  AccumuloProxy_createLocalUser_args& operator=(const AccumuloProxy_createLocalUser_args&);
   AccumuloProxy_createLocalUser_args() : login(), user(), password() {
   }
 
-  virtual ~AccumuloProxy_createLocalUser_args() throw() {}
-
+  virtual ~AccumuloProxy_createLocalUser_args() throw();
   std::string login;
   std::string user;
   std::string password;
 
   _AccumuloProxy_createLocalUser_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_password(const std::string& val) {
-    password = val;
-  }
+  void __set_password(const std::string& val);
 
   bool operator == (const AccumuloProxy_createLocalUser_args & rhs) const
   {
@@ -7726,8 +7225,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_createLocalUser_pargs() throw() {}
-
+  virtual ~AccumuloProxy_createLocalUser_pargs() throw();
   const std::string* login;
   const std::string* user;
   const std::string* password;
@@ -7738,30 +7236,27 @@
 
 typedef struct _AccumuloProxy_createLocalUser_result__isset {
   _AccumuloProxy_createLocalUser_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_createLocalUser_result__isset;
 
 class AccumuloProxy_createLocalUser_result {
  public:
 
+  AccumuloProxy_createLocalUser_result(const AccumuloProxy_createLocalUser_result&);
+  AccumuloProxy_createLocalUser_result& operator=(const AccumuloProxy_createLocalUser_result&);
   AccumuloProxy_createLocalUser_result() {
   }
 
-  virtual ~AccumuloProxy_createLocalUser_result() throw() {}
-
+  virtual ~AccumuloProxy_createLocalUser_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_createLocalUser_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_createLocalUser_result & rhs) const
   {
@@ -7784,16 +7279,15 @@
 
 typedef struct _AccumuloProxy_createLocalUser_presult__isset {
   _AccumuloProxy_createLocalUser_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_createLocalUser_presult__isset;
 
 class AccumuloProxy_createLocalUser_presult {
  public:
 
 
-  virtual ~AccumuloProxy_createLocalUser_presult() throw() {}
-
+  virtual ~AccumuloProxy_createLocalUser_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -7805,30 +7299,27 @@
 
 typedef struct _AccumuloProxy_dropLocalUser_args__isset {
   _AccumuloProxy_dropLocalUser_args__isset() : login(false), user(false) {}
-  bool login;
-  bool user;
+  bool login :1;
+  bool user :1;
 } _AccumuloProxy_dropLocalUser_args__isset;
 
 class AccumuloProxy_dropLocalUser_args {
  public:
 
+  AccumuloProxy_dropLocalUser_args(const AccumuloProxy_dropLocalUser_args&);
+  AccumuloProxy_dropLocalUser_args& operator=(const AccumuloProxy_dropLocalUser_args&);
   AccumuloProxy_dropLocalUser_args() : login(), user() {
   }
 
-  virtual ~AccumuloProxy_dropLocalUser_args() throw() {}
-
+  virtual ~AccumuloProxy_dropLocalUser_args() throw();
   std::string login;
   std::string user;
 
   _AccumuloProxy_dropLocalUser_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
   bool operator == (const AccumuloProxy_dropLocalUser_args & rhs) const
   {
@@ -7854,8 +7345,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_dropLocalUser_pargs() throw() {}
-
+  virtual ~AccumuloProxy_dropLocalUser_pargs() throw();
   const std::string* login;
   const std::string* user;
 
@@ -7865,30 +7355,27 @@
 
 typedef struct _AccumuloProxy_dropLocalUser_result__isset {
   _AccumuloProxy_dropLocalUser_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_dropLocalUser_result__isset;
 
 class AccumuloProxy_dropLocalUser_result {
  public:
 
+  AccumuloProxy_dropLocalUser_result(const AccumuloProxy_dropLocalUser_result&);
+  AccumuloProxy_dropLocalUser_result& operator=(const AccumuloProxy_dropLocalUser_result&);
   AccumuloProxy_dropLocalUser_result() {
   }
 
-  virtual ~AccumuloProxy_dropLocalUser_result() throw() {}
-
+  virtual ~AccumuloProxy_dropLocalUser_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_dropLocalUser_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_dropLocalUser_result & rhs) const
   {
@@ -7911,16 +7398,15 @@
 
 typedef struct _AccumuloProxy_dropLocalUser_presult__isset {
   _AccumuloProxy_dropLocalUser_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_dropLocalUser_presult__isset;
 
 class AccumuloProxy_dropLocalUser_presult {
  public:
 
 
-  virtual ~AccumuloProxy_dropLocalUser_presult() throw() {}
-
+  virtual ~AccumuloProxy_dropLocalUser_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -7932,30 +7418,27 @@
 
 typedef struct _AccumuloProxy_getUserAuthorizations_args__isset {
   _AccumuloProxy_getUserAuthorizations_args__isset() : login(false), user(false) {}
-  bool login;
-  bool user;
+  bool login :1;
+  bool user :1;
 } _AccumuloProxy_getUserAuthorizations_args__isset;
 
 class AccumuloProxy_getUserAuthorizations_args {
  public:
 
+  AccumuloProxy_getUserAuthorizations_args(const AccumuloProxy_getUserAuthorizations_args&);
+  AccumuloProxy_getUserAuthorizations_args& operator=(const AccumuloProxy_getUserAuthorizations_args&);
   AccumuloProxy_getUserAuthorizations_args() : login(), user() {
   }
 
-  virtual ~AccumuloProxy_getUserAuthorizations_args() throw() {}
-
+  virtual ~AccumuloProxy_getUserAuthorizations_args() throw();
   std::string login;
   std::string user;
 
   _AccumuloProxy_getUserAuthorizations_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
   bool operator == (const AccumuloProxy_getUserAuthorizations_args & rhs) const
   {
@@ -7981,8 +7464,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getUserAuthorizations_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getUserAuthorizations_pargs() throw();
   const std::string* login;
   const std::string* user;
 
@@ -7992,36 +7474,31 @@
 
 typedef struct _AccumuloProxy_getUserAuthorizations_result__isset {
   _AccumuloProxy_getUserAuthorizations_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getUserAuthorizations_result__isset;
 
 class AccumuloProxy_getUserAuthorizations_result {
  public:
 
+  AccumuloProxy_getUserAuthorizations_result(const AccumuloProxy_getUserAuthorizations_result&);
+  AccumuloProxy_getUserAuthorizations_result& operator=(const AccumuloProxy_getUserAuthorizations_result&);
   AccumuloProxy_getUserAuthorizations_result() {
   }
 
-  virtual ~AccumuloProxy_getUserAuthorizations_result() throw() {}
-
+  virtual ~AccumuloProxy_getUserAuthorizations_result() throw();
   std::vector<std::string>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_getUserAuthorizations_result__isset __isset;
 
-  void __set_success(const std::vector<std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::vector<std::string> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_getUserAuthorizations_result & rhs) const
   {
@@ -8046,17 +7523,16 @@
 
 typedef struct _AccumuloProxy_getUserAuthorizations_presult__isset {
   _AccumuloProxy_getUserAuthorizations_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_getUserAuthorizations_presult__isset;
 
 class AccumuloProxy_getUserAuthorizations_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getUserAuthorizations_presult() throw() {}
-
+  virtual ~AccumuloProxy_getUserAuthorizations_presult() throw();
   std::vector<std::string> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -8069,36 +7545,31 @@
 
 typedef struct _AccumuloProxy_grantSystemPermission_args__isset {
   _AccumuloProxy_grantSystemPermission_args__isset() : login(false), user(false), perm(false) {}
-  bool login;
-  bool user;
-  bool perm;
+  bool login :1;
+  bool user :1;
+  bool perm :1;
 } _AccumuloProxy_grantSystemPermission_args__isset;
 
 class AccumuloProxy_grantSystemPermission_args {
  public:
 
+  AccumuloProxy_grantSystemPermission_args(const AccumuloProxy_grantSystemPermission_args&);
+  AccumuloProxy_grantSystemPermission_args& operator=(const AccumuloProxy_grantSystemPermission_args&);
   AccumuloProxy_grantSystemPermission_args() : login(), user(), perm((SystemPermission::type)0) {
   }
 
-  virtual ~AccumuloProxy_grantSystemPermission_args() throw() {}
-
+  virtual ~AccumuloProxy_grantSystemPermission_args() throw();
   std::string login;
   std::string user;
   SystemPermission::type perm;
 
   _AccumuloProxy_grantSystemPermission_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_perm(const SystemPermission::type val) {
-    perm = val;
-  }
+  void __set_perm(const SystemPermission::type val);
 
   bool operator == (const AccumuloProxy_grantSystemPermission_args & rhs) const
   {
@@ -8126,8 +7597,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_grantSystemPermission_pargs() throw() {}
-
+  virtual ~AccumuloProxy_grantSystemPermission_pargs() throw();
   const std::string* login;
   const std::string* user;
   const SystemPermission::type* perm;
@@ -8138,30 +7608,27 @@
 
 typedef struct _AccumuloProxy_grantSystemPermission_result__isset {
   _AccumuloProxy_grantSystemPermission_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_grantSystemPermission_result__isset;
 
 class AccumuloProxy_grantSystemPermission_result {
  public:
 
+  AccumuloProxy_grantSystemPermission_result(const AccumuloProxy_grantSystemPermission_result&);
+  AccumuloProxy_grantSystemPermission_result& operator=(const AccumuloProxy_grantSystemPermission_result&);
   AccumuloProxy_grantSystemPermission_result() {
   }
 
-  virtual ~AccumuloProxy_grantSystemPermission_result() throw() {}
-
+  virtual ~AccumuloProxy_grantSystemPermission_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_grantSystemPermission_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_grantSystemPermission_result & rhs) const
   {
@@ -8184,16 +7651,15 @@
 
 typedef struct _AccumuloProxy_grantSystemPermission_presult__isset {
   _AccumuloProxy_grantSystemPermission_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_grantSystemPermission_presult__isset;
 
 class AccumuloProxy_grantSystemPermission_presult {
  public:
 
 
-  virtual ~AccumuloProxy_grantSystemPermission_presult() throw() {}
-
+  virtual ~AccumuloProxy_grantSystemPermission_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -8205,20 +7671,21 @@
 
 typedef struct _AccumuloProxy_grantTablePermission_args__isset {
   _AccumuloProxy_grantTablePermission_args__isset() : login(false), user(false), table(false), perm(false) {}
-  bool login;
-  bool user;
-  bool table;
-  bool perm;
+  bool login :1;
+  bool user :1;
+  bool table :1;
+  bool perm :1;
 } _AccumuloProxy_grantTablePermission_args__isset;
 
 class AccumuloProxy_grantTablePermission_args {
  public:
 
+  AccumuloProxy_grantTablePermission_args(const AccumuloProxy_grantTablePermission_args&);
+  AccumuloProxy_grantTablePermission_args& operator=(const AccumuloProxy_grantTablePermission_args&);
   AccumuloProxy_grantTablePermission_args() : login(), user(), table(), perm((TablePermission::type)0) {
   }
 
-  virtual ~AccumuloProxy_grantTablePermission_args() throw() {}
-
+  virtual ~AccumuloProxy_grantTablePermission_args() throw();
   std::string login;
   std::string user;
   std::string table;
@@ -8226,21 +7693,13 @@
 
   _AccumuloProxy_grantTablePermission_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_table(const std::string& val) {
-    table = val;
-  }
+  void __set_table(const std::string& val);
 
-  void __set_perm(const TablePermission::type val) {
-    perm = val;
-  }
+  void __set_perm(const TablePermission::type val);
 
   bool operator == (const AccumuloProxy_grantTablePermission_args & rhs) const
   {
@@ -8270,8 +7729,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_grantTablePermission_pargs() throw() {}
-
+  virtual ~AccumuloProxy_grantTablePermission_pargs() throw();
   const std::string* login;
   const std::string* user;
   const std::string* table;
@@ -8283,36 +7741,31 @@
 
 typedef struct _AccumuloProxy_grantTablePermission_result__isset {
   _AccumuloProxy_grantTablePermission_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_grantTablePermission_result__isset;
 
 class AccumuloProxy_grantTablePermission_result {
  public:
 
+  AccumuloProxy_grantTablePermission_result(const AccumuloProxy_grantTablePermission_result&);
+  AccumuloProxy_grantTablePermission_result& operator=(const AccumuloProxy_grantTablePermission_result&);
   AccumuloProxy_grantTablePermission_result() {
   }
 
-  virtual ~AccumuloProxy_grantTablePermission_result() throw() {}
-
+  virtual ~AccumuloProxy_grantTablePermission_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_grantTablePermission_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_grantTablePermission_result & rhs) const
   {
@@ -8337,17 +7790,16 @@
 
 typedef struct _AccumuloProxy_grantTablePermission_presult__isset {
   _AccumuloProxy_grantTablePermission_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_grantTablePermission_presult__isset;
 
 class AccumuloProxy_grantTablePermission_presult {
  public:
 
 
-  virtual ~AccumuloProxy_grantTablePermission_presult() throw() {}
-
+  virtual ~AccumuloProxy_grantTablePermission_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -8360,36 +7812,31 @@
 
 typedef struct _AccumuloProxy_hasSystemPermission_args__isset {
   _AccumuloProxy_hasSystemPermission_args__isset() : login(false), user(false), perm(false) {}
-  bool login;
-  bool user;
-  bool perm;
+  bool login :1;
+  bool user :1;
+  bool perm :1;
 } _AccumuloProxy_hasSystemPermission_args__isset;
 
 class AccumuloProxy_hasSystemPermission_args {
  public:
 
+  AccumuloProxy_hasSystemPermission_args(const AccumuloProxy_hasSystemPermission_args&);
+  AccumuloProxy_hasSystemPermission_args& operator=(const AccumuloProxy_hasSystemPermission_args&);
   AccumuloProxy_hasSystemPermission_args() : login(), user(), perm((SystemPermission::type)0) {
   }
 
-  virtual ~AccumuloProxy_hasSystemPermission_args() throw() {}
-
+  virtual ~AccumuloProxy_hasSystemPermission_args() throw();
   std::string login;
   std::string user;
   SystemPermission::type perm;
 
   _AccumuloProxy_hasSystemPermission_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_perm(const SystemPermission::type val) {
-    perm = val;
-  }
+  void __set_perm(const SystemPermission::type val);
 
   bool operator == (const AccumuloProxy_hasSystemPermission_args & rhs) const
   {
@@ -8417,8 +7864,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_hasSystemPermission_pargs() throw() {}
-
+  virtual ~AccumuloProxy_hasSystemPermission_pargs() throw();
   const std::string* login;
   const std::string* user;
   const SystemPermission::type* perm;
@@ -8429,36 +7875,31 @@
 
 typedef struct _AccumuloProxy_hasSystemPermission_result__isset {
   _AccumuloProxy_hasSystemPermission_result__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_hasSystemPermission_result__isset;
 
 class AccumuloProxy_hasSystemPermission_result {
  public:
 
+  AccumuloProxy_hasSystemPermission_result(const AccumuloProxy_hasSystemPermission_result&);
+  AccumuloProxy_hasSystemPermission_result& operator=(const AccumuloProxy_hasSystemPermission_result&);
   AccumuloProxy_hasSystemPermission_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_hasSystemPermission_result() throw() {}
-
+  virtual ~AccumuloProxy_hasSystemPermission_result() throw();
   bool success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_hasSystemPermission_result__isset __isset;
 
-  void __set_success(const bool val) {
-    success = val;
-  }
+  void __set_success(const bool val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_hasSystemPermission_result & rhs) const
   {
@@ -8483,17 +7924,16 @@
 
 typedef struct _AccumuloProxy_hasSystemPermission_presult__isset {
   _AccumuloProxy_hasSystemPermission_presult__isset() : success(false), ouch1(false), ouch2(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_hasSystemPermission_presult__isset;
 
 class AccumuloProxy_hasSystemPermission_presult {
  public:
 
 
-  virtual ~AccumuloProxy_hasSystemPermission_presult() throw() {}
-
+  virtual ~AccumuloProxy_hasSystemPermission_presult() throw();
   bool* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -8506,20 +7946,21 @@
 
 typedef struct _AccumuloProxy_hasTablePermission_args__isset {
   _AccumuloProxy_hasTablePermission_args__isset() : login(false), user(false), table(false), perm(false) {}
-  bool login;
-  bool user;
-  bool table;
-  bool perm;
+  bool login :1;
+  bool user :1;
+  bool table :1;
+  bool perm :1;
 } _AccumuloProxy_hasTablePermission_args__isset;
 
 class AccumuloProxy_hasTablePermission_args {
  public:
 
+  AccumuloProxy_hasTablePermission_args(const AccumuloProxy_hasTablePermission_args&);
+  AccumuloProxy_hasTablePermission_args& operator=(const AccumuloProxy_hasTablePermission_args&);
   AccumuloProxy_hasTablePermission_args() : login(), user(), table(), perm((TablePermission::type)0) {
   }
 
-  virtual ~AccumuloProxy_hasTablePermission_args() throw() {}
-
+  virtual ~AccumuloProxy_hasTablePermission_args() throw();
   std::string login;
   std::string user;
   std::string table;
@@ -8527,21 +7968,13 @@
 
   _AccumuloProxy_hasTablePermission_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_table(const std::string& val) {
-    table = val;
-  }
+  void __set_table(const std::string& val);
 
-  void __set_perm(const TablePermission::type val) {
-    perm = val;
-  }
+  void __set_perm(const TablePermission::type val);
 
   bool operator == (const AccumuloProxy_hasTablePermission_args & rhs) const
   {
@@ -8571,8 +8004,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_hasTablePermission_pargs() throw() {}
-
+  virtual ~AccumuloProxy_hasTablePermission_pargs() throw();
   const std::string* login;
   const std::string* user;
   const std::string* table;
@@ -8584,20 +8016,21 @@
 
 typedef struct _AccumuloProxy_hasTablePermission_result__isset {
   _AccumuloProxy_hasTablePermission_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_hasTablePermission_result__isset;
 
 class AccumuloProxy_hasTablePermission_result {
  public:
 
+  AccumuloProxy_hasTablePermission_result(const AccumuloProxy_hasTablePermission_result&);
+  AccumuloProxy_hasTablePermission_result& operator=(const AccumuloProxy_hasTablePermission_result&);
   AccumuloProxy_hasTablePermission_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_hasTablePermission_result() throw() {}
-
+  virtual ~AccumuloProxy_hasTablePermission_result() throw();
   bool success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -8605,21 +8038,13 @@
 
   _AccumuloProxy_hasTablePermission_result__isset __isset;
 
-  void __set_success(const bool val) {
-    success = val;
-  }
+  void __set_success(const bool val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_hasTablePermission_result & rhs) const
   {
@@ -8646,18 +8071,17 @@
 
 typedef struct _AccumuloProxy_hasTablePermission_presult__isset {
   _AccumuloProxy_hasTablePermission_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_hasTablePermission_presult__isset;
 
 class AccumuloProxy_hasTablePermission_presult {
  public:
 
 
-  virtual ~AccumuloProxy_hasTablePermission_presult() throw() {}
-
+  virtual ~AccumuloProxy_hasTablePermission_presult() throw();
   bool* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -8671,24 +8095,23 @@
 
 typedef struct _AccumuloProxy_listLocalUsers_args__isset {
   _AccumuloProxy_listLocalUsers_args__isset() : login(false) {}
-  bool login;
+  bool login :1;
 } _AccumuloProxy_listLocalUsers_args__isset;
 
 class AccumuloProxy_listLocalUsers_args {
  public:
 
+  AccumuloProxy_listLocalUsers_args(const AccumuloProxy_listLocalUsers_args&);
+  AccumuloProxy_listLocalUsers_args& operator=(const AccumuloProxy_listLocalUsers_args&);
   AccumuloProxy_listLocalUsers_args() : login() {
   }
 
-  virtual ~AccumuloProxy_listLocalUsers_args() throw() {}
-
+  virtual ~AccumuloProxy_listLocalUsers_args() throw();
   std::string login;
 
   _AccumuloProxy_listLocalUsers_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
   bool operator == (const AccumuloProxy_listLocalUsers_args & rhs) const
   {
@@ -8712,8 +8135,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_listLocalUsers_pargs() throw() {}
-
+  virtual ~AccumuloProxy_listLocalUsers_pargs() throw();
   const std::string* login;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -8722,20 +8144,21 @@
 
 typedef struct _AccumuloProxy_listLocalUsers_result__isset {
   _AccumuloProxy_listLocalUsers_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listLocalUsers_result__isset;
 
 class AccumuloProxy_listLocalUsers_result {
  public:
 
+  AccumuloProxy_listLocalUsers_result(const AccumuloProxy_listLocalUsers_result&);
+  AccumuloProxy_listLocalUsers_result& operator=(const AccumuloProxy_listLocalUsers_result&);
   AccumuloProxy_listLocalUsers_result() {
   }
 
-  virtual ~AccumuloProxy_listLocalUsers_result() throw() {}
-
+  virtual ~AccumuloProxy_listLocalUsers_result() throw();
   std::set<std::string>  success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -8743,21 +8166,13 @@
 
   _AccumuloProxy_listLocalUsers_result__isset __isset;
 
-  void __set_success(const std::set<std::string> & val) {
-    success = val;
-  }
+  void __set_success(const std::set<std::string> & val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_listLocalUsers_result & rhs) const
   {
@@ -8784,18 +8199,17 @@
 
 typedef struct _AccumuloProxy_listLocalUsers_presult__isset {
   _AccumuloProxy_listLocalUsers_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_listLocalUsers_presult__isset;
 
 class AccumuloProxy_listLocalUsers_presult {
  public:
 
 
-  virtual ~AccumuloProxy_listLocalUsers_presult() throw() {}
-
+  virtual ~AccumuloProxy_listLocalUsers_presult() throw();
   std::set<std::string> * success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -8809,36 +8223,31 @@
 
 typedef struct _AccumuloProxy_revokeSystemPermission_args__isset {
   _AccumuloProxy_revokeSystemPermission_args__isset() : login(false), user(false), perm(false) {}
-  bool login;
-  bool user;
-  bool perm;
+  bool login :1;
+  bool user :1;
+  bool perm :1;
 } _AccumuloProxy_revokeSystemPermission_args__isset;
 
 class AccumuloProxy_revokeSystemPermission_args {
  public:
 
+  AccumuloProxy_revokeSystemPermission_args(const AccumuloProxy_revokeSystemPermission_args&);
+  AccumuloProxy_revokeSystemPermission_args& operator=(const AccumuloProxy_revokeSystemPermission_args&);
   AccumuloProxy_revokeSystemPermission_args() : login(), user(), perm((SystemPermission::type)0) {
   }
 
-  virtual ~AccumuloProxy_revokeSystemPermission_args() throw() {}
-
+  virtual ~AccumuloProxy_revokeSystemPermission_args() throw();
   std::string login;
   std::string user;
   SystemPermission::type perm;
 
   _AccumuloProxy_revokeSystemPermission_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_perm(const SystemPermission::type val) {
-    perm = val;
-  }
+  void __set_perm(const SystemPermission::type val);
 
   bool operator == (const AccumuloProxy_revokeSystemPermission_args & rhs) const
   {
@@ -8866,8 +8275,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_revokeSystemPermission_pargs() throw() {}
-
+  virtual ~AccumuloProxy_revokeSystemPermission_pargs() throw();
   const std::string* login;
   const std::string* user;
   const SystemPermission::type* perm;
@@ -8878,30 +8286,27 @@
 
 typedef struct _AccumuloProxy_revokeSystemPermission_result__isset {
   _AccumuloProxy_revokeSystemPermission_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_revokeSystemPermission_result__isset;
 
 class AccumuloProxy_revokeSystemPermission_result {
  public:
 
+  AccumuloProxy_revokeSystemPermission_result(const AccumuloProxy_revokeSystemPermission_result&);
+  AccumuloProxy_revokeSystemPermission_result& operator=(const AccumuloProxy_revokeSystemPermission_result&);
   AccumuloProxy_revokeSystemPermission_result() {
   }
 
-  virtual ~AccumuloProxy_revokeSystemPermission_result() throw() {}
-
+  virtual ~AccumuloProxy_revokeSystemPermission_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
   _AccumuloProxy_revokeSystemPermission_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_revokeSystemPermission_result & rhs) const
   {
@@ -8924,16 +8329,15 @@
 
 typedef struct _AccumuloProxy_revokeSystemPermission_presult__isset {
   _AccumuloProxy_revokeSystemPermission_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_revokeSystemPermission_presult__isset;
 
 class AccumuloProxy_revokeSystemPermission_presult {
  public:
 
 
-  virtual ~AccumuloProxy_revokeSystemPermission_presult() throw() {}
-
+  virtual ~AccumuloProxy_revokeSystemPermission_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
 
@@ -8945,20 +8349,21 @@
 
 typedef struct _AccumuloProxy_revokeTablePermission_args__isset {
   _AccumuloProxy_revokeTablePermission_args__isset() : login(false), user(false), table(false), perm(false) {}
-  bool login;
-  bool user;
-  bool table;
-  bool perm;
+  bool login :1;
+  bool user :1;
+  bool table :1;
+  bool perm :1;
 } _AccumuloProxy_revokeTablePermission_args__isset;
 
 class AccumuloProxy_revokeTablePermission_args {
  public:
 
+  AccumuloProxy_revokeTablePermission_args(const AccumuloProxy_revokeTablePermission_args&);
+  AccumuloProxy_revokeTablePermission_args& operator=(const AccumuloProxy_revokeTablePermission_args&);
   AccumuloProxy_revokeTablePermission_args() : login(), user(), table(), perm((TablePermission::type)0) {
   }
 
-  virtual ~AccumuloProxy_revokeTablePermission_args() throw() {}
-
+  virtual ~AccumuloProxy_revokeTablePermission_args() throw();
   std::string login;
   std::string user;
   std::string table;
@@ -8966,21 +8371,13 @@
 
   _AccumuloProxy_revokeTablePermission_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_table(const std::string& val) {
-    table = val;
-  }
+  void __set_table(const std::string& val);
 
-  void __set_perm(const TablePermission::type val) {
-    perm = val;
-  }
+  void __set_perm(const TablePermission::type val);
 
   bool operator == (const AccumuloProxy_revokeTablePermission_args & rhs) const
   {
@@ -9010,8 +8407,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_revokeTablePermission_pargs() throw() {}
-
+  virtual ~AccumuloProxy_revokeTablePermission_pargs() throw();
   const std::string* login;
   const std::string* user;
   const std::string* table;
@@ -9023,36 +8419,31 @@
 
 typedef struct _AccumuloProxy_revokeTablePermission_result__isset {
   _AccumuloProxy_revokeTablePermission_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_revokeTablePermission_result__isset;
 
 class AccumuloProxy_revokeTablePermission_result {
  public:
 
+  AccumuloProxy_revokeTablePermission_result(const AccumuloProxy_revokeTablePermission_result&);
+  AccumuloProxy_revokeTablePermission_result& operator=(const AccumuloProxy_revokeTablePermission_result&);
   AccumuloProxy_revokeTablePermission_result() {
   }
 
-  virtual ~AccumuloProxy_revokeTablePermission_result() throw() {}
-
+  virtual ~AccumuloProxy_revokeTablePermission_result() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
 
   _AccumuloProxy_revokeTablePermission_result__isset __isset;
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_revokeTablePermission_result & rhs) const
   {
@@ -9077,17 +8468,16 @@
 
 typedef struct _AccumuloProxy_revokeTablePermission_presult__isset {
   _AccumuloProxy_revokeTablePermission_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_revokeTablePermission_presult__isset;
 
 class AccumuloProxy_revokeTablePermission_presult {
  public:
 
 
-  virtual ~AccumuloProxy_revokeTablePermission_presult() throw() {}
-
+  virtual ~AccumuloProxy_revokeTablePermission_presult() throw();
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -9098,38 +8488,440 @@
 
 };
 
+typedef struct _AccumuloProxy_grantNamespacePermission_args__isset {
+  _AccumuloProxy_grantNamespacePermission_args__isset() : login(false), user(false), namespaceName(false), perm(false) {}
+  bool login :1;
+  bool user :1;
+  bool namespaceName :1;
+  bool perm :1;
+} _AccumuloProxy_grantNamespacePermission_args__isset;
+
+class AccumuloProxy_grantNamespacePermission_args {
+ public:
+
+  AccumuloProxy_grantNamespacePermission_args(const AccumuloProxy_grantNamespacePermission_args&);
+  AccumuloProxy_grantNamespacePermission_args& operator=(const AccumuloProxy_grantNamespacePermission_args&);
+  AccumuloProxy_grantNamespacePermission_args() : login(), user(), namespaceName(), perm((NamespacePermission::type)0) {
+  }
+
+  virtual ~AccumuloProxy_grantNamespacePermission_args() throw();
+  std::string login;
+  std::string user;
+  std::string namespaceName;
+  NamespacePermission::type perm;
+
+  _AccumuloProxy_grantNamespacePermission_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_user(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_perm(const NamespacePermission::type val);
+
+  bool operator == (const AccumuloProxy_grantNamespacePermission_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(user == rhs.user))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(perm == rhs.perm))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_grantNamespacePermission_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_grantNamespacePermission_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_grantNamespacePermission_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_grantNamespacePermission_pargs() throw();
+  const std::string* login;
+  const std::string* user;
+  const std::string* namespaceName;
+  const NamespacePermission::type* perm;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_grantNamespacePermission_result__isset {
+  _AccumuloProxy_grantNamespacePermission_result__isset() : ouch1(false), ouch2(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_grantNamespacePermission_result__isset;
+
+class AccumuloProxy_grantNamespacePermission_result {
+ public:
+
+  AccumuloProxy_grantNamespacePermission_result(const AccumuloProxy_grantNamespacePermission_result&);
+  AccumuloProxy_grantNamespacePermission_result& operator=(const AccumuloProxy_grantNamespacePermission_result&);
+  AccumuloProxy_grantNamespacePermission_result() {
+  }
+
+  virtual ~AccumuloProxy_grantNamespacePermission_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_grantNamespacePermission_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  bool operator == (const AccumuloProxy_grantNamespacePermission_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_grantNamespacePermission_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_grantNamespacePermission_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_grantNamespacePermission_presult__isset {
+  _AccumuloProxy_grantNamespacePermission_presult__isset() : ouch1(false), ouch2(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_grantNamespacePermission_presult__isset;
+
+class AccumuloProxy_grantNamespacePermission_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_grantNamespacePermission_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_grantNamespacePermission_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_hasNamespacePermission_args__isset {
+  _AccumuloProxy_hasNamespacePermission_args__isset() : login(false), user(false), namespaceName(false), perm(false) {}
+  bool login :1;
+  bool user :1;
+  bool namespaceName :1;
+  bool perm :1;
+} _AccumuloProxy_hasNamespacePermission_args__isset;
+
+class AccumuloProxy_hasNamespacePermission_args {
+ public:
+
+  AccumuloProxy_hasNamespacePermission_args(const AccumuloProxy_hasNamespacePermission_args&);
+  AccumuloProxy_hasNamespacePermission_args& operator=(const AccumuloProxy_hasNamespacePermission_args&);
+  AccumuloProxy_hasNamespacePermission_args() : login(), user(), namespaceName(), perm((NamespacePermission::type)0) {
+  }
+
+  virtual ~AccumuloProxy_hasNamespacePermission_args() throw();
+  std::string login;
+  std::string user;
+  std::string namespaceName;
+  NamespacePermission::type perm;
+
+  _AccumuloProxy_hasNamespacePermission_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_user(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_perm(const NamespacePermission::type val);
+
+  bool operator == (const AccumuloProxy_hasNamespacePermission_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(user == rhs.user))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(perm == rhs.perm))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_hasNamespacePermission_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_hasNamespacePermission_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_hasNamespacePermission_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_hasNamespacePermission_pargs() throw();
+  const std::string* login;
+  const std::string* user;
+  const std::string* namespaceName;
+  const NamespacePermission::type* perm;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_hasNamespacePermission_result__isset {
+  _AccumuloProxy_hasNamespacePermission_result__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_hasNamespacePermission_result__isset;
+
+class AccumuloProxy_hasNamespacePermission_result {
+ public:
+
+  AccumuloProxy_hasNamespacePermission_result(const AccumuloProxy_hasNamespacePermission_result&);
+  AccumuloProxy_hasNamespacePermission_result& operator=(const AccumuloProxy_hasNamespacePermission_result&);
+  AccumuloProxy_hasNamespacePermission_result() : success(0) {
+  }
+
+  virtual ~AccumuloProxy_hasNamespacePermission_result() throw();
+  bool success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_hasNamespacePermission_result__isset __isset;
+
+  void __set_success(const bool val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  bool operator == (const AccumuloProxy_hasNamespacePermission_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_hasNamespacePermission_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_hasNamespacePermission_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_hasNamespacePermission_presult__isset {
+  _AccumuloProxy_hasNamespacePermission_presult__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_hasNamespacePermission_presult__isset;
+
+class AccumuloProxy_hasNamespacePermission_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_hasNamespacePermission_presult() throw();
+  bool* success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_hasNamespacePermission_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_revokeNamespacePermission_args__isset {
+  _AccumuloProxy_revokeNamespacePermission_args__isset() : login(false), user(false), namespaceName(false), perm(false) {}
+  bool login :1;
+  bool user :1;
+  bool namespaceName :1;
+  bool perm :1;
+} _AccumuloProxy_revokeNamespacePermission_args__isset;
+
+class AccumuloProxy_revokeNamespacePermission_args {
+ public:
+
+  AccumuloProxy_revokeNamespacePermission_args(const AccumuloProxy_revokeNamespacePermission_args&);
+  AccumuloProxy_revokeNamespacePermission_args& operator=(const AccumuloProxy_revokeNamespacePermission_args&);
+  AccumuloProxy_revokeNamespacePermission_args() : login(), user(), namespaceName(), perm((NamespacePermission::type)0) {
+  }
+
+  virtual ~AccumuloProxy_revokeNamespacePermission_args() throw();
+  std::string login;
+  std::string user;
+  std::string namespaceName;
+  NamespacePermission::type perm;
+
+  _AccumuloProxy_revokeNamespacePermission_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_user(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_perm(const NamespacePermission::type val);
+
+  bool operator == (const AccumuloProxy_revokeNamespacePermission_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(user == rhs.user))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(perm == rhs.perm))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_revokeNamespacePermission_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_revokeNamespacePermission_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_revokeNamespacePermission_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_revokeNamespacePermission_pargs() throw();
+  const std::string* login;
+  const std::string* user;
+  const std::string* namespaceName;
+  const NamespacePermission::type* perm;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_revokeNamespacePermission_result__isset {
+  _AccumuloProxy_revokeNamespacePermission_result__isset() : ouch1(false), ouch2(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_revokeNamespacePermission_result__isset;
+
+class AccumuloProxy_revokeNamespacePermission_result {
+ public:
+
+  AccumuloProxy_revokeNamespacePermission_result(const AccumuloProxy_revokeNamespacePermission_result&);
+  AccumuloProxy_revokeNamespacePermission_result& operator=(const AccumuloProxy_revokeNamespacePermission_result&);
+  AccumuloProxy_revokeNamespacePermission_result() {
+  }
+
+  virtual ~AccumuloProxy_revokeNamespacePermission_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_revokeNamespacePermission_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  bool operator == (const AccumuloProxy_revokeNamespacePermission_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_revokeNamespacePermission_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_revokeNamespacePermission_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_revokeNamespacePermission_presult__isset {
+  _AccumuloProxy_revokeNamespacePermission_presult__isset() : ouch1(false), ouch2(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_revokeNamespacePermission_presult__isset;
+
+class AccumuloProxy_revokeNamespacePermission_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_revokeNamespacePermission_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_revokeNamespacePermission_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
 typedef struct _AccumuloProxy_createBatchScanner_args__isset {
   _AccumuloProxy_createBatchScanner_args__isset() : login(false), tableName(false), options(false) {}
-  bool login;
-  bool tableName;
-  bool options;
+  bool login :1;
+  bool tableName :1;
+  bool options :1;
 } _AccumuloProxy_createBatchScanner_args__isset;
 
 class AccumuloProxy_createBatchScanner_args {
  public:
 
+  AccumuloProxy_createBatchScanner_args(const AccumuloProxy_createBatchScanner_args&);
+  AccumuloProxy_createBatchScanner_args& operator=(const AccumuloProxy_createBatchScanner_args&);
   AccumuloProxy_createBatchScanner_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_createBatchScanner_args() throw() {}
-
+  virtual ~AccumuloProxy_createBatchScanner_args() throw();
   std::string login;
   std::string tableName;
   BatchScanOptions options;
 
   _AccumuloProxy_createBatchScanner_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_options(const BatchScanOptions& val) {
-    options = val;
-  }
+  void __set_options(const BatchScanOptions& val);
 
   bool operator == (const AccumuloProxy_createBatchScanner_args & rhs) const
   {
@@ -9157,8 +8949,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_createBatchScanner_pargs() throw() {}
-
+  virtual ~AccumuloProxy_createBatchScanner_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const BatchScanOptions* options;
@@ -9169,20 +8960,21 @@
 
 typedef struct _AccumuloProxy_createBatchScanner_result__isset {
   _AccumuloProxy_createBatchScanner_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createBatchScanner_result__isset;
 
 class AccumuloProxy_createBatchScanner_result {
  public:
 
+  AccumuloProxy_createBatchScanner_result(const AccumuloProxy_createBatchScanner_result&);
+  AccumuloProxy_createBatchScanner_result& operator=(const AccumuloProxy_createBatchScanner_result&);
   AccumuloProxy_createBatchScanner_result() : success() {
   }
 
-  virtual ~AccumuloProxy_createBatchScanner_result() throw() {}
-
+  virtual ~AccumuloProxy_createBatchScanner_result() throw();
   std::string success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -9190,21 +8982,13 @@
 
   _AccumuloProxy_createBatchScanner_result__isset __isset;
 
-  void __set_success(const std::string& val) {
-    success = val;
-  }
+  void __set_success(const std::string& val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_createBatchScanner_result & rhs) const
   {
@@ -9231,18 +9015,17 @@
 
 typedef struct _AccumuloProxy_createBatchScanner_presult__isset {
   _AccumuloProxy_createBatchScanner_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createBatchScanner_presult__isset;
 
 class AccumuloProxy_createBatchScanner_presult {
  public:
 
 
-  virtual ~AccumuloProxy_createBatchScanner_presult() throw() {}
-
+  virtual ~AccumuloProxy_createBatchScanner_presult() throw();
   std::string* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -9256,36 +9039,31 @@
 
 typedef struct _AccumuloProxy_createScanner_args__isset {
   _AccumuloProxy_createScanner_args__isset() : login(false), tableName(false), options(false) {}
-  bool login;
-  bool tableName;
-  bool options;
+  bool login :1;
+  bool tableName :1;
+  bool options :1;
 } _AccumuloProxy_createScanner_args__isset;
 
 class AccumuloProxy_createScanner_args {
  public:
 
+  AccumuloProxy_createScanner_args(const AccumuloProxy_createScanner_args&);
+  AccumuloProxy_createScanner_args& operator=(const AccumuloProxy_createScanner_args&);
   AccumuloProxy_createScanner_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_createScanner_args() throw() {}
-
+  virtual ~AccumuloProxy_createScanner_args() throw();
   std::string login;
   std::string tableName;
   ScanOptions options;
 
   _AccumuloProxy_createScanner_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_options(const ScanOptions& val) {
-    options = val;
-  }
+  void __set_options(const ScanOptions& val);
 
   bool operator == (const AccumuloProxy_createScanner_args & rhs) const
   {
@@ -9313,8 +9091,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_createScanner_pargs() throw() {}
-
+  virtual ~AccumuloProxy_createScanner_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const ScanOptions* options;
@@ -9325,20 +9102,21 @@
 
 typedef struct _AccumuloProxy_createScanner_result__isset {
   _AccumuloProxy_createScanner_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createScanner_result__isset;
 
 class AccumuloProxy_createScanner_result {
  public:
 
+  AccumuloProxy_createScanner_result(const AccumuloProxy_createScanner_result&);
+  AccumuloProxy_createScanner_result& operator=(const AccumuloProxy_createScanner_result&);
   AccumuloProxy_createScanner_result() : success() {
   }
 
-  virtual ~AccumuloProxy_createScanner_result() throw() {}
-
+  virtual ~AccumuloProxy_createScanner_result() throw();
   std::string success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -9346,21 +9124,13 @@
 
   _AccumuloProxy_createScanner_result__isset __isset;
 
-  void __set_success(const std::string& val) {
-    success = val;
-  }
+  void __set_success(const std::string& val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_createScanner_result & rhs) const
   {
@@ -9387,18 +9157,17 @@
 
 typedef struct _AccumuloProxy_createScanner_presult__isset {
   _AccumuloProxy_createScanner_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createScanner_presult__isset;
 
 class AccumuloProxy_createScanner_presult {
  public:
 
 
-  virtual ~AccumuloProxy_createScanner_presult() throw() {}
-
+  virtual ~AccumuloProxy_createScanner_presult() throw();
   std::string* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -9412,24 +9181,23 @@
 
 typedef struct _AccumuloProxy_hasNext_args__isset {
   _AccumuloProxy_hasNext_args__isset() : scanner(false) {}
-  bool scanner;
+  bool scanner :1;
 } _AccumuloProxy_hasNext_args__isset;
 
 class AccumuloProxy_hasNext_args {
  public:
 
+  AccumuloProxy_hasNext_args(const AccumuloProxy_hasNext_args&);
+  AccumuloProxy_hasNext_args& operator=(const AccumuloProxy_hasNext_args&);
   AccumuloProxy_hasNext_args() : scanner() {
   }
 
-  virtual ~AccumuloProxy_hasNext_args() throw() {}
-
+  virtual ~AccumuloProxy_hasNext_args() throw();
   std::string scanner;
 
   _AccumuloProxy_hasNext_args__isset __isset;
 
-  void __set_scanner(const std::string& val) {
-    scanner = val;
-  }
+  void __set_scanner(const std::string& val);
 
   bool operator == (const AccumuloProxy_hasNext_args & rhs) const
   {
@@ -9453,8 +9221,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_hasNext_pargs() throw() {}
-
+  virtual ~AccumuloProxy_hasNext_pargs() throw();
   const std::string* scanner;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -9463,30 +9230,27 @@
 
 typedef struct _AccumuloProxy_hasNext_result__isset {
   _AccumuloProxy_hasNext_result__isset() : success(false), ouch1(false) {}
-  bool success;
-  bool ouch1;
+  bool success :1;
+  bool ouch1 :1;
 } _AccumuloProxy_hasNext_result__isset;
 
 class AccumuloProxy_hasNext_result {
  public:
 
+  AccumuloProxy_hasNext_result(const AccumuloProxy_hasNext_result&);
+  AccumuloProxy_hasNext_result& operator=(const AccumuloProxy_hasNext_result&);
   AccumuloProxy_hasNext_result() : success(0) {
   }
 
-  virtual ~AccumuloProxy_hasNext_result() throw() {}
-
+  virtual ~AccumuloProxy_hasNext_result() throw();
   bool success;
   UnknownScanner ouch1;
 
   _AccumuloProxy_hasNext_result__isset __isset;
 
-  void __set_success(const bool val) {
-    success = val;
-  }
+  void __set_success(const bool val);
 
-  void __set_ouch1(const UnknownScanner& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const UnknownScanner& val);
 
   bool operator == (const AccumuloProxy_hasNext_result & rhs) const
   {
@@ -9509,16 +9273,15 @@
 
 typedef struct _AccumuloProxy_hasNext_presult__isset {
   _AccumuloProxy_hasNext_presult__isset() : success(false), ouch1(false) {}
-  bool success;
-  bool ouch1;
+  bool success :1;
+  bool ouch1 :1;
 } _AccumuloProxy_hasNext_presult__isset;
 
 class AccumuloProxy_hasNext_presult {
  public:
 
 
-  virtual ~AccumuloProxy_hasNext_presult() throw() {}
-
+  virtual ~AccumuloProxy_hasNext_presult() throw();
   bool* success;
   UnknownScanner ouch1;
 
@@ -9530,24 +9293,23 @@
 
 typedef struct _AccumuloProxy_nextEntry_args__isset {
   _AccumuloProxy_nextEntry_args__isset() : scanner(false) {}
-  bool scanner;
+  bool scanner :1;
 } _AccumuloProxy_nextEntry_args__isset;
 
 class AccumuloProxy_nextEntry_args {
  public:
 
+  AccumuloProxy_nextEntry_args(const AccumuloProxy_nextEntry_args&);
+  AccumuloProxy_nextEntry_args& operator=(const AccumuloProxy_nextEntry_args&);
   AccumuloProxy_nextEntry_args() : scanner() {
   }
 
-  virtual ~AccumuloProxy_nextEntry_args() throw() {}
-
+  virtual ~AccumuloProxy_nextEntry_args() throw();
   std::string scanner;
 
   _AccumuloProxy_nextEntry_args__isset __isset;
 
-  void __set_scanner(const std::string& val) {
-    scanner = val;
-  }
+  void __set_scanner(const std::string& val);
 
   bool operator == (const AccumuloProxy_nextEntry_args & rhs) const
   {
@@ -9571,8 +9333,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_nextEntry_pargs() throw() {}
-
+  virtual ~AccumuloProxy_nextEntry_pargs() throw();
   const std::string* scanner;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -9581,20 +9342,21 @@
 
 typedef struct _AccumuloProxy_nextEntry_result__isset {
   _AccumuloProxy_nextEntry_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_nextEntry_result__isset;
 
 class AccumuloProxy_nextEntry_result {
  public:
 
+  AccumuloProxy_nextEntry_result(const AccumuloProxy_nextEntry_result&);
+  AccumuloProxy_nextEntry_result& operator=(const AccumuloProxy_nextEntry_result&);
   AccumuloProxy_nextEntry_result() {
   }
 
-  virtual ~AccumuloProxy_nextEntry_result() throw() {}
-
+  virtual ~AccumuloProxy_nextEntry_result() throw();
   KeyValueAndPeek success;
   NoMoreEntriesException ouch1;
   UnknownScanner ouch2;
@@ -9602,21 +9364,13 @@
 
   _AccumuloProxy_nextEntry_result__isset __isset;
 
-  void __set_success(const KeyValueAndPeek& val) {
-    success = val;
-  }
+  void __set_success(const KeyValueAndPeek& val);
 
-  void __set_ouch1(const NoMoreEntriesException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const NoMoreEntriesException& val);
 
-  void __set_ouch2(const UnknownScanner& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const UnknownScanner& val);
 
-  void __set_ouch3(const AccumuloSecurityException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_nextEntry_result & rhs) const
   {
@@ -9643,18 +9397,17 @@
 
 typedef struct _AccumuloProxy_nextEntry_presult__isset {
   _AccumuloProxy_nextEntry_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_nextEntry_presult__isset;
 
 class AccumuloProxy_nextEntry_presult {
  public:
 
 
-  virtual ~AccumuloProxy_nextEntry_presult() throw() {}
-
+  virtual ~AccumuloProxy_nextEntry_presult() throw();
   KeyValueAndPeek* success;
   NoMoreEntriesException ouch1;
   UnknownScanner ouch2;
@@ -9668,30 +9421,27 @@
 
 typedef struct _AccumuloProxy_nextK_args__isset {
   _AccumuloProxy_nextK_args__isset() : scanner(false), k(false) {}
-  bool scanner;
-  bool k;
+  bool scanner :1;
+  bool k :1;
 } _AccumuloProxy_nextK_args__isset;
 
 class AccumuloProxy_nextK_args {
  public:
 
+  AccumuloProxy_nextK_args(const AccumuloProxy_nextK_args&);
+  AccumuloProxy_nextK_args& operator=(const AccumuloProxy_nextK_args&);
   AccumuloProxy_nextK_args() : scanner(), k(0) {
   }
 
-  virtual ~AccumuloProxy_nextK_args() throw() {}
-
+  virtual ~AccumuloProxy_nextK_args() throw();
   std::string scanner;
   int32_t k;
 
   _AccumuloProxy_nextK_args__isset __isset;
 
-  void __set_scanner(const std::string& val) {
-    scanner = val;
-  }
+  void __set_scanner(const std::string& val);
 
-  void __set_k(const int32_t val) {
-    k = val;
-  }
+  void __set_k(const int32_t val);
 
   bool operator == (const AccumuloProxy_nextK_args & rhs) const
   {
@@ -9717,8 +9467,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_nextK_pargs() throw() {}
-
+  virtual ~AccumuloProxy_nextK_pargs() throw();
   const std::string* scanner;
   const int32_t* k;
 
@@ -9728,20 +9477,21 @@
 
 typedef struct _AccumuloProxy_nextK_result__isset {
   _AccumuloProxy_nextK_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_nextK_result__isset;
 
 class AccumuloProxy_nextK_result {
  public:
 
+  AccumuloProxy_nextK_result(const AccumuloProxy_nextK_result&);
+  AccumuloProxy_nextK_result& operator=(const AccumuloProxy_nextK_result&);
   AccumuloProxy_nextK_result() {
   }
 
-  virtual ~AccumuloProxy_nextK_result() throw() {}
-
+  virtual ~AccumuloProxy_nextK_result() throw();
   ScanResult success;
   NoMoreEntriesException ouch1;
   UnknownScanner ouch2;
@@ -9749,21 +9499,13 @@
 
   _AccumuloProxy_nextK_result__isset __isset;
 
-  void __set_success(const ScanResult& val) {
-    success = val;
-  }
+  void __set_success(const ScanResult& val);
 
-  void __set_ouch1(const NoMoreEntriesException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const NoMoreEntriesException& val);
 
-  void __set_ouch2(const UnknownScanner& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const UnknownScanner& val);
 
-  void __set_ouch3(const AccumuloSecurityException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_nextK_result & rhs) const
   {
@@ -9790,18 +9532,17 @@
 
 typedef struct _AccumuloProxy_nextK_presult__isset {
   _AccumuloProxy_nextK_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_nextK_presult__isset;
 
 class AccumuloProxy_nextK_presult {
  public:
 
 
-  virtual ~AccumuloProxy_nextK_presult() throw() {}
-
+  virtual ~AccumuloProxy_nextK_presult() throw();
   ScanResult* success;
   NoMoreEntriesException ouch1;
   UnknownScanner ouch2;
@@ -9815,24 +9556,23 @@
 
 typedef struct _AccumuloProxy_closeScanner_args__isset {
   _AccumuloProxy_closeScanner_args__isset() : scanner(false) {}
-  bool scanner;
+  bool scanner :1;
 } _AccumuloProxy_closeScanner_args__isset;
 
 class AccumuloProxy_closeScanner_args {
  public:
 
+  AccumuloProxy_closeScanner_args(const AccumuloProxy_closeScanner_args&);
+  AccumuloProxy_closeScanner_args& operator=(const AccumuloProxy_closeScanner_args&);
   AccumuloProxy_closeScanner_args() : scanner() {
   }
 
-  virtual ~AccumuloProxy_closeScanner_args() throw() {}
-
+  virtual ~AccumuloProxy_closeScanner_args() throw();
   std::string scanner;
 
   _AccumuloProxy_closeScanner_args__isset __isset;
 
-  void __set_scanner(const std::string& val) {
-    scanner = val;
-  }
+  void __set_scanner(const std::string& val);
 
   bool operator == (const AccumuloProxy_closeScanner_args & rhs) const
   {
@@ -9856,8 +9596,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_closeScanner_pargs() throw() {}
-
+  virtual ~AccumuloProxy_closeScanner_pargs() throw();
   const std::string* scanner;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -9866,24 +9605,23 @@
 
 typedef struct _AccumuloProxy_closeScanner_result__isset {
   _AccumuloProxy_closeScanner_result__isset() : ouch1(false) {}
-  bool ouch1;
+  bool ouch1 :1;
 } _AccumuloProxy_closeScanner_result__isset;
 
 class AccumuloProxy_closeScanner_result {
  public:
 
+  AccumuloProxy_closeScanner_result(const AccumuloProxy_closeScanner_result&);
+  AccumuloProxy_closeScanner_result& operator=(const AccumuloProxy_closeScanner_result&);
   AccumuloProxy_closeScanner_result() {
   }
 
-  virtual ~AccumuloProxy_closeScanner_result() throw() {}
-
+  virtual ~AccumuloProxy_closeScanner_result() throw();
   UnknownScanner ouch1;
 
   _AccumuloProxy_closeScanner_result__isset __isset;
 
-  void __set_ouch1(const UnknownScanner& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const UnknownScanner& val);
 
   bool operator == (const AccumuloProxy_closeScanner_result & rhs) const
   {
@@ -9904,15 +9642,14 @@
 
 typedef struct _AccumuloProxy_closeScanner_presult__isset {
   _AccumuloProxy_closeScanner_presult__isset() : ouch1(false) {}
-  bool ouch1;
+  bool ouch1 :1;
 } _AccumuloProxy_closeScanner_presult__isset;
 
 class AccumuloProxy_closeScanner_presult {
  public:
 
 
-  virtual ~AccumuloProxy_closeScanner_presult() throw() {}
-
+  virtual ~AccumuloProxy_closeScanner_presult() throw();
   UnknownScanner ouch1;
 
   _AccumuloProxy_closeScanner_presult__isset __isset;
@@ -9923,36 +9660,31 @@
 
 typedef struct _AccumuloProxy_updateAndFlush_args__isset {
   _AccumuloProxy_updateAndFlush_args__isset() : login(false), tableName(false), cells(false) {}
-  bool login;
-  bool tableName;
-  bool cells;
+  bool login :1;
+  bool tableName :1;
+  bool cells :1;
 } _AccumuloProxy_updateAndFlush_args__isset;
 
 class AccumuloProxy_updateAndFlush_args {
  public:
 
+  AccumuloProxy_updateAndFlush_args(const AccumuloProxy_updateAndFlush_args&);
+  AccumuloProxy_updateAndFlush_args& operator=(const AccumuloProxy_updateAndFlush_args&);
   AccumuloProxy_updateAndFlush_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_updateAndFlush_args() throw() {}
-
+  virtual ~AccumuloProxy_updateAndFlush_args() throw();
   std::string login;
   std::string tableName;
   std::map<std::string, std::vector<ColumnUpdate> >  cells;
 
   _AccumuloProxy_updateAndFlush_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_cells(const std::map<std::string, std::vector<ColumnUpdate> > & val) {
-    cells = val;
-  }
+  void __set_cells(const std::map<std::string, std::vector<ColumnUpdate> > & val);
 
   bool operator == (const AccumuloProxy_updateAndFlush_args & rhs) const
   {
@@ -9980,8 +9712,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_updateAndFlush_pargs() throw() {}
-
+  virtual ~AccumuloProxy_updateAndFlush_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::map<std::string, std::vector<ColumnUpdate> > * cells;
@@ -9992,20 +9723,21 @@
 
 typedef struct _AccumuloProxy_updateAndFlush_result__isset {
   _AccumuloProxy_updateAndFlush_result__isset() : outch1(false), ouch2(false), ouch3(false), ouch4(false) {}
-  bool outch1;
-  bool ouch2;
-  bool ouch3;
-  bool ouch4;
+  bool outch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_updateAndFlush_result__isset;
 
 class AccumuloProxy_updateAndFlush_result {
  public:
 
+  AccumuloProxy_updateAndFlush_result(const AccumuloProxy_updateAndFlush_result&);
+  AccumuloProxy_updateAndFlush_result& operator=(const AccumuloProxy_updateAndFlush_result&);
   AccumuloProxy_updateAndFlush_result() {
   }
 
-  virtual ~AccumuloProxy_updateAndFlush_result() throw() {}
-
+  virtual ~AccumuloProxy_updateAndFlush_result() throw();
   AccumuloException outch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -10013,21 +9745,13 @@
 
   _AccumuloProxy_updateAndFlush_result__isset __isset;
 
-  void __set_outch1(const AccumuloException& val) {
-    outch1 = val;
-  }
+  void __set_outch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
-  void __set_ouch4(const MutationsRejectedException& val) {
-    ouch4 = val;
-  }
+  void __set_ouch4(const MutationsRejectedException& val);
 
   bool operator == (const AccumuloProxy_updateAndFlush_result & rhs) const
   {
@@ -10054,18 +9778,17 @@
 
 typedef struct _AccumuloProxy_updateAndFlush_presult__isset {
   _AccumuloProxy_updateAndFlush_presult__isset() : outch1(false), ouch2(false), ouch3(false), ouch4(false) {}
-  bool outch1;
-  bool ouch2;
-  bool ouch3;
-  bool ouch4;
+  bool outch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
 } _AccumuloProxy_updateAndFlush_presult__isset;
 
 class AccumuloProxy_updateAndFlush_presult {
  public:
 
 
-  virtual ~AccumuloProxy_updateAndFlush_presult() throw() {}
-
+  virtual ~AccumuloProxy_updateAndFlush_presult() throw();
   AccumuloException outch1;
   AccumuloSecurityException ouch2;
   TableNotFoundException ouch3;
@@ -10079,36 +9802,31 @@
 
 typedef struct _AccumuloProxy_createWriter_args__isset {
   _AccumuloProxy_createWriter_args__isset() : login(false), tableName(false), opts(false) {}
-  bool login;
-  bool tableName;
-  bool opts;
+  bool login :1;
+  bool tableName :1;
+  bool opts :1;
 } _AccumuloProxy_createWriter_args__isset;
 
 class AccumuloProxy_createWriter_args {
  public:
 
+  AccumuloProxy_createWriter_args(const AccumuloProxy_createWriter_args&);
+  AccumuloProxy_createWriter_args& operator=(const AccumuloProxy_createWriter_args&);
   AccumuloProxy_createWriter_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_createWriter_args() throw() {}
-
+  virtual ~AccumuloProxy_createWriter_args() throw();
   std::string login;
   std::string tableName;
   WriterOptions opts;
 
   _AccumuloProxy_createWriter_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_opts(const WriterOptions& val) {
-    opts = val;
-  }
+  void __set_opts(const WriterOptions& val);
 
   bool operator == (const AccumuloProxy_createWriter_args & rhs) const
   {
@@ -10136,8 +9854,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_createWriter_pargs() throw() {}
-
+  virtual ~AccumuloProxy_createWriter_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const WriterOptions* opts;
@@ -10148,20 +9865,21 @@
 
 typedef struct _AccumuloProxy_createWriter_result__isset {
   _AccumuloProxy_createWriter_result__isset() : success(false), outch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool outch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool outch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createWriter_result__isset;
 
 class AccumuloProxy_createWriter_result {
  public:
 
+  AccumuloProxy_createWriter_result(const AccumuloProxy_createWriter_result&);
+  AccumuloProxy_createWriter_result& operator=(const AccumuloProxy_createWriter_result&);
   AccumuloProxy_createWriter_result() : success() {
   }
 
-  virtual ~AccumuloProxy_createWriter_result() throw() {}
-
+  virtual ~AccumuloProxy_createWriter_result() throw();
   std::string success;
   AccumuloException outch1;
   AccumuloSecurityException ouch2;
@@ -10169,21 +9887,13 @@
 
   _AccumuloProxy_createWriter_result__isset __isset;
 
-  void __set_success(const std::string& val) {
-    success = val;
-  }
+  void __set_success(const std::string& val);
 
-  void __set_outch1(const AccumuloException& val) {
-    outch1 = val;
-  }
+  void __set_outch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_createWriter_result & rhs) const
   {
@@ -10210,18 +9920,17 @@
 
 typedef struct _AccumuloProxy_createWriter_presult__isset {
   _AccumuloProxy_createWriter_presult__isset() : success(false), outch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool outch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool outch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createWriter_presult__isset;
 
 class AccumuloProxy_createWriter_presult {
  public:
 
 
-  virtual ~AccumuloProxy_createWriter_presult() throw() {}
-
+  virtual ~AccumuloProxy_createWriter_presult() throw();
   std::string* success;
   AccumuloException outch1;
   AccumuloSecurityException ouch2;
@@ -10235,30 +9944,27 @@
 
 typedef struct _AccumuloProxy_update_args__isset {
   _AccumuloProxy_update_args__isset() : writer(false), cells(false) {}
-  bool writer;
-  bool cells;
+  bool writer :1;
+  bool cells :1;
 } _AccumuloProxy_update_args__isset;
 
 class AccumuloProxy_update_args {
  public:
 
+  AccumuloProxy_update_args(const AccumuloProxy_update_args&);
+  AccumuloProxy_update_args& operator=(const AccumuloProxy_update_args&);
   AccumuloProxy_update_args() : writer() {
   }
 
-  virtual ~AccumuloProxy_update_args() throw() {}
-
+  virtual ~AccumuloProxy_update_args() throw();
   std::string writer;
   std::map<std::string, std::vector<ColumnUpdate> >  cells;
 
   _AccumuloProxy_update_args__isset __isset;
 
-  void __set_writer(const std::string& val) {
-    writer = val;
-  }
+  void __set_writer(const std::string& val);
 
-  void __set_cells(const std::map<std::string, std::vector<ColumnUpdate> > & val) {
-    cells = val;
-  }
+  void __set_cells(const std::map<std::string, std::vector<ColumnUpdate> > & val);
 
   bool operator == (const AccumuloProxy_update_args & rhs) const
   {
@@ -10284,8 +9990,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_update_pargs() throw() {}
-
+  virtual ~AccumuloProxy_update_pargs() throw();
   const std::string* writer;
   const std::map<std::string, std::vector<ColumnUpdate> > * cells;
 
@@ -10295,24 +10000,23 @@
 
 typedef struct _AccumuloProxy_flush_args__isset {
   _AccumuloProxy_flush_args__isset() : writer(false) {}
-  bool writer;
+  bool writer :1;
 } _AccumuloProxy_flush_args__isset;
 
 class AccumuloProxy_flush_args {
  public:
 
+  AccumuloProxy_flush_args(const AccumuloProxy_flush_args&);
+  AccumuloProxy_flush_args& operator=(const AccumuloProxy_flush_args&);
   AccumuloProxy_flush_args() : writer() {
   }
 
-  virtual ~AccumuloProxy_flush_args() throw() {}
-
+  virtual ~AccumuloProxy_flush_args() throw();
   std::string writer;
 
   _AccumuloProxy_flush_args__isset __isset;
 
-  void __set_writer(const std::string& val) {
-    writer = val;
-  }
+  void __set_writer(const std::string& val);
 
   bool operator == (const AccumuloProxy_flush_args & rhs) const
   {
@@ -10336,8 +10040,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_flush_pargs() throw() {}
-
+  virtual ~AccumuloProxy_flush_pargs() throw();
   const std::string* writer;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -10346,30 +10049,27 @@
 
 typedef struct _AccumuloProxy_flush_result__isset {
   _AccumuloProxy_flush_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_flush_result__isset;
 
 class AccumuloProxy_flush_result {
  public:
 
+  AccumuloProxy_flush_result(const AccumuloProxy_flush_result&);
+  AccumuloProxy_flush_result& operator=(const AccumuloProxy_flush_result&);
   AccumuloProxy_flush_result() {
   }
 
-  virtual ~AccumuloProxy_flush_result() throw() {}
-
+  virtual ~AccumuloProxy_flush_result() throw();
   UnknownWriter ouch1;
   MutationsRejectedException ouch2;
 
   _AccumuloProxy_flush_result__isset __isset;
 
-  void __set_ouch1(const UnknownWriter& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const UnknownWriter& val);
 
-  void __set_ouch2(const MutationsRejectedException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const MutationsRejectedException& val);
 
   bool operator == (const AccumuloProxy_flush_result & rhs) const
   {
@@ -10392,16 +10092,15 @@
 
 typedef struct _AccumuloProxy_flush_presult__isset {
   _AccumuloProxy_flush_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_flush_presult__isset;
 
 class AccumuloProxy_flush_presult {
  public:
 
 
-  virtual ~AccumuloProxy_flush_presult() throw() {}
-
+  virtual ~AccumuloProxy_flush_presult() throw();
   UnknownWriter ouch1;
   MutationsRejectedException ouch2;
 
@@ -10413,24 +10112,23 @@
 
 typedef struct _AccumuloProxy_closeWriter_args__isset {
   _AccumuloProxy_closeWriter_args__isset() : writer(false) {}
-  bool writer;
+  bool writer :1;
 } _AccumuloProxy_closeWriter_args__isset;
 
 class AccumuloProxy_closeWriter_args {
  public:
 
+  AccumuloProxy_closeWriter_args(const AccumuloProxy_closeWriter_args&);
+  AccumuloProxy_closeWriter_args& operator=(const AccumuloProxy_closeWriter_args&);
   AccumuloProxy_closeWriter_args() : writer() {
   }
 
-  virtual ~AccumuloProxy_closeWriter_args() throw() {}
-
+  virtual ~AccumuloProxy_closeWriter_args() throw();
   std::string writer;
 
   _AccumuloProxy_closeWriter_args__isset __isset;
 
-  void __set_writer(const std::string& val) {
-    writer = val;
-  }
+  void __set_writer(const std::string& val);
 
   bool operator == (const AccumuloProxy_closeWriter_args & rhs) const
   {
@@ -10454,8 +10152,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_closeWriter_pargs() throw() {}
-
+  virtual ~AccumuloProxy_closeWriter_pargs() throw();
   const std::string* writer;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -10464,30 +10161,27 @@
 
 typedef struct _AccumuloProxy_closeWriter_result__isset {
   _AccumuloProxy_closeWriter_result__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_closeWriter_result__isset;
 
 class AccumuloProxy_closeWriter_result {
  public:
 
+  AccumuloProxy_closeWriter_result(const AccumuloProxy_closeWriter_result&);
+  AccumuloProxy_closeWriter_result& operator=(const AccumuloProxy_closeWriter_result&);
   AccumuloProxy_closeWriter_result() {
   }
 
-  virtual ~AccumuloProxy_closeWriter_result() throw() {}
-
+  virtual ~AccumuloProxy_closeWriter_result() throw();
   UnknownWriter ouch1;
   MutationsRejectedException ouch2;
 
   _AccumuloProxy_closeWriter_result__isset __isset;
 
-  void __set_ouch1(const UnknownWriter& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const UnknownWriter& val);
 
-  void __set_ouch2(const MutationsRejectedException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const MutationsRejectedException& val);
 
   bool operator == (const AccumuloProxy_closeWriter_result & rhs) const
   {
@@ -10510,16 +10204,15 @@
 
 typedef struct _AccumuloProxy_closeWriter_presult__isset {
   _AccumuloProxy_closeWriter_presult__isset() : ouch1(false), ouch2(false) {}
-  bool ouch1;
-  bool ouch2;
+  bool ouch1 :1;
+  bool ouch2 :1;
 } _AccumuloProxy_closeWriter_presult__isset;
 
 class AccumuloProxy_closeWriter_presult {
  public:
 
 
-  virtual ~AccumuloProxy_closeWriter_presult() throw() {}
-
+  virtual ~AccumuloProxy_closeWriter_presult() throw();
   UnknownWriter ouch1;
   MutationsRejectedException ouch2;
 
@@ -10531,20 +10224,21 @@
 
 typedef struct _AccumuloProxy_updateRowConditionally_args__isset {
   _AccumuloProxy_updateRowConditionally_args__isset() : login(false), tableName(false), row(false), updates(false) {}
-  bool login;
-  bool tableName;
-  bool row;
-  bool updates;
+  bool login :1;
+  bool tableName :1;
+  bool row :1;
+  bool updates :1;
 } _AccumuloProxy_updateRowConditionally_args__isset;
 
 class AccumuloProxy_updateRowConditionally_args {
  public:
 
+  AccumuloProxy_updateRowConditionally_args(const AccumuloProxy_updateRowConditionally_args&);
+  AccumuloProxy_updateRowConditionally_args& operator=(const AccumuloProxy_updateRowConditionally_args&);
   AccumuloProxy_updateRowConditionally_args() : login(), tableName(), row() {
   }
 
-  virtual ~AccumuloProxy_updateRowConditionally_args() throw() {}
-
+  virtual ~AccumuloProxy_updateRowConditionally_args() throw();
   std::string login;
   std::string tableName;
   std::string row;
@@ -10552,21 +10246,13 @@
 
   _AccumuloProxy_updateRowConditionally_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_row(const std::string& val) {
-    row = val;
-  }
+  void __set_row(const std::string& val);
 
-  void __set_updates(const ConditionalUpdates& val) {
-    updates = val;
-  }
+  void __set_updates(const ConditionalUpdates& val);
 
   bool operator == (const AccumuloProxy_updateRowConditionally_args & rhs) const
   {
@@ -10596,8 +10282,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_updateRowConditionally_pargs() throw() {}
-
+  virtual ~AccumuloProxy_updateRowConditionally_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const std::string* row;
@@ -10609,20 +10294,21 @@
 
 typedef struct _AccumuloProxy_updateRowConditionally_result__isset {
   _AccumuloProxy_updateRowConditionally_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_updateRowConditionally_result__isset;
 
 class AccumuloProxy_updateRowConditionally_result {
  public:
 
+  AccumuloProxy_updateRowConditionally_result(const AccumuloProxy_updateRowConditionally_result&);
+  AccumuloProxy_updateRowConditionally_result& operator=(const AccumuloProxy_updateRowConditionally_result&);
   AccumuloProxy_updateRowConditionally_result() : success((ConditionalStatus::type)0) {
   }
 
-  virtual ~AccumuloProxy_updateRowConditionally_result() throw() {}
-
+  virtual ~AccumuloProxy_updateRowConditionally_result() throw();
   ConditionalStatus::type success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -10630,21 +10316,13 @@
 
   _AccumuloProxy_updateRowConditionally_result__isset __isset;
 
-  void __set_success(const ConditionalStatus::type val) {
-    success = val;
-  }
+  void __set_success(const ConditionalStatus::type val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_updateRowConditionally_result & rhs) const
   {
@@ -10671,18 +10349,17 @@
 
 typedef struct _AccumuloProxy_updateRowConditionally_presult__isset {
   _AccumuloProxy_updateRowConditionally_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_updateRowConditionally_presult__isset;
 
 class AccumuloProxy_updateRowConditionally_presult {
  public:
 
 
-  virtual ~AccumuloProxy_updateRowConditionally_presult() throw() {}
-
+  virtual ~AccumuloProxy_updateRowConditionally_presult() throw();
   ConditionalStatus::type* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -10696,36 +10373,31 @@
 
 typedef struct _AccumuloProxy_createConditionalWriter_args__isset {
   _AccumuloProxy_createConditionalWriter_args__isset() : login(false), tableName(false), options(false) {}
-  bool login;
-  bool tableName;
-  bool options;
+  bool login :1;
+  bool tableName :1;
+  bool options :1;
 } _AccumuloProxy_createConditionalWriter_args__isset;
 
 class AccumuloProxy_createConditionalWriter_args {
  public:
 
+  AccumuloProxy_createConditionalWriter_args(const AccumuloProxy_createConditionalWriter_args&);
+  AccumuloProxy_createConditionalWriter_args& operator=(const AccumuloProxy_createConditionalWriter_args&);
   AccumuloProxy_createConditionalWriter_args() : login(), tableName() {
   }
 
-  virtual ~AccumuloProxy_createConditionalWriter_args() throw() {}
-
+  virtual ~AccumuloProxy_createConditionalWriter_args() throw();
   std::string login;
   std::string tableName;
   ConditionalWriterOptions options;
 
   _AccumuloProxy_createConditionalWriter_args__isset __isset;
 
-  void __set_login(const std::string& val) {
-    login = val;
-  }
+  void __set_login(const std::string& val);
 
-  void __set_tableName(const std::string& val) {
-    tableName = val;
-  }
+  void __set_tableName(const std::string& val);
 
-  void __set_options(const ConditionalWriterOptions& val) {
-    options = val;
-  }
+  void __set_options(const ConditionalWriterOptions& val);
 
   bool operator == (const AccumuloProxy_createConditionalWriter_args & rhs) const
   {
@@ -10753,8 +10425,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_createConditionalWriter_pargs() throw() {}
-
+  virtual ~AccumuloProxy_createConditionalWriter_pargs() throw();
   const std::string* login;
   const std::string* tableName;
   const ConditionalWriterOptions* options;
@@ -10765,20 +10436,21 @@
 
 typedef struct _AccumuloProxy_createConditionalWriter_result__isset {
   _AccumuloProxy_createConditionalWriter_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createConditionalWriter_result__isset;
 
 class AccumuloProxy_createConditionalWriter_result {
  public:
 
+  AccumuloProxy_createConditionalWriter_result(const AccumuloProxy_createConditionalWriter_result&);
+  AccumuloProxy_createConditionalWriter_result& operator=(const AccumuloProxy_createConditionalWriter_result&);
   AccumuloProxy_createConditionalWriter_result() : success() {
   }
 
-  virtual ~AccumuloProxy_createConditionalWriter_result() throw() {}
-
+  virtual ~AccumuloProxy_createConditionalWriter_result() throw();
   std::string success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -10786,21 +10458,13 @@
 
   _AccumuloProxy_createConditionalWriter_result__isset __isset;
 
-  void __set_success(const std::string& val) {
-    success = val;
-  }
+  void __set_success(const std::string& val);
 
-  void __set_ouch1(const AccumuloException& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const AccumuloException& val);
 
-  void __set_ouch2(const AccumuloSecurityException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloSecurityException& val);
 
-  void __set_ouch3(const TableNotFoundException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const TableNotFoundException& val);
 
   bool operator == (const AccumuloProxy_createConditionalWriter_result & rhs) const
   {
@@ -10827,18 +10491,17 @@
 
 typedef struct _AccumuloProxy_createConditionalWriter_presult__isset {
   _AccumuloProxy_createConditionalWriter_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_createConditionalWriter_presult__isset;
 
 class AccumuloProxy_createConditionalWriter_presult {
  public:
 
 
-  virtual ~AccumuloProxy_createConditionalWriter_presult() throw() {}
-
+  virtual ~AccumuloProxy_createConditionalWriter_presult() throw();
   std::string* success;
   AccumuloException ouch1;
   AccumuloSecurityException ouch2;
@@ -10852,30 +10515,27 @@
 
 typedef struct _AccumuloProxy_updateRowsConditionally_args__isset {
   _AccumuloProxy_updateRowsConditionally_args__isset() : conditionalWriter(false), updates(false) {}
-  bool conditionalWriter;
-  bool updates;
+  bool conditionalWriter :1;
+  bool updates :1;
 } _AccumuloProxy_updateRowsConditionally_args__isset;
 
 class AccumuloProxy_updateRowsConditionally_args {
  public:
 
+  AccumuloProxy_updateRowsConditionally_args(const AccumuloProxy_updateRowsConditionally_args&);
+  AccumuloProxy_updateRowsConditionally_args& operator=(const AccumuloProxy_updateRowsConditionally_args&);
   AccumuloProxy_updateRowsConditionally_args() : conditionalWriter() {
   }
 
-  virtual ~AccumuloProxy_updateRowsConditionally_args() throw() {}
-
+  virtual ~AccumuloProxy_updateRowsConditionally_args() throw();
   std::string conditionalWriter;
   std::map<std::string, ConditionalUpdates>  updates;
 
   _AccumuloProxy_updateRowsConditionally_args__isset __isset;
 
-  void __set_conditionalWriter(const std::string& val) {
-    conditionalWriter = val;
-  }
+  void __set_conditionalWriter(const std::string& val);
 
-  void __set_updates(const std::map<std::string, ConditionalUpdates> & val) {
-    updates = val;
-  }
+  void __set_updates(const std::map<std::string, ConditionalUpdates> & val);
 
   bool operator == (const AccumuloProxy_updateRowsConditionally_args & rhs) const
   {
@@ -10901,8 +10561,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_updateRowsConditionally_pargs() throw() {}
-
+  virtual ~AccumuloProxy_updateRowsConditionally_pargs() throw();
   const std::string* conditionalWriter;
   const std::map<std::string, ConditionalUpdates> * updates;
 
@@ -10912,20 +10571,21 @@
 
 typedef struct _AccumuloProxy_updateRowsConditionally_result__isset {
   _AccumuloProxy_updateRowsConditionally_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_updateRowsConditionally_result__isset;
 
 class AccumuloProxy_updateRowsConditionally_result {
  public:
 
+  AccumuloProxy_updateRowsConditionally_result(const AccumuloProxy_updateRowsConditionally_result&);
+  AccumuloProxy_updateRowsConditionally_result& operator=(const AccumuloProxy_updateRowsConditionally_result&);
   AccumuloProxy_updateRowsConditionally_result() {
   }
 
-  virtual ~AccumuloProxy_updateRowsConditionally_result() throw() {}
-
+  virtual ~AccumuloProxy_updateRowsConditionally_result() throw();
   std::map<std::string, ConditionalStatus::type>  success;
   UnknownWriter ouch1;
   AccumuloException ouch2;
@@ -10933,21 +10593,13 @@
 
   _AccumuloProxy_updateRowsConditionally_result__isset __isset;
 
-  void __set_success(const std::map<std::string, ConditionalStatus::type> & val) {
-    success = val;
-  }
+  void __set_success(const std::map<std::string, ConditionalStatus::type> & val);
 
-  void __set_ouch1(const UnknownWriter& val) {
-    ouch1 = val;
-  }
+  void __set_ouch1(const UnknownWriter& val);
 
-  void __set_ouch2(const AccumuloException& val) {
-    ouch2 = val;
-  }
+  void __set_ouch2(const AccumuloException& val);
 
-  void __set_ouch3(const AccumuloSecurityException& val) {
-    ouch3 = val;
-  }
+  void __set_ouch3(const AccumuloSecurityException& val);
 
   bool operator == (const AccumuloProxy_updateRowsConditionally_result & rhs) const
   {
@@ -10974,18 +10626,17 @@
 
 typedef struct _AccumuloProxy_updateRowsConditionally_presult__isset {
   _AccumuloProxy_updateRowsConditionally_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
-  bool success;
-  bool ouch1;
-  bool ouch2;
-  bool ouch3;
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
 } _AccumuloProxy_updateRowsConditionally_presult__isset;
 
 class AccumuloProxy_updateRowsConditionally_presult {
  public:
 
 
-  virtual ~AccumuloProxy_updateRowsConditionally_presult() throw() {}
-
+  virtual ~AccumuloProxy_updateRowsConditionally_presult() throw();
   std::map<std::string, ConditionalStatus::type> * success;
   UnknownWriter ouch1;
   AccumuloException ouch2;
@@ -10999,24 +10650,23 @@
 
 typedef struct _AccumuloProxy_closeConditionalWriter_args__isset {
   _AccumuloProxy_closeConditionalWriter_args__isset() : conditionalWriter(false) {}
-  bool conditionalWriter;
+  bool conditionalWriter :1;
 } _AccumuloProxy_closeConditionalWriter_args__isset;
 
 class AccumuloProxy_closeConditionalWriter_args {
  public:
 
+  AccumuloProxy_closeConditionalWriter_args(const AccumuloProxy_closeConditionalWriter_args&);
+  AccumuloProxy_closeConditionalWriter_args& operator=(const AccumuloProxy_closeConditionalWriter_args&);
   AccumuloProxy_closeConditionalWriter_args() : conditionalWriter() {
   }
 
-  virtual ~AccumuloProxy_closeConditionalWriter_args() throw() {}
-
+  virtual ~AccumuloProxy_closeConditionalWriter_args() throw();
   std::string conditionalWriter;
 
   _AccumuloProxy_closeConditionalWriter_args__isset __isset;
 
-  void __set_conditionalWriter(const std::string& val) {
-    conditionalWriter = val;
-  }
+  void __set_conditionalWriter(const std::string& val);
 
   bool operator == (const AccumuloProxy_closeConditionalWriter_args & rhs) const
   {
@@ -11040,8 +10690,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_closeConditionalWriter_pargs() throw() {}
-
+  virtual ~AccumuloProxy_closeConditionalWriter_pargs() throw();
   const std::string* conditionalWriter;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -11052,11 +10701,12 @@
 class AccumuloProxy_closeConditionalWriter_result {
  public:
 
+  AccumuloProxy_closeConditionalWriter_result(const AccumuloProxy_closeConditionalWriter_result&);
+  AccumuloProxy_closeConditionalWriter_result& operator=(const AccumuloProxy_closeConditionalWriter_result&);
   AccumuloProxy_closeConditionalWriter_result() {
   }
 
-  virtual ~AccumuloProxy_closeConditionalWriter_result() throw() {}
-
+  virtual ~AccumuloProxy_closeConditionalWriter_result() throw();
 
   bool operator == (const AccumuloProxy_closeConditionalWriter_result & /* rhs */) const
   {
@@ -11078,8 +10728,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_closeConditionalWriter_presult() throw() {}
-
+  virtual ~AccumuloProxy_closeConditionalWriter_presult() throw();
 
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
 
@@ -11087,24 +10736,23 @@
 
 typedef struct _AccumuloProxy_getRowRange_args__isset {
   _AccumuloProxy_getRowRange_args__isset() : row(false) {}
-  bool row;
+  bool row :1;
 } _AccumuloProxy_getRowRange_args__isset;
 
 class AccumuloProxy_getRowRange_args {
  public:
 
+  AccumuloProxy_getRowRange_args(const AccumuloProxy_getRowRange_args&);
+  AccumuloProxy_getRowRange_args& operator=(const AccumuloProxy_getRowRange_args&);
   AccumuloProxy_getRowRange_args() : row() {
   }
 
-  virtual ~AccumuloProxy_getRowRange_args() throw() {}
-
+  virtual ~AccumuloProxy_getRowRange_args() throw();
   std::string row;
 
   _AccumuloProxy_getRowRange_args__isset __isset;
 
-  void __set_row(const std::string& val) {
-    row = val;
-  }
+  void __set_row(const std::string& val);
 
   bool operator == (const AccumuloProxy_getRowRange_args & rhs) const
   {
@@ -11128,8 +10776,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getRowRange_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getRowRange_pargs() throw();
   const std::string* row;
 
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
@@ -11138,24 +10785,23 @@
 
 typedef struct _AccumuloProxy_getRowRange_result__isset {
   _AccumuloProxy_getRowRange_result__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_getRowRange_result__isset;
 
 class AccumuloProxy_getRowRange_result {
  public:
 
+  AccumuloProxy_getRowRange_result(const AccumuloProxy_getRowRange_result&);
+  AccumuloProxy_getRowRange_result& operator=(const AccumuloProxy_getRowRange_result&);
   AccumuloProxy_getRowRange_result() {
   }
 
-  virtual ~AccumuloProxy_getRowRange_result() throw() {}
-
+  virtual ~AccumuloProxy_getRowRange_result() throw();
   Range success;
 
   _AccumuloProxy_getRowRange_result__isset __isset;
 
-  void __set_success(const Range& val) {
-    success = val;
-  }
+  void __set_success(const Range& val);
 
   bool operator == (const AccumuloProxy_getRowRange_result & rhs) const
   {
@@ -11176,15 +10822,14 @@
 
 typedef struct _AccumuloProxy_getRowRange_presult__isset {
   _AccumuloProxy_getRowRange_presult__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_getRowRange_presult__isset;
 
 class AccumuloProxy_getRowRange_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getRowRange_presult() throw() {}
-
+  virtual ~AccumuloProxy_getRowRange_presult() throw();
   Range* success;
 
   _AccumuloProxy_getRowRange_presult__isset __isset;
@@ -11195,30 +10840,27 @@
 
 typedef struct _AccumuloProxy_getFollowing_args__isset {
   _AccumuloProxy_getFollowing_args__isset() : key(false), part(false) {}
-  bool key;
-  bool part;
+  bool key :1;
+  bool part :1;
 } _AccumuloProxy_getFollowing_args__isset;
 
 class AccumuloProxy_getFollowing_args {
  public:
 
+  AccumuloProxy_getFollowing_args(const AccumuloProxy_getFollowing_args&);
+  AccumuloProxy_getFollowing_args& operator=(const AccumuloProxy_getFollowing_args&);
   AccumuloProxy_getFollowing_args() : part((PartialKey::type)0) {
   }
 
-  virtual ~AccumuloProxy_getFollowing_args() throw() {}
-
+  virtual ~AccumuloProxy_getFollowing_args() throw();
   Key key;
   PartialKey::type part;
 
   _AccumuloProxy_getFollowing_args__isset __isset;
 
-  void __set_key(const Key& val) {
-    key = val;
-  }
+  void __set_key(const Key& val);
 
-  void __set_part(const PartialKey::type val) {
-    part = val;
-  }
+  void __set_part(const PartialKey::type val);
 
   bool operator == (const AccumuloProxy_getFollowing_args & rhs) const
   {
@@ -11244,8 +10886,7 @@
  public:
 
 
-  virtual ~AccumuloProxy_getFollowing_pargs() throw() {}
-
+  virtual ~AccumuloProxy_getFollowing_pargs() throw();
   const Key* key;
   const PartialKey::type* part;
 
@@ -11255,24 +10896,23 @@
 
 typedef struct _AccumuloProxy_getFollowing_result__isset {
   _AccumuloProxy_getFollowing_result__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_getFollowing_result__isset;
 
 class AccumuloProxy_getFollowing_result {
  public:
 
+  AccumuloProxy_getFollowing_result(const AccumuloProxy_getFollowing_result&);
+  AccumuloProxy_getFollowing_result& operator=(const AccumuloProxy_getFollowing_result&);
   AccumuloProxy_getFollowing_result() {
   }
 
-  virtual ~AccumuloProxy_getFollowing_result() throw() {}
-
+  virtual ~AccumuloProxy_getFollowing_result() throw();
   Key success;
 
   _AccumuloProxy_getFollowing_result__isset __isset;
 
-  void __set_success(const Key& val) {
-    success = val;
-  }
+  void __set_success(const Key& val);
 
   bool operator == (const AccumuloProxy_getFollowing_result & rhs) const
   {
@@ -11293,15 +10933,14 @@
 
 typedef struct _AccumuloProxy_getFollowing_presult__isset {
   _AccumuloProxy_getFollowing_presult__isset() : success(false) {}
-  bool success;
+  bool success :1;
 } _AccumuloProxy_getFollowing_presult__isset;
 
 class AccumuloProxy_getFollowing_presult {
  public:
 
 
-  virtual ~AccumuloProxy_getFollowing_presult() throw() {}
-
+  virtual ~AccumuloProxy_getFollowing_presult() throw();
   Key* success;
 
   _AccumuloProxy_getFollowing_presult__isset __isset;
@@ -11310,20 +10949,2657 @@
 
 };
 
+
+class AccumuloProxy_systemNamespace_args {
+ public:
+
+  AccumuloProxy_systemNamespace_args(const AccumuloProxy_systemNamespace_args&);
+  AccumuloProxy_systemNamespace_args& operator=(const AccumuloProxy_systemNamespace_args&);
+  AccumuloProxy_systemNamespace_args() {
+  }
+
+  virtual ~AccumuloProxy_systemNamespace_args() throw();
+
+  bool operator == (const AccumuloProxy_systemNamespace_args & /* rhs */) const
+  {
+    return true;
+  }
+  bool operator != (const AccumuloProxy_systemNamespace_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_systemNamespace_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_systemNamespace_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_systemNamespace_pargs() throw();
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_systemNamespace_result__isset {
+  _AccumuloProxy_systemNamespace_result__isset() : success(false) {}
+  bool success :1;
+} _AccumuloProxy_systemNamespace_result__isset;
+
+class AccumuloProxy_systemNamespace_result {
+ public:
+
+  AccumuloProxy_systemNamespace_result(const AccumuloProxy_systemNamespace_result&);
+  AccumuloProxy_systemNamespace_result& operator=(const AccumuloProxy_systemNamespace_result&);
+  AccumuloProxy_systemNamespace_result() : success() {
+  }
+
+  virtual ~AccumuloProxy_systemNamespace_result() throw();
+  std::string success;
+
+  _AccumuloProxy_systemNamespace_result__isset __isset;
+
+  void __set_success(const std::string& val);
+
+  bool operator == (const AccumuloProxy_systemNamespace_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_systemNamespace_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_systemNamespace_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_systemNamespace_presult__isset {
+  _AccumuloProxy_systemNamespace_presult__isset() : success(false) {}
+  bool success :1;
+} _AccumuloProxy_systemNamespace_presult__isset;
+
+class AccumuloProxy_systemNamespace_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_systemNamespace_presult() throw();
+  std::string* success;
+
+  _AccumuloProxy_systemNamespace_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+
+class AccumuloProxy_defaultNamespace_args {
+ public:
+
+  AccumuloProxy_defaultNamespace_args(const AccumuloProxy_defaultNamespace_args&);
+  AccumuloProxy_defaultNamespace_args& operator=(const AccumuloProxy_defaultNamespace_args&);
+  AccumuloProxy_defaultNamespace_args() {
+  }
+
+  virtual ~AccumuloProxy_defaultNamespace_args() throw();
+
+  bool operator == (const AccumuloProxy_defaultNamespace_args & /* rhs */) const
+  {
+    return true;
+  }
+  bool operator != (const AccumuloProxy_defaultNamespace_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_defaultNamespace_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_defaultNamespace_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_defaultNamespace_pargs() throw();
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_defaultNamespace_result__isset {
+  _AccumuloProxy_defaultNamespace_result__isset() : success(false) {}
+  bool success :1;
+} _AccumuloProxy_defaultNamespace_result__isset;
+
+class AccumuloProxy_defaultNamespace_result {
+ public:
+
+  AccumuloProxy_defaultNamespace_result(const AccumuloProxy_defaultNamespace_result&);
+  AccumuloProxy_defaultNamespace_result& operator=(const AccumuloProxy_defaultNamespace_result&);
+  AccumuloProxy_defaultNamespace_result() : success() {
+  }
+
+  virtual ~AccumuloProxy_defaultNamespace_result() throw();
+  std::string success;
+
+  _AccumuloProxy_defaultNamespace_result__isset __isset;
+
+  void __set_success(const std::string& val);
+
+  bool operator == (const AccumuloProxy_defaultNamespace_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_defaultNamespace_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_defaultNamespace_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_defaultNamespace_presult__isset {
+  _AccumuloProxy_defaultNamespace_presult__isset() : success(false) {}
+  bool success :1;
+} _AccumuloProxy_defaultNamespace_presult__isset;
+
+class AccumuloProxy_defaultNamespace_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_defaultNamespace_presult() throw();
+  std::string* success;
+
+  _AccumuloProxy_defaultNamespace_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_listNamespaces_args__isset {
+  _AccumuloProxy_listNamespaces_args__isset() : login(false) {}
+  bool login :1;
+} _AccumuloProxy_listNamespaces_args__isset;
+
+class AccumuloProxy_listNamespaces_args {
+ public:
+
+  AccumuloProxy_listNamespaces_args(const AccumuloProxy_listNamespaces_args&);
+  AccumuloProxy_listNamespaces_args& operator=(const AccumuloProxy_listNamespaces_args&);
+  AccumuloProxy_listNamespaces_args() : login() {
+  }
+
+  virtual ~AccumuloProxy_listNamespaces_args() throw();
+  std::string login;
+
+  _AccumuloProxy_listNamespaces_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  bool operator == (const AccumuloProxy_listNamespaces_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_listNamespaces_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_listNamespaces_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_listNamespaces_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_listNamespaces_pargs() throw();
+  const std::string* login;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_listNamespaces_result__isset {
+  _AccumuloProxy_listNamespaces_result__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_listNamespaces_result__isset;
+
+class AccumuloProxy_listNamespaces_result {
+ public:
+
+  AccumuloProxy_listNamespaces_result(const AccumuloProxy_listNamespaces_result&);
+  AccumuloProxy_listNamespaces_result& operator=(const AccumuloProxy_listNamespaces_result&);
+  AccumuloProxy_listNamespaces_result() {
+  }
+
+  virtual ~AccumuloProxy_listNamespaces_result() throw();
+  std::vector<std::string>  success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_listNamespaces_result__isset __isset;
+
+  void __set_success(const std::vector<std::string> & val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  bool operator == (const AccumuloProxy_listNamespaces_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_listNamespaces_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_listNamespaces_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_listNamespaces_presult__isset {
+  _AccumuloProxy_listNamespaces_presult__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_listNamespaces_presult__isset;
+
+class AccumuloProxy_listNamespaces_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_listNamespaces_presult() throw();
+  std::vector<std::string> * success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_listNamespaces_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_namespaceExists_args__isset {
+  _AccumuloProxy_namespaceExists_args__isset() : login(false), namespaceName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+} _AccumuloProxy_namespaceExists_args__isset;
+
+class AccumuloProxy_namespaceExists_args {
+ public:
+
+  AccumuloProxy_namespaceExists_args(const AccumuloProxy_namespaceExists_args&);
+  AccumuloProxy_namespaceExists_args& operator=(const AccumuloProxy_namespaceExists_args&);
+  AccumuloProxy_namespaceExists_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_namespaceExists_args() throw();
+  std::string login;
+  std::string namespaceName;
+
+  _AccumuloProxy_namespaceExists_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_namespaceExists_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_namespaceExists_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_namespaceExists_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_namespaceExists_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_namespaceExists_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_namespaceExists_result__isset {
+  _AccumuloProxy_namespaceExists_result__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_namespaceExists_result__isset;
+
+class AccumuloProxy_namespaceExists_result {
+ public:
+
+  AccumuloProxy_namespaceExists_result(const AccumuloProxy_namespaceExists_result&);
+  AccumuloProxy_namespaceExists_result& operator=(const AccumuloProxy_namespaceExists_result&);
+  AccumuloProxy_namespaceExists_result() : success(0) {
+  }
+
+  virtual ~AccumuloProxy_namespaceExists_result() throw();
+  bool success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_namespaceExists_result__isset __isset;
+
+  void __set_success(const bool val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  bool operator == (const AccumuloProxy_namespaceExists_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_namespaceExists_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_namespaceExists_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_namespaceExists_presult__isset {
+  _AccumuloProxy_namespaceExists_presult__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_namespaceExists_presult__isset;
+
+class AccumuloProxy_namespaceExists_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_namespaceExists_presult() throw();
+  bool* success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_namespaceExists_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_createNamespace_args__isset {
+  _AccumuloProxy_createNamespace_args__isset() : login(false), namespaceName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+} _AccumuloProxy_createNamespace_args__isset;
+
+class AccumuloProxy_createNamespace_args {
+ public:
+
+  AccumuloProxy_createNamespace_args(const AccumuloProxy_createNamespace_args&);
+  AccumuloProxy_createNamespace_args& operator=(const AccumuloProxy_createNamespace_args&);
+  AccumuloProxy_createNamespace_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_createNamespace_args() throw();
+  std::string login;
+  std::string namespaceName;
+
+  _AccumuloProxy_createNamespace_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_createNamespace_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_createNamespace_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_createNamespace_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_createNamespace_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_createNamespace_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_createNamespace_result__isset {
+  _AccumuloProxy_createNamespace_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_createNamespace_result__isset;
+
+class AccumuloProxy_createNamespace_result {
+ public:
+
+  AccumuloProxy_createNamespace_result(const AccumuloProxy_createNamespace_result&);
+  AccumuloProxy_createNamespace_result& operator=(const AccumuloProxy_createNamespace_result&);
+  AccumuloProxy_createNamespace_result() {
+  }
+
+  virtual ~AccumuloProxy_createNamespace_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceExistsException ouch3;
+
+  _AccumuloProxy_createNamespace_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceExistsException& val);
+
+  bool operator == (const AccumuloProxy_createNamespace_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_createNamespace_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_createNamespace_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_createNamespace_presult__isset {
+  _AccumuloProxy_createNamespace_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_createNamespace_presult__isset;
+
+class AccumuloProxy_createNamespace_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_createNamespace_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceExistsException ouch3;
+
+  _AccumuloProxy_createNamespace_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_deleteNamespace_args__isset {
+  _AccumuloProxy_deleteNamespace_args__isset() : login(false), namespaceName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+} _AccumuloProxy_deleteNamespace_args__isset;
+
+class AccumuloProxy_deleteNamespace_args {
+ public:
+
+  AccumuloProxy_deleteNamespace_args(const AccumuloProxy_deleteNamespace_args&);
+  AccumuloProxy_deleteNamespace_args& operator=(const AccumuloProxy_deleteNamespace_args&);
+  AccumuloProxy_deleteNamespace_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_deleteNamespace_args() throw();
+  std::string login;
+  std::string namespaceName;
+
+  _AccumuloProxy_deleteNamespace_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_deleteNamespace_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_deleteNamespace_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_deleteNamespace_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_deleteNamespace_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_deleteNamespace_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_deleteNamespace_result__isset {
+  _AccumuloProxy_deleteNamespace_result__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
+} _AccumuloProxy_deleteNamespace_result__isset;
+
+class AccumuloProxy_deleteNamespace_result {
+ public:
+
+  AccumuloProxy_deleteNamespace_result(const AccumuloProxy_deleteNamespace_result&);
+  AccumuloProxy_deleteNamespace_result& operator=(const AccumuloProxy_deleteNamespace_result&);
+  AccumuloProxy_deleteNamespace_result() {
+  }
+
+  virtual ~AccumuloProxy_deleteNamespace_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+  NamespaceNotEmptyException ouch4;
+
+  _AccumuloProxy_deleteNamespace_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  void __set_ouch4(const NamespaceNotEmptyException& val);
+
+  bool operator == (const AccumuloProxy_deleteNamespace_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    if (!(ouch4 == rhs.ouch4))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_deleteNamespace_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_deleteNamespace_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_deleteNamespace_presult__isset {
+  _AccumuloProxy_deleteNamespace_presult__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
+} _AccumuloProxy_deleteNamespace_presult__isset;
+
+class AccumuloProxy_deleteNamespace_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_deleteNamespace_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+  NamespaceNotEmptyException ouch4;
+
+  _AccumuloProxy_deleteNamespace_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_renameNamespace_args__isset {
+  _AccumuloProxy_renameNamespace_args__isset() : login(false), oldNamespaceName(false), newNamespaceName(false) {}
+  bool login :1;
+  bool oldNamespaceName :1;
+  bool newNamespaceName :1;
+} _AccumuloProxy_renameNamespace_args__isset;
+
+class AccumuloProxy_renameNamespace_args {
+ public:
+
+  AccumuloProxy_renameNamespace_args(const AccumuloProxy_renameNamespace_args&);
+  AccumuloProxy_renameNamespace_args& operator=(const AccumuloProxy_renameNamespace_args&);
+  AccumuloProxy_renameNamespace_args() : login(), oldNamespaceName(), newNamespaceName() {
+  }
+
+  virtual ~AccumuloProxy_renameNamespace_args() throw();
+  std::string login;
+  std::string oldNamespaceName;
+  std::string newNamespaceName;
+
+  _AccumuloProxy_renameNamespace_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_oldNamespaceName(const std::string& val);
+
+  void __set_newNamespaceName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_renameNamespace_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(oldNamespaceName == rhs.oldNamespaceName))
+      return false;
+    if (!(newNamespaceName == rhs.newNamespaceName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_renameNamespace_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_renameNamespace_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_renameNamespace_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_renameNamespace_pargs() throw();
+  const std::string* login;
+  const std::string* oldNamespaceName;
+  const std::string* newNamespaceName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_renameNamespace_result__isset {
+  _AccumuloProxy_renameNamespace_result__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
+} _AccumuloProxy_renameNamespace_result__isset;
+
+class AccumuloProxy_renameNamespace_result {
+ public:
+
+  AccumuloProxy_renameNamespace_result(const AccumuloProxy_renameNamespace_result&);
+  AccumuloProxy_renameNamespace_result& operator=(const AccumuloProxy_renameNamespace_result&);
+  AccumuloProxy_renameNamespace_result() {
+  }
+
+  virtual ~AccumuloProxy_renameNamespace_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+  NamespaceExistsException ouch4;
+
+  _AccumuloProxy_renameNamespace_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  void __set_ouch4(const NamespaceExistsException& val);
+
+  bool operator == (const AccumuloProxy_renameNamespace_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    if (!(ouch4 == rhs.ouch4))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_renameNamespace_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_renameNamespace_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_renameNamespace_presult__isset {
+  _AccumuloProxy_renameNamespace_presult__isset() : ouch1(false), ouch2(false), ouch3(false), ouch4(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+  bool ouch4 :1;
+} _AccumuloProxy_renameNamespace_presult__isset;
+
+class AccumuloProxy_renameNamespace_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_renameNamespace_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+  NamespaceExistsException ouch4;
+
+  _AccumuloProxy_renameNamespace_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_setNamespaceProperty_args__isset {
+  _AccumuloProxy_setNamespaceProperty_args__isset() : login(false), namespaceName(false), property(false), value(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool property :1;
+  bool value :1;
+} _AccumuloProxy_setNamespaceProperty_args__isset;
+
+class AccumuloProxy_setNamespaceProperty_args {
+ public:
+
+  AccumuloProxy_setNamespaceProperty_args(const AccumuloProxy_setNamespaceProperty_args&);
+  AccumuloProxy_setNamespaceProperty_args& operator=(const AccumuloProxy_setNamespaceProperty_args&);
+  AccumuloProxy_setNamespaceProperty_args() : login(), namespaceName(), property(), value() {
+  }
+
+  virtual ~AccumuloProxy_setNamespaceProperty_args() throw();
+  std::string login;
+  std::string namespaceName;
+  std::string property;
+  std::string value;
+
+  _AccumuloProxy_setNamespaceProperty_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_property(const std::string& val);
+
+  void __set_value(const std::string& val);
+
+  bool operator == (const AccumuloProxy_setNamespaceProperty_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(property == rhs.property))
+      return false;
+    if (!(value == rhs.value))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_setNamespaceProperty_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_setNamespaceProperty_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_setNamespaceProperty_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_setNamespaceProperty_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const std::string* property;
+  const std::string* value;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_setNamespaceProperty_result__isset {
+  _AccumuloProxy_setNamespaceProperty_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_setNamespaceProperty_result__isset;
+
+class AccumuloProxy_setNamespaceProperty_result {
+ public:
+
+  AccumuloProxy_setNamespaceProperty_result(const AccumuloProxy_setNamespaceProperty_result&);
+  AccumuloProxy_setNamespaceProperty_result& operator=(const AccumuloProxy_setNamespaceProperty_result&);
+  AccumuloProxy_setNamespaceProperty_result() {
+  }
+
+  virtual ~AccumuloProxy_setNamespaceProperty_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_setNamespaceProperty_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_setNamespaceProperty_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_setNamespaceProperty_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_setNamespaceProperty_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_setNamespaceProperty_presult__isset {
+  _AccumuloProxy_setNamespaceProperty_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_setNamespaceProperty_presult__isset;
+
+class AccumuloProxy_setNamespaceProperty_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_setNamespaceProperty_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_setNamespaceProperty_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceProperty_args__isset {
+  _AccumuloProxy_removeNamespaceProperty_args__isset() : login(false), namespaceName(false), property(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool property :1;
+} _AccumuloProxy_removeNamespaceProperty_args__isset;
+
+class AccumuloProxy_removeNamespaceProperty_args {
+ public:
+
+  AccumuloProxy_removeNamespaceProperty_args(const AccumuloProxy_removeNamespaceProperty_args&);
+  AccumuloProxy_removeNamespaceProperty_args& operator=(const AccumuloProxy_removeNamespaceProperty_args&);
+  AccumuloProxy_removeNamespaceProperty_args() : login(), namespaceName(), property() {
+  }
+
+  virtual ~AccumuloProxy_removeNamespaceProperty_args() throw();
+  std::string login;
+  std::string namespaceName;
+  std::string property;
+
+  _AccumuloProxy_removeNamespaceProperty_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_property(const std::string& val);
+
+  bool operator == (const AccumuloProxy_removeNamespaceProperty_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(property == rhs.property))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_removeNamespaceProperty_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_removeNamespaceProperty_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_removeNamespaceProperty_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_removeNamespaceProperty_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const std::string* property;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceProperty_result__isset {
+  _AccumuloProxy_removeNamespaceProperty_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_removeNamespaceProperty_result__isset;
+
+class AccumuloProxy_removeNamespaceProperty_result {
+ public:
+
+  AccumuloProxy_removeNamespaceProperty_result(const AccumuloProxy_removeNamespaceProperty_result&);
+  AccumuloProxy_removeNamespaceProperty_result& operator=(const AccumuloProxy_removeNamespaceProperty_result&);
+  AccumuloProxy_removeNamespaceProperty_result() {
+  }
+
+  virtual ~AccumuloProxy_removeNamespaceProperty_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_removeNamespaceProperty_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_removeNamespaceProperty_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_removeNamespaceProperty_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_removeNamespaceProperty_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceProperty_presult__isset {
+  _AccumuloProxy_removeNamespaceProperty_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_removeNamespaceProperty_presult__isset;
+
+class AccumuloProxy_removeNamespaceProperty_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_removeNamespaceProperty_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_removeNamespaceProperty_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_getNamespaceProperties_args__isset {
+  _AccumuloProxy_getNamespaceProperties_args__isset() : login(false), namespaceName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+} _AccumuloProxy_getNamespaceProperties_args__isset;
+
+class AccumuloProxy_getNamespaceProperties_args {
+ public:
+
+  AccumuloProxy_getNamespaceProperties_args(const AccumuloProxy_getNamespaceProperties_args&);
+  AccumuloProxy_getNamespaceProperties_args& operator=(const AccumuloProxy_getNamespaceProperties_args&);
+  AccumuloProxy_getNamespaceProperties_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_getNamespaceProperties_args() throw();
+  std::string login;
+  std::string namespaceName;
+
+  _AccumuloProxy_getNamespaceProperties_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_getNamespaceProperties_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_getNamespaceProperties_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_getNamespaceProperties_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_getNamespaceProperties_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_getNamespaceProperties_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_getNamespaceProperties_result__isset {
+  _AccumuloProxy_getNamespaceProperties_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_getNamespaceProperties_result__isset;
+
+class AccumuloProxy_getNamespaceProperties_result {
+ public:
+
+  AccumuloProxy_getNamespaceProperties_result(const AccumuloProxy_getNamespaceProperties_result&);
+  AccumuloProxy_getNamespaceProperties_result& operator=(const AccumuloProxy_getNamespaceProperties_result&);
+  AccumuloProxy_getNamespaceProperties_result() {
+  }
+
+  virtual ~AccumuloProxy_getNamespaceProperties_result() throw();
+  std::map<std::string, std::string>  success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_getNamespaceProperties_result__isset __isset;
+
+  void __set_success(const std::map<std::string, std::string> & val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_getNamespaceProperties_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_getNamespaceProperties_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_getNamespaceProperties_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_getNamespaceProperties_presult__isset {
+  _AccumuloProxy_getNamespaceProperties_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_getNamespaceProperties_presult__isset;
+
+class AccumuloProxy_getNamespaceProperties_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_getNamespaceProperties_presult() throw();
+  std::map<std::string, std::string> * success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_getNamespaceProperties_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_namespaceIdMap_args__isset {
+  _AccumuloProxy_namespaceIdMap_args__isset() : login(false) {}
+  bool login :1;
+} _AccumuloProxy_namespaceIdMap_args__isset;
+
+class AccumuloProxy_namespaceIdMap_args {
+ public:
+
+  AccumuloProxy_namespaceIdMap_args(const AccumuloProxy_namespaceIdMap_args&);
+  AccumuloProxy_namespaceIdMap_args& operator=(const AccumuloProxy_namespaceIdMap_args&);
+  AccumuloProxy_namespaceIdMap_args() : login() {
+  }
+
+  virtual ~AccumuloProxy_namespaceIdMap_args() throw();
+  std::string login;
+
+  _AccumuloProxy_namespaceIdMap_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  bool operator == (const AccumuloProxy_namespaceIdMap_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_namespaceIdMap_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_namespaceIdMap_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_namespaceIdMap_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_namespaceIdMap_pargs() throw();
+  const std::string* login;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_namespaceIdMap_result__isset {
+  _AccumuloProxy_namespaceIdMap_result__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_namespaceIdMap_result__isset;
+
+class AccumuloProxy_namespaceIdMap_result {
+ public:
+
+  AccumuloProxy_namespaceIdMap_result(const AccumuloProxy_namespaceIdMap_result&);
+  AccumuloProxy_namespaceIdMap_result& operator=(const AccumuloProxy_namespaceIdMap_result&);
+  AccumuloProxy_namespaceIdMap_result() {
+  }
+
+  virtual ~AccumuloProxy_namespaceIdMap_result() throw();
+  std::map<std::string, std::string>  success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_namespaceIdMap_result__isset __isset;
+
+  void __set_success(const std::map<std::string, std::string> & val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  bool operator == (const AccumuloProxy_namespaceIdMap_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_namespaceIdMap_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_namespaceIdMap_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_namespaceIdMap_presult__isset {
+  _AccumuloProxy_namespaceIdMap_presult__isset() : success(false), ouch1(false), ouch2(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+} _AccumuloProxy_namespaceIdMap_presult__isset;
+
+class AccumuloProxy_namespaceIdMap_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_namespaceIdMap_presult() throw();
+  std::map<std::string, std::string> * success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+
+  _AccumuloProxy_namespaceIdMap_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_attachNamespaceIterator_args__isset {
+  _AccumuloProxy_attachNamespaceIterator_args__isset() : login(false), namespaceName(false), setting(false), scopes(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool setting :1;
+  bool scopes :1;
+} _AccumuloProxy_attachNamespaceIterator_args__isset;
+
+class AccumuloProxy_attachNamespaceIterator_args {
+ public:
+
+  AccumuloProxy_attachNamespaceIterator_args(const AccumuloProxy_attachNamespaceIterator_args&);
+  AccumuloProxy_attachNamespaceIterator_args& operator=(const AccumuloProxy_attachNamespaceIterator_args&);
+  AccumuloProxy_attachNamespaceIterator_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_attachNamespaceIterator_args() throw();
+  std::string login;
+  std::string namespaceName;
+  IteratorSetting setting;
+  std::set<IteratorScope::type>  scopes;
+
+  _AccumuloProxy_attachNamespaceIterator_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_setting(const IteratorSetting& val);
+
+  void __set_scopes(const std::set<IteratorScope::type> & val);
+
+  bool operator == (const AccumuloProxy_attachNamespaceIterator_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(setting == rhs.setting))
+      return false;
+    if (!(scopes == rhs.scopes))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_attachNamespaceIterator_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_attachNamespaceIterator_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_attachNamespaceIterator_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_attachNamespaceIterator_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const IteratorSetting* setting;
+  const std::set<IteratorScope::type> * scopes;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_attachNamespaceIterator_result__isset {
+  _AccumuloProxy_attachNamespaceIterator_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_attachNamespaceIterator_result__isset;
+
+class AccumuloProxy_attachNamespaceIterator_result {
+ public:
+
+  AccumuloProxy_attachNamespaceIterator_result(const AccumuloProxy_attachNamespaceIterator_result&);
+  AccumuloProxy_attachNamespaceIterator_result& operator=(const AccumuloProxy_attachNamespaceIterator_result&);
+  AccumuloProxy_attachNamespaceIterator_result() {
+  }
+
+  virtual ~AccumuloProxy_attachNamespaceIterator_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_attachNamespaceIterator_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_attachNamespaceIterator_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_attachNamespaceIterator_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_attachNamespaceIterator_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_attachNamespaceIterator_presult__isset {
+  _AccumuloProxy_attachNamespaceIterator_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_attachNamespaceIterator_presult__isset;
+
+class AccumuloProxy_attachNamespaceIterator_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_attachNamespaceIterator_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_attachNamespaceIterator_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceIterator_args__isset {
+  _AccumuloProxy_removeNamespaceIterator_args__isset() : login(false), namespaceName(false), name(false), scopes(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool name :1;
+  bool scopes :1;
+} _AccumuloProxy_removeNamespaceIterator_args__isset;
+
+class AccumuloProxy_removeNamespaceIterator_args {
+ public:
+
+  AccumuloProxy_removeNamespaceIterator_args(const AccumuloProxy_removeNamespaceIterator_args&);
+  AccumuloProxy_removeNamespaceIterator_args& operator=(const AccumuloProxy_removeNamespaceIterator_args&);
+  AccumuloProxy_removeNamespaceIterator_args() : login(), namespaceName(), name() {
+  }
+
+  virtual ~AccumuloProxy_removeNamespaceIterator_args() throw();
+  std::string login;
+  std::string namespaceName;
+  std::string name;
+  std::set<IteratorScope::type>  scopes;
+
+  _AccumuloProxy_removeNamespaceIterator_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_name(const std::string& val);
+
+  void __set_scopes(const std::set<IteratorScope::type> & val);
+
+  bool operator == (const AccumuloProxy_removeNamespaceIterator_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(name == rhs.name))
+      return false;
+    if (!(scopes == rhs.scopes))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_removeNamespaceIterator_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_removeNamespaceIterator_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_removeNamespaceIterator_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_removeNamespaceIterator_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const std::string* name;
+  const std::set<IteratorScope::type> * scopes;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceIterator_result__isset {
+  _AccumuloProxy_removeNamespaceIterator_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_removeNamespaceIterator_result__isset;
+
+class AccumuloProxy_removeNamespaceIterator_result {
+ public:
+
+  AccumuloProxy_removeNamespaceIterator_result(const AccumuloProxy_removeNamespaceIterator_result&);
+  AccumuloProxy_removeNamespaceIterator_result& operator=(const AccumuloProxy_removeNamespaceIterator_result&);
+  AccumuloProxy_removeNamespaceIterator_result() {
+  }
+
+  virtual ~AccumuloProxy_removeNamespaceIterator_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_removeNamespaceIterator_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_removeNamespaceIterator_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_removeNamespaceIterator_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_removeNamespaceIterator_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceIterator_presult__isset {
+  _AccumuloProxy_removeNamespaceIterator_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_removeNamespaceIterator_presult__isset;
+
+class AccumuloProxy_removeNamespaceIterator_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_removeNamespaceIterator_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_removeNamespaceIterator_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_getNamespaceIteratorSetting_args__isset {
+  _AccumuloProxy_getNamespaceIteratorSetting_args__isset() : login(false), namespaceName(false), name(false), scope(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool name :1;
+  bool scope :1;
+} _AccumuloProxy_getNamespaceIteratorSetting_args__isset;
+
+class AccumuloProxy_getNamespaceIteratorSetting_args {
+ public:
+
+  AccumuloProxy_getNamespaceIteratorSetting_args(const AccumuloProxy_getNamespaceIteratorSetting_args&);
+  AccumuloProxy_getNamespaceIteratorSetting_args& operator=(const AccumuloProxy_getNamespaceIteratorSetting_args&);
+  AccumuloProxy_getNamespaceIteratorSetting_args() : login(), namespaceName(), name(), scope((IteratorScope::type)0) {
+  }
+
+  virtual ~AccumuloProxy_getNamespaceIteratorSetting_args() throw();
+  std::string login;
+  std::string namespaceName;
+  std::string name;
+  IteratorScope::type scope;
+
+  _AccumuloProxy_getNamespaceIteratorSetting_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_name(const std::string& val);
+
+  void __set_scope(const IteratorScope::type val);
+
+  bool operator == (const AccumuloProxy_getNamespaceIteratorSetting_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(name == rhs.name))
+      return false;
+    if (!(scope == rhs.scope))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_getNamespaceIteratorSetting_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_getNamespaceIteratorSetting_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_getNamespaceIteratorSetting_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_getNamespaceIteratorSetting_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const std::string* name;
+  const IteratorScope::type* scope;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_getNamespaceIteratorSetting_result__isset {
+  _AccumuloProxy_getNamespaceIteratorSetting_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_getNamespaceIteratorSetting_result__isset;
+
+class AccumuloProxy_getNamespaceIteratorSetting_result {
+ public:
+
+  AccumuloProxy_getNamespaceIteratorSetting_result(const AccumuloProxy_getNamespaceIteratorSetting_result&);
+  AccumuloProxy_getNamespaceIteratorSetting_result& operator=(const AccumuloProxy_getNamespaceIteratorSetting_result&);
+  AccumuloProxy_getNamespaceIteratorSetting_result() {
+  }
+
+  virtual ~AccumuloProxy_getNamespaceIteratorSetting_result() throw();
+  IteratorSetting success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_getNamespaceIteratorSetting_result__isset __isset;
+
+  void __set_success(const IteratorSetting& val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_getNamespaceIteratorSetting_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_getNamespaceIteratorSetting_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_getNamespaceIteratorSetting_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_getNamespaceIteratorSetting_presult__isset {
+  _AccumuloProxy_getNamespaceIteratorSetting_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_getNamespaceIteratorSetting_presult__isset;
+
+class AccumuloProxy_getNamespaceIteratorSetting_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_getNamespaceIteratorSetting_presult() throw();
+  IteratorSetting* success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_getNamespaceIteratorSetting_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_listNamespaceIterators_args__isset {
+  _AccumuloProxy_listNamespaceIterators_args__isset() : login(false), namespaceName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+} _AccumuloProxy_listNamespaceIterators_args__isset;
+
+class AccumuloProxy_listNamespaceIterators_args {
+ public:
+
+  AccumuloProxy_listNamespaceIterators_args(const AccumuloProxy_listNamespaceIterators_args&);
+  AccumuloProxy_listNamespaceIterators_args& operator=(const AccumuloProxy_listNamespaceIterators_args&);
+  AccumuloProxy_listNamespaceIterators_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_listNamespaceIterators_args() throw();
+  std::string login;
+  std::string namespaceName;
+
+  _AccumuloProxy_listNamespaceIterators_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_listNamespaceIterators_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_listNamespaceIterators_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_listNamespaceIterators_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_listNamespaceIterators_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_listNamespaceIterators_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_listNamespaceIterators_result__isset {
+  _AccumuloProxy_listNamespaceIterators_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_listNamespaceIterators_result__isset;
+
+class AccumuloProxy_listNamespaceIterators_result {
+ public:
+
+  AccumuloProxy_listNamespaceIterators_result(const AccumuloProxy_listNamespaceIterators_result&);
+  AccumuloProxy_listNamespaceIterators_result& operator=(const AccumuloProxy_listNamespaceIterators_result&);
+  AccumuloProxy_listNamespaceIterators_result() {
+  }
+
+  virtual ~AccumuloProxy_listNamespaceIterators_result() throw();
+  std::map<std::string, std::set<IteratorScope::type> >  success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_listNamespaceIterators_result__isset __isset;
+
+  void __set_success(const std::map<std::string, std::set<IteratorScope::type> > & val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_listNamespaceIterators_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_listNamespaceIterators_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_listNamespaceIterators_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_listNamespaceIterators_presult__isset {
+  _AccumuloProxy_listNamespaceIterators_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_listNamespaceIterators_presult__isset;
+
+class AccumuloProxy_listNamespaceIterators_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_listNamespaceIterators_presult() throw();
+  std::map<std::string, std::set<IteratorScope::type> > * success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_listNamespaceIterators_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_checkNamespaceIteratorConflicts_args__isset {
+  _AccumuloProxy_checkNamespaceIteratorConflicts_args__isset() : login(false), namespaceName(false), setting(false), scopes(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool setting :1;
+  bool scopes :1;
+} _AccumuloProxy_checkNamespaceIteratorConflicts_args__isset;
+
+class AccumuloProxy_checkNamespaceIteratorConflicts_args {
+ public:
+
+  AccumuloProxy_checkNamespaceIteratorConflicts_args(const AccumuloProxy_checkNamespaceIteratorConflicts_args&);
+  AccumuloProxy_checkNamespaceIteratorConflicts_args& operator=(const AccumuloProxy_checkNamespaceIteratorConflicts_args&);
+  AccumuloProxy_checkNamespaceIteratorConflicts_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_checkNamespaceIteratorConflicts_args() throw();
+  std::string login;
+  std::string namespaceName;
+  IteratorSetting setting;
+  std::set<IteratorScope::type>  scopes;
+
+  _AccumuloProxy_checkNamespaceIteratorConflicts_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_setting(const IteratorSetting& val);
+
+  void __set_scopes(const std::set<IteratorScope::type> & val);
+
+  bool operator == (const AccumuloProxy_checkNamespaceIteratorConflicts_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(setting == rhs.setting))
+      return false;
+    if (!(scopes == rhs.scopes))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_checkNamespaceIteratorConflicts_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_checkNamespaceIteratorConflicts_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_checkNamespaceIteratorConflicts_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_checkNamespaceIteratorConflicts_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const IteratorSetting* setting;
+  const std::set<IteratorScope::type> * scopes;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_checkNamespaceIteratorConflicts_result__isset {
+  _AccumuloProxy_checkNamespaceIteratorConflicts_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_checkNamespaceIteratorConflicts_result__isset;
+
+class AccumuloProxy_checkNamespaceIteratorConflicts_result {
+ public:
+
+  AccumuloProxy_checkNamespaceIteratorConflicts_result(const AccumuloProxy_checkNamespaceIteratorConflicts_result&);
+  AccumuloProxy_checkNamespaceIteratorConflicts_result& operator=(const AccumuloProxy_checkNamespaceIteratorConflicts_result&);
+  AccumuloProxy_checkNamespaceIteratorConflicts_result() {
+  }
+
+  virtual ~AccumuloProxy_checkNamespaceIteratorConflicts_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_checkNamespaceIteratorConflicts_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_checkNamespaceIteratorConflicts_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_checkNamespaceIteratorConflicts_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_checkNamespaceIteratorConflicts_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_checkNamespaceIteratorConflicts_presult__isset {
+  _AccumuloProxy_checkNamespaceIteratorConflicts_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_checkNamespaceIteratorConflicts_presult__isset;
+
+class AccumuloProxy_checkNamespaceIteratorConflicts_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_checkNamespaceIteratorConflicts_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_checkNamespaceIteratorConflicts_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_addNamespaceConstraint_args__isset {
+  _AccumuloProxy_addNamespaceConstraint_args__isset() : login(false), namespaceName(false), constraintClassName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool constraintClassName :1;
+} _AccumuloProxy_addNamespaceConstraint_args__isset;
+
+class AccumuloProxy_addNamespaceConstraint_args {
+ public:
+
+  AccumuloProxy_addNamespaceConstraint_args(const AccumuloProxy_addNamespaceConstraint_args&);
+  AccumuloProxy_addNamespaceConstraint_args& operator=(const AccumuloProxy_addNamespaceConstraint_args&);
+  AccumuloProxy_addNamespaceConstraint_args() : login(), namespaceName(), constraintClassName() {
+  }
+
+  virtual ~AccumuloProxy_addNamespaceConstraint_args() throw();
+  std::string login;
+  std::string namespaceName;
+  std::string constraintClassName;
+
+  _AccumuloProxy_addNamespaceConstraint_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_constraintClassName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_addNamespaceConstraint_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(constraintClassName == rhs.constraintClassName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_addNamespaceConstraint_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_addNamespaceConstraint_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_addNamespaceConstraint_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_addNamespaceConstraint_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const std::string* constraintClassName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_addNamespaceConstraint_result__isset {
+  _AccumuloProxy_addNamespaceConstraint_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_addNamespaceConstraint_result__isset;
+
+class AccumuloProxy_addNamespaceConstraint_result {
+ public:
+
+  AccumuloProxy_addNamespaceConstraint_result(const AccumuloProxy_addNamespaceConstraint_result&);
+  AccumuloProxy_addNamespaceConstraint_result& operator=(const AccumuloProxy_addNamespaceConstraint_result&);
+  AccumuloProxy_addNamespaceConstraint_result() : success(0) {
+  }
+
+  virtual ~AccumuloProxy_addNamespaceConstraint_result() throw();
+  int32_t success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_addNamespaceConstraint_result__isset __isset;
+
+  void __set_success(const int32_t val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_addNamespaceConstraint_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_addNamespaceConstraint_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_addNamespaceConstraint_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_addNamespaceConstraint_presult__isset {
+  _AccumuloProxy_addNamespaceConstraint_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_addNamespaceConstraint_presult__isset;
+
+class AccumuloProxy_addNamespaceConstraint_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_addNamespaceConstraint_presult() throw();
+  int32_t* success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_addNamespaceConstraint_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceConstraint_args__isset {
+  _AccumuloProxy_removeNamespaceConstraint_args__isset() : login(false), namespaceName(false), id(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool id :1;
+} _AccumuloProxy_removeNamespaceConstraint_args__isset;
+
+class AccumuloProxy_removeNamespaceConstraint_args {
+ public:
+
+  AccumuloProxy_removeNamespaceConstraint_args(const AccumuloProxy_removeNamespaceConstraint_args&);
+  AccumuloProxy_removeNamespaceConstraint_args& operator=(const AccumuloProxy_removeNamespaceConstraint_args&);
+  AccumuloProxy_removeNamespaceConstraint_args() : login(), namespaceName(), id(0) {
+  }
+
+  virtual ~AccumuloProxy_removeNamespaceConstraint_args() throw();
+  std::string login;
+  std::string namespaceName;
+  int32_t id;
+
+  _AccumuloProxy_removeNamespaceConstraint_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_id(const int32_t val);
+
+  bool operator == (const AccumuloProxy_removeNamespaceConstraint_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(id == rhs.id))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_removeNamespaceConstraint_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_removeNamespaceConstraint_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_removeNamespaceConstraint_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_removeNamespaceConstraint_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const int32_t* id;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceConstraint_result__isset {
+  _AccumuloProxy_removeNamespaceConstraint_result__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_removeNamespaceConstraint_result__isset;
+
+class AccumuloProxy_removeNamespaceConstraint_result {
+ public:
+
+  AccumuloProxy_removeNamespaceConstraint_result(const AccumuloProxy_removeNamespaceConstraint_result&);
+  AccumuloProxy_removeNamespaceConstraint_result& operator=(const AccumuloProxy_removeNamespaceConstraint_result&);
+  AccumuloProxy_removeNamespaceConstraint_result() {
+  }
+
+  virtual ~AccumuloProxy_removeNamespaceConstraint_result() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_removeNamespaceConstraint_result__isset __isset;
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_removeNamespaceConstraint_result & rhs) const
+  {
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_removeNamespaceConstraint_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_removeNamespaceConstraint_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_removeNamespaceConstraint_presult__isset {
+  _AccumuloProxy_removeNamespaceConstraint_presult__isset() : ouch1(false), ouch2(false), ouch3(false) {}
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_removeNamespaceConstraint_presult__isset;
+
+class AccumuloProxy_removeNamespaceConstraint_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_removeNamespaceConstraint_presult() throw();
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_removeNamespaceConstraint_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_listNamespaceConstraints_args__isset {
+  _AccumuloProxy_listNamespaceConstraints_args__isset() : login(false), namespaceName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+} _AccumuloProxy_listNamespaceConstraints_args__isset;
+
+class AccumuloProxy_listNamespaceConstraints_args {
+ public:
+
+  AccumuloProxy_listNamespaceConstraints_args(const AccumuloProxy_listNamespaceConstraints_args&);
+  AccumuloProxy_listNamespaceConstraints_args& operator=(const AccumuloProxy_listNamespaceConstraints_args&);
+  AccumuloProxy_listNamespaceConstraints_args() : login(), namespaceName() {
+  }
+
+  virtual ~AccumuloProxy_listNamespaceConstraints_args() throw();
+  std::string login;
+  std::string namespaceName;
+
+  _AccumuloProxy_listNamespaceConstraints_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_listNamespaceConstraints_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_listNamespaceConstraints_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_listNamespaceConstraints_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_listNamespaceConstraints_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_listNamespaceConstraints_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_listNamespaceConstraints_result__isset {
+  _AccumuloProxy_listNamespaceConstraints_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_listNamespaceConstraints_result__isset;
+
+class AccumuloProxy_listNamespaceConstraints_result {
+ public:
+
+  AccumuloProxy_listNamespaceConstraints_result(const AccumuloProxy_listNamespaceConstraints_result&);
+  AccumuloProxy_listNamespaceConstraints_result& operator=(const AccumuloProxy_listNamespaceConstraints_result&);
+  AccumuloProxy_listNamespaceConstraints_result() {
+  }
+
+  virtual ~AccumuloProxy_listNamespaceConstraints_result() throw();
+  std::map<std::string, int32_t>  success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_listNamespaceConstraints_result__isset __isset;
+
+  void __set_success(const std::map<std::string, int32_t> & val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_listNamespaceConstraints_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_listNamespaceConstraints_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_listNamespaceConstraints_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_listNamespaceConstraints_presult__isset {
+  _AccumuloProxy_listNamespaceConstraints_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_listNamespaceConstraints_presult__isset;
+
+class AccumuloProxy_listNamespaceConstraints_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_listNamespaceConstraints_presult() throw();
+  std::map<std::string, int32_t> * success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_listNamespaceConstraints_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
+typedef struct _AccumuloProxy_testNamespaceClassLoad_args__isset {
+  _AccumuloProxy_testNamespaceClassLoad_args__isset() : login(false), namespaceName(false), className(false), asTypeName(false) {}
+  bool login :1;
+  bool namespaceName :1;
+  bool className :1;
+  bool asTypeName :1;
+} _AccumuloProxy_testNamespaceClassLoad_args__isset;
+
+class AccumuloProxy_testNamespaceClassLoad_args {
+ public:
+
+  AccumuloProxy_testNamespaceClassLoad_args(const AccumuloProxy_testNamespaceClassLoad_args&);
+  AccumuloProxy_testNamespaceClassLoad_args& operator=(const AccumuloProxy_testNamespaceClassLoad_args&);
+  AccumuloProxy_testNamespaceClassLoad_args() : login(), namespaceName(), className(), asTypeName() {
+  }
+
+  virtual ~AccumuloProxy_testNamespaceClassLoad_args() throw();
+  std::string login;
+  std::string namespaceName;
+  std::string className;
+  std::string asTypeName;
+
+  _AccumuloProxy_testNamespaceClassLoad_args__isset __isset;
+
+  void __set_login(const std::string& val);
+
+  void __set_namespaceName(const std::string& val);
+
+  void __set_className(const std::string& val);
+
+  void __set_asTypeName(const std::string& val);
+
+  bool operator == (const AccumuloProxy_testNamespaceClassLoad_args & rhs) const
+  {
+    if (!(login == rhs.login))
+      return false;
+    if (!(namespaceName == rhs.namespaceName))
+      return false;
+    if (!(className == rhs.className))
+      return false;
+    if (!(asTypeName == rhs.asTypeName))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_testNamespaceClassLoad_args &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_testNamespaceClassLoad_args & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+
+class AccumuloProxy_testNamespaceClassLoad_pargs {
+ public:
+
+
+  virtual ~AccumuloProxy_testNamespaceClassLoad_pargs() throw();
+  const std::string* login;
+  const std::string* namespaceName;
+  const std::string* className;
+  const std::string* asTypeName;
+
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_testNamespaceClassLoad_result__isset {
+  _AccumuloProxy_testNamespaceClassLoad_result__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_testNamespaceClassLoad_result__isset;
+
+class AccumuloProxy_testNamespaceClassLoad_result {
+ public:
+
+  AccumuloProxy_testNamespaceClassLoad_result(const AccumuloProxy_testNamespaceClassLoad_result&);
+  AccumuloProxy_testNamespaceClassLoad_result& operator=(const AccumuloProxy_testNamespaceClassLoad_result&);
+  AccumuloProxy_testNamespaceClassLoad_result() : success(0) {
+  }
+
+  virtual ~AccumuloProxy_testNamespaceClassLoad_result() throw();
+  bool success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_testNamespaceClassLoad_result__isset __isset;
+
+  void __set_success(const bool val);
+
+  void __set_ouch1(const AccumuloException& val);
+
+  void __set_ouch2(const AccumuloSecurityException& val);
+
+  void __set_ouch3(const NamespaceNotFoundException& val);
+
+  bool operator == (const AccumuloProxy_testNamespaceClassLoad_result & rhs) const
+  {
+    if (!(success == rhs.success))
+      return false;
+    if (!(ouch1 == rhs.ouch1))
+      return false;
+    if (!(ouch2 == rhs.ouch2))
+      return false;
+    if (!(ouch3 == rhs.ouch3))
+      return false;
+    return true;
+  }
+  bool operator != (const AccumuloProxy_testNamespaceClassLoad_result &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const AccumuloProxy_testNamespaceClassLoad_result & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+};
+
+typedef struct _AccumuloProxy_testNamespaceClassLoad_presult__isset {
+  _AccumuloProxy_testNamespaceClassLoad_presult__isset() : success(false), ouch1(false), ouch2(false), ouch3(false) {}
+  bool success :1;
+  bool ouch1 :1;
+  bool ouch2 :1;
+  bool ouch3 :1;
+} _AccumuloProxy_testNamespaceClassLoad_presult__isset;
+
+class AccumuloProxy_testNamespaceClassLoad_presult {
+ public:
+
+
+  virtual ~AccumuloProxy_testNamespaceClassLoad_presult() throw();
+  bool* success;
+  AccumuloException ouch1;
+  AccumuloSecurityException ouch2;
+  NamespaceNotFoundException ouch3;
+
+  _AccumuloProxy_testNamespaceClassLoad_presult__isset __isset;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+
+};
+
 class AccumuloProxyClient : virtual public AccumuloProxyIf {
  public:
-  AccumuloProxyClient(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> prot) :
-    piprot_(prot),
-    poprot_(prot) {
-    iprot_ = prot.get();
-    oprot_ = prot.get();
+  AccumuloProxyClient(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> prot) {
+    setProtocol(prot);
   }
-  AccumuloProxyClient(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> iprot, boost::shared_ptr< ::apache::thrift::protocol::TProtocol> oprot) :
-    piprot_(iprot),
-    poprot_(oprot) {
+  AccumuloProxyClient(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> iprot, boost::shared_ptr< ::apache::thrift::protocol::TProtocol> oprot) {
+    setProtocol(iprot,oprot);
+  }
+ private:
+  void setProtocol(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> prot) {
+    setProtocol(prot,prot);
+  }
+  void setProtocol(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> iprot, boost::shared_ptr< ::apache::thrift::protocol::TProtocol> oprot) {
+    piprot_=iprot;
+    poprot_=oprot;
     iprot_ = iprot.get();
     oprot_ = oprot.get();
   }
+ public:
   boost::shared_ptr< ::apache::thrift::protocol::TProtocol> getInputProtocol() {
     return piprot_;
   }
@@ -11510,6 +13786,15 @@
   void revokeTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
   void send_revokeTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
   void recv_revokeTablePermission();
+  void grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  void send_grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  void recv_grantNamespacePermission();
+  bool hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  void send_hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  bool recv_hasNamespacePermission();
+  void revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  void send_revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  void recv_revokeNamespacePermission();
   void createBatchScanner(std::string& _return, const std::string& login, const std::string& tableName, const BatchScanOptions& options);
   void send_createBatchScanner(const std::string& login, const std::string& tableName, const BatchScanOptions& options);
   void recv_createBatchScanner(std::string& _return);
@@ -11560,6 +13845,66 @@
   void getFollowing(Key& _return, const Key& key, const PartialKey::type part);
   void send_getFollowing(const Key& key, const PartialKey::type part);
   void recv_getFollowing(Key& _return);
+  void systemNamespace(std::string& _return);
+  void send_systemNamespace();
+  void recv_systemNamespace(std::string& _return);
+  void defaultNamespace(std::string& _return);
+  void send_defaultNamespace();
+  void recv_defaultNamespace(std::string& _return);
+  void listNamespaces(std::vector<std::string> & _return, const std::string& login);
+  void send_listNamespaces(const std::string& login);
+  void recv_listNamespaces(std::vector<std::string> & _return);
+  bool namespaceExists(const std::string& login, const std::string& namespaceName);
+  void send_namespaceExists(const std::string& login, const std::string& namespaceName);
+  bool recv_namespaceExists();
+  void createNamespace(const std::string& login, const std::string& namespaceName);
+  void send_createNamespace(const std::string& login, const std::string& namespaceName);
+  void recv_createNamespace();
+  void deleteNamespace(const std::string& login, const std::string& namespaceName);
+  void send_deleteNamespace(const std::string& login, const std::string& namespaceName);
+  void recv_deleteNamespace();
+  void renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName);
+  void send_renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName);
+  void recv_renameNamespace();
+  void setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value);
+  void send_setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value);
+  void recv_setNamespaceProperty();
+  void removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property);
+  void send_removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property);
+  void recv_removeNamespaceProperty();
+  void getNamespaceProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& namespaceName);
+  void send_getNamespaceProperties(const std::string& login, const std::string& namespaceName);
+  void recv_getNamespaceProperties(std::map<std::string, std::string> & _return);
+  void namespaceIdMap(std::map<std::string, std::string> & _return, const std::string& login);
+  void send_namespaceIdMap(const std::string& login);
+  void recv_namespaceIdMap(std::map<std::string, std::string> & _return);
+  void attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void send_attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void recv_attachNamespaceIterator();
+  void removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes);
+  void send_removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes);
+  void recv_removeNamespaceIterator();
+  void getNamespaceIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope);
+  void send_getNamespaceIteratorSetting(const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope);
+  void recv_getNamespaceIteratorSetting(IteratorSetting& _return);
+  void listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& namespaceName);
+  void send_listNamespaceIterators(const std::string& login, const std::string& namespaceName);
+  void recv_listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return);
+  void checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void send_checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void recv_checkNamespaceIteratorConflicts();
+  int32_t addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName);
+  void send_addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName);
+  int32_t recv_addNamespaceConstraint();
+  void removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id);
+  void send_removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id);
+  void recv_removeNamespaceConstraint();
+  void listNamespaceConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& namespaceName);
+  void send_listNamespaceConstraints(const std::string& login, const std::string& namespaceName);
+  void recv_listNamespaceConstraints(std::map<std::string, int32_t> & _return);
+  bool testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName);
+  void send_testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName);
+  bool recv_testNamespaceClassLoad();
  protected:
   boost::shared_ptr< ::apache::thrift::protocol::TProtocol> piprot_;
   boost::shared_ptr< ::apache::thrift::protocol::TProtocol> poprot_;
@@ -11635,6 +13980,9 @@
   void process_listLocalUsers(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
   void process_revokeSystemPermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
   void process_revokeTablePermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_grantNamespacePermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_hasNamespacePermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_revokeNamespacePermission(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
   void process_createBatchScanner(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
   void process_createScanner(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
   void process_hasNext(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
@@ -11652,6 +14000,26 @@
   void process_closeConditionalWriter(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
   void process_getRowRange(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
   void process_getFollowing(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_systemNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_defaultNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_listNamespaces(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_namespaceExists(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_createNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_deleteNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_renameNamespace(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_setNamespaceProperty(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_removeNamespaceProperty(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_getNamespaceProperties(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_namespaceIdMap(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_attachNamespaceIterator(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_removeNamespaceIterator(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_getNamespaceIteratorSetting(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_listNamespaceIterators(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_checkNamespaceIteratorConflicts(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_addNamespaceConstraint(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_removeNamespaceConstraint(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_listNamespaceConstraints(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
+  void process_testNamespaceClassLoad(int32_t seqid, ::apache::thrift::protocol::TProtocol* iprot, ::apache::thrift::protocol::TProtocol* oprot, void* callContext);
  public:
   AccumuloProxyProcessor(boost::shared_ptr<AccumuloProxyIf> iface) :
     iface_(iface) {
@@ -11715,6 +14083,9 @@
     processMap_["listLocalUsers"] = &AccumuloProxyProcessor::process_listLocalUsers;
     processMap_["revokeSystemPermission"] = &AccumuloProxyProcessor::process_revokeSystemPermission;
     processMap_["revokeTablePermission"] = &AccumuloProxyProcessor::process_revokeTablePermission;
+    processMap_["grantNamespacePermission"] = &AccumuloProxyProcessor::process_grantNamespacePermission;
+    processMap_["hasNamespacePermission"] = &AccumuloProxyProcessor::process_hasNamespacePermission;
+    processMap_["revokeNamespacePermission"] = &AccumuloProxyProcessor::process_revokeNamespacePermission;
     processMap_["createBatchScanner"] = &AccumuloProxyProcessor::process_createBatchScanner;
     processMap_["createScanner"] = &AccumuloProxyProcessor::process_createScanner;
     processMap_["hasNext"] = &AccumuloProxyProcessor::process_hasNext;
@@ -11732,6 +14103,26 @@
     processMap_["closeConditionalWriter"] = &AccumuloProxyProcessor::process_closeConditionalWriter;
     processMap_["getRowRange"] = &AccumuloProxyProcessor::process_getRowRange;
     processMap_["getFollowing"] = &AccumuloProxyProcessor::process_getFollowing;
+    processMap_["systemNamespace"] = &AccumuloProxyProcessor::process_systemNamespace;
+    processMap_["defaultNamespace"] = &AccumuloProxyProcessor::process_defaultNamespace;
+    processMap_["listNamespaces"] = &AccumuloProxyProcessor::process_listNamespaces;
+    processMap_["namespaceExists"] = &AccumuloProxyProcessor::process_namespaceExists;
+    processMap_["createNamespace"] = &AccumuloProxyProcessor::process_createNamespace;
+    processMap_["deleteNamespace"] = &AccumuloProxyProcessor::process_deleteNamespace;
+    processMap_["renameNamespace"] = &AccumuloProxyProcessor::process_renameNamespace;
+    processMap_["setNamespaceProperty"] = &AccumuloProxyProcessor::process_setNamespaceProperty;
+    processMap_["removeNamespaceProperty"] = &AccumuloProxyProcessor::process_removeNamespaceProperty;
+    processMap_["getNamespaceProperties"] = &AccumuloProxyProcessor::process_getNamespaceProperties;
+    processMap_["namespaceIdMap"] = &AccumuloProxyProcessor::process_namespaceIdMap;
+    processMap_["attachNamespaceIterator"] = &AccumuloProxyProcessor::process_attachNamespaceIterator;
+    processMap_["removeNamespaceIterator"] = &AccumuloProxyProcessor::process_removeNamespaceIterator;
+    processMap_["getNamespaceIteratorSetting"] = &AccumuloProxyProcessor::process_getNamespaceIteratorSetting;
+    processMap_["listNamespaceIterators"] = &AccumuloProxyProcessor::process_listNamespaceIterators;
+    processMap_["checkNamespaceIteratorConflicts"] = &AccumuloProxyProcessor::process_checkNamespaceIteratorConflicts;
+    processMap_["addNamespaceConstraint"] = &AccumuloProxyProcessor::process_addNamespaceConstraint;
+    processMap_["removeNamespaceConstraint"] = &AccumuloProxyProcessor::process_removeNamespaceConstraint;
+    processMap_["listNamespaceConstraints"] = &AccumuloProxyProcessor::process_listNamespaceConstraints;
+    processMap_["testNamespaceClassLoad"] = &AccumuloProxyProcessor::process_testNamespaceClassLoad;
   }
 
   virtual ~AccumuloProxyProcessor() {}
@@ -12319,6 +14710,33 @@
     ifaces_[i]->revokeTablePermission(login, user, table, perm);
   }
 
+  void grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->grantNamespacePermission(login, user, namespaceName, perm);
+    }
+    ifaces_[i]->grantNamespacePermission(login, user, namespaceName, perm);
+  }
+
+  bool hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->hasNamespacePermission(login, user, namespaceName, perm);
+    }
+    return ifaces_[i]->hasNamespacePermission(login, user, namespaceName, perm);
+  }
+
+  void revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->revokeNamespacePermission(login, user, namespaceName, perm);
+    }
+    ifaces_[i]->revokeNamespacePermission(login, user, namespaceName, perm);
+  }
+
   void createBatchScanner(std::string& _return, const std::string& login, const std::string& tableName, const BatchScanOptions& options) {
     size_t sz = ifaces_.size();
     size_t i = 0;
@@ -12481,8 +14899,535 @@
     return;
   }
 
+  void systemNamespace(std::string& _return) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->systemNamespace(_return);
+    }
+    ifaces_[i]->systemNamespace(_return);
+    return;
+  }
+
+  void defaultNamespace(std::string& _return) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->defaultNamespace(_return);
+    }
+    ifaces_[i]->defaultNamespace(_return);
+    return;
+  }
+
+  void listNamespaces(std::vector<std::string> & _return, const std::string& login) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->listNamespaces(_return, login);
+    }
+    ifaces_[i]->listNamespaces(_return, login);
+    return;
+  }
+
+  bool namespaceExists(const std::string& login, const std::string& namespaceName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->namespaceExists(login, namespaceName);
+    }
+    return ifaces_[i]->namespaceExists(login, namespaceName);
+  }
+
+  void createNamespace(const std::string& login, const std::string& namespaceName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->createNamespace(login, namespaceName);
+    }
+    ifaces_[i]->createNamespace(login, namespaceName);
+  }
+
+  void deleteNamespace(const std::string& login, const std::string& namespaceName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->deleteNamespace(login, namespaceName);
+    }
+    ifaces_[i]->deleteNamespace(login, namespaceName);
+  }
+
+  void renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->renameNamespace(login, oldNamespaceName, newNamespaceName);
+    }
+    ifaces_[i]->renameNamespace(login, oldNamespaceName, newNamespaceName);
+  }
+
+  void setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->setNamespaceProperty(login, namespaceName, property, value);
+    }
+    ifaces_[i]->setNamespaceProperty(login, namespaceName, property, value);
+  }
+
+  void removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->removeNamespaceProperty(login, namespaceName, property);
+    }
+    ifaces_[i]->removeNamespaceProperty(login, namespaceName, property);
+  }
+
+  void getNamespaceProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& namespaceName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->getNamespaceProperties(_return, login, namespaceName);
+    }
+    ifaces_[i]->getNamespaceProperties(_return, login, namespaceName);
+    return;
+  }
+
+  void namespaceIdMap(std::map<std::string, std::string> & _return, const std::string& login) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->namespaceIdMap(_return, login);
+    }
+    ifaces_[i]->namespaceIdMap(_return, login);
+    return;
+  }
+
+  void attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->attachNamespaceIterator(login, namespaceName, setting, scopes);
+    }
+    ifaces_[i]->attachNamespaceIterator(login, namespaceName, setting, scopes);
+  }
+
+  void removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->removeNamespaceIterator(login, namespaceName, name, scopes);
+    }
+    ifaces_[i]->removeNamespaceIterator(login, namespaceName, name, scopes);
+  }
+
+  void getNamespaceIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->getNamespaceIteratorSetting(_return, login, namespaceName, name, scope);
+    }
+    ifaces_[i]->getNamespaceIteratorSetting(_return, login, namespaceName, name, scope);
+    return;
+  }
+
+  void listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& namespaceName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->listNamespaceIterators(_return, login, namespaceName);
+    }
+    ifaces_[i]->listNamespaceIterators(_return, login, namespaceName);
+    return;
+  }
+
+  void checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes);
+    }
+    ifaces_[i]->checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes);
+  }
+
+  int32_t addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->addNamespaceConstraint(login, namespaceName, constraintClassName);
+    }
+    return ifaces_[i]->addNamespaceConstraint(login, namespaceName, constraintClassName);
+  }
+
+  void removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->removeNamespaceConstraint(login, namespaceName, id);
+    }
+    ifaces_[i]->removeNamespaceConstraint(login, namespaceName, id);
+  }
+
+  void listNamespaceConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& namespaceName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->listNamespaceConstraints(_return, login, namespaceName);
+    }
+    ifaces_[i]->listNamespaceConstraints(_return, login, namespaceName);
+    return;
+  }
+
+  bool testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName) {
+    size_t sz = ifaces_.size();
+    size_t i = 0;
+    for (; i < (sz - 1); ++i) {
+      ifaces_[i]->testNamespaceClassLoad(login, namespaceName, className, asTypeName);
+    }
+    return ifaces_[i]->testNamespaceClassLoad(login, namespaceName, className, asTypeName);
+  }
+
 };
 
+// The 'concurrent' client is a thread-safe client that correctly handles
+// out-of-order responses.  It is slower than the regular client, so it should
+// only be used when you need to share a connection among multiple threads.
+class AccumuloProxyConcurrentClient : virtual public AccumuloProxyIf {
+ public:
+  AccumuloProxyConcurrentClient(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> prot) {
+    setProtocol(prot);
+  }
+  AccumuloProxyConcurrentClient(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> iprot, boost::shared_ptr< ::apache::thrift::protocol::TProtocol> oprot) {
+    setProtocol(iprot,oprot);
+  }
+ private:
+  void setProtocol(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> prot) {
+    setProtocol(prot,prot);
+  }
+  void setProtocol(boost::shared_ptr< ::apache::thrift::protocol::TProtocol> iprot, boost::shared_ptr< ::apache::thrift::protocol::TProtocol> oprot) {
+    piprot_=iprot;
+    poprot_=oprot;
+    iprot_ = iprot.get();
+    oprot_ = oprot.get();
+  }
+ public:
+  boost::shared_ptr< ::apache::thrift::protocol::TProtocol> getInputProtocol() {
+    return piprot_;
+  }
+  boost::shared_ptr< ::apache::thrift::protocol::TProtocol> getOutputProtocol() {
+    return poprot_;
+  }
+  void login(std::string& _return, const std::string& principal, const std::map<std::string, std::string> & loginProperties);
+  int32_t send_login(const std::string& principal, const std::map<std::string, std::string> & loginProperties);
+  void recv_login(std::string& _return, const int32_t seqid);
+  int32_t addConstraint(const std::string& login, const std::string& tableName, const std::string& constraintClassName);
+  int32_t send_addConstraint(const std::string& login, const std::string& tableName, const std::string& constraintClassName);
+  int32_t recv_addConstraint(const int32_t seqid);
+  void addSplits(const std::string& login, const std::string& tableName, const std::set<std::string> & splits);
+  int32_t send_addSplits(const std::string& login, const std::string& tableName, const std::set<std::string> & splits);
+  void recv_addSplits(const int32_t seqid);
+  void attachIterator(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  int32_t send_attachIterator(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void recv_attachIterator(const int32_t seqid);
+  void checkIteratorConflicts(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  int32_t send_checkIteratorConflicts(const std::string& login, const std::string& tableName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void recv_checkIteratorConflicts(const int32_t seqid);
+  void clearLocatorCache(const std::string& login, const std::string& tableName);
+  int32_t send_clearLocatorCache(const std::string& login, const std::string& tableName);
+  void recv_clearLocatorCache(const int32_t seqid);
+  void cloneTable(const std::string& login, const std::string& tableName, const std::string& newTableName, const bool flush, const std::map<std::string, std::string> & propertiesToSet, const std::set<std::string> & propertiesToExclude);
+  int32_t send_cloneTable(const std::string& login, const std::string& tableName, const std::string& newTableName, const bool flush, const std::map<std::string, std::string> & propertiesToSet, const std::set<std::string> & propertiesToExclude);
+  void recv_cloneTable(const int32_t seqid);
+  void compactTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const std::vector<IteratorSetting> & iterators, const bool flush, const bool wait, const CompactionStrategyConfig& compactionStrategy);
+  int32_t send_compactTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const std::vector<IteratorSetting> & iterators, const bool flush, const bool wait, const CompactionStrategyConfig& compactionStrategy);
+  void recv_compactTable(const int32_t seqid);
+  void cancelCompaction(const std::string& login, const std::string& tableName);
+  int32_t send_cancelCompaction(const std::string& login, const std::string& tableName);
+  void recv_cancelCompaction(const int32_t seqid);
+  void createTable(const std::string& login, const std::string& tableName, const bool versioningIter, const TimeType::type type);
+  int32_t send_createTable(const std::string& login, const std::string& tableName, const bool versioningIter, const TimeType::type type);
+  void recv_createTable(const int32_t seqid);
+  void deleteTable(const std::string& login, const std::string& tableName);
+  int32_t send_deleteTable(const std::string& login, const std::string& tableName);
+  void recv_deleteTable(const int32_t seqid);
+  void deleteRows(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow);
+  int32_t send_deleteRows(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow);
+  void recv_deleteRows(const int32_t seqid);
+  void exportTable(const std::string& login, const std::string& tableName, const std::string& exportDir);
+  int32_t send_exportTable(const std::string& login, const std::string& tableName, const std::string& exportDir);
+  void recv_exportTable(const int32_t seqid);
+  void flushTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const bool wait);
+  int32_t send_flushTable(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow, const bool wait);
+  void recv_flushTable(const int32_t seqid);
+  void getDiskUsage(std::vector<DiskUsage> & _return, const std::string& login, const std::set<std::string> & tables);
+  int32_t send_getDiskUsage(const std::string& login, const std::set<std::string> & tables);
+  void recv_getDiskUsage(std::vector<DiskUsage> & _return, const int32_t seqid);
+  void getLocalityGroups(std::map<std::string, std::set<std::string> > & _return, const std::string& login, const std::string& tableName);
+  int32_t send_getLocalityGroups(const std::string& login, const std::string& tableName);
+  void recv_getLocalityGroups(std::map<std::string, std::set<std::string> > & _return, const int32_t seqid);
+  void getIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& tableName, const std::string& iteratorName, const IteratorScope::type scope);
+  int32_t send_getIteratorSetting(const std::string& login, const std::string& tableName, const std::string& iteratorName, const IteratorScope::type scope);
+  void recv_getIteratorSetting(IteratorSetting& _return, const int32_t seqid);
+  void getMaxRow(std::string& _return, const std::string& login, const std::string& tableName, const std::set<std::string> & auths, const std::string& startRow, const bool startInclusive, const std::string& endRow, const bool endInclusive);
+  int32_t send_getMaxRow(const std::string& login, const std::string& tableName, const std::set<std::string> & auths, const std::string& startRow, const bool startInclusive, const std::string& endRow, const bool endInclusive);
+  void recv_getMaxRow(std::string& _return, const int32_t seqid);
+  void getTableProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& tableName);
+  int32_t send_getTableProperties(const std::string& login, const std::string& tableName);
+  void recv_getTableProperties(std::map<std::string, std::string> & _return, const int32_t seqid);
+  void importDirectory(const std::string& login, const std::string& tableName, const std::string& importDir, const std::string& failureDir, const bool setTime);
+  int32_t send_importDirectory(const std::string& login, const std::string& tableName, const std::string& importDir, const std::string& failureDir, const bool setTime);
+  void recv_importDirectory(const int32_t seqid);
+  void importTable(const std::string& login, const std::string& tableName, const std::string& importDir);
+  int32_t send_importTable(const std::string& login, const std::string& tableName, const std::string& importDir);
+  void recv_importTable(const int32_t seqid);
+  void listSplits(std::vector<std::string> & _return, const std::string& login, const std::string& tableName, const int32_t maxSplits);
+  int32_t send_listSplits(const std::string& login, const std::string& tableName, const int32_t maxSplits);
+  void recv_listSplits(std::vector<std::string> & _return, const int32_t seqid);
+  void listTables(std::set<std::string> & _return, const std::string& login);
+  int32_t send_listTables(const std::string& login);
+  void recv_listTables(std::set<std::string> & _return, const int32_t seqid);
+  void listIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& tableName);
+  int32_t send_listIterators(const std::string& login, const std::string& tableName);
+  void recv_listIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const int32_t seqid);
+  void listConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& tableName);
+  int32_t send_listConstraints(const std::string& login, const std::string& tableName);
+  void recv_listConstraints(std::map<std::string, int32_t> & _return, const int32_t seqid);
+  void mergeTablets(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow);
+  int32_t send_mergeTablets(const std::string& login, const std::string& tableName, const std::string& startRow, const std::string& endRow);
+  void recv_mergeTablets(const int32_t seqid);
+  void offlineTable(const std::string& login, const std::string& tableName, const bool wait);
+  int32_t send_offlineTable(const std::string& login, const std::string& tableName, const bool wait);
+  void recv_offlineTable(const int32_t seqid);
+  void onlineTable(const std::string& login, const std::string& tableName, const bool wait);
+  int32_t send_onlineTable(const std::string& login, const std::string& tableName, const bool wait);
+  void recv_onlineTable(const int32_t seqid);
+  void removeConstraint(const std::string& login, const std::string& tableName, const int32_t constraint);
+  int32_t send_removeConstraint(const std::string& login, const std::string& tableName, const int32_t constraint);
+  void recv_removeConstraint(const int32_t seqid);
+  void removeIterator(const std::string& login, const std::string& tableName, const std::string& iterName, const std::set<IteratorScope::type> & scopes);
+  int32_t send_removeIterator(const std::string& login, const std::string& tableName, const std::string& iterName, const std::set<IteratorScope::type> & scopes);
+  void recv_removeIterator(const int32_t seqid);
+  void removeTableProperty(const std::string& login, const std::string& tableName, const std::string& property);
+  int32_t send_removeTableProperty(const std::string& login, const std::string& tableName, const std::string& property);
+  void recv_removeTableProperty(const int32_t seqid);
+  void renameTable(const std::string& login, const std::string& oldTableName, const std::string& newTableName);
+  int32_t send_renameTable(const std::string& login, const std::string& oldTableName, const std::string& newTableName);
+  void recv_renameTable(const int32_t seqid);
+  void setLocalityGroups(const std::string& login, const std::string& tableName, const std::map<std::string, std::set<std::string> > & groups);
+  int32_t send_setLocalityGroups(const std::string& login, const std::string& tableName, const std::map<std::string, std::set<std::string> > & groups);
+  void recv_setLocalityGroups(const int32_t seqid);
+  void setTableProperty(const std::string& login, const std::string& tableName, const std::string& property, const std::string& value);
+  int32_t send_setTableProperty(const std::string& login, const std::string& tableName, const std::string& property, const std::string& value);
+  void recv_setTableProperty(const int32_t seqid);
+  void splitRangeByTablets(std::set<Range> & _return, const std::string& login, const std::string& tableName, const Range& range, const int32_t maxSplits);
+  int32_t send_splitRangeByTablets(const std::string& login, const std::string& tableName, const Range& range, const int32_t maxSplits);
+  void recv_splitRangeByTablets(std::set<Range> & _return, const int32_t seqid);
+  bool tableExists(const std::string& login, const std::string& tableName);
+  int32_t send_tableExists(const std::string& login, const std::string& tableName);
+  bool recv_tableExists(const int32_t seqid);
+  void tableIdMap(std::map<std::string, std::string> & _return, const std::string& login);
+  int32_t send_tableIdMap(const std::string& login);
+  void recv_tableIdMap(std::map<std::string, std::string> & _return, const int32_t seqid);
+  bool testTableClassLoad(const std::string& login, const std::string& tableName, const std::string& className, const std::string& asTypeName);
+  int32_t send_testTableClassLoad(const std::string& login, const std::string& tableName, const std::string& className, const std::string& asTypeName);
+  bool recv_testTableClassLoad(const int32_t seqid);
+  void pingTabletServer(const std::string& login, const std::string& tserver);
+  int32_t send_pingTabletServer(const std::string& login, const std::string& tserver);
+  void recv_pingTabletServer(const int32_t seqid);
+  void getActiveScans(std::vector<ActiveScan> & _return, const std::string& login, const std::string& tserver);
+  int32_t send_getActiveScans(const std::string& login, const std::string& tserver);
+  void recv_getActiveScans(std::vector<ActiveScan> & _return, const int32_t seqid);
+  void getActiveCompactions(std::vector<ActiveCompaction> & _return, const std::string& login, const std::string& tserver);
+  int32_t send_getActiveCompactions(const std::string& login, const std::string& tserver);
+  void recv_getActiveCompactions(std::vector<ActiveCompaction> & _return, const int32_t seqid);
+  void getSiteConfiguration(std::map<std::string, std::string> & _return, const std::string& login);
+  int32_t send_getSiteConfiguration(const std::string& login);
+  void recv_getSiteConfiguration(std::map<std::string, std::string> & _return, const int32_t seqid);
+  void getSystemConfiguration(std::map<std::string, std::string> & _return, const std::string& login);
+  int32_t send_getSystemConfiguration(const std::string& login);
+  void recv_getSystemConfiguration(std::map<std::string, std::string> & _return, const int32_t seqid);
+  void getTabletServers(std::vector<std::string> & _return, const std::string& login);
+  int32_t send_getTabletServers(const std::string& login);
+  void recv_getTabletServers(std::vector<std::string> & _return, const int32_t seqid);
+  void removeProperty(const std::string& login, const std::string& property);
+  int32_t send_removeProperty(const std::string& login, const std::string& property);
+  void recv_removeProperty(const int32_t seqid);
+  void setProperty(const std::string& login, const std::string& property, const std::string& value);
+  int32_t send_setProperty(const std::string& login, const std::string& property, const std::string& value);
+  void recv_setProperty(const int32_t seqid);
+  bool testClassLoad(const std::string& login, const std::string& className, const std::string& asTypeName);
+  int32_t send_testClassLoad(const std::string& login, const std::string& className, const std::string& asTypeName);
+  bool recv_testClassLoad(const int32_t seqid);
+  bool authenticateUser(const std::string& login, const std::string& user, const std::map<std::string, std::string> & properties);
+  int32_t send_authenticateUser(const std::string& login, const std::string& user, const std::map<std::string, std::string> & properties);
+  bool recv_authenticateUser(const int32_t seqid);
+  void changeUserAuthorizations(const std::string& login, const std::string& user, const std::set<std::string> & authorizations);
+  int32_t send_changeUserAuthorizations(const std::string& login, const std::string& user, const std::set<std::string> & authorizations);
+  void recv_changeUserAuthorizations(const int32_t seqid);
+  void changeLocalUserPassword(const std::string& login, const std::string& user, const std::string& password);
+  int32_t send_changeLocalUserPassword(const std::string& login, const std::string& user, const std::string& password);
+  void recv_changeLocalUserPassword(const int32_t seqid);
+  void createLocalUser(const std::string& login, const std::string& user, const std::string& password);
+  int32_t send_createLocalUser(const std::string& login, const std::string& user, const std::string& password);
+  void recv_createLocalUser(const int32_t seqid);
+  void dropLocalUser(const std::string& login, const std::string& user);
+  int32_t send_dropLocalUser(const std::string& login, const std::string& user);
+  void recv_dropLocalUser(const int32_t seqid);
+  void getUserAuthorizations(std::vector<std::string> & _return, const std::string& login, const std::string& user);
+  int32_t send_getUserAuthorizations(const std::string& login, const std::string& user);
+  void recv_getUserAuthorizations(std::vector<std::string> & _return, const int32_t seqid);
+  void grantSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm);
+  int32_t send_grantSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm);
+  void recv_grantSystemPermission(const int32_t seqid);
+  void grantTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
+  int32_t send_grantTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
+  void recv_grantTablePermission(const int32_t seqid);
+  bool hasSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm);
+  int32_t send_hasSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm);
+  bool recv_hasSystemPermission(const int32_t seqid);
+  bool hasTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
+  int32_t send_hasTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
+  bool recv_hasTablePermission(const int32_t seqid);
+  void listLocalUsers(std::set<std::string> & _return, const std::string& login);
+  int32_t send_listLocalUsers(const std::string& login);
+  void recv_listLocalUsers(std::set<std::string> & _return, const int32_t seqid);
+  void revokeSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm);
+  int32_t send_revokeSystemPermission(const std::string& login, const std::string& user, const SystemPermission::type perm);
+  void recv_revokeSystemPermission(const int32_t seqid);
+  void revokeTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
+  int32_t send_revokeTablePermission(const std::string& login, const std::string& user, const std::string& table, const TablePermission::type perm);
+  void recv_revokeTablePermission(const int32_t seqid);
+  void grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  int32_t send_grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  void recv_grantNamespacePermission(const int32_t seqid);
+  bool hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  int32_t send_hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  bool recv_hasNamespacePermission(const int32_t seqid);
+  void revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  int32_t send_revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm);
+  void recv_revokeNamespacePermission(const int32_t seqid);
+  void createBatchScanner(std::string& _return, const std::string& login, const std::string& tableName, const BatchScanOptions& options);
+  int32_t send_createBatchScanner(const std::string& login, const std::string& tableName, const BatchScanOptions& options);
+  void recv_createBatchScanner(std::string& _return, const int32_t seqid);
+  void createScanner(std::string& _return, const std::string& login, const std::string& tableName, const ScanOptions& options);
+  int32_t send_createScanner(const std::string& login, const std::string& tableName, const ScanOptions& options);
+  void recv_createScanner(std::string& _return, const int32_t seqid);
+  bool hasNext(const std::string& scanner);
+  int32_t send_hasNext(const std::string& scanner);
+  bool recv_hasNext(const int32_t seqid);
+  void nextEntry(KeyValueAndPeek& _return, const std::string& scanner);
+  int32_t send_nextEntry(const std::string& scanner);
+  void recv_nextEntry(KeyValueAndPeek& _return, const int32_t seqid);
+  void nextK(ScanResult& _return, const std::string& scanner, const int32_t k);
+  int32_t send_nextK(const std::string& scanner, const int32_t k);
+  void recv_nextK(ScanResult& _return, const int32_t seqid);
+  void closeScanner(const std::string& scanner);
+  int32_t send_closeScanner(const std::string& scanner);
+  void recv_closeScanner(const int32_t seqid);
+  void updateAndFlush(const std::string& login, const std::string& tableName, const std::map<std::string, std::vector<ColumnUpdate> > & cells);
+  int32_t send_updateAndFlush(const std::string& login, const std::string& tableName, const std::map<std::string, std::vector<ColumnUpdate> > & cells);
+  void recv_updateAndFlush(const int32_t seqid);
+  void createWriter(std::string& _return, const std::string& login, const std::string& tableName, const WriterOptions& opts);
+  int32_t send_createWriter(const std::string& login, const std::string& tableName, const WriterOptions& opts);
+  void recv_createWriter(std::string& _return, const int32_t seqid);
+  void update(const std::string& writer, const std::map<std::string, std::vector<ColumnUpdate> > & cells);
+  void send_update(const std::string& writer, const std::map<std::string, std::vector<ColumnUpdate> > & cells);
+  void flush(const std::string& writer);
+  int32_t send_flush(const std::string& writer);
+  void recv_flush(const int32_t seqid);
+  void closeWriter(const std::string& writer);
+  int32_t send_closeWriter(const std::string& writer);
+  void recv_closeWriter(const int32_t seqid);
+  ConditionalStatus::type updateRowConditionally(const std::string& login, const std::string& tableName, const std::string& row, const ConditionalUpdates& updates);
+  int32_t send_updateRowConditionally(const std::string& login, const std::string& tableName, const std::string& row, const ConditionalUpdates& updates);
+  ConditionalStatus::type recv_updateRowConditionally(const int32_t seqid);
+  void createConditionalWriter(std::string& _return, const std::string& login, const std::string& tableName, const ConditionalWriterOptions& options);
+  int32_t send_createConditionalWriter(const std::string& login, const std::string& tableName, const ConditionalWriterOptions& options);
+  void recv_createConditionalWriter(std::string& _return, const int32_t seqid);
+  void updateRowsConditionally(std::map<std::string, ConditionalStatus::type> & _return, const std::string& conditionalWriter, const std::map<std::string, ConditionalUpdates> & updates);
+  int32_t send_updateRowsConditionally(const std::string& conditionalWriter, const std::map<std::string, ConditionalUpdates> & updates);
+  void recv_updateRowsConditionally(std::map<std::string, ConditionalStatus::type> & _return, const int32_t seqid);
+  void closeConditionalWriter(const std::string& conditionalWriter);
+  int32_t send_closeConditionalWriter(const std::string& conditionalWriter);
+  void recv_closeConditionalWriter(const int32_t seqid);
+  void getRowRange(Range& _return, const std::string& row);
+  int32_t send_getRowRange(const std::string& row);
+  void recv_getRowRange(Range& _return, const int32_t seqid);
+  void getFollowing(Key& _return, const Key& key, const PartialKey::type part);
+  int32_t send_getFollowing(const Key& key, const PartialKey::type part);
+  void recv_getFollowing(Key& _return, const int32_t seqid);
+  void systemNamespace(std::string& _return);
+  int32_t send_systemNamespace();
+  void recv_systemNamespace(std::string& _return, const int32_t seqid);
+  void defaultNamespace(std::string& _return);
+  int32_t send_defaultNamespace();
+  void recv_defaultNamespace(std::string& _return, const int32_t seqid);
+  void listNamespaces(std::vector<std::string> & _return, const std::string& login);
+  int32_t send_listNamespaces(const std::string& login);
+  void recv_listNamespaces(std::vector<std::string> & _return, const int32_t seqid);
+  bool namespaceExists(const std::string& login, const std::string& namespaceName);
+  int32_t send_namespaceExists(const std::string& login, const std::string& namespaceName);
+  bool recv_namespaceExists(const int32_t seqid);
+  void createNamespace(const std::string& login, const std::string& namespaceName);
+  int32_t send_createNamespace(const std::string& login, const std::string& namespaceName);
+  void recv_createNamespace(const int32_t seqid);
+  void deleteNamespace(const std::string& login, const std::string& namespaceName);
+  int32_t send_deleteNamespace(const std::string& login, const std::string& namespaceName);
+  void recv_deleteNamespace(const int32_t seqid);
+  void renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName);
+  int32_t send_renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName);
+  void recv_renameNamespace(const int32_t seqid);
+  void setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value);
+  int32_t send_setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value);
+  void recv_setNamespaceProperty(const int32_t seqid);
+  void removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property);
+  int32_t send_removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property);
+  void recv_removeNamespaceProperty(const int32_t seqid);
+  void getNamespaceProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& namespaceName);
+  int32_t send_getNamespaceProperties(const std::string& login, const std::string& namespaceName);
+  void recv_getNamespaceProperties(std::map<std::string, std::string> & _return, const int32_t seqid);
+  void namespaceIdMap(std::map<std::string, std::string> & _return, const std::string& login);
+  int32_t send_namespaceIdMap(const std::string& login);
+  void recv_namespaceIdMap(std::map<std::string, std::string> & _return, const int32_t seqid);
+  void attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  int32_t send_attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void recv_attachNamespaceIterator(const int32_t seqid);
+  void removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes);
+  int32_t send_removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes);
+  void recv_removeNamespaceIterator(const int32_t seqid);
+  void getNamespaceIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope);
+  int32_t send_getNamespaceIteratorSetting(const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope);
+  void recv_getNamespaceIteratorSetting(IteratorSetting& _return, const int32_t seqid);
+  void listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& namespaceName);
+  int32_t send_listNamespaceIterators(const std::string& login, const std::string& namespaceName);
+  void recv_listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const int32_t seqid);
+  void checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  int32_t send_checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes);
+  void recv_checkNamespaceIteratorConflicts(const int32_t seqid);
+  int32_t addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName);
+  int32_t send_addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName);
+  int32_t recv_addNamespaceConstraint(const int32_t seqid);
+  void removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id);
+  int32_t send_removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id);
+  void recv_removeNamespaceConstraint(const int32_t seqid);
+  void listNamespaceConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& namespaceName);
+  int32_t send_listNamespaceConstraints(const std::string& login, const std::string& namespaceName);
+  void recv_listNamespaceConstraints(std::map<std::string, int32_t> & _return, const int32_t seqid);
+  bool testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName);
+  int32_t send_testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName);
+  bool recv_testNamespaceClassLoad(const int32_t seqid);
+ protected:
+  boost::shared_ptr< ::apache::thrift::protocol::TProtocol> piprot_;
+  boost::shared_ptr< ::apache::thrift::protocol::TProtocol> poprot_;
+  ::apache::thrift::protocol::TProtocol* iprot_;
+  ::apache::thrift::protocol::TProtocol* oprot_;
+  ::apache::thrift::async::TConcurrentClientSyncInfo sync_;
+};
+
+#ifdef _WIN32
+  #pragma warning( pop )
+#endif
+
 } // namespace
 
 #endif
diff --git a/proxy/src/main/cpp/AccumuloProxy_server.skeleton.cpp b/proxy/src/main/cpp/AccumuloProxy_server.skeleton.cpp
index 302aec2..6c2f52f 100644
--- a/proxy/src/main/cpp/AccumuloProxy_server.skeleton.cpp
+++ b/proxy/src/main/cpp/AccumuloProxy_server.skeleton.cpp
@@ -338,6 +338,21 @@
     printf("revokeTablePermission\n");
   }
 
+  void grantNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) {
+    // Your implementation goes here
+    printf("grantNamespacePermission\n");
+  }
+
+  bool hasNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) {
+    // Your implementation goes here
+    printf("hasNamespacePermission\n");
+  }
+
+  void revokeNamespacePermission(const std::string& login, const std::string& user, const std::string& namespaceName, const NamespacePermission::type perm) {
+    // Your implementation goes here
+    printf("revokeNamespacePermission\n");
+  }
+
   void createBatchScanner(std::string& _return, const std::string& login, const std::string& tableName, const BatchScanOptions& options) {
     // Your implementation goes here
     printf("createBatchScanner\n");
@@ -423,6 +438,106 @@
     printf("getFollowing\n");
   }
 
+  void systemNamespace(std::string& _return) {
+    // Your implementation goes here
+    printf("systemNamespace\n");
+  }
+
+  void defaultNamespace(std::string& _return) {
+    // Your implementation goes here
+    printf("defaultNamespace\n");
+  }
+
+  void listNamespaces(std::vector<std::string> & _return, const std::string& login) {
+    // Your implementation goes here
+    printf("listNamespaces\n");
+  }
+
+  bool namespaceExists(const std::string& login, const std::string& namespaceName) {
+    // Your implementation goes here
+    printf("namespaceExists\n");
+  }
+
+  void createNamespace(const std::string& login, const std::string& namespaceName) {
+    // Your implementation goes here
+    printf("createNamespace\n");
+  }
+
+  void deleteNamespace(const std::string& login, const std::string& namespaceName) {
+    // Your implementation goes here
+    printf("deleteNamespace\n");
+  }
+
+  void renameNamespace(const std::string& login, const std::string& oldNamespaceName, const std::string& newNamespaceName) {
+    // Your implementation goes here
+    printf("renameNamespace\n");
+  }
+
+  void setNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property, const std::string& value) {
+    // Your implementation goes here
+    printf("setNamespaceProperty\n");
+  }
+
+  void removeNamespaceProperty(const std::string& login, const std::string& namespaceName, const std::string& property) {
+    // Your implementation goes here
+    printf("removeNamespaceProperty\n");
+  }
+
+  void getNamespaceProperties(std::map<std::string, std::string> & _return, const std::string& login, const std::string& namespaceName) {
+    // Your implementation goes here
+    printf("getNamespaceProperties\n");
+  }
+
+  void namespaceIdMap(std::map<std::string, std::string> & _return, const std::string& login) {
+    // Your implementation goes here
+    printf("namespaceIdMap\n");
+  }
+
+  void attachNamespaceIterator(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes) {
+    // Your implementation goes here
+    printf("attachNamespaceIterator\n");
+  }
+
+  void removeNamespaceIterator(const std::string& login, const std::string& namespaceName, const std::string& name, const std::set<IteratorScope::type> & scopes) {
+    // Your implementation goes here
+    printf("removeNamespaceIterator\n");
+  }
+
+  void getNamespaceIteratorSetting(IteratorSetting& _return, const std::string& login, const std::string& namespaceName, const std::string& name, const IteratorScope::type scope) {
+    // Your implementation goes here
+    printf("getNamespaceIteratorSetting\n");
+  }
+
+  void listNamespaceIterators(std::map<std::string, std::set<IteratorScope::type> > & _return, const std::string& login, const std::string& namespaceName) {
+    // Your implementation goes here
+    printf("listNamespaceIterators\n");
+  }
+
+  void checkNamespaceIteratorConflicts(const std::string& login, const std::string& namespaceName, const IteratorSetting& setting, const std::set<IteratorScope::type> & scopes) {
+    // Your implementation goes here
+    printf("checkNamespaceIteratorConflicts\n");
+  }
+
+  int32_t addNamespaceConstraint(const std::string& login, const std::string& namespaceName, const std::string& constraintClassName) {
+    // Your implementation goes here
+    printf("addNamespaceConstraint\n");
+  }
+
+  void removeNamespaceConstraint(const std::string& login, const std::string& namespaceName, const int32_t id) {
+    // Your implementation goes here
+    printf("removeNamespaceConstraint\n");
+  }
+
+  void listNamespaceConstraints(std::map<std::string, int32_t> & _return, const std::string& login, const std::string& namespaceName) {
+    // Your implementation goes here
+    printf("listNamespaceConstraints\n");
+  }
+
+  bool testNamespaceClassLoad(const std::string& login, const std::string& namespaceName, const std::string& className, const std::string& asTypeName) {
+    // Your implementation goes here
+    printf("testNamespaceClassLoad\n");
+  }
+
 };
 
 int main(int argc, char **argv) {
diff --git a/proxy/src/main/cpp/proxy_constants.cpp b/proxy/src/main/cpp/proxy_constants.cpp
index 39a574e..d177867 100644
--- a/proxy/src/main/cpp/proxy_constants.cpp
+++ b/proxy/src/main/cpp/proxy_constants.cpp
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/cpp/proxy_constants.h b/proxy/src/main/cpp/proxy_constants.h
index 8f789c6..c2d6fd6 100644
--- a/proxy/src/main/cpp/proxy_constants.h
+++ b/proxy/src/main/cpp/proxy_constants.h
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/cpp/proxy_types.cpp b/proxy/src/main/cpp/proxy_types.cpp
index a055b48..09c2d9c 100644
--- a/proxy/src/main/cpp/proxy_types.cpp
+++ b/proxy/src/main/cpp/proxy_types.cpp
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -23,6 +23,9 @@
 #include "proxy_types.h"
 
 #include <algorithm>
+#include <ostream>
+
+#include <thrift/TToString.h>
 
 namespace accumulo {
 
@@ -84,6 +87,30 @@
 };
 const std::map<int, const char*> _SystemPermission_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(8, _kSystemPermissionValues, _kSystemPermissionNames), ::apache::thrift::TEnumIterator(-1, NULL, NULL));
 
+int _kNamespacePermissionValues[] = {
+  NamespacePermission::READ,
+  NamespacePermission::WRITE,
+  NamespacePermission::ALTER_NAMESPACE,
+  NamespacePermission::GRANT,
+  NamespacePermission::ALTER_TABLE,
+  NamespacePermission::CREATE_TABLE,
+  NamespacePermission::DROP_TABLE,
+  NamespacePermission::BULK_IMPORT,
+  NamespacePermission::DROP_NAMESPACE
+};
+const char* _kNamespacePermissionNames[] = {
+  "READ",
+  "WRITE",
+  "ALTER_NAMESPACE",
+  "GRANT",
+  "ALTER_TABLE",
+  "CREATE_TABLE",
+  "DROP_TABLE",
+  "BULK_IMPORT",
+  "DROP_NAMESPACE"
+};
+const std::map<int, const char*> _NamespacePermission_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(9, _kNamespacePermissionValues, _kNamespacePermissionNames), ::apache::thrift::TEnumIterator(-1, NULL, NULL));
+
 int _kScanTypeValues[] = {
   ScanType::SINGLE,
   ScanType::BATCH
@@ -190,11 +217,35 @@
 };
 const std::map<int, const char*> _TimeType_VALUES_TO_NAMES(::apache::thrift::TEnumIterator(2, _kTimeTypeValues, _kTimeTypeNames), ::apache::thrift::TEnumIterator(-1, NULL, NULL));
 
-const char* Key::ascii_fingerprint = "91151A432E03F5E8564877B5194B48E2";
-const uint8_t Key::binary_fingerprint[16] = {0x91,0x15,0x1A,0x43,0x2E,0x03,0xF5,0xE8,0x56,0x48,0x77,0xB5,0x19,0x4B,0x48,0xE2};
+
+Key::~Key() throw() {
+}
+
+
+void Key::__set_row(const std::string& val) {
+  this->row = val;
+}
+
+void Key::__set_colFamily(const std::string& val) {
+  this->colFamily = val;
+}
+
+void Key::__set_colQualifier(const std::string& val) {
+  this->colQualifier = val;
+}
+
+void Key::__set_colVisibility(const std::string& val) {
+  this->colVisibility = val;
+}
+
+void Key::__set_timestamp(const int64_t val) {
+  this->timestamp = val;
+__isset.timestamp = true;
+}
 
 uint32_t Key::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -267,6 +318,7 @@
 
 uint32_t Key::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("Key");
 
   xfer += oprot->writeFieldBegin("row", ::apache::thrift::protocol::T_STRING, 1);
@@ -305,11 +357,70 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ColumnUpdate::ascii_fingerprint = "65CC1863F7DDC1DE75B9EAF9E2DC0D1F";
-const uint8_t ColumnUpdate::binary_fingerprint[16] = {0x65,0xCC,0x18,0x63,0xF7,0xDD,0xC1,0xDE,0x75,0xB9,0xEA,0xF9,0xE2,0xDC,0x0D,0x1F};
+Key::Key(const Key& other0) {
+  row = other0.row;
+  colFamily = other0.colFamily;
+  colQualifier = other0.colQualifier;
+  colVisibility = other0.colVisibility;
+  timestamp = other0.timestamp;
+  __isset = other0.__isset;
+}
+Key& Key::operator=(const Key& other1) {
+  row = other1.row;
+  colFamily = other1.colFamily;
+  colQualifier = other1.colQualifier;
+  colVisibility = other1.colVisibility;
+  timestamp = other1.timestamp;
+  __isset = other1.__isset;
+  return *this;
+}
+void Key::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "Key(";
+  out << "row=" << to_string(row);
+  out << ", " << "colFamily=" << to_string(colFamily);
+  out << ", " << "colQualifier=" << to_string(colQualifier);
+  out << ", " << "colVisibility=" << to_string(colVisibility);
+  out << ", " << "timestamp="; (__isset.timestamp ? (out << to_string(timestamp)) : (out << "<null>"));
+  out << ")";
+}
+
+
+ColumnUpdate::~ColumnUpdate() throw() {
+}
+
+
+void ColumnUpdate::__set_colFamily(const std::string& val) {
+  this->colFamily = val;
+}
+
+void ColumnUpdate::__set_colQualifier(const std::string& val) {
+  this->colQualifier = val;
+}
+
+void ColumnUpdate::__set_colVisibility(const std::string& val) {
+  this->colVisibility = val;
+__isset.colVisibility = true;
+}
+
+void ColumnUpdate::__set_timestamp(const int64_t val) {
+  this->timestamp = val;
+__isset.timestamp = true;
+}
+
+void ColumnUpdate::__set_value(const std::string& val) {
+  this->value = val;
+__isset.value = true;
+}
+
+void ColumnUpdate::__set_deleteCell(const bool val) {
+  this->deleteCell = val;
+__isset.deleteCell = true;
+}
 
 uint32_t ColumnUpdate::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -390,6 +501,7 @@
 
 uint32_t ColumnUpdate::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ColumnUpdate");
 
   xfer += oprot->writeFieldBegin("colFamily", ::apache::thrift::protocol::T_STRING, 1);
@@ -436,11 +548,53 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* DiskUsage::ascii_fingerprint = "D26F4F5E2867D41CF7E0391263932D6B";
-const uint8_t DiskUsage::binary_fingerprint[16] = {0xD2,0x6F,0x4F,0x5E,0x28,0x67,0xD4,0x1C,0xF7,0xE0,0x39,0x12,0x63,0x93,0x2D,0x6B};
+ColumnUpdate::ColumnUpdate(const ColumnUpdate& other2) {
+  colFamily = other2.colFamily;
+  colQualifier = other2.colQualifier;
+  colVisibility = other2.colVisibility;
+  timestamp = other2.timestamp;
+  value = other2.value;
+  deleteCell = other2.deleteCell;
+  __isset = other2.__isset;
+}
+ColumnUpdate& ColumnUpdate::operator=(const ColumnUpdate& other3) {
+  colFamily = other3.colFamily;
+  colQualifier = other3.colQualifier;
+  colVisibility = other3.colVisibility;
+  timestamp = other3.timestamp;
+  value = other3.value;
+  deleteCell = other3.deleteCell;
+  __isset = other3.__isset;
+  return *this;
+}
+void ColumnUpdate::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ColumnUpdate(";
+  out << "colFamily=" << to_string(colFamily);
+  out << ", " << "colQualifier=" << to_string(colQualifier);
+  out << ", " << "colVisibility="; (__isset.colVisibility ? (out << to_string(colVisibility)) : (out << "<null>"));
+  out << ", " << "timestamp="; (__isset.timestamp ? (out << to_string(timestamp)) : (out << "<null>"));
+  out << ", " << "value="; (__isset.value ? (out << to_string(value)) : (out << "<null>"));
+  out << ", " << "deleteCell="; (__isset.deleteCell ? (out << to_string(deleteCell)) : (out << "<null>"));
+  out << ")";
+}
+
+
+DiskUsage::~DiskUsage() throw() {
+}
+
+
+void DiskUsage::__set_tables(const std::vector<std::string> & val) {
+  this->tables = val;
+}
+
+void DiskUsage::__set_usage(const int64_t val) {
+  this->usage = val;
+}
 
 uint32_t DiskUsage::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -463,14 +617,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->tables.clear();
-            uint32_t _size0;
-            ::apache::thrift::protocol::TType _etype3;
-            xfer += iprot->readListBegin(_etype3, _size0);
-            this->tables.resize(_size0);
-            uint32_t _i4;
-            for (_i4 = 0; _i4 < _size0; ++_i4)
+            uint32_t _size4;
+            ::apache::thrift::protocol::TType _etype7;
+            xfer += iprot->readListBegin(_etype7, _size4);
+            this->tables.resize(_size4);
+            uint32_t _i8;
+            for (_i8 = 0; _i8 < _size4; ++_i8)
             {
-              xfer += iprot->readString(this->tables[_i4]);
+              xfer += iprot->readString(this->tables[_i8]);
             }
             xfer += iprot->readListEnd();
           }
@@ -501,15 +655,16 @@
 
 uint32_t DiskUsage::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("DiskUsage");
 
   xfer += oprot->writeFieldBegin("tables", ::apache::thrift::protocol::T_LIST, 1);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->tables.size()));
-    std::vector<std::string> ::const_iterator _iter5;
-    for (_iter5 = this->tables.begin(); _iter5 != this->tables.end(); ++_iter5)
+    std::vector<std::string> ::const_iterator _iter9;
+    for (_iter9 = this->tables.begin(); _iter9 != this->tables.end(); ++_iter9)
     {
-      xfer += oprot->writeString((*_iter5));
+      xfer += oprot->writeString((*_iter9));
     }
     xfer += oprot->writeListEnd();
   }
@@ -531,11 +686,41 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* KeyValue::ascii_fingerprint = "0D0CA44F233F983E00E94228C31ABBD4";
-const uint8_t KeyValue::binary_fingerprint[16] = {0x0D,0x0C,0xA4,0x4F,0x23,0x3F,0x98,0x3E,0x00,0xE9,0x42,0x28,0xC3,0x1A,0xBB,0xD4};
+DiskUsage::DiskUsage(const DiskUsage& other10) {
+  tables = other10.tables;
+  usage = other10.usage;
+  __isset = other10.__isset;
+}
+DiskUsage& DiskUsage::operator=(const DiskUsage& other11) {
+  tables = other11.tables;
+  usage = other11.usage;
+  __isset = other11.__isset;
+  return *this;
+}
+void DiskUsage::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "DiskUsage(";
+  out << "tables=" << to_string(tables);
+  out << ", " << "usage=" << to_string(usage);
+  out << ")";
+}
+
+
+KeyValue::~KeyValue() throw() {
+}
+
+
+void KeyValue::__set_key(const Key& val) {
+  this->key = val;
+}
+
+void KeyValue::__set_value(const std::string& val) {
+  this->value = val;
+}
 
 uint32_t KeyValue::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -584,6 +769,7 @@
 
 uint32_t KeyValue::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("KeyValue");
 
   xfer += oprot->writeFieldBegin("key", ::apache::thrift::protocol::T_STRUCT, 1);
@@ -606,11 +792,41 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ScanResult::ascii_fingerprint = "684A3FCA76EA202FE071A17F8B510E7A";
-const uint8_t ScanResult::binary_fingerprint[16] = {0x68,0x4A,0x3F,0xCA,0x76,0xEA,0x20,0x2F,0xE0,0x71,0xA1,0x7F,0x8B,0x51,0x0E,0x7A};
+KeyValue::KeyValue(const KeyValue& other12) {
+  key = other12.key;
+  value = other12.value;
+  __isset = other12.__isset;
+}
+KeyValue& KeyValue::operator=(const KeyValue& other13) {
+  key = other13.key;
+  value = other13.value;
+  __isset = other13.__isset;
+  return *this;
+}
+void KeyValue::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "KeyValue(";
+  out << "key=" << to_string(key);
+  out << ", " << "value=" << to_string(value);
+  out << ")";
+}
+
+
+ScanResult::~ScanResult() throw() {
+}
+
+
+void ScanResult::__set_results(const std::vector<KeyValue> & val) {
+  this->results = val;
+}
+
+void ScanResult::__set_more(const bool val) {
+  this->more = val;
+}
 
 uint32_t ScanResult::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -633,14 +849,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->results.clear();
-            uint32_t _size6;
-            ::apache::thrift::protocol::TType _etype9;
-            xfer += iprot->readListBegin(_etype9, _size6);
-            this->results.resize(_size6);
-            uint32_t _i10;
-            for (_i10 = 0; _i10 < _size6; ++_i10)
+            uint32_t _size14;
+            ::apache::thrift::protocol::TType _etype17;
+            xfer += iprot->readListBegin(_etype17, _size14);
+            this->results.resize(_size14);
+            uint32_t _i18;
+            for (_i18 = 0; _i18 < _size14; ++_i18)
             {
-              xfer += this->results[_i10].read(iprot);
+              xfer += this->results[_i18].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -671,15 +887,16 @@
 
 uint32_t ScanResult::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ScanResult");
 
   xfer += oprot->writeFieldBegin("results", ::apache::thrift::protocol::T_LIST, 1);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->results.size()));
-    std::vector<KeyValue> ::const_iterator _iter11;
-    for (_iter11 = this->results.begin(); _iter11 != this->results.end(); ++_iter11)
+    std::vector<KeyValue> ::const_iterator _iter19;
+    for (_iter19 = this->results.begin(); _iter19 != this->results.end(); ++_iter19)
     {
-      xfer += (*_iter11).write(oprot);
+      xfer += (*_iter19).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -701,11 +918,49 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* Range::ascii_fingerprint = "84C5BA8DB718E60BFBF3F83867647B45";
-const uint8_t Range::binary_fingerprint[16] = {0x84,0xC5,0xBA,0x8D,0xB7,0x18,0xE6,0x0B,0xFB,0xF3,0xF8,0x38,0x67,0x64,0x7B,0x45};
+ScanResult::ScanResult(const ScanResult& other20) {
+  results = other20.results;
+  more = other20.more;
+  __isset = other20.__isset;
+}
+ScanResult& ScanResult::operator=(const ScanResult& other21) {
+  results = other21.results;
+  more = other21.more;
+  __isset = other21.__isset;
+  return *this;
+}
+void ScanResult::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ScanResult(";
+  out << "results=" << to_string(results);
+  out << ", " << "more=" << to_string(more);
+  out << ")";
+}
+
+
+Range::~Range() throw() {
+}
+
+
+void Range::__set_start(const Key& val) {
+  this->start = val;
+}
+
+void Range::__set_startInclusive(const bool val) {
+  this->startInclusive = val;
+}
+
+void Range::__set_stop(const Key& val) {
+  this->stop = val;
+}
+
+void Range::__set_stopInclusive(const bool val) {
+  this->stopInclusive = val;
+}
 
 uint32_t Range::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -770,6 +1025,7 @@
 
 uint32_t Range::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("Range");
 
   xfer += oprot->writeFieldBegin("start", ::apache::thrift::protocol::T_STRUCT, 1);
@@ -802,11 +1058,48 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ScanColumn::ascii_fingerprint = "5B708A954C550ECA9C1A49D3C5CAFAB9";
-const uint8_t ScanColumn::binary_fingerprint[16] = {0x5B,0x70,0x8A,0x95,0x4C,0x55,0x0E,0xCA,0x9C,0x1A,0x49,0xD3,0xC5,0xCA,0xFA,0xB9};
+Range::Range(const Range& other22) {
+  start = other22.start;
+  startInclusive = other22.startInclusive;
+  stop = other22.stop;
+  stopInclusive = other22.stopInclusive;
+  __isset = other22.__isset;
+}
+Range& Range::operator=(const Range& other23) {
+  start = other23.start;
+  startInclusive = other23.startInclusive;
+  stop = other23.stop;
+  stopInclusive = other23.stopInclusive;
+  __isset = other23.__isset;
+  return *this;
+}
+void Range::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "Range(";
+  out << "start=" << to_string(start);
+  out << ", " << "startInclusive=" << to_string(startInclusive);
+  out << ", " << "stop=" << to_string(stop);
+  out << ", " << "stopInclusive=" << to_string(stopInclusive);
+  out << ")";
+}
+
+
+ScanColumn::~ScanColumn() throw() {
+}
+
+
+void ScanColumn::__set_colFamily(const std::string& val) {
+  this->colFamily = val;
+}
+
+void ScanColumn::__set_colQualifier(const std::string& val) {
+  this->colQualifier = val;
+__isset.colQualifier = true;
+}
 
 uint32_t ScanColumn::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -855,6 +1148,7 @@
 
 uint32_t ScanColumn::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ScanColumn");
 
   xfer += oprot->writeFieldBegin("colFamily", ::apache::thrift::protocol::T_STRING, 1);
@@ -878,11 +1172,49 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* IteratorSetting::ascii_fingerprint = "985C857916964E43205EAC92A157CB4E";
-const uint8_t IteratorSetting::binary_fingerprint[16] = {0x98,0x5C,0x85,0x79,0x16,0x96,0x4E,0x43,0x20,0x5E,0xAC,0x92,0xA1,0x57,0xCB,0x4E};
+ScanColumn::ScanColumn(const ScanColumn& other24) {
+  colFamily = other24.colFamily;
+  colQualifier = other24.colQualifier;
+  __isset = other24.__isset;
+}
+ScanColumn& ScanColumn::operator=(const ScanColumn& other25) {
+  colFamily = other25.colFamily;
+  colQualifier = other25.colQualifier;
+  __isset = other25.__isset;
+  return *this;
+}
+void ScanColumn::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ScanColumn(";
+  out << "colFamily=" << to_string(colFamily);
+  out << ", " << "colQualifier="; (__isset.colQualifier ? (out << to_string(colQualifier)) : (out << "<null>"));
+  out << ")";
+}
+
+
+IteratorSetting::~IteratorSetting() throw() {
+}
+
+
+void IteratorSetting::__set_priority(const int32_t val) {
+  this->priority = val;
+}
+
+void IteratorSetting::__set_name(const std::string& val) {
+  this->name = val;
+}
+
+void IteratorSetting::__set_iteratorClass(const std::string& val) {
+  this->iteratorClass = val;
+}
+
+void IteratorSetting::__set_properties(const std::map<std::string, std::string> & val) {
+  this->properties = val;
+}
 
 uint32_t IteratorSetting::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -929,17 +1261,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->properties.clear();
-            uint32_t _size12;
-            ::apache::thrift::protocol::TType _ktype13;
-            ::apache::thrift::protocol::TType _vtype14;
-            xfer += iprot->readMapBegin(_ktype13, _vtype14, _size12);
-            uint32_t _i16;
-            for (_i16 = 0; _i16 < _size12; ++_i16)
+            uint32_t _size26;
+            ::apache::thrift::protocol::TType _ktype27;
+            ::apache::thrift::protocol::TType _vtype28;
+            xfer += iprot->readMapBegin(_ktype27, _vtype28, _size26);
+            uint32_t _i30;
+            for (_i30 = 0; _i30 < _size26; ++_i30)
             {
-              std::string _key17;
-              xfer += iprot->readString(_key17);
-              std::string& _val18 = this->properties[_key17];
-              xfer += iprot->readString(_val18);
+              std::string _key31;
+              xfer += iprot->readString(_key31);
+              std::string& _val32 = this->properties[_key31];
+              xfer += iprot->readString(_val32);
             }
             xfer += iprot->readMapEnd();
           }
@@ -962,6 +1294,7 @@
 
 uint32_t IteratorSetting::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("IteratorSetting");
 
   xfer += oprot->writeFieldBegin("priority", ::apache::thrift::protocol::T_I32, 1);
@@ -979,11 +1312,11 @@
   xfer += oprot->writeFieldBegin("properties", ::apache::thrift::protocol::T_MAP, 4);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->properties.size()));
-    std::map<std::string, std::string> ::const_iterator _iter19;
-    for (_iter19 = this->properties.begin(); _iter19 != this->properties.end(); ++_iter19)
+    std::map<std::string, std::string> ::const_iterator _iter33;
+    for (_iter33 = this->properties.begin(); _iter33 != this->properties.end(); ++_iter33)
     {
-      xfer += oprot->writeString(_iter19->first);
-      xfer += oprot->writeString(_iter19->second);
+      xfer += oprot->writeString(_iter33->first);
+      xfer += oprot->writeString(_iter33->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -1003,11 +1336,64 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ScanOptions::ascii_fingerprint = "3D87D0CD05FA62E15880C4D2C595907C";
-const uint8_t ScanOptions::binary_fingerprint[16] = {0x3D,0x87,0xD0,0xCD,0x05,0xFA,0x62,0xE1,0x58,0x80,0xC4,0xD2,0xC5,0x95,0x90,0x7C};
+IteratorSetting::IteratorSetting(const IteratorSetting& other34) {
+  priority = other34.priority;
+  name = other34.name;
+  iteratorClass = other34.iteratorClass;
+  properties = other34.properties;
+  __isset = other34.__isset;
+}
+IteratorSetting& IteratorSetting::operator=(const IteratorSetting& other35) {
+  priority = other35.priority;
+  name = other35.name;
+  iteratorClass = other35.iteratorClass;
+  properties = other35.properties;
+  __isset = other35.__isset;
+  return *this;
+}
+void IteratorSetting::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "IteratorSetting(";
+  out << "priority=" << to_string(priority);
+  out << ", " << "name=" << to_string(name);
+  out << ", " << "iteratorClass=" << to_string(iteratorClass);
+  out << ", " << "properties=" << to_string(properties);
+  out << ")";
+}
+
+
+ScanOptions::~ScanOptions() throw() {
+}
+
+
+void ScanOptions::__set_authorizations(const std::set<std::string> & val) {
+  this->authorizations = val;
+  __isset.authorizations = true;
+}
+
+void ScanOptions::__set_range(const Range& val) {
+  this->range = val;
+  __isset.range = true;
+}
+
+void ScanOptions::__set_columns(const std::vector<ScanColumn> & val) {
+  this->columns = val;
+  __isset.columns = true;
+}
+
+void ScanOptions::__set_iterators(const std::vector<IteratorSetting> & val) {
+  this->iterators = val;
+  __isset.iterators = true;
+}
+
+void ScanOptions::__set_bufferSize(const int32_t val) {
+  this->bufferSize = val;
+  __isset.bufferSize = true;
+}
 
 uint32_t ScanOptions::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1030,15 +1416,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->authorizations.clear();
-            uint32_t _size20;
-            ::apache::thrift::protocol::TType _etype23;
-            xfer += iprot->readSetBegin(_etype23, _size20);
-            uint32_t _i24;
-            for (_i24 = 0; _i24 < _size20; ++_i24)
+            uint32_t _size36;
+            ::apache::thrift::protocol::TType _etype39;
+            xfer += iprot->readSetBegin(_etype39, _size36);
+            uint32_t _i40;
+            for (_i40 = 0; _i40 < _size36; ++_i40)
             {
-              std::string _elem25;
-              xfer += iprot->readBinary(_elem25);
-              this->authorizations.insert(_elem25);
+              std::string _elem41;
+              xfer += iprot->readBinary(_elem41);
+              this->authorizations.insert(_elem41);
             }
             xfer += iprot->readSetEnd();
           }
@@ -1059,14 +1445,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->columns.clear();
-            uint32_t _size26;
-            ::apache::thrift::protocol::TType _etype29;
-            xfer += iprot->readListBegin(_etype29, _size26);
-            this->columns.resize(_size26);
-            uint32_t _i30;
-            for (_i30 = 0; _i30 < _size26; ++_i30)
+            uint32_t _size42;
+            ::apache::thrift::protocol::TType _etype45;
+            xfer += iprot->readListBegin(_etype45, _size42);
+            this->columns.resize(_size42);
+            uint32_t _i46;
+            for (_i46 = 0; _i46 < _size42; ++_i46)
             {
-              xfer += this->columns[_i30].read(iprot);
+              xfer += this->columns[_i46].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1079,14 +1465,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->iterators.clear();
-            uint32_t _size31;
-            ::apache::thrift::protocol::TType _etype34;
-            xfer += iprot->readListBegin(_etype34, _size31);
-            this->iterators.resize(_size31);
-            uint32_t _i35;
-            for (_i35 = 0; _i35 < _size31; ++_i35)
+            uint32_t _size47;
+            ::apache::thrift::protocol::TType _etype50;
+            xfer += iprot->readListBegin(_etype50, _size47);
+            this->iterators.resize(_size47);
+            uint32_t _i51;
+            for (_i51 = 0; _i51 < _size47; ++_i51)
             {
-              xfer += this->iterators[_i35].read(iprot);
+              xfer += this->iterators[_i51].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1117,16 +1503,17 @@
 
 uint32_t ScanOptions::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ScanOptions");
 
   if (this->__isset.authorizations) {
     xfer += oprot->writeFieldBegin("authorizations", ::apache::thrift::protocol::T_SET, 1);
     {
       xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->authorizations.size()));
-      std::set<std::string> ::const_iterator _iter36;
-      for (_iter36 = this->authorizations.begin(); _iter36 != this->authorizations.end(); ++_iter36)
+      std::set<std::string> ::const_iterator _iter52;
+      for (_iter52 = this->authorizations.begin(); _iter52 != this->authorizations.end(); ++_iter52)
       {
-        xfer += oprot->writeBinary((*_iter36));
+        xfer += oprot->writeBinary((*_iter52));
       }
       xfer += oprot->writeSetEnd();
     }
@@ -1141,10 +1528,10 @@
     xfer += oprot->writeFieldBegin("columns", ::apache::thrift::protocol::T_LIST, 3);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->columns.size()));
-      std::vector<ScanColumn> ::const_iterator _iter37;
-      for (_iter37 = this->columns.begin(); _iter37 != this->columns.end(); ++_iter37)
+      std::vector<ScanColumn> ::const_iterator _iter53;
+      for (_iter53 = this->columns.begin(); _iter53 != this->columns.end(); ++_iter53)
       {
-        xfer += (*_iter37).write(oprot);
+        xfer += (*_iter53).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -1154,10 +1541,10 @@
     xfer += oprot->writeFieldBegin("iterators", ::apache::thrift::protocol::T_LIST, 4);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->iterators.size()));
-      std::vector<IteratorSetting> ::const_iterator _iter38;
-      for (_iter38 = this->iterators.begin(); _iter38 != this->iterators.end(); ++_iter38)
+      std::vector<IteratorSetting> ::const_iterator _iter54;
+      for (_iter54 = this->iterators.begin(); _iter54 != this->iterators.end(); ++_iter54)
       {
-        xfer += (*_iter38).write(oprot);
+        xfer += (*_iter54).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -1183,11 +1570,67 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* BatchScanOptions::ascii_fingerprint = "6ADFA1FBE31B1220D2C103284E002308";
-const uint8_t BatchScanOptions::binary_fingerprint[16] = {0x6A,0xDF,0xA1,0xFB,0xE3,0x1B,0x12,0x20,0xD2,0xC1,0x03,0x28,0x4E,0x00,0x23,0x08};
+ScanOptions::ScanOptions(const ScanOptions& other55) {
+  authorizations = other55.authorizations;
+  range = other55.range;
+  columns = other55.columns;
+  iterators = other55.iterators;
+  bufferSize = other55.bufferSize;
+  __isset = other55.__isset;
+}
+ScanOptions& ScanOptions::operator=(const ScanOptions& other56) {
+  authorizations = other56.authorizations;
+  range = other56.range;
+  columns = other56.columns;
+  iterators = other56.iterators;
+  bufferSize = other56.bufferSize;
+  __isset = other56.__isset;
+  return *this;
+}
+void ScanOptions::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ScanOptions(";
+  out << "authorizations="; (__isset.authorizations ? (out << to_string(authorizations)) : (out << "<null>"));
+  out << ", " << "range="; (__isset.range ? (out << to_string(range)) : (out << "<null>"));
+  out << ", " << "columns="; (__isset.columns ? (out << to_string(columns)) : (out << "<null>"));
+  out << ", " << "iterators="; (__isset.iterators ? (out << to_string(iterators)) : (out << "<null>"));
+  out << ", " << "bufferSize="; (__isset.bufferSize ? (out << to_string(bufferSize)) : (out << "<null>"));
+  out << ")";
+}
+
+
+BatchScanOptions::~BatchScanOptions() throw() {
+}
+
+
+void BatchScanOptions::__set_authorizations(const std::set<std::string> & val) {
+  this->authorizations = val;
+  __isset.authorizations = true;
+}
+
+void BatchScanOptions::__set_ranges(const std::vector<Range> & val) {
+  this->ranges = val;
+  __isset.ranges = true;
+}
+
+void BatchScanOptions::__set_columns(const std::vector<ScanColumn> & val) {
+  this->columns = val;
+  __isset.columns = true;
+}
+
+void BatchScanOptions::__set_iterators(const std::vector<IteratorSetting> & val) {
+  this->iterators = val;
+  __isset.iterators = true;
+}
+
+void BatchScanOptions::__set_threads(const int32_t val) {
+  this->threads = val;
+  __isset.threads = true;
+}
 
 uint32_t BatchScanOptions::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1210,15 +1653,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->authorizations.clear();
-            uint32_t _size39;
-            ::apache::thrift::protocol::TType _etype42;
-            xfer += iprot->readSetBegin(_etype42, _size39);
-            uint32_t _i43;
-            for (_i43 = 0; _i43 < _size39; ++_i43)
+            uint32_t _size57;
+            ::apache::thrift::protocol::TType _etype60;
+            xfer += iprot->readSetBegin(_etype60, _size57);
+            uint32_t _i61;
+            for (_i61 = 0; _i61 < _size57; ++_i61)
             {
-              std::string _elem44;
-              xfer += iprot->readBinary(_elem44);
-              this->authorizations.insert(_elem44);
+              std::string _elem62;
+              xfer += iprot->readBinary(_elem62);
+              this->authorizations.insert(_elem62);
             }
             xfer += iprot->readSetEnd();
           }
@@ -1231,14 +1674,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->ranges.clear();
-            uint32_t _size45;
-            ::apache::thrift::protocol::TType _etype48;
-            xfer += iprot->readListBegin(_etype48, _size45);
-            this->ranges.resize(_size45);
-            uint32_t _i49;
-            for (_i49 = 0; _i49 < _size45; ++_i49)
+            uint32_t _size63;
+            ::apache::thrift::protocol::TType _etype66;
+            xfer += iprot->readListBegin(_etype66, _size63);
+            this->ranges.resize(_size63);
+            uint32_t _i67;
+            for (_i67 = 0; _i67 < _size63; ++_i67)
             {
-              xfer += this->ranges[_i49].read(iprot);
+              xfer += this->ranges[_i67].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1251,14 +1694,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->columns.clear();
-            uint32_t _size50;
-            ::apache::thrift::protocol::TType _etype53;
-            xfer += iprot->readListBegin(_etype53, _size50);
-            this->columns.resize(_size50);
-            uint32_t _i54;
-            for (_i54 = 0; _i54 < _size50; ++_i54)
+            uint32_t _size68;
+            ::apache::thrift::protocol::TType _etype71;
+            xfer += iprot->readListBegin(_etype71, _size68);
+            this->columns.resize(_size68);
+            uint32_t _i72;
+            for (_i72 = 0; _i72 < _size68; ++_i72)
             {
-              xfer += this->columns[_i54].read(iprot);
+              xfer += this->columns[_i72].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1271,14 +1714,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->iterators.clear();
-            uint32_t _size55;
-            ::apache::thrift::protocol::TType _etype58;
-            xfer += iprot->readListBegin(_etype58, _size55);
-            this->iterators.resize(_size55);
-            uint32_t _i59;
-            for (_i59 = 0; _i59 < _size55; ++_i59)
+            uint32_t _size73;
+            ::apache::thrift::protocol::TType _etype76;
+            xfer += iprot->readListBegin(_etype76, _size73);
+            this->iterators.resize(_size73);
+            uint32_t _i77;
+            for (_i77 = 0; _i77 < _size73; ++_i77)
             {
-              xfer += this->iterators[_i59].read(iprot);
+              xfer += this->iterators[_i77].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1309,16 +1752,17 @@
 
 uint32_t BatchScanOptions::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("BatchScanOptions");
 
   if (this->__isset.authorizations) {
     xfer += oprot->writeFieldBegin("authorizations", ::apache::thrift::protocol::T_SET, 1);
     {
       xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->authorizations.size()));
-      std::set<std::string> ::const_iterator _iter60;
-      for (_iter60 = this->authorizations.begin(); _iter60 != this->authorizations.end(); ++_iter60)
+      std::set<std::string> ::const_iterator _iter78;
+      for (_iter78 = this->authorizations.begin(); _iter78 != this->authorizations.end(); ++_iter78)
       {
-        xfer += oprot->writeBinary((*_iter60));
+        xfer += oprot->writeBinary((*_iter78));
       }
       xfer += oprot->writeSetEnd();
     }
@@ -1328,10 +1772,10 @@
     xfer += oprot->writeFieldBegin("ranges", ::apache::thrift::protocol::T_LIST, 2);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->ranges.size()));
-      std::vector<Range> ::const_iterator _iter61;
-      for (_iter61 = this->ranges.begin(); _iter61 != this->ranges.end(); ++_iter61)
+      std::vector<Range> ::const_iterator _iter79;
+      for (_iter79 = this->ranges.begin(); _iter79 != this->ranges.end(); ++_iter79)
       {
-        xfer += (*_iter61).write(oprot);
+        xfer += (*_iter79).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -1341,10 +1785,10 @@
     xfer += oprot->writeFieldBegin("columns", ::apache::thrift::protocol::T_LIST, 3);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->columns.size()));
-      std::vector<ScanColumn> ::const_iterator _iter62;
-      for (_iter62 = this->columns.begin(); _iter62 != this->columns.end(); ++_iter62)
+      std::vector<ScanColumn> ::const_iterator _iter80;
+      for (_iter80 = this->columns.begin(); _iter80 != this->columns.end(); ++_iter80)
       {
-        xfer += (*_iter62).write(oprot);
+        xfer += (*_iter80).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -1354,10 +1798,10 @@
     xfer += oprot->writeFieldBegin("iterators", ::apache::thrift::protocol::T_LIST, 4);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->iterators.size()));
-      std::vector<IteratorSetting> ::const_iterator _iter63;
-      for (_iter63 = this->iterators.begin(); _iter63 != this->iterators.end(); ++_iter63)
+      std::vector<IteratorSetting> ::const_iterator _iter81;
+      for (_iter81 = this->iterators.begin(); _iter81 != this->iterators.end(); ++_iter81)
       {
-        xfer += (*_iter63).write(oprot);
+        xfer += (*_iter81).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -1383,11 +1827,50 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* KeyValueAndPeek::ascii_fingerprint = "CBBC6AB9C7EA5E5E748C13F970862FAB";
-const uint8_t KeyValueAndPeek::binary_fingerprint[16] = {0xCB,0xBC,0x6A,0xB9,0xC7,0xEA,0x5E,0x5E,0x74,0x8C,0x13,0xF9,0x70,0x86,0x2F,0xAB};
+BatchScanOptions::BatchScanOptions(const BatchScanOptions& other82) {
+  authorizations = other82.authorizations;
+  ranges = other82.ranges;
+  columns = other82.columns;
+  iterators = other82.iterators;
+  threads = other82.threads;
+  __isset = other82.__isset;
+}
+BatchScanOptions& BatchScanOptions::operator=(const BatchScanOptions& other83) {
+  authorizations = other83.authorizations;
+  ranges = other83.ranges;
+  columns = other83.columns;
+  iterators = other83.iterators;
+  threads = other83.threads;
+  __isset = other83.__isset;
+  return *this;
+}
+void BatchScanOptions::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "BatchScanOptions(";
+  out << "authorizations="; (__isset.authorizations ? (out << to_string(authorizations)) : (out << "<null>"));
+  out << ", " << "ranges="; (__isset.ranges ? (out << to_string(ranges)) : (out << "<null>"));
+  out << ", " << "columns="; (__isset.columns ? (out << to_string(columns)) : (out << "<null>"));
+  out << ", " << "iterators="; (__isset.iterators ? (out << to_string(iterators)) : (out << "<null>"));
+  out << ", " << "threads="; (__isset.threads ? (out << to_string(threads)) : (out << "<null>"));
+  out << ")";
+}
+
+
+KeyValueAndPeek::~KeyValueAndPeek() throw() {
+}
+
+
+void KeyValueAndPeek::__set_keyValue(const KeyValue& val) {
+  this->keyValue = val;
+}
+
+void KeyValueAndPeek::__set_hasNext(const bool val) {
+  this->hasNext = val;
+}
 
 uint32_t KeyValueAndPeek::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1436,6 +1919,7 @@
 
 uint32_t KeyValueAndPeek::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("KeyValueAndPeek");
 
   xfer += oprot->writeFieldBegin("keyValue", ::apache::thrift::protocol::T_STRUCT, 1);
@@ -1458,11 +1942,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* KeyExtent::ascii_fingerprint = "AB879940BD15B6B25691265F7384B271";
-const uint8_t KeyExtent::binary_fingerprint[16] = {0xAB,0x87,0x99,0x40,0xBD,0x15,0xB6,0xB2,0x56,0x91,0x26,0x5F,0x73,0x84,0xB2,0x71};
+KeyValueAndPeek::KeyValueAndPeek(const KeyValueAndPeek& other84) {
+  keyValue = other84.keyValue;
+  hasNext = other84.hasNext;
+  __isset = other84.__isset;
+}
+KeyValueAndPeek& KeyValueAndPeek::operator=(const KeyValueAndPeek& other85) {
+  keyValue = other85.keyValue;
+  hasNext = other85.hasNext;
+  __isset = other85.__isset;
+  return *this;
+}
+void KeyValueAndPeek::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "KeyValueAndPeek(";
+  out << "keyValue=" << to_string(keyValue);
+  out << ", " << "hasNext=" << to_string(hasNext);
+  out << ")";
+}
+
+
+KeyExtent::~KeyExtent() throw() {
+}
+
+
+void KeyExtent::__set_tableId(const std::string& val) {
+  this->tableId = val;
+}
+
+void KeyExtent::__set_endRow(const std::string& val) {
+  this->endRow = val;
+}
+
+void KeyExtent::__set_prevEndRow(const std::string& val) {
+  this->prevEndRow = val;
+}
 
 uint32_t KeyExtent::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1519,6 +2037,7 @@
 
 uint32_t KeyExtent::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("KeyExtent");
 
   xfer += oprot->writeFieldBegin("tableId", ::apache::thrift::protocol::T_STRING, 1);
@@ -1546,11 +2065,48 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* Column::ascii_fingerprint = "AB879940BD15B6B25691265F7384B271";
-const uint8_t Column::binary_fingerprint[16] = {0xAB,0x87,0x99,0x40,0xBD,0x15,0xB6,0xB2,0x56,0x91,0x26,0x5F,0x73,0x84,0xB2,0x71};
+KeyExtent::KeyExtent(const KeyExtent& other86) {
+  tableId = other86.tableId;
+  endRow = other86.endRow;
+  prevEndRow = other86.prevEndRow;
+  __isset = other86.__isset;
+}
+KeyExtent& KeyExtent::operator=(const KeyExtent& other87) {
+  tableId = other87.tableId;
+  endRow = other87.endRow;
+  prevEndRow = other87.prevEndRow;
+  __isset = other87.__isset;
+  return *this;
+}
+void KeyExtent::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "KeyExtent(";
+  out << "tableId=" << to_string(tableId);
+  out << ", " << "endRow=" << to_string(endRow);
+  out << ", " << "prevEndRow=" << to_string(prevEndRow);
+  out << ")";
+}
+
+
+Column::~Column() throw() {
+}
+
+
+void Column::__set_colFamily(const std::string& val) {
+  this->colFamily = val;
+}
+
+void Column::__set_colQualifier(const std::string& val) {
+  this->colQualifier = val;
+}
+
+void Column::__set_colVisibility(const std::string& val) {
+  this->colVisibility = val;
+}
 
 uint32_t Column::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1607,6 +2163,7 @@
 
 uint32_t Column::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("Column");
 
   xfer += oprot->writeFieldBegin("colFamily", ::apache::thrift::protocol::T_STRING, 1);
@@ -1634,11 +2191,55 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* Condition::ascii_fingerprint = "C4022914C22D89E33B1A46A7D511C58F";
-const uint8_t Condition::binary_fingerprint[16] = {0xC4,0x02,0x29,0x14,0xC2,0x2D,0x89,0xE3,0x3B,0x1A,0x46,0xA7,0xD5,0x11,0xC5,0x8F};
+Column::Column(const Column& other88) {
+  colFamily = other88.colFamily;
+  colQualifier = other88.colQualifier;
+  colVisibility = other88.colVisibility;
+  __isset = other88.__isset;
+}
+Column& Column::operator=(const Column& other89) {
+  colFamily = other89.colFamily;
+  colQualifier = other89.colQualifier;
+  colVisibility = other89.colVisibility;
+  __isset = other89.__isset;
+  return *this;
+}
+void Column::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "Column(";
+  out << "colFamily=" << to_string(colFamily);
+  out << ", " << "colQualifier=" << to_string(colQualifier);
+  out << ", " << "colVisibility=" << to_string(colVisibility);
+  out << ")";
+}
+
+
+Condition::~Condition() throw() {
+}
+
+
+void Condition::__set_column(const Column& val) {
+  this->column = val;
+}
+
+void Condition::__set_timestamp(const int64_t val) {
+  this->timestamp = val;
+  __isset.timestamp = true;
+}
+
+void Condition::__set_value(const std::string& val) {
+  this->value = val;
+  __isset.value = true;
+}
+
+void Condition::__set_iterators(const std::vector<IteratorSetting> & val) {
+  this->iterators = val;
+  __isset.iterators = true;
+}
 
 uint32_t Condition::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1685,14 +2286,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->iterators.clear();
-            uint32_t _size64;
-            ::apache::thrift::protocol::TType _etype67;
-            xfer += iprot->readListBegin(_etype67, _size64);
-            this->iterators.resize(_size64);
-            uint32_t _i68;
-            for (_i68 = 0; _i68 < _size64; ++_i68)
+            uint32_t _size90;
+            ::apache::thrift::protocol::TType _etype93;
+            xfer += iprot->readListBegin(_etype93, _size90);
+            this->iterators.resize(_size90);
+            uint32_t _i94;
+            for (_i94 = 0; _i94 < _size90; ++_i94)
             {
-              xfer += this->iterators[_i68].read(iprot);
+              xfer += this->iterators[_i94].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1715,6 +2316,7 @@
 
 uint32_t Condition::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("Condition");
 
   xfer += oprot->writeFieldBegin("column", ::apache::thrift::protocol::T_STRUCT, 1);
@@ -1735,10 +2337,10 @@
     xfer += oprot->writeFieldBegin("iterators", ::apache::thrift::protocol::T_LIST, 4);
     {
       xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->iterators.size()));
-      std::vector<IteratorSetting> ::const_iterator _iter69;
-      for (_iter69 = this->iterators.begin(); _iter69 != this->iterators.end(); ++_iter69)
+      std::vector<IteratorSetting> ::const_iterator _iter95;
+      for (_iter95 = this->iterators.begin(); _iter95 != this->iterators.end(); ++_iter95)
       {
-        xfer += (*_iter69).write(oprot);
+        xfer += (*_iter95).write(oprot);
       }
       xfer += oprot->writeListEnd();
     }
@@ -1758,11 +2360,47 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ConditionalUpdates::ascii_fingerprint = "1C1808872D1A8E04F114974ADD77F356";
-const uint8_t ConditionalUpdates::binary_fingerprint[16] = {0x1C,0x18,0x08,0x87,0x2D,0x1A,0x8E,0x04,0xF1,0x14,0x97,0x4A,0xDD,0x77,0xF3,0x56};
+Condition::Condition(const Condition& other96) {
+  column = other96.column;
+  timestamp = other96.timestamp;
+  value = other96.value;
+  iterators = other96.iterators;
+  __isset = other96.__isset;
+}
+Condition& Condition::operator=(const Condition& other97) {
+  column = other97.column;
+  timestamp = other97.timestamp;
+  value = other97.value;
+  iterators = other97.iterators;
+  __isset = other97.__isset;
+  return *this;
+}
+void Condition::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "Condition(";
+  out << "column=" << to_string(column);
+  out << ", " << "timestamp="; (__isset.timestamp ? (out << to_string(timestamp)) : (out << "<null>"));
+  out << ", " << "value="; (__isset.value ? (out << to_string(value)) : (out << "<null>"));
+  out << ", " << "iterators="; (__isset.iterators ? (out << to_string(iterators)) : (out << "<null>"));
+  out << ")";
+}
+
+
+ConditionalUpdates::~ConditionalUpdates() throw() {
+}
+
+
+void ConditionalUpdates::__set_conditions(const std::vector<Condition> & val) {
+  this->conditions = val;
+}
+
+void ConditionalUpdates::__set_updates(const std::vector<ColumnUpdate> & val) {
+  this->updates = val;
+}
 
 uint32_t ConditionalUpdates::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1785,14 +2423,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->conditions.clear();
-            uint32_t _size70;
-            ::apache::thrift::protocol::TType _etype73;
-            xfer += iprot->readListBegin(_etype73, _size70);
-            this->conditions.resize(_size70);
-            uint32_t _i74;
-            for (_i74 = 0; _i74 < _size70; ++_i74)
+            uint32_t _size98;
+            ::apache::thrift::protocol::TType _etype101;
+            xfer += iprot->readListBegin(_etype101, _size98);
+            this->conditions.resize(_size98);
+            uint32_t _i102;
+            for (_i102 = 0; _i102 < _size98; ++_i102)
             {
-              xfer += this->conditions[_i74].read(iprot);
+              xfer += this->conditions[_i102].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1805,14 +2443,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->updates.clear();
-            uint32_t _size75;
-            ::apache::thrift::protocol::TType _etype78;
-            xfer += iprot->readListBegin(_etype78, _size75);
-            this->updates.resize(_size75);
-            uint32_t _i79;
-            for (_i79 = 0; _i79 < _size75; ++_i79)
+            uint32_t _size103;
+            ::apache::thrift::protocol::TType _etype106;
+            xfer += iprot->readListBegin(_etype106, _size103);
+            this->updates.resize(_size103);
+            uint32_t _i107;
+            for (_i107 = 0; _i107 < _size103; ++_i107)
             {
-              xfer += this->updates[_i79].read(iprot);
+              xfer += this->updates[_i107].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -1835,15 +2473,16 @@
 
 uint32_t ConditionalUpdates::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ConditionalUpdates");
 
   xfer += oprot->writeFieldBegin("conditions", ::apache::thrift::protocol::T_LIST, 2);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->conditions.size()));
-    std::vector<Condition> ::const_iterator _iter80;
-    for (_iter80 = this->conditions.begin(); _iter80 != this->conditions.end(); ++_iter80)
+    std::vector<Condition> ::const_iterator _iter108;
+    for (_iter108 = this->conditions.begin(); _iter108 != this->conditions.end(); ++_iter108)
     {
-      xfer += (*_iter80).write(oprot);
+      xfer += (*_iter108).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -1852,10 +2491,10 @@
   xfer += oprot->writeFieldBegin("updates", ::apache::thrift::protocol::T_LIST, 3);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->updates.size()));
-    std::vector<ColumnUpdate> ::const_iterator _iter81;
-    for (_iter81 = this->updates.begin(); _iter81 != this->updates.end(); ++_iter81)
+    std::vector<ColumnUpdate> ::const_iterator _iter109;
+    for (_iter109 = this->updates.begin(); _iter109 != this->updates.end(); ++_iter109)
     {
-      xfer += (*_iter81).write(oprot);
+      xfer += (*_iter109).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -1873,11 +2512,58 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ConditionalWriterOptions::ascii_fingerprint = "C345C04E84A351638B6EACB741BD600E";
-const uint8_t ConditionalWriterOptions::binary_fingerprint[16] = {0xC3,0x45,0xC0,0x4E,0x84,0xA3,0x51,0x63,0x8B,0x6E,0xAC,0xB7,0x41,0xBD,0x60,0x0E};
+ConditionalUpdates::ConditionalUpdates(const ConditionalUpdates& other110) {
+  conditions = other110.conditions;
+  updates = other110.updates;
+  __isset = other110.__isset;
+}
+ConditionalUpdates& ConditionalUpdates::operator=(const ConditionalUpdates& other111) {
+  conditions = other111.conditions;
+  updates = other111.updates;
+  __isset = other111.__isset;
+  return *this;
+}
+void ConditionalUpdates::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ConditionalUpdates(";
+  out << "conditions=" << to_string(conditions);
+  out << ", " << "updates=" << to_string(updates);
+  out << ")";
+}
+
+
+ConditionalWriterOptions::~ConditionalWriterOptions() throw() {
+}
+
+
+void ConditionalWriterOptions::__set_maxMemory(const int64_t val) {
+  this->maxMemory = val;
+  __isset.maxMemory = true;
+}
+
+void ConditionalWriterOptions::__set_timeoutMs(const int64_t val) {
+  this->timeoutMs = val;
+  __isset.timeoutMs = true;
+}
+
+void ConditionalWriterOptions::__set_threads(const int32_t val) {
+  this->threads = val;
+  __isset.threads = true;
+}
+
+void ConditionalWriterOptions::__set_authorizations(const std::set<std::string> & val) {
+  this->authorizations = val;
+  __isset.authorizations = true;
+}
+
+void ConditionalWriterOptions::__set_durability(const Durability::type val) {
+  this->durability = val;
+  __isset.durability = true;
+}
 
 uint32_t ConditionalWriterOptions::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -1924,15 +2610,15 @@
         if (ftype == ::apache::thrift::protocol::T_SET) {
           {
             this->authorizations.clear();
-            uint32_t _size82;
-            ::apache::thrift::protocol::TType _etype85;
-            xfer += iprot->readSetBegin(_etype85, _size82);
-            uint32_t _i86;
-            for (_i86 = 0; _i86 < _size82; ++_i86)
+            uint32_t _size112;
+            ::apache::thrift::protocol::TType _etype115;
+            xfer += iprot->readSetBegin(_etype115, _size112);
+            uint32_t _i116;
+            for (_i116 = 0; _i116 < _size112; ++_i116)
             {
-              std::string _elem87;
-              xfer += iprot->readBinary(_elem87);
-              this->authorizations.insert(_elem87);
+              std::string _elem117;
+              xfer += iprot->readBinary(_elem117);
+              this->authorizations.insert(_elem117);
             }
             xfer += iprot->readSetEnd();
           }
@@ -1943,9 +2629,9 @@
         break;
       case 5:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast88;
-          xfer += iprot->readI32(ecast88);
-          this->durability = (Durability::type)ecast88;
+          int32_t ecast118;
+          xfer += iprot->readI32(ecast118);
+          this->durability = (Durability::type)ecast118;
           this->__isset.durability = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -1965,6 +2651,7 @@
 
 uint32_t ConditionalWriterOptions::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ConditionalWriterOptions");
 
   if (this->__isset.maxMemory) {
@@ -1986,10 +2673,10 @@
     xfer += oprot->writeFieldBegin("authorizations", ::apache::thrift::protocol::T_SET, 4);
     {
       xfer += oprot->writeSetBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->authorizations.size()));
-      std::set<std::string> ::const_iterator _iter89;
-      for (_iter89 = this->authorizations.begin(); _iter89 != this->authorizations.end(); ++_iter89)
+      std::set<std::string> ::const_iterator _iter119;
+      for (_iter119 = this->authorizations.begin(); _iter119 != this->authorizations.end(); ++_iter119)
       {
-        xfer += oprot->writeBinary((*_iter89));
+        xfer += oprot->writeBinary((*_iter119));
       }
       xfer += oprot->writeSetEnd();
     }
@@ -2015,11 +2702,86 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ActiveScan::ascii_fingerprint = "1B97541CB4E900A054266BBBEE61D004";
-const uint8_t ActiveScan::binary_fingerprint[16] = {0x1B,0x97,0x54,0x1C,0xB4,0xE9,0x00,0xA0,0x54,0x26,0x6B,0xBB,0xEE,0x61,0xD0,0x04};
+ConditionalWriterOptions::ConditionalWriterOptions(const ConditionalWriterOptions& other120) {
+  maxMemory = other120.maxMemory;
+  timeoutMs = other120.timeoutMs;
+  threads = other120.threads;
+  authorizations = other120.authorizations;
+  durability = other120.durability;
+  __isset = other120.__isset;
+}
+ConditionalWriterOptions& ConditionalWriterOptions::operator=(const ConditionalWriterOptions& other121) {
+  maxMemory = other121.maxMemory;
+  timeoutMs = other121.timeoutMs;
+  threads = other121.threads;
+  authorizations = other121.authorizations;
+  durability = other121.durability;
+  __isset = other121.__isset;
+  return *this;
+}
+void ConditionalWriterOptions::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ConditionalWriterOptions(";
+  out << "maxMemory="; (__isset.maxMemory ? (out << to_string(maxMemory)) : (out << "<null>"));
+  out << ", " << "timeoutMs="; (__isset.timeoutMs ? (out << to_string(timeoutMs)) : (out << "<null>"));
+  out << ", " << "threads="; (__isset.threads ? (out << to_string(threads)) : (out << "<null>"));
+  out << ", " << "authorizations="; (__isset.authorizations ? (out << to_string(authorizations)) : (out << "<null>"));
+  out << ", " << "durability="; (__isset.durability ? (out << to_string(durability)) : (out << "<null>"));
+  out << ")";
+}
+
+
+ActiveScan::~ActiveScan() throw() {
+}
+
+
+void ActiveScan::__set_client(const std::string& val) {
+  this->client = val;
+}
+
+void ActiveScan::__set_user(const std::string& val) {
+  this->user = val;
+}
+
+void ActiveScan::__set_table(const std::string& val) {
+  this->table = val;
+}
+
+void ActiveScan::__set_age(const int64_t val) {
+  this->age = val;
+}
+
+void ActiveScan::__set_idleTime(const int64_t val) {
+  this->idleTime = val;
+}
+
+void ActiveScan::__set_type(const ScanType::type val) {
+  this->type = val;
+}
+
+void ActiveScan::__set_state(const ScanState::type val) {
+  this->state = val;
+}
+
+void ActiveScan::__set_extent(const KeyExtent& val) {
+  this->extent = val;
+}
+
+void ActiveScan::__set_columns(const std::vector<Column> & val) {
+  this->columns = val;
+}
+
+void ActiveScan::__set_iterators(const std::vector<IteratorSetting> & val) {
+  this->iterators = val;
+}
+
+void ActiveScan::__set_authorizations(const std::vector<std::string> & val) {
+  this->authorizations = val;
+}
 
 uint32_t ActiveScan::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2080,9 +2842,9 @@
         break;
       case 6:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast90;
-          xfer += iprot->readI32(ecast90);
-          this->type = (ScanType::type)ecast90;
+          int32_t ecast122;
+          xfer += iprot->readI32(ecast122);
+          this->type = (ScanType::type)ecast122;
           this->__isset.type = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -2090,9 +2852,9 @@
         break;
       case 7:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast91;
-          xfer += iprot->readI32(ecast91);
-          this->state = (ScanState::type)ecast91;
+          int32_t ecast123;
+          xfer += iprot->readI32(ecast123);
+          this->state = (ScanState::type)ecast123;
           this->__isset.state = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -2110,14 +2872,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->columns.clear();
-            uint32_t _size92;
-            ::apache::thrift::protocol::TType _etype95;
-            xfer += iprot->readListBegin(_etype95, _size92);
-            this->columns.resize(_size92);
-            uint32_t _i96;
-            for (_i96 = 0; _i96 < _size92; ++_i96)
+            uint32_t _size124;
+            ::apache::thrift::protocol::TType _etype127;
+            xfer += iprot->readListBegin(_etype127, _size124);
+            this->columns.resize(_size124);
+            uint32_t _i128;
+            for (_i128 = 0; _i128 < _size124; ++_i128)
             {
-              xfer += this->columns[_i96].read(iprot);
+              xfer += this->columns[_i128].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -2130,14 +2892,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->iterators.clear();
-            uint32_t _size97;
-            ::apache::thrift::protocol::TType _etype100;
-            xfer += iprot->readListBegin(_etype100, _size97);
-            this->iterators.resize(_size97);
-            uint32_t _i101;
-            for (_i101 = 0; _i101 < _size97; ++_i101)
+            uint32_t _size129;
+            ::apache::thrift::protocol::TType _etype132;
+            xfer += iprot->readListBegin(_etype132, _size129);
+            this->iterators.resize(_size129);
+            uint32_t _i133;
+            for (_i133 = 0; _i133 < _size129; ++_i133)
             {
-              xfer += this->iterators[_i101].read(iprot);
+              xfer += this->iterators[_i133].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -2150,14 +2912,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->authorizations.clear();
-            uint32_t _size102;
-            ::apache::thrift::protocol::TType _etype105;
-            xfer += iprot->readListBegin(_etype105, _size102);
-            this->authorizations.resize(_size102);
-            uint32_t _i106;
-            for (_i106 = 0; _i106 < _size102; ++_i106)
+            uint32_t _size134;
+            ::apache::thrift::protocol::TType _etype137;
+            xfer += iprot->readListBegin(_etype137, _size134);
+            this->authorizations.resize(_size134);
+            uint32_t _i138;
+            for (_i138 = 0; _i138 < _size134; ++_i138)
             {
-              xfer += iprot->readBinary(this->authorizations[_i106]);
+              xfer += iprot->readBinary(this->authorizations[_i138]);
             }
             xfer += iprot->readListEnd();
           }
@@ -2180,6 +2942,7 @@
 
 uint32_t ActiveScan::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ActiveScan");
 
   xfer += oprot->writeFieldBegin("client", ::apache::thrift::protocol::T_STRING, 1);
@@ -2217,10 +2980,10 @@
   xfer += oprot->writeFieldBegin("columns", ::apache::thrift::protocol::T_LIST, 9);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->columns.size()));
-    std::vector<Column> ::const_iterator _iter107;
-    for (_iter107 = this->columns.begin(); _iter107 != this->columns.end(); ++_iter107)
+    std::vector<Column> ::const_iterator _iter139;
+    for (_iter139 = this->columns.begin(); _iter139 != this->columns.end(); ++_iter139)
     {
-      xfer += (*_iter107).write(oprot);
+      xfer += (*_iter139).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -2229,10 +2992,10 @@
   xfer += oprot->writeFieldBegin("iterators", ::apache::thrift::protocol::T_LIST, 10);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->iterators.size()));
-    std::vector<IteratorSetting> ::const_iterator _iter108;
-    for (_iter108 = this->iterators.begin(); _iter108 != this->iterators.end(); ++_iter108)
+    std::vector<IteratorSetting> ::const_iterator _iter140;
+    for (_iter140 = this->iterators.begin(); _iter140 != this->iterators.end(); ++_iter140)
     {
-      xfer += (*_iter108).write(oprot);
+      xfer += (*_iter140).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -2241,10 +3004,10 @@
   xfer += oprot->writeFieldBegin("authorizations", ::apache::thrift::protocol::T_LIST, 11);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->authorizations.size()));
-    std::vector<std::string> ::const_iterator _iter109;
-    for (_iter109 = this->authorizations.begin(); _iter109 != this->authorizations.end(); ++_iter109)
+    std::vector<std::string> ::const_iterator _iter141;
+    for (_iter141 = this->authorizations.begin(); _iter141 != this->authorizations.end(); ++_iter141)
     {
-      xfer += oprot->writeBinary((*_iter109));
+      xfer += oprot->writeBinary((*_iter141));
     }
     xfer += oprot->writeListEnd();
   }
@@ -2271,11 +3034,100 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* ActiveCompaction::ascii_fingerprint = "2BB155CC901109464666B6C7E6A8C1A6";
-const uint8_t ActiveCompaction::binary_fingerprint[16] = {0x2B,0xB1,0x55,0xCC,0x90,0x11,0x09,0x46,0x46,0x66,0xB6,0xC7,0xE6,0xA8,0xC1,0xA6};
+ActiveScan::ActiveScan(const ActiveScan& other142) {
+  client = other142.client;
+  user = other142.user;
+  table = other142.table;
+  age = other142.age;
+  idleTime = other142.idleTime;
+  type = other142.type;
+  state = other142.state;
+  extent = other142.extent;
+  columns = other142.columns;
+  iterators = other142.iterators;
+  authorizations = other142.authorizations;
+  __isset = other142.__isset;
+}
+ActiveScan& ActiveScan::operator=(const ActiveScan& other143) {
+  client = other143.client;
+  user = other143.user;
+  table = other143.table;
+  age = other143.age;
+  idleTime = other143.idleTime;
+  type = other143.type;
+  state = other143.state;
+  extent = other143.extent;
+  columns = other143.columns;
+  iterators = other143.iterators;
+  authorizations = other143.authorizations;
+  __isset = other143.__isset;
+  return *this;
+}
+void ActiveScan::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ActiveScan(";
+  out << "client=" << to_string(client);
+  out << ", " << "user=" << to_string(user);
+  out << ", " << "table=" << to_string(table);
+  out << ", " << "age=" << to_string(age);
+  out << ", " << "idleTime=" << to_string(idleTime);
+  out << ", " << "type=" << to_string(type);
+  out << ", " << "state=" << to_string(state);
+  out << ", " << "extent=" << to_string(extent);
+  out << ", " << "columns=" << to_string(columns);
+  out << ", " << "iterators=" << to_string(iterators);
+  out << ", " << "authorizations=" << to_string(authorizations);
+  out << ")";
+}
+
+
+ActiveCompaction::~ActiveCompaction() throw() {
+}
+
+
+void ActiveCompaction::__set_extent(const KeyExtent& val) {
+  this->extent = val;
+}
+
+void ActiveCompaction::__set_age(const int64_t val) {
+  this->age = val;
+}
+
+void ActiveCompaction::__set_inputFiles(const std::vector<std::string> & val) {
+  this->inputFiles = val;
+}
+
+void ActiveCompaction::__set_outputFile(const std::string& val) {
+  this->outputFile = val;
+}
+
+void ActiveCompaction::__set_type(const CompactionType::type val) {
+  this->type = val;
+}
+
+void ActiveCompaction::__set_reason(const CompactionReason::type val) {
+  this->reason = val;
+}
+
+void ActiveCompaction::__set_localityGroup(const std::string& val) {
+  this->localityGroup = val;
+}
+
+void ActiveCompaction::__set_entriesRead(const int64_t val) {
+  this->entriesRead = val;
+}
+
+void ActiveCompaction::__set_entriesWritten(const int64_t val) {
+  this->entriesWritten = val;
+}
+
+void ActiveCompaction::__set_iterators(const std::vector<IteratorSetting> & val) {
+  this->iterators = val;
+}
 
 uint32_t ActiveCompaction::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2314,14 +3166,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->inputFiles.clear();
-            uint32_t _size110;
-            ::apache::thrift::protocol::TType _etype113;
-            xfer += iprot->readListBegin(_etype113, _size110);
-            this->inputFiles.resize(_size110);
-            uint32_t _i114;
-            for (_i114 = 0; _i114 < _size110; ++_i114)
+            uint32_t _size144;
+            ::apache::thrift::protocol::TType _etype147;
+            xfer += iprot->readListBegin(_etype147, _size144);
+            this->inputFiles.resize(_size144);
+            uint32_t _i148;
+            for (_i148 = 0; _i148 < _size144; ++_i148)
             {
-              xfer += iprot->readString(this->inputFiles[_i114]);
+              xfer += iprot->readString(this->inputFiles[_i148]);
             }
             xfer += iprot->readListEnd();
           }
@@ -2340,9 +3192,9 @@
         break;
       case 5:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast115;
-          xfer += iprot->readI32(ecast115);
-          this->type = (CompactionType::type)ecast115;
+          int32_t ecast149;
+          xfer += iprot->readI32(ecast149);
+          this->type = (CompactionType::type)ecast149;
           this->__isset.type = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -2350,9 +3202,9 @@
         break;
       case 6:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast116;
-          xfer += iprot->readI32(ecast116);
-          this->reason = (CompactionReason::type)ecast116;
+          int32_t ecast150;
+          xfer += iprot->readI32(ecast150);
+          this->reason = (CompactionReason::type)ecast150;
           this->__isset.reason = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -2386,14 +3238,14 @@
         if (ftype == ::apache::thrift::protocol::T_LIST) {
           {
             this->iterators.clear();
-            uint32_t _size117;
-            ::apache::thrift::protocol::TType _etype120;
-            xfer += iprot->readListBegin(_etype120, _size117);
-            this->iterators.resize(_size117);
-            uint32_t _i121;
-            for (_i121 = 0; _i121 < _size117; ++_i121)
+            uint32_t _size151;
+            ::apache::thrift::protocol::TType _etype154;
+            xfer += iprot->readListBegin(_etype154, _size151);
+            this->iterators.resize(_size151);
+            uint32_t _i155;
+            for (_i155 = 0; _i155 < _size151; ++_i155)
             {
-              xfer += this->iterators[_i121].read(iprot);
+              xfer += this->iterators[_i155].read(iprot);
             }
             xfer += iprot->readListEnd();
           }
@@ -2416,6 +3268,7 @@
 
 uint32_t ActiveCompaction::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("ActiveCompaction");
 
   xfer += oprot->writeFieldBegin("extent", ::apache::thrift::protocol::T_STRUCT, 1);
@@ -2429,10 +3282,10 @@
   xfer += oprot->writeFieldBegin("inputFiles", ::apache::thrift::protocol::T_LIST, 3);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->inputFiles.size()));
-    std::vector<std::string> ::const_iterator _iter122;
-    for (_iter122 = this->inputFiles.begin(); _iter122 != this->inputFiles.end(); ++_iter122)
+    std::vector<std::string> ::const_iterator _iter156;
+    for (_iter156 = this->inputFiles.begin(); _iter156 != this->inputFiles.end(); ++_iter156)
     {
-      xfer += oprot->writeString((*_iter122));
+      xfer += oprot->writeString((*_iter156));
     }
     xfer += oprot->writeListEnd();
   }
@@ -2465,10 +3318,10 @@
   xfer += oprot->writeFieldBegin("iterators", ::apache::thrift::protocol::T_LIST, 10);
   {
     xfer += oprot->writeListBegin(::apache::thrift::protocol::T_STRUCT, static_cast<uint32_t>(this->iterators.size()));
-    std::vector<IteratorSetting> ::const_iterator _iter123;
-    for (_iter123 = this->iterators.begin(); _iter123 != this->iterators.end(); ++_iter123)
+    std::vector<IteratorSetting> ::const_iterator _iter157;
+    for (_iter157 = this->iterators.begin(); _iter157 != this->iterators.end(); ++_iter157)
     {
-      xfer += (*_iter123).write(oprot);
+      xfer += (*_iter157).write(oprot);
     }
     xfer += oprot->writeListEnd();
   }
@@ -2494,11 +3347,78 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* WriterOptions::ascii_fingerprint = "6640C55D2C0D4C8C2E7589456EA0C61A";
-const uint8_t WriterOptions::binary_fingerprint[16] = {0x66,0x40,0xC5,0x5D,0x2C,0x0D,0x4C,0x8C,0x2E,0x75,0x89,0x45,0x6E,0xA0,0xC6,0x1A};
+ActiveCompaction::ActiveCompaction(const ActiveCompaction& other158) {
+  extent = other158.extent;
+  age = other158.age;
+  inputFiles = other158.inputFiles;
+  outputFile = other158.outputFile;
+  type = other158.type;
+  reason = other158.reason;
+  localityGroup = other158.localityGroup;
+  entriesRead = other158.entriesRead;
+  entriesWritten = other158.entriesWritten;
+  iterators = other158.iterators;
+  __isset = other158.__isset;
+}
+ActiveCompaction& ActiveCompaction::operator=(const ActiveCompaction& other159) {
+  extent = other159.extent;
+  age = other159.age;
+  inputFiles = other159.inputFiles;
+  outputFile = other159.outputFile;
+  type = other159.type;
+  reason = other159.reason;
+  localityGroup = other159.localityGroup;
+  entriesRead = other159.entriesRead;
+  entriesWritten = other159.entriesWritten;
+  iterators = other159.iterators;
+  __isset = other159.__isset;
+  return *this;
+}
+void ActiveCompaction::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "ActiveCompaction(";
+  out << "extent=" << to_string(extent);
+  out << ", " << "age=" << to_string(age);
+  out << ", " << "inputFiles=" << to_string(inputFiles);
+  out << ", " << "outputFile=" << to_string(outputFile);
+  out << ", " << "type=" << to_string(type);
+  out << ", " << "reason=" << to_string(reason);
+  out << ", " << "localityGroup=" << to_string(localityGroup);
+  out << ", " << "entriesRead=" << to_string(entriesRead);
+  out << ", " << "entriesWritten=" << to_string(entriesWritten);
+  out << ", " << "iterators=" << to_string(iterators);
+  out << ")";
+}
+
+
+WriterOptions::~WriterOptions() throw() {
+}
+
+
+void WriterOptions::__set_maxMemory(const int64_t val) {
+  this->maxMemory = val;
+}
+
+void WriterOptions::__set_latencyMs(const int64_t val) {
+  this->latencyMs = val;
+}
+
+void WriterOptions::__set_timeoutMs(const int64_t val) {
+  this->timeoutMs = val;
+}
+
+void WriterOptions::__set_threads(const int32_t val) {
+  this->threads = val;
+}
+
+void WriterOptions::__set_durability(const Durability::type val) {
+  this->durability = val;
+__isset.durability = true;
+}
 
 uint32_t WriterOptions::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2551,9 +3471,9 @@
         break;
       case 5:
         if (ftype == ::apache::thrift::protocol::T_I32) {
-          int32_t ecast124;
-          xfer += iprot->readI32(ecast124);
-          this->durability = (Durability::type)ecast124;
+          int32_t ecast160;
+          xfer += iprot->readI32(ecast160);
+          this->durability = (Durability::type)ecast160;
           this->__isset.durability = true;
         } else {
           xfer += iprot->skip(ftype);
@@ -2573,6 +3493,7 @@
 
 uint32_t WriterOptions::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("WriterOptions");
 
   xfer += oprot->writeFieldBegin("maxMemory", ::apache::thrift::protocol::T_I64, 1);
@@ -2611,11 +3532,50 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* CompactionStrategyConfig::ascii_fingerprint = "F7C641917C22B35AE581CCD54910B00D";
-const uint8_t CompactionStrategyConfig::binary_fingerprint[16] = {0xF7,0xC6,0x41,0x91,0x7C,0x22,0xB3,0x5A,0xE5,0x81,0xCC,0xD5,0x49,0x10,0xB0,0x0D};
+WriterOptions::WriterOptions(const WriterOptions& other161) {
+  maxMemory = other161.maxMemory;
+  latencyMs = other161.latencyMs;
+  timeoutMs = other161.timeoutMs;
+  threads = other161.threads;
+  durability = other161.durability;
+  __isset = other161.__isset;
+}
+WriterOptions& WriterOptions::operator=(const WriterOptions& other162) {
+  maxMemory = other162.maxMemory;
+  latencyMs = other162.latencyMs;
+  timeoutMs = other162.timeoutMs;
+  threads = other162.threads;
+  durability = other162.durability;
+  __isset = other162.__isset;
+  return *this;
+}
+void WriterOptions::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "WriterOptions(";
+  out << "maxMemory=" << to_string(maxMemory);
+  out << ", " << "latencyMs=" << to_string(latencyMs);
+  out << ", " << "timeoutMs=" << to_string(timeoutMs);
+  out << ", " << "threads=" << to_string(threads);
+  out << ", " << "durability="; (__isset.durability ? (out << to_string(durability)) : (out << "<null>"));
+  out << ")";
+}
+
+
+CompactionStrategyConfig::~CompactionStrategyConfig() throw() {
+}
+
+
+void CompactionStrategyConfig::__set_className(const std::string& val) {
+  this->className = val;
+}
+
+void CompactionStrategyConfig::__set_options(const std::map<std::string, std::string> & val) {
+  this->options = val;
+}
 
 uint32_t CompactionStrategyConfig::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2646,17 +3606,17 @@
         if (ftype == ::apache::thrift::protocol::T_MAP) {
           {
             this->options.clear();
-            uint32_t _size125;
-            ::apache::thrift::protocol::TType _ktype126;
-            ::apache::thrift::protocol::TType _vtype127;
-            xfer += iprot->readMapBegin(_ktype126, _vtype127, _size125);
-            uint32_t _i129;
-            for (_i129 = 0; _i129 < _size125; ++_i129)
+            uint32_t _size163;
+            ::apache::thrift::protocol::TType _ktype164;
+            ::apache::thrift::protocol::TType _vtype165;
+            xfer += iprot->readMapBegin(_ktype164, _vtype165, _size163);
+            uint32_t _i167;
+            for (_i167 = 0; _i167 < _size163; ++_i167)
             {
-              std::string _key130;
-              xfer += iprot->readString(_key130);
-              std::string& _val131 = this->options[_key130];
-              xfer += iprot->readString(_val131);
+              std::string _key168;
+              xfer += iprot->readString(_key168);
+              std::string& _val169 = this->options[_key168];
+              xfer += iprot->readString(_val169);
             }
             xfer += iprot->readMapEnd();
           }
@@ -2679,6 +3639,7 @@
 
 uint32_t CompactionStrategyConfig::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("CompactionStrategyConfig");
 
   xfer += oprot->writeFieldBegin("className", ::apache::thrift::protocol::T_STRING, 1);
@@ -2688,11 +3649,11 @@
   xfer += oprot->writeFieldBegin("options", ::apache::thrift::protocol::T_MAP, 2);
   {
     xfer += oprot->writeMapBegin(::apache::thrift::protocol::T_STRING, ::apache::thrift::protocol::T_STRING, static_cast<uint32_t>(this->options.size()));
-    std::map<std::string, std::string> ::const_iterator _iter132;
-    for (_iter132 = this->options.begin(); _iter132 != this->options.end(); ++_iter132)
+    std::map<std::string, std::string> ::const_iterator _iter170;
+    for (_iter170 = this->options.begin(); _iter170 != this->options.end(); ++_iter170)
     {
-      xfer += oprot->writeString(_iter132->first);
-      xfer += oprot->writeString(_iter132->second);
+      xfer += oprot->writeString(_iter170->first);
+      xfer += oprot->writeString(_iter170->second);
     }
     xfer += oprot->writeMapEnd();
   }
@@ -2710,11 +3671,37 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* UnknownScanner::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t UnknownScanner::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+CompactionStrategyConfig::CompactionStrategyConfig(const CompactionStrategyConfig& other171) {
+  className = other171.className;
+  options = other171.options;
+  __isset = other171.__isset;
+}
+CompactionStrategyConfig& CompactionStrategyConfig::operator=(const CompactionStrategyConfig& other172) {
+  className = other172.className;
+  options = other172.options;
+  __isset = other172.__isset;
+  return *this;
+}
+void CompactionStrategyConfig::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "CompactionStrategyConfig(";
+  out << "className=" << to_string(className);
+  out << ", " << "options=" << to_string(options);
+  out << ")";
+}
+
+
+UnknownScanner::~UnknownScanner() throw() {
+}
+
+
+void UnknownScanner::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t UnknownScanner::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2755,6 +3742,7 @@
 
 uint32_t UnknownScanner::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("UnknownScanner");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -2772,11 +3760,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* UnknownWriter::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t UnknownWriter::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+UnknownScanner::UnknownScanner(const UnknownScanner& other173) : TException() {
+  msg = other173.msg;
+  __isset = other173.__isset;
+}
+UnknownScanner& UnknownScanner::operator=(const UnknownScanner& other174) {
+  msg = other174.msg;
+  __isset = other174.__isset;
+  return *this;
+}
+void UnknownScanner::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "UnknownScanner(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* UnknownScanner::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: UnknownScanner";
+  }
+}
+
+
+UnknownWriter::~UnknownWriter() throw() {
+}
+
+
+void UnknownWriter::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t UnknownWriter::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2817,6 +3839,7 @@
 
 uint32_t UnknownWriter::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("UnknownWriter");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -2834,11 +3857,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* NoMoreEntriesException::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t NoMoreEntriesException::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+UnknownWriter::UnknownWriter(const UnknownWriter& other175) : TException() {
+  msg = other175.msg;
+  __isset = other175.__isset;
+}
+UnknownWriter& UnknownWriter::operator=(const UnknownWriter& other176) {
+  msg = other176.msg;
+  __isset = other176.__isset;
+  return *this;
+}
+void UnknownWriter::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "UnknownWriter(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* UnknownWriter::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: UnknownWriter";
+  }
+}
+
+
+NoMoreEntriesException::~NoMoreEntriesException() throw() {
+}
+
+
+void NoMoreEntriesException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t NoMoreEntriesException::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2879,6 +3936,7 @@
 
 uint32_t NoMoreEntriesException::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("NoMoreEntriesException");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -2896,11 +3954,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* AccumuloException::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t AccumuloException::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+NoMoreEntriesException::NoMoreEntriesException(const NoMoreEntriesException& other177) : TException() {
+  msg = other177.msg;
+  __isset = other177.__isset;
+}
+NoMoreEntriesException& NoMoreEntriesException::operator=(const NoMoreEntriesException& other178) {
+  msg = other178.msg;
+  __isset = other178.__isset;
+  return *this;
+}
+void NoMoreEntriesException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "NoMoreEntriesException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* NoMoreEntriesException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: NoMoreEntriesException";
+  }
+}
+
+
+AccumuloException::~AccumuloException() throw() {
+}
+
+
+void AccumuloException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t AccumuloException::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -2941,6 +4033,7 @@
 
 uint32_t AccumuloException::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloException");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -2958,11 +4051,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* AccumuloSecurityException::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t AccumuloSecurityException::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+AccumuloException::AccumuloException(const AccumuloException& other179) : TException() {
+  msg = other179.msg;
+  __isset = other179.__isset;
+}
+AccumuloException& AccumuloException::operator=(const AccumuloException& other180) {
+  msg = other180.msg;
+  __isset = other180.__isset;
+  return *this;
+}
+void AccumuloException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "AccumuloException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* AccumuloException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: AccumuloException";
+  }
+}
+
+
+AccumuloSecurityException::~AccumuloSecurityException() throw() {
+}
+
+
+void AccumuloSecurityException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t AccumuloSecurityException::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3003,6 +4130,7 @@
 
 uint32_t AccumuloSecurityException::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("AccumuloSecurityException");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -3020,11 +4148,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* TableNotFoundException::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t TableNotFoundException::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+AccumuloSecurityException::AccumuloSecurityException(const AccumuloSecurityException& other181) : TException() {
+  msg = other181.msg;
+  __isset = other181.__isset;
+}
+AccumuloSecurityException& AccumuloSecurityException::operator=(const AccumuloSecurityException& other182) {
+  msg = other182.msg;
+  __isset = other182.__isset;
+  return *this;
+}
+void AccumuloSecurityException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "AccumuloSecurityException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* AccumuloSecurityException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: AccumuloSecurityException";
+  }
+}
+
+
+TableNotFoundException::~TableNotFoundException() throw() {
+}
+
+
+void TableNotFoundException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t TableNotFoundException::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3065,6 +4227,7 @@
 
 uint32_t TableNotFoundException::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("TableNotFoundException");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -3082,11 +4245,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* TableExistsException::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t TableExistsException::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+TableNotFoundException::TableNotFoundException(const TableNotFoundException& other183) : TException() {
+  msg = other183.msg;
+  __isset = other183.__isset;
+}
+TableNotFoundException& TableNotFoundException::operator=(const TableNotFoundException& other184) {
+  msg = other184.msg;
+  __isset = other184.__isset;
+  return *this;
+}
+void TableNotFoundException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "TableNotFoundException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* TableNotFoundException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: TableNotFoundException";
+  }
+}
+
+
+TableExistsException::~TableExistsException() throw() {
+}
+
+
+void TableExistsException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t TableExistsException::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3127,6 +4324,7 @@
 
 uint32_t TableExistsException::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("TableExistsException");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -3144,11 +4342,45 @@
   swap(a.__isset, b.__isset);
 }
 
-const char* MutationsRejectedException::ascii_fingerprint = "EFB929595D312AC8F305D5A794CFEDA1";
-const uint8_t MutationsRejectedException::binary_fingerprint[16] = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
+TableExistsException::TableExistsException(const TableExistsException& other185) : TException() {
+  msg = other185.msg;
+  __isset = other185.__isset;
+}
+TableExistsException& TableExistsException::operator=(const TableExistsException& other186) {
+  msg = other186.msg;
+  __isset = other186.__isset;
+  return *this;
+}
+void TableExistsException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "TableExistsException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* TableExistsException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: TableExistsException";
+  }
+}
+
+
+MutationsRejectedException::~MutationsRejectedException() throw() {
+}
+
+
+void MutationsRejectedException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
 
 uint32_t MutationsRejectedException::read(::apache::thrift::protocol::TProtocol* iprot) {
 
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
   uint32_t xfer = 0;
   std::string fname;
   ::apache::thrift::protocol::TType ftype;
@@ -3189,6 +4421,7 @@
 
 uint32_t MutationsRejectedException::write(::apache::thrift::protocol::TProtocol* oprot) const {
   uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
   xfer += oprot->writeStructBegin("MutationsRejectedException");
 
   xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
@@ -3206,4 +4439,322 @@
   swap(a.__isset, b.__isset);
 }
 
+MutationsRejectedException::MutationsRejectedException(const MutationsRejectedException& other187) : TException() {
+  msg = other187.msg;
+  __isset = other187.__isset;
+}
+MutationsRejectedException& MutationsRejectedException::operator=(const MutationsRejectedException& other188) {
+  msg = other188.msg;
+  __isset = other188.__isset;
+  return *this;
+}
+void MutationsRejectedException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "MutationsRejectedException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* MutationsRejectedException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: MutationsRejectedException";
+  }
+}
+
+
+NamespaceExistsException::~NamespaceExistsException() throw() {
+}
+
+
+void NamespaceExistsException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
+
+uint32_t NamespaceExistsException::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->msg);
+          this->__isset.msg = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t NamespaceExistsException::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("NamespaceExistsException");
+
+  xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeString(this->msg);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+void swap(NamespaceExistsException &a, NamespaceExistsException &b) {
+  using ::std::swap;
+  swap(a.msg, b.msg);
+  swap(a.__isset, b.__isset);
+}
+
+NamespaceExistsException::NamespaceExistsException(const NamespaceExistsException& other189) : TException() {
+  msg = other189.msg;
+  __isset = other189.__isset;
+}
+NamespaceExistsException& NamespaceExistsException::operator=(const NamespaceExistsException& other190) {
+  msg = other190.msg;
+  __isset = other190.__isset;
+  return *this;
+}
+void NamespaceExistsException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "NamespaceExistsException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* NamespaceExistsException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: NamespaceExistsException";
+  }
+}
+
+
+NamespaceNotFoundException::~NamespaceNotFoundException() throw() {
+}
+
+
+void NamespaceNotFoundException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
+
+uint32_t NamespaceNotFoundException::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->msg);
+          this->__isset.msg = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t NamespaceNotFoundException::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("NamespaceNotFoundException");
+
+  xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeString(this->msg);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+void swap(NamespaceNotFoundException &a, NamespaceNotFoundException &b) {
+  using ::std::swap;
+  swap(a.msg, b.msg);
+  swap(a.__isset, b.__isset);
+}
+
+NamespaceNotFoundException::NamespaceNotFoundException(const NamespaceNotFoundException& other191) : TException() {
+  msg = other191.msg;
+  __isset = other191.__isset;
+}
+NamespaceNotFoundException& NamespaceNotFoundException::operator=(const NamespaceNotFoundException& other192) {
+  msg = other192.msg;
+  __isset = other192.__isset;
+  return *this;
+}
+void NamespaceNotFoundException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "NamespaceNotFoundException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* NamespaceNotFoundException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: NamespaceNotFoundException";
+  }
+}
+
+
+NamespaceNotEmptyException::~NamespaceNotEmptyException() throw() {
+}
+
+
+void NamespaceNotEmptyException::__set_msg(const std::string& val) {
+  this->msg = val;
+}
+
+uint32_t NamespaceNotEmptyException::read(::apache::thrift::protocol::TProtocol* iprot) {
+
+  apache::thrift::protocol::TInputRecursionTracker tracker(*iprot);
+  uint32_t xfer = 0;
+  std::string fname;
+  ::apache::thrift::protocol::TType ftype;
+  int16_t fid;
+
+  xfer += iprot->readStructBegin(fname);
+
+  using ::apache::thrift::protocol::TProtocolException;
+
+
+  while (true)
+  {
+    xfer += iprot->readFieldBegin(fname, ftype, fid);
+    if (ftype == ::apache::thrift::protocol::T_STOP) {
+      break;
+    }
+    switch (fid)
+    {
+      case 1:
+        if (ftype == ::apache::thrift::protocol::T_STRING) {
+          xfer += iprot->readString(this->msg);
+          this->__isset.msg = true;
+        } else {
+          xfer += iprot->skip(ftype);
+        }
+        break;
+      default:
+        xfer += iprot->skip(ftype);
+        break;
+    }
+    xfer += iprot->readFieldEnd();
+  }
+
+  xfer += iprot->readStructEnd();
+
+  return xfer;
+}
+
+uint32_t NamespaceNotEmptyException::write(::apache::thrift::protocol::TProtocol* oprot) const {
+  uint32_t xfer = 0;
+  apache::thrift::protocol::TOutputRecursionTracker tracker(*oprot);
+  xfer += oprot->writeStructBegin("NamespaceNotEmptyException");
+
+  xfer += oprot->writeFieldBegin("msg", ::apache::thrift::protocol::T_STRING, 1);
+  xfer += oprot->writeString(this->msg);
+  xfer += oprot->writeFieldEnd();
+
+  xfer += oprot->writeFieldStop();
+  xfer += oprot->writeStructEnd();
+  return xfer;
+}
+
+void swap(NamespaceNotEmptyException &a, NamespaceNotEmptyException &b) {
+  using ::std::swap;
+  swap(a.msg, b.msg);
+  swap(a.__isset, b.__isset);
+}
+
+NamespaceNotEmptyException::NamespaceNotEmptyException(const NamespaceNotEmptyException& other193) : TException() {
+  msg = other193.msg;
+  __isset = other193.__isset;
+}
+NamespaceNotEmptyException& NamespaceNotEmptyException::operator=(const NamespaceNotEmptyException& other194) {
+  msg = other194.msg;
+  __isset = other194.__isset;
+  return *this;
+}
+void NamespaceNotEmptyException::printTo(std::ostream& out) const {
+  using ::apache::thrift::to_string;
+  out << "NamespaceNotEmptyException(";
+  out << "msg=" << to_string(msg);
+  out << ")";
+}
+
+const char* NamespaceNotEmptyException::what() const throw() {
+  try {
+    std::stringstream ss;
+    ss << "TException - service has thrown: " << *this;
+    this->thriftTExceptionMessageHolder_ = ss.str();
+    return this->thriftTExceptionMessageHolder_.c_str();
+  } catch (const std::exception&) {
+    return "TException - service has thrown: NamespaceNotEmptyException";
+  }
+}
+
 } // namespace
diff --git a/proxy/src/main/cpp/proxy_types.h b/proxy/src/main/cpp/proxy_types.h
index 569de88..e5daf2e 100644
--- a/proxy/src/main/cpp/proxy_types.h
+++ b/proxy/src/main/cpp/proxy_types.h
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -23,6 +23,8 @@
 #ifndef proxy_TYPES_H
 #define proxy_TYPES_H
 
+#include <iosfwd>
+
 #include <thrift/Thrift.h>
 #include <thrift/TApplicationException.h>
 #include <thrift/protocol/TProtocol.h>
@@ -74,6 +76,22 @@
 
 extern const std::map<int, const char*> _SystemPermission_VALUES_TO_NAMES;
 
+struct NamespacePermission {
+  enum type {
+    READ = 0,
+    WRITE = 1,
+    ALTER_NAMESPACE = 2,
+    GRANT = 3,
+    ALTER_TABLE = 4,
+    CREATE_TABLE = 5,
+    DROP_TABLE = 6,
+    BULK_IMPORT = 7,
+    DROP_NAMESPACE = 8
+  };
+};
+
+extern const std::map<int, const char*> _NamespacePermission_VALUES_TO_NAMES;
+
 struct ScanType {
   enum type {
     SINGLE = 0,
@@ -159,26 +177,86 @@
 
 extern const std::map<int, const char*> _TimeType_VALUES_TO_NAMES;
 
+class Key;
+
+class ColumnUpdate;
+
+class DiskUsage;
+
+class KeyValue;
+
+class ScanResult;
+
+class Range;
+
+class ScanColumn;
+
+class IteratorSetting;
+
+class ScanOptions;
+
+class BatchScanOptions;
+
+class KeyValueAndPeek;
+
+class KeyExtent;
+
+class Column;
+
+class Condition;
+
+class ConditionalUpdates;
+
+class ConditionalWriterOptions;
+
+class ActiveScan;
+
+class ActiveCompaction;
+
+class WriterOptions;
+
+class CompactionStrategyConfig;
+
+class UnknownScanner;
+
+class UnknownWriter;
+
+class NoMoreEntriesException;
+
+class AccumuloException;
+
+class AccumuloSecurityException;
+
+class TableNotFoundException;
+
+class TableExistsException;
+
+class MutationsRejectedException;
+
+class NamespaceExistsException;
+
+class NamespaceNotFoundException;
+
+class NamespaceNotEmptyException;
+
 typedef struct _Key__isset {
   _Key__isset() : row(false), colFamily(false), colQualifier(false), colVisibility(false), timestamp(true) {}
-  bool row;
-  bool colFamily;
-  bool colQualifier;
-  bool colVisibility;
-  bool timestamp;
+  bool row :1;
+  bool colFamily :1;
+  bool colQualifier :1;
+  bool colVisibility :1;
+  bool timestamp :1;
 } _Key__isset;
 
 class Key {
  public:
 
-  static const char* ascii_fingerprint; // = "91151A432E03F5E8564877B5194B48E2";
-  static const uint8_t binary_fingerprint[16]; // = {0x91,0x15,0x1A,0x43,0x2E,0x03,0xF5,0xE8,0x56,0x48,0x77,0xB5,0x19,0x4B,0x48,0xE2};
-
+  Key(const Key&);
+  Key& operator=(const Key&);
   Key() : row(), colFamily(), colQualifier(), colVisibility(), timestamp(9223372036854775807LL) {
   }
 
-  virtual ~Key() throw() {}
-
+  virtual ~Key() throw();
   std::string row;
   std::string colFamily;
   std::string colQualifier;
@@ -187,26 +265,15 @@
 
   _Key__isset __isset;
 
-  void __set_row(const std::string& val) {
-    row = val;
-  }
+  void __set_row(const std::string& val);
 
-  void __set_colFamily(const std::string& val) {
-    colFamily = val;
-  }
+  void __set_colFamily(const std::string& val);
 
-  void __set_colQualifier(const std::string& val) {
-    colQualifier = val;
-  }
+  void __set_colQualifier(const std::string& val);
 
-  void __set_colVisibility(const std::string& val) {
-    colVisibility = val;
-  }
+  void __set_colVisibility(const std::string& val);
 
-  void __set_timestamp(const int64_t val) {
-    timestamp = val;
-    __isset.timestamp = true;
-  }
+  void __set_timestamp(const int64_t val);
 
   bool operator == (const Key & rhs) const
   {
@@ -233,31 +300,36 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(Key &a, Key &b);
 
+inline std::ostream& operator<<(std::ostream& out, const Key& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ColumnUpdate__isset {
   _ColumnUpdate__isset() : colFamily(false), colQualifier(false), colVisibility(false), timestamp(false), value(false), deleteCell(false) {}
-  bool colFamily;
-  bool colQualifier;
-  bool colVisibility;
-  bool timestamp;
-  bool value;
-  bool deleteCell;
+  bool colFamily :1;
+  bool colQualifier :1;
+  bool colVisibility :1;
+  bool timestamp :1;
+  bool value :1;
+  bool deleteCell :1;
 } _ColumnUpdate__isset;
 
 class ColumnUpdate {
  public:
 
-  static const char* ascii_fingerprint; // = "65CC1863F7DDC1DE75B9EAF9E2DC0D1F";
-  static const uint8_t binary_fingerprint[16]; // = {0x65,0xCC,0x18,0x63,0xF7,0xDD,0xC1,0xDE,0x75,0xB9,0xEA,0xF9,0xE2,0xDC,0x0D,0x1F};
-
+  ColumnUpdate(const ColumnUpdate&);
+  ColumnUpdate& operator=(const ColumnUpdate&);
   ColumnUpdate() : colFamily(), colQualifier(), colVisibility(), timestamp(0), value(), deleteCell(0) {
   }
 
-  virtual ~ColumnUpdate() throw() {}
-
+  virtual ~ColumnUpdate() throw();
   std::string colFamily;
   std::string colQualifier;
   std::string colVisibility;
@@ -267,33 +339,17 @@
 
   _ColumnUpdate__isset __isset;
 
-  void __set_colFamily(const std::string& val) {
-    colFamily = val;
-  }
+  void __set_colFamily(const std::string& val);
 
-  void __set_colQualifier(const std::string& val) {
-    colQualifier = val;
-  }
+  void __set_colQualifier(const std::string& val);
 
-  void __set_colVisibility(const std::string& val) {
-    colVisibility = val;
-    __isset.colVisibility = true;
-  }
+  void __set_colVisibility(const std::string& val);
 
-  void __set_timestamp(const int64_t val) {
-    timestamp = val;
-    __isset.timestamp = true;
-  }
+  void __set_timestamp(const int64_t val);
 
-  void __set_value(const std::string& val) {
-    value = val;
-    __isset.value = true;
-  }
+  void __set_value(const std::string& val);
 
-  void __set_deleteCell(const bool val) {
-    deleteCell = val;
-    __isset.deleteCell = true;
-  }
+  void __set_deleteCell(const bool val);
 
   bool operator == (const ColumnUpdate & rhs) const
   {
@@ -328,39 +384,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ColumnUpdate &a, ColumnUpdate &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ColumnUpdate& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _DiskUsage__isset {
   _DiskUsage__isset() : tables(false), usage(false) {}
-  bool tables;
-  bool usage;
+  bool tables :1;
+  bool usage :1;
 } _DiskUsage__isset;
 
 class DiskUsage {
  public:
 
-  static const char* ascii_fingerprint; // = "D26F4F5E2867D41CF7E0391263932D6B";
-  static const uint8_t binary_fingerprint[16]; // = {0xD2,0x6F,0x4F,0x5E,0x28,0x67,0xD4,0x1C,0xF7,0xE0,0x39,0x12,0x63,0x93,0x2D,0x6B};
-
+  DiskUsage(const DiskUsage&);
+  DiskUsage& operator=(const DiskUsage&);
   DiskUsage() : usage(0) {
   }
 
-  virtual ~DiskUsage() throw() {}
-
+  virtual ~DiskUsage() throw();
   std::vector<std::string>  tables;
   int64_t usage;
 
   _DiskUsage__isset __isset;
 
-  void __set_tables(const std::vector<std::string> & val) {
-    tables = val;
-  }
+  void __set_tables(const std::vector<std::string> & val);
 
-  void __set_usage(const int64_t val) {
-    usage = val;
-  }
+  void __set_usage(const int64_t val);
 
   bool operator == (const DiskUsage & rhs) const
   {
@@ -379,39 +436,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(DiskUsage &a, DiskUsage &b);
 
+inline std::ostream& operator<<(std::ostream& out, const DiskUsage& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _KeyValue__isset {
   _KeyValue__isset() : key(false), value(false) {}
-  bool key;
-  bool value;
+  bool key :1;
+  bool value :1;
 } _KeyValue__isset;
 
 class KeyValue {
  public:
 
-  static const char* ascii_fingerprint; // = "0D0CA44F233F983E00E94228C31ABBD4";
-  static const uint8_t binary_fingerprint[16]; // = {0x0D,0x0C,0xA4,0x4F,0x23,0x3F,0x98,0x3E,0x00,0xE9,0x42,0x28,0xC3,0x1A,0xBB,0xD4};
-
+  KeyValue(const KeyValue&);
+  KeyValue& operator=(const KeyValue&);
   KeyValue() : value() {
   }
 
-  virtual ~KeyValue() throw() {}
-
+  virtual ~KeyValue() throw();
   Key key;
   std::string value;
 
   _KeyValue__isset __isset;
 
-  void __set_key(const Key& val) {
-    key = val;
-  }
+  void __set_key(const Key& val);
 
-  void __set_value(const std::string& val) {
-    value = val;
-  }
+  void __set_value(const std::string& val);
 
   bool operator == (const KeyValue & rhs) const
   {
@@ -430,39 +488,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(KeyValue &a, KeyValue &b);
 
+inline std::ostream& operator<<(std::ostream& out, const KeyValue& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ScanResult__isset {
   _ScanResult__isset() : results(false), more(false) {}
-  bool results;
-  bool more;
+  bool results :1;
+  bool more :1;
 } _ScanResult__isset;
 
 class ScanResult {
  public:
 
-  static const char* ascii_fingerprint; // = "684A3FCA76EA202FE071A17F8B510E7A";
-  static const uint8_t binary_fingerprint[16]; // = {0x68,0x4A,0x3F,0xCA,0x76,0xEA,0x20,0x2F,0xE0,0x71,0xA1,0x7F,0x8B,0x51,0x0E,0x7A};
-
+  ScanResult(const ScanResult&);
+  ScanResult& operator=(const ScanResult&);
   ScanResult() : more(0) {
   }
 
-  virtual ~ScanResult() throw() {}
-
+  virtual ~ScanResult() throw();
   std::vector<KeyValue>  results;
   bool more;
 
   _ScanResult__isset __isset;
 
-  void __set_results(const std::vector<KeyValue> & val) {
-    results = val;
-  }
+  void __set_results(const std::vector<KeyValue> & val);
 
-  void __set_more(const bool val) {
-    more = val;
-  }
+  void __set_more(const bool val);
 
   bool operator == (const ScanResult & rhs) const
   {
@@ -481,29 +540,34 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ScanResult &a, ScanResult &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ScanResult& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _Range__isset {
   _Range__isset() : start(false), startInclusive(false), stop(false), stopInclusive(false) {}
-  bool start;
-  bool startInclusive;
-  bool stop;
-  bool stopInclusive;
+  bool start :1;
+  bool startInclusive :1;
+  bool stop :1;
+  bool stopInclusive :1;
 } _Range__isset;
 
 class Range {
  public:
 
-  static const char* ascii_fingerprint; // = "84C5BA8DB718E60BFBF3F83867647B45";
-  static const uint8_t binary_fingerprint[16]; // = {0x84,0xC5,0xBA,0x8D,0xB7,0x18,0xE6,0x0B,0xFB,0xF3,0xF8,0x38,0x67,0x64,0x7B,0x45};
-
+  Range(const Range&);
+  Range& operator=(const Range&);
   Range() : startInclusive(0), stopInclusive(0) {
   }
 
-  virtual ~Range() throw() {}
-
+  virtual ~Range() throw();
   Key start;
   bool startInclusive;
   Key stop;
@@ -511,21 +575,13 @@
 
   _Range__isset __isset;
 
-  void __set_start(const Key& val) {
-    start = val;
-  }
+  void __set_start(const Key& val);
 
-  void __set_startInclusive(const bool val) {
-    startInclusive = val;
-  }
+  void __set_startInclusive(const bool val);
 
-  void __set_stop(const Key& val) {
-    stop = val;
-  }
+  void __set_stop(const Key& val);
 
-  void __set_stopInclusive(const bool val) {
-    stopInclusive = val;
-  }
+  void __set_stopInclusive(const bool val);
 
   bool operator == (const Range & rhs) const
   {
@@ -548,40 +604,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(Range &a, Range &b);
 
+inline std::ostream& operator<<(std::ostream& out, const Range& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ScanColumn__isset {
   _ScanColumn__isset() : colFamily(false), colQualifier(false) {}
-  bool colFamily;
-  bool colQualifier;
+  bool colFamily :1;
+  bool colQualifier :1;
 } _ScanColumn__isset;
 
 class ScanColumn {
  public:
 
-  static const char* ascii_fingerprint; // = "5B708A954C550ECA9C1A49D3C5CAFAB9";
-  static const uint8_t binary_fingerprint[16]; // = {0x5B,0x70,0x8A,0x95,0x4C,0x55,0x0E,0xCA,0x9C,0x1A,0x49,0xD3,0xC5,0xCA,0xFA,0xB9};
-
+  ScanColumn(const ScanColumn&);
+  ScanColumn& operator=(const ScanColumn&);
   ScanColumn() : colFamily(), colQualifier() {
   }
 
-  virtual ~ScanColumn() throw() {}
-
+  virtual ~ScanColumn() throw();
   std::string colFamily;
   std::string colQualifier;
 
   _ScanColumn__isset __isset;
 
-  void __set_colFamily(const std::string& val) {
-    colFamily = val;
-  }
+  void __set_colFamily(const std::string& val);
 
-  void __set_colQualifier(const std::string& val) {
-    colQualifier = val;
-    __isset.colQualifier = true;
-  }
+  void __set_colQualifier(const std::string& val);
 
   bool operator == (const ScanColumn & rhs) const
   {
@@ -602,29 +658,34 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ScanColumn &a, ScanColumn &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ScanColumn& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _IteratorSetting__isset {
   _IteratorSetting__isset() : priority(false), name(false), iteratorClass(false), properties(false) {}
-  bool priority;
-  bool name;
-  bool iteratorClass;
-  bool properties;
+  bool priority :1;
+  bool name :1;
+  bool iteratorClass :1;
+  bool properties :1;
 } _IteratorSetting__isset;
 
 class IteratorSetting {
  public:
 
-  static const char* ascii_fingerprint; // = "985C857916964E43205EAC92A157CB4E";
-  static const uint8_t binary_fingerprint[16]; // = {0x98,0x5C,0x85,0x79,0x16,0x96,0x4E,0x43,0x20,0x5E,0xAC,0x92,0xA1,0x57,0xCB,0x4E};
-
+  IteratorSetting(const IteratorSetting&);
+  IteratorSetting& operator=(const IteratorSetting&);
   IteratorSetting() : priority(0), name(), iteratorClass() {
   }
 
-  virtual ~IteratorSetting() throw() {}
-
+  virtual ~IteratorSetting() throw();
   int32_t priority;
   std::string name;
   std::string iteratorClass;
@@ -632,21 +693,13 @@
 
   _IteratorSetting__isset __isset;
 
-  void __set_priority(const int32_t val) {
-    priority = val;
-  }
+  void __set_priority(const int32_t val);
 
-  void __set_name(const std::string& val) {
-    name = val;
-  }
+  void __set_name(const std::string& val);
 
-  void __set_iteratorClass(const std::string& val) {
-    iteratorClass = val;
-  }
+  void __set_iteratorClass(const std::string& val);
 
-  void __set_properties(const std::map<std::string, std::string> & val) {
-    properties = val;
-  }
+  void __set_properties(const std::map<std::string, std::string> & val);
 
   bool operator == (const IteratorSetting & rhs) const
   {
@@ -669,30 +722,35 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(IteratorSetting &a, IteratorSetting &b);
 
+inline std::ostream& operator<<(std::ostream& out, const IteratorSetting& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ScanOptions__isset {
   _ScanOptions__isset() : authorizations(false), range(false), columns(false), iterators(false), bufferSize(false) {}
-  bool authorizations;
-  bool range;
-  bool columns;
-  bool iterators;
-  bool bufferSize;
+  bool authorizations :1;
+  bool range :1;
+  bool columns :1;
+  bool iterators :1;
+  bool bufferSize :1;
 } _ScanOptions__isset;
 
 class ScanOptions {
  public:
 
-  static const char* ascii_fingerprint; // = "3D87D0CD05FA62E15880C4D2C595907C";
-  static const uint8_t binary_fingerprint[16]; // = {0x3D,0x87,0xD0,0xCD,0x05,0xFA,0x62,0xE1,0x58,0x80,0xC4,0xD2,0xC5,0x95,0x90,0x7C};
-
+  ScanOptions(const ScanOptions&);
+  ScanOptions& operator=(const ScanOptions&);
   ScanOptions() : bufferSize(0) {
   }
 
-  virtual ~ScanOptions() throw() {}
-
+  virtual ~ScanOptions() throw();
   std::set<std::string>  authorizations;
   Range range;
   std::vector<ScanColumn>  columns;
@@ -701,30 +759,15 @@
 
   _ScanOptions__isset __isset;
 
-  void __set_authorizations(const std::set<std::string> & val) {
-    authorizations = val;
-    __isset.authorizations = true;
-  }
+  void __set_authorizations(const std::set<std::string> & val);
 
-  void __set_range(const Range& val) {
-    range = val;
-    __isset.range = true;
-  }
+  void __set_range(const Range& val);
 
-  void __set_columns(const std::vector<ScanColumn> & val) {
-    columns = val;
-    __isset.columns = true;
-  }
+  void __set_columns(const std::vector<ScanColumn> & val);
 
-  void __set_iterators(const std::vector<IteratorSetting> & val) {
-    iterators = val;
-    __isset.iterators = true;
-  }
+  void __set_iterators(const std::vector<IteratorSetting> & val);
 
-  void __set_bufferSize(const int32_t val) {
-    bufferSize = val;
-    __isset.bufferSize = true;
-  }
+  void __set_bufferSize(const int32_t val);
 
   bool operator == (const ScanOptions & rhs) const
   {
@@ -759,30 +802,35 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ScanOptions &a, ScanOptions &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ScanOptions& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _BatchScanOptions__isset {
   _BatchScanOptions__isset() : authorizations(false), ranges(false), columns(false), iterators(false), threads(false) {}
-  bool authorizations;
-  bool ranges;
-  bool columns;
-  bool iterators;
-  bool threads;
+  bool authorizations :1;
+  bool ranges :1;
+  bool columns :1;
+  bool iterators :1;
+  bool threads :1;
 } _BatchScanOptions__isset;
 
 class BatchScanOptions {
  public:
 
-  static const char* ascii_fingerprint; // = "6ADFA1FBE31B1220D2C103284E002308";
-  static const uint8_t binary_fingerprint[16]; // = {0x6A,0xDF,0xA1,0xFB,0xE3,0x1B,0x12,0x20,0xD2,0xC1,0x03,0x28,0x4E,0x00,0x23,0x08};
-
+  BatchScanOptions(const BatchScanOptions&);
+  BatchScanOptions& operator=(const BatchScanOptions&);
   BatchScanOptions() : threads(0) {
   }
 
-  virtual ~BatchScanOptions() throw() {}
-
+  virtual ~BatchScanOptions() throw();
   std::set<std::string>  authorizations;
   std::vector<Range>  ranges;
   std::vector<ScanColumn>  columns;
@@ -791,30 +839,15 @@
 
   _BatchScanOptions__isset __isset;
 
-  void __set_authorizations(const std::set<std::string> & val) {
-    authorizations = val;
-    __isset.authorizations = true;
-  }
+  void __set_authorizations(const std::set<std::string> & val);
 
-  void __set_ranges(const std::vector<Range> & val) {
-    ranges = val;
-    __isset.ranges = true;
-  }
+  void __set_ranges(const std::vector<Range> & val);
 
-  void __set_columns(const std::vector<ScanColumn> & val) {
-    columns = val;
-    __isset.columns = true;
-  }
+  void __set_columns(const std::vector<ScanColumn> & val);
 
-  void __set_iterators(const std::vector<IteratorSetting> & val) {
-    iterators = val;
-    __isset.iterators = true;
-  }
+  void __set_iterators(const std::vector<IteratorSetting> & val);
 
-  void __set_threads(const int32_t val) {
-    threads = val;
-    __isset.threads = true;
-  }
+  void __set_threads(const int32_t val);
 
   bool operator == (const BatchScanOptions & rhs) const
   {
@@ -849,39 +882,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(BatchScanOptions &a, BatchScanOptions &b);
 
+inline std::ostream& operator<<(std::ostream& out, const BatchScanOptions& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _KeyValueAndPeek__isset {
   _KeyValueAndPeek__isset() : keyValue(false), hasNext(false) {}
-  bool keyValue;
-  bool hasNext;
+  bool keyValue :1;
+  bool hasNext :1;
 } _KeyValueAndPeek__isset;
 
 class KeyValueAndPeek {
  public:
 
-  static const char* ascii_fingerprint; // = "CBBC6AB9C7EA5E5E748C13F970862FAB";
-  static const uint8_t binary_fingerprint[16]; // = {0xCB,0xBC,0x6A,0xB9,0xC7,0xEA,0x5E,0x5E,0x74,0x8C,0x13,0xF9,0x70,0x86,0x2F,0xAB};
-
+  KeyValueAndPeek(const KeyValueAndPeek&);
+  KeyValueAndPeek& operator=(const KeyValueAndPeek&);
   KeyValueAndPeek() : hasNext(0) {
   }
 
-  virtual ~KeyValueAndPeek() throw() {}
-
+  virtual ~KeyValueAndPeek() throw();
   KeyValue keyValue;
   bool hasNext;
 
   _KeyValueAndPeek__isset __isset;
 
-  void __set_keyValue(const KeyValue& val) {
-    keyValue = val;
-  }
+  void __set_keyValue(const KeyValue& val);
 
-  void __set_hasNext(const bool val) {
-    hasNext = val;
-  }
+  void __set_hasNext(const bool val);
 
   bool operator == (const KeyValueAndPeek & rhs) const
   {
@@ -900,45 +934,44 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(KeyValueAndPeek &a, KeyValueAndPeek &b);
 
+inline std::ostream& operator<<(std::ostream& out, const KeyValueAndPeek& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _KeyExtent__isset {
   _KeyExtent__isset() : tableId(false), endRow(false), prevEndRow(false) {}
-  bool tableId;
-  bool endRow;
-  bool prevEndRow;
+  bool tableId :1;
+  bool endRow :1;
+  bool prevEndRow :1;
 } _KeyExtent__isset;
 
 class KeyExtent {
  public:
 
-  static const char* ascii_fingerprint; // = "AB879940BD15B6B25691265F7384B271";
-  static const uint8_t binary_fingerprint[16]; // = {0xAB,0x87,0x99,0x40,0xBD,0x15,0xB6,0xB2,0x56,0x91,0x26,0x5F,0x73,0x84,0xB2,0x71};
-
+  KeyExtent(const KeyExtent&);
+  KeyExtent& operator=(const KeyExtent&);
   KeyExtent() : tableId(), endRow(), prevEndRow() {
   }
 
-  virtual ~KeyExtent() throw() {}
-
+  virtual ~KeyExtent() throw();
   std::string tableId;
   std::string endRow;
   std::string prevEndRow;
 
   _KeyExtent__isset __isset;
 
-  void __set_tableId(const std::string& val) {
-    tableId = val;
-  }
+  void __set_tableId(const std::string& val);
 
-  void __set_endRow(const std::string& val) {
-    endRow = val;
-  }
+  void __set_endRow(const std::string& val);
 
-  void __set_prevEndRow(const std::string& val) {
-    prevEndRow = val;
-  }
+  void __set_prevEndRow(const std::string& val);
 
   bool operator == (const KeyExtent & rhs) const
   {
@@ -959,45 +992,44 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(KeyExtent &a, KeyExtent &b);
 
+inline std::ostream& operator<<(std::ostream& out, const KeyExtent& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _Column__isset {
   _Column__isset() : colFamily(false), colQualifier(false), colVisibility(false) {}
-  bool colFamily;
-  bool colQualifier;
-  bool colVisibility;
+  bool colFamily :1;
+  bool colQualifier :1;
+  bool colVisibility :1;
 } _Column__isset;
 
 class Column {
  public:
 
-  static const char* ascii_fingerprint; // = "AB879940BD15B6B25691265F7384B271";
-  static const uint8_t binary_fingerprint[16]; // = {0xAB,0x87,0x99,0x40,0xBD,0x15,0xB6,0xB2,0x56,0x91,0x26,0x5F,0x73,0x84,0xB2,0x71};
-
+  Column(const Column&);
+  Column& operator=(const Column&);
   Column() : colFamily(), colQualifier(), colVisibility() {
   }
 
-  virtual ~Column() throw() {}
-
+  virtual ~Column() throw();
   std::string colFamily;
   std::string colQualifier;
   std::string colVisibility;
 
   _Column__isset __isset;
 
-  void __set_colFamily(const std::string& val) {
-    colFamily = val;
-  }
+  void __set_colFamily(const std::string& val);
 
-  void __set_colQualifier(const std::string& val) {
-    colQualifier = val;
-  }
+  void __set_colQualifier(const std::string& val);
 
-  void __set_colVisibility(const std::string& val) {
-    colVisibility = val;
-  }
+  void __set_colVisibility(const std::string& val);
 
   bool operator == (const Column & rhs) const
   {
@@ -1018,29 +1050,34 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(Column &a, Column &b);
 
+inline std::ostream& operator<<(std::ostream& out, const Column& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _Condition__isset {
   _Condition__isset() : column(false), timestamp(false), value(false), iterators(false) {}
-  bool column;
-  bool timestamp;
-  bool value;
-  bool iterators;
+  bool column :1;
+  bool timestamp :1;
+  bool value :1;
+  bool iterators :1;
 } _Condition__isset;
 
 class Condition {
  public:
 
-  static const char* ascii_fingerprint; // = "C4022914C22D89E33B1A46A7D511C58F";
-  static const uint8_t binary_fingerprint[16]; // = {0xC4,0x02,0x29,0x14,0xC2,0x2D,0x89,0xE3,0x3B,0x1A,0x46,0xA7,0xD5,0x11,0xC5,0x8F};
-
+  Condition(const Condition&);
+  Condition& operator=(const Condition&);
   Condition() : timestamp(0), value() {
   }
 
-  virtual ~Condition() throw() {}
-
+  virtual ~Condition() throw();
   Column column;
   int64_t timestamp;
   std::string value;
@@ -1048,24 +1085,13 @@
 
   _Condition__isset __isset;
 
-  void __set_column(const Column& val) {
-    column = val;
-  }
+  void __set_column(const Column& val);
 
-  void __set_timestamp(const int64_t val) {
-    timestamp = val;
-    __isset.timestamp = true;
-  }
+  void __set_timestamp(const int64_t val);
 
-  void __set_value(const std::string& val) {
-    value = val;
-    __isset.value = true;
-  }
+  void __set_value(const std::string& val);
 
-  void __set_iterators(const std::vector<IteratorSetting> & val) {
-    iterators = val;
-    __isset.iterators = true;
-  }
+  void __set_iterators(const std::vector<IteratorSetting> & val);
 
   bool operator == (const Condition & rhs) const
   {
@@ -1094,39 +1120,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(Condition &a, Condition &b);
 
+inline std::ostream& operator<<(std::ostream& out, const Condition& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ConditionalUpdates__isset {
   _ConditionalUpdates__isset() : conditions(false), updates(false) {}
-  bool conditions;
-  bool updates;
+  bool conditions :1;
+  bool updates :1;
 } _ConditionalUpdates__isset;
 
 class ConditionalUpdates {
  public:
 
-  static const char* ascii_fingerprint; // = "1C1808872D1A8E04F114974ADD77F356";
-  static const uint8_t binary_fingerprint[16]; // = {0x1C,0x18,0x08,0x87,0x2D,0x1A,0x8E,0x04,0xF1,0x14,0x97,0x4A,0xDD,0x77,0xF3,0x56};
-
+  ConditionalUpdates(const ConditionalUpdates&);
+  ConditionalUpdates& operator=(const ConditionalUpdates&);
   ConditionalUpdates() {
   }
 
-  virtual ~ConditionalUpdates() throw() {}
-
+  virtual ~ConditionalUpdates() throw();
   std::vector<Condition>  conditions;
   std::vector<ColumnUpdate>  updates;
 
   _ConditionalUpdates__isset __isset;
 
-  void __set_conditions(const std::vector<Condition> & val) {
-    conditions = val;
-  }
+  void __set_conditions(const std::vector<Condition> & val);
 
-  void __set_updates(const std::vector<ColumnUpdate> & val) {
-    updates = val;
-  }
+  void __set_updates(const std::vector<ColumnUpdate> & val);
 
   bool operator == (const ConditionalUpdates & rhs) const
   {
@@ -1145,30 +1172,35 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ConditionalUpdates &a, ConditionalUpdates &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ConditionalUpdates& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ConditionalWriterOptions__isset {
   _ConditionalWriterOptions__isset() : maxMemory(false), timeoutMs(false), threads(false), authorizations(false), durability(false) {}
-  bool maxMemory;
-  bool timeoutMs;
-  bool threads;
-  bool authorizations;
-  bool durability;
+  bool maxMemory :1;
+  bool timeoutMs :1;
+  bool threads :1;
+  bool authorizations :1;
+  bool durability :1;
 } _ConditionalWriterOptions__isset;
 
 class ConditionalWriterOptions {
  public:
 
-  static const char* ascii_fingerprint; // = "C345C04E84A351638B6EACB741BD600E";
-  static const uint8_t binary_fingerprint[16]; // = {0xC3,0x45,0xC0,0x4E,0x84,0xA3,0x51,0x63,0x8B,0x6E,0xAC,0xB7,0x41,0xBD,0x60,0x0E};
-
+  ConditionalWriterOptions(const ConditionalWriterOptions&);
+  ConditionalWriterOptions& operator=(const ConditionalWriterOptions&);
   ConditionalWriterOptions() : maxMemory(0), timeoutMs(0), threads(0), durability((Durability::type)0) {
   }
 
-  virtual ~ConditionalWriterOptions() throw() {}
-
+  virtual ~ConditionalWriterOptions() throw();
   int64_t maxMemory;
   int64_t timeoutMs;
   int32_t threads;
@@ -1177,30 +1209,15 @@
 
   _ConditionalWriterOptions__isset __isset;
 
-  void __set_maxMemory(const int64_t val) {
-    maxMemory = val;
-    __isset.maxMemory = true;
-  }
+  void __set_maxMemory(const int64_t val);
 
-  void __set_timeoutMs(const int64_t val) {
-    timeoutMs = val;
-    __isset.timeoutMs = true;
-  }
+  void __set_timeoutMs(const int64_t val);
 
-  void __set_threads(const int32_t val) {
-    threads = val;
-    __isset.threads = true;
-  }
+  void __set_threads(const int32_t val);
 
-  void __set_authorizations(const std::set<std::string> & val) {
-    authorizations = val;
-    __isset.authorizations = true;
-  }
+  void __set_authorizations(const std::set<std::string> & val);
 
-  void __set_durability(const Durability::type val) {
-    durability = val;
-    __isset.durability = true;
-  }
+  void __set_durability(const Durability::type val);
 
   bool operator == (const ConditionalWriterOptions & rhs) const
   {
@@ -1235,36 +1252,41 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ConditionalWriterOptions &a, ConditionalWriterOptions &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ConditionalWriterOptions& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ActiveScan__isset {
   _ActiveScan__isset() : client(false), user(false), table(false), age(false), idleTime(false), type(false), state(false), extent(false), columns(false), iterators(false), authorizations(false) {}
-  bool client;
-  bool user;
-  bool table;
-  bool age;
-  bool idleTime;
-  bool type;
-  bool state;
-  bool extent;
-  bool columns;
-  bool iterators;
-  bool authorizations;
+  bool client :1;
+  bool user :1;
+  bool table :1;
+  bool age :1;
+  bool idleTime :1;
+  bool type :1;
+  bool state :1;
+  bool extent :1;
+  bool columns :1;
+  bool iterators :1;
+  bool authorizations :1;
 } _ActiveScan__isset;
 
 class ActiveScan {
  public:
 
-  static const char* ascii_fingerprint; // = "1B97541CB4E900A054266BBBEE61D004";
-  static const uint8_t binary_fingerprint[16]; // = {0x1B,0x97,0x54,0x1C,0xB4,0xE9,0x00,0xA0,0x54,0x26,0x6B,0xBB,0xEE,0x61,0xD0,0x04};
-
+  ActiveScan(const ActiveScan&);
+  ActiveScan& operator=(const ActiveScan&);
   ActiveScan() : client(), user(), table(), age(0), idleTime(0), type((ScanType::type)0), state((ScanState::type)0) {
   }
 
-  virtual ~ActiveScan() throw() {}
-
+  virtual ~ActiveScan() throw();
   std::string client;
   std::string user;
   std::string table;
@@ -1279,49 +1301,27 @@
 
   _ActiveScan__isset __isset;
 
-  void __set_client(const std::string& val) {
-    client = val;
-  }
+  void __set_client(const std::string& val);
 
-  void __set_user(const std::string& val) {
-    user = val;
-  }
+  void __set_user(const std::string& val);
 
-  void __set_table(const std::string& val) {
-    table = val;
-  }
+  void __set_table(const std::string& val);
 
-  void __set_age(const int64_t val) {
-    age = val;
-  }
+  void __set_age(const int64_t val);
 
-  void __set_idleTime(const int64_t val) {
-    idleTime = val;
-  }
+  void __set_idleTime(const int64_t val);
 
-  void __set_type(const ScanType::type val) {
-    type = val;
-  }
+  void __set_type(const ScanType::type val);
 
-  void __set_state(const ScanState::type val) {
-    state = val;
-  }
+  void __set_state(const ScanState::type val);
 
-  void __set_extent(const KeyExtent& val) {
-    extent = val;
-  }
+  void __set_extent(const KeyExtent& val);
 
-  void __set_columns(const std::vector<Column> & val) {
-    columns = val;
-  }
+  void __set_columns(const std::vector<Column> & val);
 
-  void __set_iterators(const std::vector<IteratorSetting> & val) {
-    iterators = val;
-  }
+  void __set_iterators(const std::vector<IteratorSetting> & val);
 
-  void __set_authorizations(const std::vector<std::string> & val) {
-    authorizations = val;
-  }
+  void __set_authorizations(const std::vector<std::string> & val);
 
   bool operator == (const ActiveScan & rhs) const
   {
@@ -1358,35 +1358,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ActiveScan &a, ActiveScan &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ActiveScan& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _ActiveCompaction__isset {
   _ActiveCompaction__isset() : extent(false), age(false), inputFiles(false), outputFile(false), type(false), reason(false), localityGroup(false), entriesRead(false), entriesWritten(false), iterators(false) {}
-  bool extent;
-  bool age;
-  bool inputFiles;
-  bool outputFile;
-  bool type;
-  bool reason;
-  bool localityGroup;
-  bool entriesRead;
-  bool entriesWritten;
-  bool iterators;
+  bool extent :1;
+  bool age :1;
+  bool inputFiles :1;
+  bool outputFile :1;
+  bool type :1;
+  bool reason :1;
+  bool localityGroup :1;
+  bool entriesRead :1;
+  bool entriesWritten :1;
+  bool iterators :1;
 } _ActiveCompaction__isset;
 
 class ActiveCompaction {
  public:
 
-  static const char* ascii_fingerprint; // = "2BB155CC901109464666B6C7E6A8C1A6";
-  static const uint8_t binary_fingerprint[16]; // = {0x2B,0xB1,0x55,0xCC,0x90,0x11,0x09,0x46,0x46,0x66,0xB6,0xC7,0xE6,0xA8,0xC1,0xA6};
-
+  ActiveCompaction(const ActiveCompaction&);
+  ActiveCompaction& operator=(const ActiveCompaction&);
   ActiveCompaction() : age(0), outputFile(), type((CompactionType::type)0), reason((CompactionReason::type)0), localityGroup(), entriesRead(0), entriesWritten(0) {
   }
 
-  virtual ~ActiveCompaction() throw() {}
-
+  virtual ~ActiveCompaction() throw();
   KeyExtent extent;
   int64_t age;
   std::vector<std::string>  inputFiles;
@@ -1400,45 +1405,25 @@
 
   _ActiveCompaction__isset __isset;
 
-  void __set_extent(const KeyExtent& val) {
-    extent = val;
-  }
+  void __set_extent(const KeyExtent& val);
 
-  void __set_age(const int64_t val) {
-    age = val;
-  }
+  void __set_age(const int64_t val);
 
-  void __set_inputFiles(const std::vector<std::string> & val) {
-    inputFiles = val;
-  }
+  void __set_inputFiles(const std::vector<std::string> & val);
 
-  void __set_outputFile(const std::string& val) {
-    outputFile = val;
-  }
+  void __set_outputFile(const std::string& val);
 
-  void __set_type(const CompactionType::type val) {
-    type = val;
-  }
+  void __set_type(const CompactionType::type val);
 
-  void __set_reason(const CompactionReason::type val) {
-    reason = val;
-  }
+  void __set_reason(const CompactionReason::type val);
 
-  void __set_localityGroup(const std::string& val) {
-    localityGroup = val;
-  }
+  void __set_localityGroup(const std::string& val);
 
-  void __set_entriesRead(const int64_t val) {
-    entriesRead = val;
-  }
+  void __set_entriesRead(const int64_t val);
 
-  void __set_entriesWritten(const int64_t val) {
-    entriesWritten = val;
-  }
+  void __set_entriesWritten(const int64_t val);
 
-  void __set_iterators(const std::vector<IteratorSetting> & val) {
-    iterators = val;
-  }
+  void __set_iterators(const std::vector<IteratorSetting> & val);
 
   bool operator == (const ActiveCompaction & rhs) const
   {
@@ -1473,30 +1458,35 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(ActiveCompaction &a, ActiveCompaction &b);
 
+inline std::ostream& operator<<(std::ostream& out, const ActiveCompaction& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _WriterOptions__isset {
   _WriterOptions__isset() : maxMemory(false), latencyMs(false), timeoutMs(false), threads(false), durability(false) {}
-  bool maxMemory;
-  bool latencyMs;
-  bool timeoutMs;
-  bool threads;
-  bool durability;
+  bool maxMemory :1;
+  bool latencyMs :1;
+  bool timeoutMs :1;
+  bool threads :1;
+  bool durability :1;
 } _WriterOptions__isset;
 
 class WriterOptions {
  public:
 
-  static const char* ascii_fingerprint; // = "6640C55D2C0D4C8C2E7589456EA0C61A";
-  static const uint8_t binary_fingerprint[16]; // = {0x66,0x40,0xC5,0x5D,0x2C,0x0D,0x4C,0x8C,0x2E,0x75,0x89,0x45,0x6E,0xA0,0xC6,0x1A};
-
+  WriterOptions(const WriterOptions&);
+  WriterOptions& operator=(const WriterOptions&);
   WriterOptions() : maxMemory(0), latencyMs(0), timeoutMs(0), threads(0), durability((Durability::type)0) {
   }
 
-  virtual ~WriterOptions() throw() {}
-
+  virtual ~WriterOptions() throw();
   int64_t maxMemory;
   int64_t latencyMs;
   int64_t timeoutMs;
@@ -1505,26 +1495,15 @@
 
   _WriterOptions__isset __isset;
 
-  void __set_maxMemory(const int64_t val) {
-    maxMemory = val;
-  }
+  void __set_maxMemory(const int64_t val);
 
-  void __set_latencyMs(const int64_t val) {
-    latencyMs = val;
-  }
+  void __set_latencyMs(const int64_t val);
 
-  void __set_timeoutMs(const int64_t val) {
-    timeoutMs = val;
-  }
+  void __set_timeoutMs(const int64_t val);
 
-  void __set_threads(const int32_t val) {
-    threads = val;
-  }
+  void __set_threads(const int32_t val);
 
-  void __set_durability(const Durability::type val) {
-    durability = val;
-    __isset.durability = true;
-  }
+  void __set_durability(const Durability::type val);
 
   bool operator == (const WriterOptions & rhs) const
   {
@@ -1551,39 +1530,40 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(WriterOptions &a, WriterOptions &b);
 
+inline std::ostream& operator<<(std::ostream& out, const WriterOptions& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _CompactionStrategyConfig__isset {
   _CompactionStrategyConfig__isset() : className(false), options(false) {}
-  bool className;
-  bool options;
+  bool className :1;
+  bool options :1;
 } _CompactionStrategyConfig__isset;
 
 class CompactionStrategyConfig {
  public:
 
-  static const char* ascii_fingerprint; // = "F7C641917C22B35AE581CCD54910B00D";
-  static const uint8_t binary_fingerprint[16]; // = {0xF7,0xC6,0x41,0x91,0x7C,0x22,0xB3,0x5A,0xE5,0x81,0xCC,0xD5,0x49,0x10,0xB0,0x0D};
-
+  CompactionStrategyConfig(const CompactionStrategyConfig&);
+  CompactionStrategyConfig& operator=(const CompactionStrategyConfig&);
   CompactionStrategyConfig() : className() {
   }
 
-  virtual ~CompactionStrategyConfig() throw() {}
-
+  virtual ~CompactionStrategyConfig() throw();
   std::string className;
   std::map<std::string, std::string>  options;
 
   _CompactionStrategyConfig__isset __isset;
 
-  void __set_className(const std::string& val) {
-    className = val;
-  }
+  void __set_className(const std::string& val);
 
-  void __set_options(const std::map<std::string, std::string> & val) {
-    options = val;
-  }
+  void __set_options(const std::map<std::string, std::string> & val);
 
   bool operator == (const CompactionStrategyConfig & rhs) const
   {
@@ -1602,33 +1582,36 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
 };
 
 void swap(CompactionStrategyConfig &a, CompactionStrategyConfig &b);
 
+inline std::ostream& operator<<(std::ostream& out, const CompactionStrategyConfig& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _UnknownScanner__isset {
   _UnknownScanner__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _UnknownScanner__isset;
 
 class UnknownScanner : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  UnknownScanner(const UnknownScanner&);
+  UnknownScanner& operator=(const UnknownScanner&);
   UnknownScanner() : msg() {
   }
 
-  virtual ~UnknownScanner() throw() {}
-
+  virtual ~UnknownScanner() throw();
   std::string msg;
 
   _UnknownScanner__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const UnknownScanner & rhs) const
   {
@@ -1645,33 +1628,38 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(UnknownScanner &a, UnknownScanner &b);
 
+inline std::ostream& operator<<(std::ostream& out, const UnknownScanner& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _UnknownWriter__isset {
   _UnknownWriter__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _UnknownWriter__isset;
 
 class UnknownWriter : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  UnknownWriter(const UnknownWriter&);
+  UnknownWriter& operator=(const UnknownWriter&);
   UnknownWriter() : msg() {
   }
 
-  virtual ~UnknownWriter() throw() {}
-
+  virtual ~UnknownWriter() throw();
   std::string msg;
 
   _UnknownWriter__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const UnknownWriter & rhs) const
   {
@@ -1688,33 +1676,38 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(UnknownWriter &a, UnknownWriter &b);
 
+inline std::ostream& operator<<(std::ostream& out, const UnknownWriter& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _NoMoreEntriesException__isset {
   _NoMoreEntriesException__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _NoMoreEntriesException__isset;
 
 class NoMoreEntriesException : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  NoMoreEntriesException(const NoMoreEntriesException&);
+  NoMoreEntriesException& operator=(const NoMoreEntriesException&);
   NoMoreEntriesException() : msg() {
   }
 
-  virtual ~NoMoreEntriesException() throw() {}
-
+  virtual ~NoMoreEntriesException() throw();
   std::string msg;
 
   _NoMoreEntriesException__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const NoMoreEntriesException & rhs) const
   {
@@ -1731,33 +1724,38 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(NoMoreEntriesException &a, NoMoreEntriesException &b);
 
+inline std::ostream& operator<<(std::ostream& out, const NoMoreEntriesException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _AccumuloException__isset {
   _AccumuloException__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _AccumuloException__isset;
 
 class AccumuloException : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  AccumuloException(const AccumuloException&);
+  AccumuloException& operator=(const AccumuloException&);
   AccumuloException() : msg() {
   }
 
-  virtual ~AccumuloException() throw() {}
-
+  virtual ~AccumuloException() throw();
   std::string msg;
 
   _AccumuloException__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const AccumuloException & rhs) const
   {
@@ -1774,33 +1772,38 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(AccumuloException &a, AccumuloException &b);
 
+inline std::ostream& operator<<(std::ostream& out, const AccumuloException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _AccumuloSecurityException__isset {
   _AccumuloSecurityException__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _AccumuloSecurityException__isset;
 
 class AccumuloSecurityException : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  AccumuloSecurityException(const AccumuloSecurityException&);
+  AccumuloSecurityException& operator=(const AccumuloSecurityException&);
   AccumuloSecurityException() : msg() {
   }
 
-  virtual ~AccumuloSecurityException() throw() {}
-
+  virtual ~AccumuloSecurityException() throw();
   std::string msg;
 
   _AccumuloSecurityException__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const AccumuloSecurityException & rhs) const
   {
@@ -1817,33 +1820,38 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(AccumuloSecurityException &a, AccumuloSecurityException &b);
 
+inline std::ostream& operator<<(std::ostream& out, const AccumuloSecurityException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _TableNotFoundException__isset {
   _TableNotFoundException__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _TableNotFoundException__isset;
 
 class TableNotFoundException : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  TableNotFoundException(const TableNotFoundException&);
+  TableNotFoundException& operator=(const TableNotFoundException&);
   TableNotFoundException() : msg() {
   }
 
-  virtual ~TableNotFoundException() throw() {}
-
+  virtual ~TableNotFoundException() throw();
   std::string msg;
 
   _TableNotFoundException__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const TableNotFoundException & rhs) const
   {
@@ -1860,33 +1868,38 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(TableNotFoundException &a, TableNotFoundException &b);
 
+inline std::ostream& operator<<(std::ostream& out, const TableNotFoundException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _TableExistsException__isset {
   _TableExistsException__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _TableExistsException__isset;
 
 class TableExistsException : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  TableExistsException(const TableExistsException&);
+  TableExistsException& operator=(const TableExistsException&);
   TableExistsException() : msg() {
   }
 
-  virtual ~TableExistsException() throw() {}
-
+  virtual ~TableExistsException() throw();
   std::string msg;
 
   _TableExistsException__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const TableExistsException & rhs) const
   {
@@ -1903,33 +1916,38 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(TableExistsException &a, TableExistsException &b);
 
+inline std::ostream& operator<<(std::ostream& out, const TableExistsException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 typedef struct _MutationsRejectedException__isset {
   _MutationsRejectedException__isset() : msg(false) {}
-  bool msg;
+  bool msg :1;
 } _MutationsRejectedException__isset;
 
 class MutationsRejectedException : public ::apache::thrift::TException {
  public:
 
-  static const char* ascii_fingerprint; // = "EFB929595D312AC8F305D5A794CFEDA1";
-  static const uint8_t binary_fingerprint[16]; // = {0xEF,0xB9,0x29,0x59,0x5D,0x31,0x2A,0xC8,0xF3,0x05,0xD5,0xA7,0x94,0xCF,0xED,0xA1};
-
+  MutationsRejectedException(const MutationsRejectedException&);
+  MutationsRejectedException& operator=(const MutationsRejectedException&);
   MutationsRejectedException() : msg() {
   }
 
-  virtual ~MutationsRejectedException() throw() {}
-
+  virtual ~MutationsRejectedException() throw();
   std::string msg;
 
   _MutationsRejectedException__isset __isset;
 
-  void __set_msg(const std::string& val) {
-    msg = val;
-  }
+  void __set_msg(const std::string& val);
 
   bool operator == (const MutationsRejectedException & rhs) const
   {
@@ -1946,10 +1964,163 @@
   uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
   uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
 
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
 };
 
 void swap(MutationsRejectedException &a, MutationsRejectedException &b);
 
+inline std::ostream& operator<<(std::ostream& out, const MutationsRejectedException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
+typedef struct _NamespaceExistsException__isset {
+  _NamespaceExistsException__isset() : msg(false) {}
+  bool msg :1;
+} _NamespaceExistsException__isset;
+
+class NamespaceExistsException : public ::apache::thrift::TException {
+ public:
+
+  NamespaceExistsException(const NamespaceExistsException&);
+  NamespaceExistsException& operator=(const NamespaceExistsException&);
+  NamespaceExistsException() : msg() {
+  }
+
+  virtual ~NamespaceExistsException() throw();
+  std::string msg;
+
+  _NamespaceExistsException__isset __isset;
+
+  void __set_msg(const std::string& val);
+
+  bool operator == (const NamespaceExistsException & rhs) const
+  {
+    if (!(msg == rhs.msg))
+      return false;
+    return true;
+  }
+  bool operator != (const NamespaceExistsException &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const NamespaceExistsException & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
+};
+
+void swap(NamespaceExistsException &a, NamespaceExistsException &b);
+
+inline std::ostream& operator<<(std::ostream& out, const NamespaceExistsException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
+typedef struct _NamespaceNotFoundException__isset {
+  _NamespaceNotFoundException__isset() : msg(false) {}
+  bool msg :1;
+} _NamespaceNotFoundException__isset;
+
+class NamespaceNotFoundException : public ::apache::thrift::TException {
+ public:
+
+  NamespaceNotFoundException(const NamespaceNotFoundException&);
+  NamespaceNotFoundException& operator=(const NamespaceNotFoundException&);
+  NamespaceNotFoundException() : msg() {
+  }
+
+  virtual ~NamespaceNotFoundException() throw();
+  std::string msg;
+
+  _NamespaceNotFoundException__isset __isset;
+
+  void __set_msg(const std::string& val);
+
+  bool operator == (const NamespaceNotFoundException & rhs) const
+  {
+    if (!(msg == rhs.msg))
+      return false;
+    return true;
+  }
+  bool operator != (const NamespaceNotFoundException &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const NamespaceNotFoundException & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
+};
+
+void swap(NamespaceNotFoundException &a, NamespaceNotFoundException &b);
+
+inline std::ostream& operator<<(std::ostream& out, const NamespaceNotFoundException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
+typedef struct _NamespaceNotEmptyException__isset {
+  _NamespaceNotEmptyException__isset() : msg(false) {}
+  bool msg :1;
+} _NamespaceNotEmptyException__isset;
+
+class NamespaceNotEmptyException : public ::apache::thrift::TException {
+ public:
+
+  NamespaceNotEmptyException(const NamespaceNotEmptyException&);
+  NamespaceNotEmptyException& operator=(const NamespaceNotEmptyException&);
+  NamespaceNotEmptyException() : msg() {
+  }
+
+  virtual ~NamespaceNotEmptyException() throw();
+  std::string msg;
+
+  _NamespaceNotEmptyException__isset __isset;
+
+  void __set_msg(const std::string& val);
+
+  bool operator == (const NamespaceNotEmptyException & rhs) const
+  {
+    if (!(msg == rhs.msg))
+      return false;
+    return true;
+  }
+  bool operator != (const NamespaceNotEmptyException &rhs) const {
+    return !(*this == rhs);
+  }
+
+  bool operator < (const NamespaceNotEmptyException & ) const;
+
+  uint32_t read(::apache::thrift::protocol::TProtocol* iprot);
+  uint32_t write(::apache::thrift::protocol::TProtocol* oprot) const;
+
+  virtual void printTo(std::ostream& out) const;
+  mutable std::string thriftTExceptionMessageHolder_;
+  const char* what() const throw();
+};
+
+void swap(NamespaceNotEmptyException &a, NamespaceNotEmptyException &b);
+
+inline std::ostream& operator<<(std::ostream& out, const NamespaceNotEmptyException& obj)
+{
+  obj.printTo(out);
+  return out;
+}
+
 } // namespace
 
 #endif
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
index 87e2c58..a3d185d 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
@@ -204,7 +204,7 @@
     AccumuloProxy.Iface wrappedImpl = RpcWrapper.service(impl, new AccumuloProxy.Processor<AccumuloProxy.Iface>(impl));
 
     // Create the processor from the implementation
-    TProcessor processor = new AccumuloProxy.Processor<AccumuloProxy.Iface>(wrappedImpl);
+    TProcessor processor = new AccumuloProxy.Processor<>(wrappedImpl);
 
     // Get the type of thrift server to instantiate
     final String serverTypeStr = properties.getProperty(THRIFT_SERVER_TYPE, THRIFT_SERVER_TYPE_DEFAULT);
@@ -265,8 +265,8 @@
     TimedProcessor timedProcessor = new TimedProcessor(metricsFactory, processor, serverName, threadName);
 
     // Create the thrift server with our processor and properties
-    ServerAddress serverAddr = TServerUtils.startTServer(address, serverType, timedProcessor, protocolFactory, serverName, threadName, numThreads,
-        simpleTimerThreadpoolSize, threadpoolResizeInterval, maxFrameSize, sslParams, saslParams, serverSocketTimeout);
+    ServerAddress serverAddr = TServerUtils.startTServer(serverType, timedProcessor, protocolFactory, serverName, threadName, numThreads,
+        simpleTimerThreadpoolSize, threadpoolResizeInterval, maxFrameSize, sslParams, saslParams, serverSocketTimeout, address);
 
     return serverAddr;
   }
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java b/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java
index d8b678a..a62e1a1 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/ProxyServer.java
@@ -26,6 +26,7 @@
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
@@ -49,6 +50,9 @@
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.NamespaceExistsException;
+import org.apache.accumulo.core.client.NamespaceNotEmptyException;
+import org.apache.accumulo.core.client.NamespaceNotFoundException;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.ScannerBase;
 import org.apache.accumulo.core.client.TableExistsException;
@@ -60,9 +64,9 @@
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.client.impl.Credentials;
+import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
 import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
@@ -77,9 +81,11 @@
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.security.NamespacePermission;
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.util.ByteBufferUtil;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.TextUtil;
 import org.apache.accumulo.proxy.thrift.AccumuloProxy;
 import org.apache.accumulo.proxy.thrift.BatchScanOptions;
@@ -186,7 +192,7 @@
 
     String useMock = props.getProperty("useMockInstance");
     if (useMock != null && Boolean.parseBoolean(useMock))
-      instance = new MockInstance();
+      instance = DeprecationUtil.makeMockInstance(this.getClass().getName());
     else {
       ClientConfiguration clientConf;
       if (props.containsKey("clientConfigurationFile")) {
@@ -309,6 +315,25 @@
     }
   }
 
+  private void handleExceptionNNF(Exception ex) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      throw ex;
+    } catch (AccumuloException e) {
+      Throwable cause = e.getCause();
+      if (null != cause && NamespaceNotFoundException.class.equals(cause.getClass())) {
+        throw new org.apache.accumulo.proxy.thrift.NamespaceNotFoundException(cause.toString());
+      }
+      handleAccumuloException(e);
+    } catch (AccumuloSecurityException e) {
+      handleAccumuloSecurityException(e);
+    } catch (NamespaceNotFoundException e) {
+      throw new org.apache.accumulo.proxy.thrift.NamespaceNotFoundException(ex.toString());
+    } catch (Exception e) {
+      throw new org.apache.accumulo.proxy.thrift.AccumuloException(e.toString());
+    }
+  }
+
   private void handleException(Exception ex) throws org.apache.accumulo.proxy.thrift.AccumuloException,
       org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
     try {
@@ -339,7 +364,7 @@
       org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
 
     try {
-      SortedSet<Text> sorted = new TreeSet<Text>();
+      SortedSet<Text> sorted = new TreeSet<>();
       for (ByteBuffer split : splits) {
         sorted.add(ByteBufferUtil.toText(split));
       }
@@ -395,7 +420,7 @@
   }
 
   private List<IteratorSetting> getIteratorSettings(List<org.apache.accumulo.proxy.thrift.IteratorSetting> iterators) {
-    List<IteratorSetting> result = new ArrayList<IteratorSetting>();
+    List<IteratorSetting> result = new ArrayList<>();
     if (iterators != null) {
       for (org.apache.accumulo.proxy.thrift.IteratorSetting is : iterators) {
         result.add(getIteratorSetting(is));
@@ -468,9 +493,9 @@
       org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
     try {
       Map<String,Set<Text>> groups = getConnector(login).tableOperations().getLocalityGroups(tableName);
-      Map<String,Set<String>> ret = new HashMap<String,Set<String>>();
+      Map<String,Set<String>> ret = new HashMap<>();
       for (Entry<String,Set<Text>> entry : groups.entrySet()) {
-        Set<String> value = new HashSet<String>();
+        Set<String> value = new HashSet<>();
         ret.put(entry.getKey(), value);
         for (Text val : entry.getValue()) {
           value.add(val.toString());
@@ -509,7 +534,7 @@
   public Map<String,String> getTableProperties(ByteBuffer login, String tableName) throws org.apache.accumulo.proxy.thrift.AccumuloException,
       org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
     try {
-      Map<String,String> ret = new HashMap<String,String>();
+      Map<String,String> ret = new HashMap<>();
 
       for (Map.Entry<String,String> entry : getConnector(login).tableOperations().getProperties(tableName)) {
         ret.put(entry.getKey(), entry.getValue());
@@ -526,7 +551,7 @@
       org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
     try {
       Collection<Text> splits = getConnector(login).tableOperations().listSplits(tableName, maxSplits);
-      List<ByteBuffer> ret = new ArrayList<ByteBuffer>();
+      List<ByteBuffer> ret = new ArrayList<>();
       for (Text split : splits) {
         ret.add(TextUtil.getByteBuffer(split));
       }
@@ -626,7 +651,7 @@
       throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException,
       org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
     try {
-      Map<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
+      Map<String,Set<Text>> groups = new HashMap<>();
       for (Entry<String,Set<String>> groupEntry : groupStrings.entrySet()) {
         groups.put(groupEntry.getKey(), new HashSet<Text>());
         for (String val : groupEntry.getValue()) {
@@ -663,10 +688,10 @@
       org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
     try {
       List<org.apache.accumulo.core.client.admin.DiskUsage> diskUsages = getConnector(login).tableOperations().getDiskUsage(tables);
-      List<DiskUsage> retUsages = new ArrayList<DiskUsage>();
+      List<DiskUsage> retUsages = new ArrayList<>();
       for (org.apache.accumulo.core.client.admin.DiskUsage diskUsage : diskUsages) {
         DiskUsage usage = new DiskUsage();
-        usage.setTables(new ArrayList<String>(diskUsage.getTables()));
+        usage.setTables(new ArrayList<>(diskUsage.getTables()));
         usage.setUsage(diskUsage.getUsage());
         retUsages.add(usage);
       }
@@ -711,7 +736,7 @@
   @Override
   public List<org.apache.accumulo.proxy.thrift.ActiveScan> getActiveScans(ByteBuffer login, String tserver)
       throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
-    List<org.apache.accumulo.proxy.thrift.ActiveScan> result = new ArrayList<org.apache.accumulo.proxy.thrift.ActiveScan>();
+    List<org.apache.accumulo.proxy.thrift.ActiveScan> result = new ArrayList<>();
     try {
       List<ActiveScan> activeScans = getConnector(login).instanceOperations().getActiveScans(tserver);
       for (ActiveScan scan : activeScans) {
@@ -726,7 +751,7 @@
         TabletId e = scan.getTablet();
         pscan.extent = new org.apache.accumulo.proxy.thrift.KeyExtent(e.getTableId().toString(), TextUtil.getByteBuffer(e.getEndRow()),
             TextUtil.getByteBuffer(e.getPrevEndRow()));
-        pscan.columns = new ArrayList<org.apache.accumulo.proxy.thrift.Column>();
+        pscan.columns = new ArrayList<>();
         if (scan.getColumns() != null) {
           for (Column c : scan.getColumns()) {
             org.apache.accumulo.proxy.thrift.Column column = new org.apache.accumulo.proxy.thrift.Column();
@@ -736,7 +761,7 @@
             pscan.columns.add(column);
           }
         }
-        pscan.iterators = new ArrayList<org.apache.accumulo.proxy.thrift.IteratorSetting>();
+        pscan.iterators = new ArrayList<>();
         for (String iteratorString : scan.getSsiList()) {
           String[] parts = iteratorString.split("[=,]");
           if (parts.length == 3) {
@@ -748,7 +773,7 @@
             pscan.iterators.add(settings);
           }
         }
-        pscan.authorizations = new ArrayList<ByteBuffer>();
+        pscan.authorizations = new ArrayList<>();
         if (scan.getAuthorizations() != null) {
           for (byte[] a : scan.getAuthorizations()) {
             pscan.authorizations.add(ByteBuffer.wrap(a));
@@ -768,7 +793,7 @@
       throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
 
     try {
-      List<org.apache.accumulo.proxy.thrift.ActiveCompaction> result = new ArrayList<org.apache.accumulo.proxy.thrift.ActiveCompaction>();
+      List<org.apache.accumulo.proxy.thrift.ActiveCompaction> result = new ArrayList<>();
       List<ActiveCompaction> active = getConnector(login).instanceOperations().getActiveCompactions(tserver);
       for (ActiveCompaction comp : active) {
         org.apache.accumulo.proxy.thrift.ActiveCompaction pcomp = new org.apache.accumulo.proxy.thrift.ActiveCompaction();
@@ -778,7 +803,7 @@
         TabletId e = comp.getTablet();
         pcomp.extent = new org.apache.accumulo.proxy.thrift.KeyExtent(e.getTableId().toString(), TextUtil.getByteBuffer(e.getEndRow()),
             TextUtil.getByteBuffer(e.getPrevEndRow()));
-        pcomp.inputFiles = new ArrayList<String>();
+        pcomp.inputFiles = new ArrayList<>();
         if (comp.getInputFiles() != null) {
           pcomp.inputFiles.addAll(comp.getInputFiles());
         }
@@ -787,7 +812,7 @@
         pcomp.reason = CompactionReason.valueOf(comp.getReason().toString());
         pcomp.type = CompactionType.valueOf(comp.getType().toString());
 
-        pcomp.iterators = new ArrayList<org.apache.accumulo.proxy.thrift.IteratorSetting>();
+        pcomp.iterators = new ArrayList<>();
         if (comp.getIterators() != null) {
           for (IteratorSetting setting : comp.getIterators()) {
             org.apache.accumulo.proxy.thrift.IteratorSetting psetting = new org.apache.accumulo.proxy.thrift.IteratorSetting(setting.getPriority(),
@@ -850,7 +875,7 @@
   public void changeUserAuthorizations(ByteBuffer login, String user, Set<ByteBuffer> authorizations)
       throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
     try {
-      Set<String> auths = new HashSet<String>();
+      Set<String> auths = new HashSet<>();
       for (ByteBuffer auth : authorizations) {
         auths.add(ByteBufferUtil.toString(auth));
       }
@@ -977,8 +1002,40 @@
     }
   }
 
+  @Override
+  public void grantNamespacePermission(ByteBuffer login, String user, String namespaceName, org.apache.accumulo.proxy.thrift.NamespacePermission perm)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
+    try {
+      getConnector(login).securityOperations().grantNamespacePermission(user, namespaceName, NamespacePermission.getPermissionById((byte) perm.getValue()));
+    } catch (Exception e) {
+      handleException(e);
+    }
+  }
+
+  @Override
+  public boolean hasNamespacePermission(ByteBuffer login, String user, String namespaceName, org.apache.accumulo.proxy.thrift.NamespacePermission perm)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
+    try {
+      return getConnector(login).securityOperations()
+          .hasNamespacePermission(user, namespaceName, NamespacePermission.getPermissionById((byte) perm.getValue()));
+    } catch (Exception e) {
+      handleException(e);
+      return false;
+    }
+  }
+
+  @Override
+  public void revokeNamespacePermission(ByteBuffer login, String user, String namespaceName, org.apache.accumulo.proxy.thrift.NamespacePermission perm)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
+    try {
+      getConnector(login).securityOperations().revokeNamespacePermission(user, namespaceName, NamespacePermission.getPermissionById((byte) perm.getValue()));
+    } catch (Exception e) {
+      handleException(e);
+    }
+  }
+
   private Authorizations getAuthorizations(Set<ByteBuffer> authorizations) {
-    List<String> auths = new ArrayList<String>();
+    List<String> auths = new ArrayList<>();
     for (ByteBuffer bbauth : authorizations) {
       auths.add(ByteBufferUtil.toString(bbauth));
     }
@@ -1060,7 +1117,7 @@
           }
         }
 
-        ArrayList<Range> ranges = new ArrayList<Range>();
+        ArrayList<Range> ranges = new ArrayList<>();
 
         if (opts.ranges == null) {
           ranges.add(new Range());
@@ -1208,7 +1265,7 @@
     if (bwpe.exception != null)
       return;
 
-    HashMap<Text,ColumnVisibility> vizMap = new HashMap<Text,ColumnVisibility>();
+    HashMap<Text,ColumnVisibility> vizMap = new HashMap<>();
 
     for (Map.Entry<ByteBuffer,List<ColumnUpdate>> entry : cells.entrySet()) {
       Mutation m = new Mutation(ByteBufferUtil.toBytes(entry.getKey()));
@@ -1467,7 +1524,7 @@
       org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
     try {
       Map<String,EnumSet<IteratorScope>> iterMap = getConnector(login).tableOperations().listIterators(tableName);
-      Map<String,Set<org.apache.accumulo.proxy.thrift.IteratorScope>> result = new HashMap<String,Set<org.apache.accumulo.proxy.thrift.IteratorScope>>();
+      Map<String,Set<org.apache.accumulo.proxy.thrift.IteratorScope>> result = new HashMap<>();
       for (Map.Entry<String,EnumSet<IteratorScope>> entry : iterMap.entrySet()) {
         result.put(entry.getKey(), getProxyIteratorScopes(entry.getValue()));
       }
@@ -1495,7 +1552,7 @@
       org.apache.accumulo.proxy.thrift.TableNotFoundException, TException {
     try {
       Set<Range> ranges = getConnector(login).tableOperations().splitRangeByTablets(tableName, getRange(range), maxSplits);
-      Set<org.apache.accumulo.proxy.thrift.Range> result = new HashSet<org.apache.accumulo.proxy.thrift.Range>();
+      Set<org.apache.accumulo.proxy.thrift.Range> result = new HashSet<>();
       for (Range r : ranges) {
         result.add(getRange(r));
       }
@@ -1549,6 +1606,238 @@
   }
 
   @Override
+  public String systemNamespace() throws TException {
+    return Namespaces.ACCUMULO_NAMESPACE;
+  }
+
+  @Override
+  public String defaultNamespace() throws TException {
+    return Namespaces.DEFAULT_NAMESPACE;
+  }
+
+  @Override
+  public List<String> listNamespaces(ByteBuffer login) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
+    try {
+      return new LinkedList<>(getConnector(login).namespaceOperations().list());
+    } catch (Exception e) {
+      handleException(e);
+      return null;
+    }
+  }
+
+  @Override
+  public boolean namespaceExists(ByteBuffer login, String namespaceName) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
+    try {
+      return getConnector(login).namespaceOperations().exists(namespaceName);
+    } catch (Exception e) {
+      handleException(e);
+      return false;
+    }
+  }
+
+  @Override
+  public void createNamespace(ByteBuffer login, String namespaceName) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceExistsException, TException {
+    try {
+      getConnector(login).namespaceOperations().create(namespaceName);
+    } catch (NamespaceExistsException e) {
+      throw new org.apache.accumulo.proxy.thrift.NamespaceExistsException(e.toString());
+    } catch (Exception e) {
+      handleException(e);
+    }
+  }
+
+  @Override
+  public void deleteNamespace(ByteBuffer login, String namespaceName) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException,
+      org.apache.accumulo.proxy.thrift.NamespaceNotEmptyException, TException {
+    try {
+      getConnector(login).namespaceOperations().delete(namespaceName);
+    } catch (NamespaceNotFoundException e) {
+      throw new org.apache.accumulo.proxy.thrift.NamespaceNotFoundException(e.toString());
+    } catch (NamespaceNotEmptyException e) {
+      throw new org.apache.accumulo.proxy.thrift.NamespaceNotEmptyException(e.toString());
+    } catch (Exception e) {
+      handleException(e);
+    }
+  }
+
+  @Override
+  public void renameNamespace(ByteBuffer login, String oldNamespaceName, String newNamespaceName) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException,
+      org.apache.accumulo.proxy.thrift.NamespaceExistsException, TException {
+    try {
+      getConnector(login).namespaceOperations().rename(oldNamespaceName, newNamespaceName);
+    } catch (NamespaceNotFoundException e) {
+      throw new org.apache.accumulo.proxy.thrift.NamespaceNotFoundException(e.toString());
+    } catch (NamespaceExistsException e) {
+      throw new org.apache.accumulo.proxy.thrift.NamespaceExistsException(e.toString());
+    } catch (Exception e) {
+      handleException(e);
+    }
+  }
+
+  @Override
+  public void setNamespaceProperty(ByteBuffer login, String namespaceName, String property, String value)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException,
+      org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      getConnector(login).namespaceOperations().setProperty(namespaceName, property, value);
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+    }
+  }
+
+  @Override
+  public void removeNamespaceProperty(ByteBuffer login, String namespaceName, String property) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      getConnector(login).namespaceOperations().removeProperty(namespaceName, property);
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+    }
+  }
+
+  @Override
+  public Map<String,String> getNamespaceProperties(ByteBuffer login, String namespaceName) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      Map<String,String> props = new HashMap<>();
+      for (Map.Entry<String,String> entry : getConnector(login).namespaceOperations().getProperties(namespaceName)) {
+        props.put(entry.getKey(), entry.getValue());
+      }
+      return props;
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+      return null;
+    }
+  }
+
+  @Override
+  public Map<String,String> namespaceIdMap(ByteBuffer login) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
+    try {
+      return getConnector(login).namespaceOperations().namespaceIdMap();
+    } catch (Exception e) {
+      handleException(e);
+      return null;
+    }
+  }
+
+  @Override
+  public void attachNamespaceIterator(ByteBuffer login, String namespaceName, org.apache.accumulo.proxy.thrift.IteratorSetting setting,
+      Set<org.apache.accumulo.proxy.thrift.IteratorScope> scopes) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      if (null != scopes && scopes.size() > 0) {
+        getConnector(login).namespaceOperations().attachIterator(namespaceName, getIteratorSetting(setting), getIteratorScopes(scopes));
+      } else {
+        getConnector(login).namespaceOperations().attachIterator(namespaceName, getIteratorSetting(setting));
+      }
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+    }
+  }
+
+  @Override
+  public void removeNamespaceIterator(ByteBuffer login, String namespaceName, String name, Set<org.apache.accumulo.proxy.thrift.IteratorScope> scopes)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException,
+      org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      getConnector(login).namespaceOperations().removeIterator(namespaceName, name, getIteratorScopes(scopes));
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+    }
+  }
+
+  @Override
+  public org.apache.accumulo.proxy.thrift.IteratorSetting getNamespaceIteratorSetting(ByteBuffer login, String namespaceName, String name,
+      org.apache.accumulo.proxy.thrift.IteratorScope scope) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      IteratorSetting setting = getConnector(login).namespaceOperations().getIteratorSetting(namespaceName, name, getIteratorScope(scope));
+      return new org.apache.accumulo.proxy.thrift.IteratorSetting(setting.getPriority(), setting.getName(), setting.getIteratorClass(), setting.getOptions());
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+      return null;
+    }
+  }
+
+  @Override
+  public Map<String,Set<org.apache.accumulo.proxy.thrift.IteratorScope>> listNamespaceIterators(ByteBuffer login, String namespaceName)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException,
+      org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      Map<String,Set<org.apache.accumulo.proxy.thrift.IteratorScope>> namespaceIters = new HashMap<>();
+      for (Map.Entry<String,EnumSet<IteratorScope>> entry : getConnector(login).namespaceOperations().listIterators(namespaceName).entrySet()) {
+        namespaceIters.put(entry.getKey(), getProxyIteratorScopes(entry.getValue()));
+      }
+      return namespaceIters;
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+      return null;
+    }
+  }
+
+  @Override
+  public void checkNamespaceIteratorConflicts(ByteBuffer login, String namespaceName, org.apache.accumulo.proxy.thrift.IteratorSetting setting,
+      Set<org.apache.accumulo.proxy.thrift.IteratorScope> scopes) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      getConnector(login).namespaceOperations().checkIteratorConflicts(namespaceName, getIteratorSetting(setting), getIteratorScopes(scopes));
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+    }
+  }
+
+  @Override
+  public int addNamespaceConstraint(ByteBuffer login, String namespaceName, String constraintClassName)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException,
+      org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      return getConnector(login).namespaceOperations().addConstraint(namespaceName, constraintClassName);
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+      return -1;
+    }
+  }
+
+  @Override
+  public void removeNamespaceConstraint(ByteBuffer login, String namespaceName, int id) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      getConnector(login).namespaceOperations().removeConstraint(namespaceName, id);
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+    }
+  }
+
+  @Override
+  public Map<String,Integer> listNamespaceConstraints(ByteBuffer login, String namespaceName) throws org.apache.accumulo.proxy.thrift.AccumuloException,
+      org.apache.accumulo.proxy.thrift.AccumuloSecurityException, org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      return getConnector(login).namespaceOperations().listConstraints(namespaceName);
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+      return null;
+    }
+  }
+
+  @Override
+  public boolean testNamespaceClassLoad(ByteBuffer login, String namespaceName, String className, String asTypeName)
+      throws org.apache.accumulo.proxy.thrift.AccumuloException, org.apache.accumulo.proxy.thrift.AccumuloSecurityException,
+      org.apache.accumulo.proxy.thrift.NamespaceNotFoundException, TException {
+    try {
+      return getConnector(login).namespaceOperations().testClassLoad(namespaceName, className, asTypeName);
+    } catch (Exception e) {
+      handleExceptionNNF(e);
+      return false;
+    }
+  }
+
+  @Override
   public void pingTabletServer(ByteBuffer login, String tserver) throws org.apache.accumulo.proxy.thrift.AccumuloException,
       org.apache.accumulo.proxy.thrift.AccumuloSecurityException, TException {
     try {
@@ -1650,9 +1939,9 @@
     }
 
     try {
-      HashMap<Text,ColumnVisibility> vizMap = new HashMap<Text,ColumnVisibility>();
+      HashMap<Text,ColumnVisibility> vizMap = new HashMap<>();
 
-      ArrayList<ConditionalMutation> cmuts = new ArrayList<ConditionalMutation>(updates.size());
+      ArrayList<ConditionalMutation> cmuts = new ArrayList<>(updates.size());
       for (Entry<ByteBuffer,ConditionalUpdates> cu : updates.entrySet()) {
         ConditionalMutation cmut = new ConditionalMutation(ByteBufferUtil.toBytes(cu.getKey()));
 
@@ -1684,7 +1973,7 @@
 
       Iterator<Result> results = cw.write(cmuts.iterator());
 
-      HashMap<ByteBuffer,ConditionalStatus> resultMap = new HashMap<ByteBuffer,ConditionalStatus>();
+      HashMap<ByteBuffer,ConditionalStatus> resultMap = new HashMap<>();
 
       while (results.hasNext()) {
         Result result = results.next();
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloException.java
index 724a92e..76dd4bc 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloException.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class AccumuloException extends TException implements org.apache.thrift.TBase<AccumuloException, AccumuloException._Fields>, java.io.Serializable, Cloneable, Comparable<AccumuloException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class AccumuloException extends TException implements org.apache.thrift.TBase<AccumuloException, AccumuloException._Fields>, java.io.Serializable, Cloneable, Comparable<AccumuloException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("AccumuloException");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
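The regenerated `hashCode` above replaces the old `return 0;` stub with the list-based scheme Thrift 0.9.3 emits: a presence flag and each set field are collected into a `List` and hashed together, so structs that compare equal always hash equal, while the old stub collapsed every instance into a single hash bucket. A self-contained sketch of that scheme (the class and field names here are illustrative):

```java
// Sketch of the list-based hashCode pattern in the regenerated struct:
// hash a presence flag plus the field value via List.hashCode(), keeping
// hashCode consistent with an equals() that compares the same fields.
import java.util.ArrayList;
import java.util.List;

public class StructHashSketch {
  String msg; // stand-in for the struct's single field, as in AccumuloException

  boolean isSetMsg() {
    return msg != null;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof StructHashSketch))
      return false;
    StructHashSketch that = (StructHashSketch) o;
    return isSetMsg() ? msg.equals(that.msg) : !that.isSetMsg();
  }

  @Override
  public int hashCode() {
    List<Object> list = new ArrayList<Object>();
    boolean presentMsg = isSetMsg();
    list.add(presentMsg);          // flag distinguishes unset from set
    if (presentMsg)
      list.add(msg);               // field value contributes when present
    return list.hashCode();
  }
}
```

With this scheme, two structs carrying the same message land in the same hash bucket and unset structs hash differently from set ones, which makes the generated types usable as `HashMap` keys.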
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloProxy.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloProxy.java
index eaa49ef..150de3e 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloProxy.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloProxy.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class AccumuloProxy {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class AccumuloProxy {
 
   public interface Iface {
 
@@ -172,6 +175,12 @@
 
     public void revokeTablePermission(ByteBuffer login, String user, String table, TablePermission perm) throws AccumuloException, AccumuloSecurityException, TableNotFoundException, org.apache.thrift.TException;
 
+    public void grantNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException;
+
+    public boolean hasNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException;
+
+    public void revokeNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException;
+
     public String createBatchScanner(ByteBuffer login, String tableName, BatchScanOptions options) throws AccumuloException, AccumuloSecurityException, TableNotFoundException, org.apache.thrift.TException;
 
     public String createScanner(ByteBuffer login, String tableName, ScanOptions options) throws AccumuloException, AccumuloSecurityException, TableNotFoundException, org.apache.thrift.TException;
@@ -206,6 +215,46 @@
 
     public Key getFollowing(Key key, PartialKey part) throws org.apache.thrift.TException;
 
+    public String systemNamespace() throws org.apache.thrift.TException;
+
+    public String defaultNamespace() throws org.apache.thrift.TException;
+
+    public List<String> listNamespaces(ByteBuffer login) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException;
+
+    public boolean namespaceExists(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException;
+
+    public void createNamespace(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceExistsException, org.apache.thrift.TException;
+
+    public void deleteNamespace(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceNotEmptyException, org.apache.thrift.TException;
+
+    public void renameNamespace(ByteBuffer login, String oldNamespaceName, String newNamespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceExistsException, org.apache.thrift.TException;
+
+    public void setNamespaceProperty(ByteBuffer login, String namespaceName, String property, String value) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public void removeNamespaceProperty(ByteBuffer login, String namespaceName, String property) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public Map<String,String> getNamespaceProperties(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public Map<String,String> namespaceIdMap(ByteBuffer login) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException;
+
+    public void attachNamespaceIterator(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public void removeNamespaceIterator(ByteBuffer login, String namespaceName, String name, Set<IteratorScope> scopes) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public IteratorSetting getNamespaceIteratorSetting(ByteBuffer login, String namespaceName, String name, IteratorScope scope) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public Map<String,Set<IteratorScope>> listNamespaceIterators(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public void checkNamespaceIteratorConflicts(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public int addNamespaceConstraint(ByteBuffer login, String namespaceName, String constraintClassName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public void removeNamespaceConstraint(ByteBuffer login, String namespaceName, int id) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public Map<String,Integer> listNamespaceConstraints(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
+    public boolean testNamespaceClassLoad(ByteBuffer login, String namespaceName, String className, String asTypeName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException;
+
   }
 
   public interface AsyncIface {
@@ -330,6 +379,12 @@
 
     public void revokeTablePermission(ByteBuffer login, String user, String table, TablePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
+    public void grantNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void hasNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void revokeNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
     public void createBatchScanner(ByteBuffer login, String tableName, BatchScanOptions options, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
     public void createScanner(ByteBuffer login, String tableName, ScanOptions options, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
@@ -364,6 +419,46 @@
 
     public void getFollowing(Key key, PartialKey part, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
 
+    public void systemNamespace(org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void defaultNamespace(org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void listNamespaces(ByteBuffer login, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void namespaceExists(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void createNamespace(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void deleteNamespace(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void renameNamespace(ByteBuffer login, String oldNamespaceName, String newNamespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void setNamespaceProperty(ByteBuffer login, String namespaceName, String property, String value, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void removeNamespaceProperty(ByteBuffer login, String namespaceName, String property, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void getNamespaceProperties(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void namespaceIdMap(ByteBuffer login, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void attachNamespaceIterator(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void removeNamespaceIterator(ByteBuffer login, String namespaceName, String name, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void getNamespaceIteratorSetting(ByteBuffer login, String namespaceName, String name, IteratorScope scope, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void listNamespaceIterators(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void checkNamespaceIteratorConflicts(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void addNamespaceConstraint(ByteBuffer login, String namespaceName, String constraintClassName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void removeNamespaceConstraint(ByteBuffer login, String namespaceName, int id, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void listNamespaceConstraints(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
+    public void testNamespaceClassLoad(ByteBuffer login, String namespaceName, String className, String asTypeName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException;
+
   }
 
   public static class Client extends org.apache.thrift.TServiceClient implements Iface {
@@ -2232,6 +2327,96 @@
       return;
     }
 
+    public void grantNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      send_grantNamespacePermission(login, user, namespaceName, perm);
+      recv_grantNamespacePermission();
+    }
+
+    public void send_grantNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws org.apache.thrift.TException
+    {
+      grantNamespacePermission_args args = new grantNamespacePermission_args();
+      args.setLogin(login);
+      args.setUser(user);
+      args.setNamespaceName(namespaceName);
+      args.setPerm(perm);
+      sendBase("grantNamespacePermission", args);
+    }
+
+    public void recv_grantNamespacePermission() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      grantNamespacePermission_result result = new grantNamespacePermission_result();
+      receiveBase(result, "grantNamespacePermission");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      return;
+    }
+
+    public boolean hasNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      send_hasNamespacePermission(login, user, namespaceName, perm);
+      return recv_hasNamespacePermission();
+    }
+
+    public void send_hasNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws org.apache.thrift.TException
+    {
+      hasNamespacePermission_args args = new hasNamespacePermission_args();
+      args.setLogin(login);
+      args.setUser(user);
+      args.setNamespaceName(namespaceName);
+      args.setPerm(perm);
+      sendBase("hasNamespacePermission", args);
+    }
+
+    public boolean recv_hasNamespacePermission() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      hasNamespacePermission_result result = new hasNamespacePermission_result();
+      receiveBase(result, "hasNamespacePermission");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "hasNamespacePermission failed: unknown result");
+    }
+
+    public void revokeNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      send_revokeNamespacePermission(login, user, namespaceName, perm);
+      recv_revokeNamespacePermission();
+    }
+
+    public void send_revokeNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm) throws org.apache.thrift.TException
+    {
+      revokeNamespacePermission_args args = new revokeNamespacePermission_args();
+      args.setLogin(login);
+      args.setUser(user);
+      args.setNamespaceName(namespaceName);
+      args.setPerm(perm);
+      sendBase("revokeNamespacePermission", args);
+    }
+
+    public void recv_revokeNamespacePermission() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      revokeNamespacePermission_result result = new revokeNamespacePermission_result();
+      receiveBase(result, "revokeNamespacePermission");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      return;
+    }
+
     public String createBatchScanner(ByteBuffer login, String tableName, BatchScanOptions options) throws AccumuloException, AccumuloSecurityException, TableNotFoundException, org.apache.thrift.TException
     {
       send_createBatchScanner(login, tableName, options);
@@ -2492,7 +2677,7 @@
       update_args args = new update_args();
       args.setWriter(writer);
       args.setCells(cells);
-      sendBase("update", args);
+      sendBaseOneway("update", args);
     }
 
     public void flush(String writer) throws UnknownWriter, MutationsRejectedException, org.apache.thrift.TException
@@ -2716,6 +2901,628 @@
       throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "getFollowing failed: unknown result");
     }
 
+    public String systemNamespace() throws org.apache.thrift.TException
+    {
+      send_systemNamespace();
+      return recv_systemNamespace();
+    }
+
+    public void send_systemNamespace() throws org.apache.thrift.TException
+    {
+      systemNamespace_args args = new systemNamespace_args();
+      sendBase("systemNamespace", args);
+    }
+
+    public String recv_systemNamespace() throws org.apache.thrift.TException
+    {
+      systemNamespace_result result = new systemNamespace_result();
+      receiveBase(result, "systemNamespace");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "systemNamespace failed: unknown result");
+    }
+
+    public String defaultNamespace() throws org.apache.thrift.TException
+    {
+      send_defaultNamespace();
+      return recv_defaultNamespace();
+    }
+
+    public void send_defaultNamespace() throws org.apache.thrift.TException
+    {
+      defaultNamespace_args args = new defaultNamespace_args();
+      sendBase("defaultNamespace", args);
+    }
+
+    public String recv_defaultNamespace() throws org.apache.thrift.TException
+    {
+      defaultNamespace_result result = new defaultNamespace_result();
+      receiveBase(result, "defaultNamespace");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "defaultNamespace failed: unknown result");
+    }
+
+    public List<String> listNamespaces(ByteBuffer login) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      send_listNamespaces(login);
+      return recv_listNamespaces();
+    }
+
+    public void send_listNamespaces(ByteBuffer login) throws org.apache.thrift.TException
+    {
+      listNamespaces_args args = new listNamespaces_args();
+      args.setLogin(login);
+      sendBase("listNamespaces", args);
+    }
+
+    public List<String> recv_listNamespaces() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      listNamespaces_result result = new listNamespaces_result();
+      receiveBase(result, "listNamespaces");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "listNamespaces failed: unknown result");
+    }
+
+    public boolean namespaceExists(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      send_namespaceExists(login, namespaceName);
+      return recv_namespaceExists();
+    }
+
+    public void send_namespaceExists(ByteBuffer login, String namespaceName) throws org.apache.thrift.TException
+    {
+      namespaceExists_args args = new namespaceExists_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      sendBase("namespaceExists", args);
+    }
+
+    public boolean recv_namespaceExists() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      namespaceExists_result result = new namespaceExists_result();
+      receiveBase(result, "namespaceExists");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "namespaceExists failed: unknown result");
+    }
+
+    public void createNamespace(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceExistsException, org.apache.thrift.TException
+    {
+      send_createNamespace(login, namespaceName);
+      recv_createNamespace();
+    }
+
+    public void send_createNamespace(ByteBuffer login, String namespaceName) throws org.apache.thrift.TException
+    {
+      createNamespace_args args = new createNamespace_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      sendBase("createNamespace", args);
+    }
+
+    public void recv_createNamespace() throws AccumuloException, AccumuloSecurityException, NamespaceExistsException, org.apache.thrift.TException
+    {
+      createNamespace_result result = new createNamespace_result();
+      receiveBase(result, "createNamespace");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      return;
+    }
+
+    public void deleteNamespace(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceNotEmptyException, org.apache.thrift.TException
+    {
+      send_deleteNamespace(login, namespaceName);
+      recv_deleteNamespace();
+    }
+
+    public void send_deleteNamespace(ByteBuffer login, String namespaceName) throws org.apache.thrift.TException
+    {
+      deleteNamespace_args args = new deleteNamespace_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      sendBase("deleteNamespace", args);
+    }
+
+    public void recv_deleteNamespace() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceNotEmptyException, org.apache.thrift.TException
+    {
+      deleteNamespace_result result = new deleteNamespace_result();
+      receiveBase(result, "deleteNamespace");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      if (result.ouch4 != null) {
+        throw result.ouch4;
+      }
+      return;
+    }
+
+    public void renameNamespace(ByteBuffer login, String oldNamespaceName, String newNamespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceExistsException, org.apache.thrift.TException
+    {
+      send_renameNamespace(login, oldNamespaceName, newNamespaceName);
+      recv_renameNamespace();
+    }
+
+    public void send_renameNamespace(ByteBuffer login, String oldNamespaceName, String newNamespaceName) throws org.apache.thrift.TException
+    {
+      renameNamespace_args args = new renameNamespace_args();
+      args.setLogin(login);
+      args.setOldNamespaceName(oldNamespaceName);
+      args.setNewNamespaceName(newNamespaceName);
+      sendBase("renameNamespace", args);
+    }
+
+    public void recv_renameNamespace() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceExistsException, org.apache.thrift.TException
+    {
+      renameNamespace_result result = new renameNamespace_result();
+      receiveBase(result, "renameNamespace");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      if (result.ouch4 != null) {
+        throw result.ouch4;
+      }
+      return;
+    }
+
+    public void setNamespaceProperty(ByteBuffer login, String namespaceName, String property, String value) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_setNamespaceProperty(login, namespaceName, property, value);
+      recv_setNamespaceProperty();
+    }
+
+    public void send_setNamespaceProperty(ByteBuffer login, String namespaceName, String property, String value) throws org.apache.thrift.TException
+    {
+      setNamespaceProperty_args args = new setNamespaceProperty_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setProperty(property);
+      args.setValue(value);
+      sendBase("setNamespaceProperty", args);
+    }
+
+    public void recv_setNamespaceProperty() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      setNamespaceProperty_result result = new setNamespaceProperty_result();
+      receiveBase(result, "setNamespaceProperty");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      return;
+    }
+
+    public void removeNamespaceProperty(ByteBuffer login, String namespaceName, String property) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_removeNamespaceProperty(login, namespaceName, property);
+      recv_removeNamespaceProperty();
+    }
+
+    public void send_removeNamespaceProperty(ByteBuffer login, String namespaceName, String property) throws org.apache.thrift.TException
+    {
+      removeNamespaceProperty_args args = new removeNamespaceProperty_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setProperty(property);
+      sendBase("removeNamespaceProperty", args);
+    }
+
+    public void recv_removeNamespaceProperty() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      removeNamespaceProperty_result result = new removeNamespaceProperty_result();
+      receiveBase(result, "removeNamespaceProperty");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      return;
+    }
+
+    public Map<String,String> getNamespaceProperties(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_getNamespaceProperties(login, namespaceName);
+      return recv_getNamespaceProperties();
+    }
+
+    public void send_getNamespaceProperties(ByteBuffer login, String namespaceName) throws org.apache.thrift.TException
+    {
+      getNamespaceProperties_args args = new getNamespaceProperties_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      sendBase("getNamespaceProperties", args);
+    }
+
+    public Map<String,String> recv_getNamespaceProperties() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      getNamespaceProperties_result result = new getNamespaceProperties_result();
+      receiveBase(result, "getNamespaceProperties");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "getNamespaceProperties failed: unknown result");
+    }
+
+    public Map<String,String> namespaceIdMap(ByteBuffer login) throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      send_namespaceIdMap(login);
+      return recv_namespaceIdMap();
+    }
+
+    public void send_namespaceIdMap(ByteBuffer login) throws org.apache.thrift.TException
+    {
+      namespaceIdMap_args args = new namespaceIdMap_args();
+      args.setLogin(login);
+      sendBase("namespaceIdMap", args);
+    }
+
+    public Map<String,String> recv_namespaceIdMap() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException
+    {
+      namespaceIdMap_result result = new namespaceIdMap_result();
+      receiveBase(result, "namespaceIdMap");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "namespaceIdMap failed: unknown result");
+    }
+
+    public void attachNamespaceIterator(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_attachNamespaceIterator(login, namespaceName, setting, scopes);
+      recv_attachNamespaceIterator();
+    }
+
+    public void send_attachNamespaceIterator(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes) throws org.apache.thrift.TException
+    {
+      attachNamespaceIterator_args args = new attachNamespaceIterator_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setSetting(setting);
+      args.setScopes(scopes);
+      sendBase("attachNamespaceIterator", args);
+    }
+
+    public void recv_attachNamespaceIterator() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      attachNamespaceIterator_result result = new attachNamespaceIterator_result();
+      receiveBase(result, "attachNamespaceIterator");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      return;
+    }
+
+    public void removeNamespaceIterator(ByteBuffer login, String namespaceName, String name, Set<IteratorScope> scopes) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_removeNamespaceIterator(login, namespaceName, name, scopes);
+      recv_removeNamespaceIterator();
+    }
+
+    public void send_removeNamespaceIterator(ByteBuffer login, String namespaceName, String name, Set<IteratorScope> scopes) throws org.apache.thrift.TException
+    {
+      removeNamespaceIterator_args args = new removeNamespaceIterator_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setName(name);
+      args.setScopes(scopes);
+      sendBase("removeNamespaceIterator", args);
+    }
+
+    public void recv_removeNamespaceIterator() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      removeNamespaceIterator_result result = new removeNamespaceIterator_result();
+      receiveBase(result, "removeNamespaceIterator");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      return;
+    }
+
+    public IteratorSetting getNamespaceIteratorSetting(ByteBuffer login, String namespaceName, String name, IteratorScope scope) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_getNamespaceIteratorSetting(login, namespaceName, name, scope);
+      return recv_getNamespaceIteratorSetting();
+    }
+
+    public void send_getNamespaceIteratorSetting(ByteBuffer login, String namespaceName, String name, IteratorScope scope) throws org.apache.thrift.TException
+    {
+      getNamespaceIteratorSetting_args args = new getNamespaceIteratorSetting_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setName(name);
+      args.setScope(scope);
+      sendBase("getNamespaceIteratorSetting", args);
+    }
+
+    public IteratorSetting recv_getNamespaceIteratorSetting() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      getNamespaceIteratorSetting_result result = new getNamespaceIteratorSetting_result();
+      receiveBase(result, "getNamespaceIteratorSetting");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "getNamespaceIteratorSetting failed: unknown result");
+    }
+
+    public Map<String,Set<IteratorScope>> listNamespaceIterators(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_listNamespaceIterators(login, namespaceName);
+      return recv_listNamespaceIterators();
+    }
+
+    public void send_listNamespaceIterators(ByteBuffer login, String namespaceName) throws org.apache.thrift.TException
+    {
+      listNamespaceIterators_args args = new listNamespaceIterators_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      sendBase("listNamespaceIterators", args);
+    }
+
+    public Map<String,Set<IteratorScope>> recv_listNamespaceIterators() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      listNamespaceIterators_result result = new listNamespaceIterators_result();
+      receiveBase(result, "listNamespaceIterators");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "listNamespaceIterators failed: unknown result");
+    }
+
+    public void checkNamespaceIteratorConflicts(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes);
+      recv_checkNamespaceIteratorConflicts();
+    }
+
+    public void send_checkNamespaceIteratorConflicts(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes) throws org.apache.thrift.TException
+    {
+      checkNamespaceIteratorConflicts_args args = new checkNamespaceIteratorConflicts_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setSetting(setting);
+      args.setScopes(scopes);
+      sendBase("checkNamespaceIteratorConflicts", args);
+    }
+
+    public void recv_checkNamespaceIteratorConflicts() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      checkNamespaceIteratorConflicts_result result = new checkNamespaceIteratorConflicts_result();
+      receiveBase(result, "checkNamespaceIteratorConflicts");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      return;
+    }
+
+    public int addNamespaceConstraint(ByteBuffer login, String namespaceName, String constraintClassName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_addNamespaceConstraint(login, namespaceName, constraintClassName);
+      return recv_addNamespaceConstraint();
+    }
+
+    public void send_addNamespaceConstraint(ByteBuffer login, String namespaceName, String constraintClassName) throws org.apache.thrift.TException
+    {
+      addNamespaceConstraint_args args = new addNamespaceConstraint_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setConstraintClassName(constraintClassName);
+      sendBase("addNamespaceConstraint", args);
+    }
+
+    public int recv_addNamespaceConstraint() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      addNamespaceConstraint_result result = new addNamespaceConstraint_result();
+      receiveBase(result, "addNamespaceConstraint");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "addNamespaceConstraint failed: unknown result");
+    }
+
+    public void removeNamespaceConstraint(ByteBuffer login, String namespaceName, int id) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_removeNamespaceConstraint(login, namespaceName, id);
+      recv_removeNamespaceConstraint();
+    }
+
+    public void send_removeNamespaceConstraint(ByteBuffer login, String namespaceName, int id) throws org.apache.thrift.TException
+    {
+      removeNamespaceConstraint_args args = new removeNamespaceConstraint_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setId(id);
+      sendBase("removeNamespaceConstraint", args);
+    }
+
+    public void recv_removeNamespaceConstraint() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      removeNamespaceConstraint_result result = new removeNamespaceConstraint_result();
+      receiveBase(result, "removeNamespaceConstraint");
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      return;
+    }
+
+    public Map<String,Integer> listNamespaceConstraints(ByteBuffer login, String namespaceName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_listNamespaceConstraints(login, namespaceName);
+      return recv_listNamespaceConstraints();
+    }
+
+    public void send_listNamespaceConstraints(ByteBuffer login, String namespaceName) throws org.apache.thrift.TException
+    {
+      listNamespaceConstraints_args args = new listNamespaceConstraints_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      sendBase("listNamespaceConstraints", args);
+    }
+
+    public Map<String,Integer> recv_listNamespaceConstraints() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      listNamespaceConstraints_result result = new listNamespaceConstraints_result();
+      receiveBase(result, "listNamespaceConstraints");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "listNamespaceConstraints failed: unknown result");
+    }
+
+    public boolean testNamespaceClassLoad(ByteBuffer login, String namespaceName, String className, String asTypeName) throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      send_testNamespaceClassLoad(login, namespaceName, className, asTypeName);
+      return recv_testNamespaceClassLoad();
+    }
+
+    public void send_testNamespaceClassLoad(ByteBuffer login, String namespaceName, String className, String asTypeName) throws org.apache.thrift.TException
+    {
+      testNamespaceClassLoad_args args = new testNamespaceClassLoad_args();
+      args.setLogin(login);
+      args.setNamespaceName(namespaceName);
+      args.setClassName(className);
+      args.setAsTypeName(asTypeName);
+      sendBase("testNamespaceClassLoad", args);
+    }
+
+    public boolean recv_testNamespaceClassLoad() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException
+    {
+      testNamespaceClassLoad_result result = new testNamespaceClassLoad_result();
+      receiveBase(result, "testNamespaceClassLoad");
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.ouch1 != null) {
+        throw result.ouch1;
+      }
+      if (result.ouch2 != null) {
+        throw result.ouch2;
+      }
+      if (result.ouch3 != null) {
+        throw result.ouch3;
+      }
+      throw new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.MISSING_RESULT, "testNamespaceClassLoad failed: unknown result");
+    }
+
   }
   public static class AsyncClient extends org.apache.thrift.async.TAsyncClient implements AsyncIface {
     public static class Factory implements org.apache.thrift.async.TAsyncClientFactory<AsyncClient> {
@@ -5017,6 +5824,129 @@
       }
     }
 
+    public void grantNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      grantNamespacePermission_call method_call = new grantNamespacePermission_call(login, user, namespaceName, perm, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class grantNamespacePermission_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String user;
+      private String namespaceName;
+      private NamespacePermission perm;
+      public grantNamespacePermission_call(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.user = user;
+        this.namespaceName = namespaceName;
+        this.perm = perm;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("grantNamespacePermission", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        grantNamespacePermission_args args = new grantNamespacePermission_args();
+        args.setLogin(login);
+        args.setUser(user);
+        args.setNamespaceName(namespaceName);
+        args.setPerm(perm);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_grantNamespacePermission();
+      }
+    }
+
+    public void hasNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      hasNamespacePermission_call method_call = new hasNamespacePermission_call(login, user, namespaceName, perm, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class hasNamespacePermission_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String user;
+      private String namespaceName;
+      private NamespacePermission perm;
+      public hasNamespacePermission_call(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.user = user;
+        this.namespaceName = namespaceName;
+        this.perm = perm;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("hasNamespacePermission", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        hasNamespacePermission_args args = new hasNamespacePermission_args();
+        args.setLogin(login);
+        args.setUser(user);
+        args.setNamespaceName(namespaceName);
+        args.setPerm(perm);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public boolean getResult() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_hasNamespacePermission();
+      }
+    }
+
+    public void revokeNamespacePermission(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      revokeNamespacePermission_call method_call = new revokeNamespacePermission_call(login, user, namespaceName, perm, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class revokeNamespacePermission_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String user;
+      private String namespaceName;
+      private NamespacePermission perm;
+      public revokeNamespacePermission_call(ByteBuffer login, String user, String namespaceName, NamespacePermission perm, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.user = user;
+        this.namespaceName = namespaceName;
+        this.perm = perm;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("revokeNamespacePermission", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        revokeNamespacePermission_args args = new revokeNamespacePermission_args();
+        args.setLogin(login);
+        args.setUser(user);
+        args.setNamespaceName(namespaceName);
+        args.setPerm(perm);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_revokeNamespacePermission();
+      }
+    }
+
     public void createBatchScanner(ByteBuffer login, String tableName, BatchScanOptions options, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
       checkReady();
       createBatchScanner_call method_call = new createBatchScanner_call(login, tableName, options, resultHandler, this, ___protocolFactory, ___transport);
@@ -5317,7 +6247,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("update", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("update", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         update_args args = new update_args();
         args.setWriter(writer);
         args.setCells(cells);
@@ -5611,6 +6541,736 @@
       }
     }
 
+    public void systemNamespace(org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      systemNamespace_call method_call = new systemNamespace_call(resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class systemNamespace_call extends org.apache.thrift.async.TAsyncMethodCall {
+      public systemNamespace_call(org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("systemNamespace", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        systemNamespace_args args = new systemNamespace_args();
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public String getResult() throws org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_systemNamespace();
+      }
+    }
+
+    public void defaultNamespace(org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      defaultNamespace_call method_call = new defaultNamespace_call(resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class defaultNamespace_call extends org.apache.thrift.async.TAsyncMethodCall {
+      public defaultNamespace_call(org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("defaultNamespace", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        defaultNamespace_args args = new defaultNamespace_args();
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public String getResult() throws org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_defaultNamespace();
+      }
+    }
+
+    public void listNamespaces(ByteBuffer login, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      listNamespaces_call method_call = new listNamespaces_call(login, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class listNamespaces_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      public listNamespaces_call(ByteBuffer login, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("listNamespaces", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        listNamespaces_args args = new listNamespaces_args();
+        args.setLogin(login);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public List<String> getResult() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_listNamespaces();
+      }
+    }
+
+    public void namespaceExists(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      namespaceExists_call method_call = new namespaceExists_call(login, namespaceName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class namespaceExists_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      public namespaceExists_call(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("namespaceExists", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        namespaceExists_args args = new namespaceExists_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public boolean getResult() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_namespaceExists();
+      }
+    }
+
+    public void createNamespace(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      createNamespace_call method_call = new createNamespace_call(login, namespaceName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class createNamespace_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      public createNamespace_call(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("createNamespace", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        createNamespace_args args = new createNamespace_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceExistsException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_createNamespace();
+      }
+    }
+
+    public void deleteNamespace(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      deleteNamespace_call method_call = new deleteNamespace_call(login, namespaceName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class deleteNamespace_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      public deleteNamespace_call(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("deleteNamespace", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        deleteNamespace_args args = new deleteNamespace_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceNotEmptyException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_deleteNamespace();
+      }
+    }
+
+    public void renameNamespace(ByteBuffer login, String oldNamespaceName, String newNamespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      renameNamespace_call method_call = new renameNamespace_call(login, oldNamespaceName, newNamespaceName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class renameNamespace_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String oldNamespaceName;
+      private String newNamespaceName;
+      public renameNamespace_call(ByteBuffer login, String oldNamespaceName, String newNamespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.oldNamespaceName = oldNamespaceName;
+        this.newNamespaceName = newNamespaceName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("renameNamespace", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        renameNamespace_args args = new renameNamespace_args();
+        args.setLogin(login);
+        args.setOldNamespaceName(oldNamespaceName);
+        args.setNewNamespaceName(newNamespaceName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, NamespaceExistsException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_renameNamespace();
+      }
+    }
+
+    public void setNamespaceProperty(ByteBuffer login, String namespaceName, String property, String value, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      setNamespaceProperty_call method_call = new setNamespaceProperty_call(login, namespaceName, property, value, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class setNamespaceProperty_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private String property;
+      private String value;
+      public setNamespaceProperty_call(ByteBuffer login, String namespaceName, String property, String value, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.property = property;
+        this.value = value;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("setNamespaceProperty", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        setNamespaceProperty_args args = new setNamespaceProperty_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setProperty(property);
+        args.setValue(value);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_setNamespaceProperty();
+      }
+    }
+
+    public void removeNamespaceProperty(ByteBuffer login, String namespaceName, String property, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      removeNamespaceProperty_call method_call = new removeNamespaceProperty_call(login, namespaceName, property, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class removeNamespaceProperty_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private String property;
+      public removeNamespaceProperty_call(ByteBuffer login, String namespaceName, String property, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.property = property;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("removeNamespaceProperty", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        removeNamespaceProperty_args args = new removeNamespaceProperty_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setProperty(property);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_removeNamespaceProperty();
+      }
+    }
+
+    public void getNamespaceProperties(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      getNamespaceProperties_call method_call = new getNamespaceProperties_call(login, namespaceName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class getNamespaceProperties_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      public getNamespaceProperties_call(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("getNamespaceProperties", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        getNamespaceProperties_args args = new getNamespaceProperties_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public Map<String,String> getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_getNamespaceProperties();
+      }
+    }
+
+    public void namespaceIdMap(ByteBuffer login, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      namespaceIdMap_call method_call = new namespaceIdMap_call(login, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class namespaceIdMap_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      public namespaceIdMap_call(ByteBuffer login, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("namespaceIdMap", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        namespaceIdMap_args args = new namespaceIdMap_args();
+        args.setLogin(login);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public Map<String,String> getResult() throws AccumuloException, AccumuloSecurityException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_namespaceIdMap();
+      }
+    }
+
+    public void attachNamespaceIterator(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      attachNamespaceIterator_call method_call = new attachNamespaceIterator_call(login, namespaceName, setting, scopes, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class attachNamespaceIterator_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private IteratorSetting setting;
+      private Set<IteratorScope> scopes;
+      public attachNamespaceIterator_call(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.setting = setting;
+        this.scopes = scopes;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("attachNamespaceIterator", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        attachNamespaceIterator_args args = new attachNamespaceIterator_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setSetting(setting);
+        args.setScopes(scopes);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_attachNamespaceIterator();
+      }
+    }
+
+    public void removeNamespaceIterator(ByteBuffer login, String namespaceName, String name, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      removeNamespaceIterator_call method_call = new removeNamespaceIterator_call(login, namespaceName, name, scopes, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class removeNamespaceIterator_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private String name;
+      private Set<IteratorScope> scopes;
+      public removeNamespaceIterator_call(ByteBuffer login, String namespaceName, String name, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.name = name;
+        this.scopes = scopes;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("removeNamespaceIterator", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        removeNamespaceIterator_args args = new removeNamespaceIterator_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setName(name);
+        args.setScopes(scopes);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_removeNamespaceIterator();
+      }
+    }
+
+    public void getNamespaceIteratorSetting(ByteBuffer login, String namespaceName, String name, IteratorScope scope, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      getNamespaceIteratorSetting_call method_call = new getNamespaceIteratorSetting_call(login, namespaceName, name, scope, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class getNamespaceIteratorSetting_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private String name;
+      private IteratorScope scope;
+      public getNamespaceIteratorSetting_call(ByteBuffer login, String namespaceName, String name, IteratorScope scope, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.name = name;
+        this.scope = scope;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("getNamespaceIteratorSetting", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        getNamespaceIteratorSetting_args args = new getNamespaceIteratorSetting_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setName(name);
+        args.setScope(scope);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public IteratorSetting getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_getNamespaceIteratorSetting();
+      }
+    }
+
+    public void listNamespaceIterators(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      listNamespaceIterators_call method_call = new listNamespaceIterators_call(login, namespaceName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class listNamespaceIterators_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      public listNamespaceIterators_call(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("listNamespaceIterators", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        listNamespaceIterators_args args = new listNamespaceIterators_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public Map<String,Set<IteratorScope>> getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_listNamespaceIterators();
+      }
+    }
+
+    public void checkNamespaceIteratorConflicts(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      checkNamespaceIteratorConflicts_call method_call = new checkNamespaceIteratorConflicts_call(login, namespaceName, setting, scopes, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class checkNamespaceIteratorConflicts_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private IteratorSetting setting;
+      private Set<IteratorScope> scopes;
+      public checkNamespaceIteratorConflicts_call(ByteBuffer login, String namespaceName, IteratorSetting setting, Set<IteratorScope> scopes, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.setting = setting;
+        this.scopes = scopes;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("checkNamespaceIteratorConflicts", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        checkNamespaceIteratorConflicts_args args = new checkNamespaceIteratorConflicts_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setSetting(setting);
+        args.setScopes(scopes);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_checkNamespaceIteratorConflicts();
+      }
+    }
+
+    public void addNamespaceConstraint(ByteBuffer login, String namespaceName, String constraintClassName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      addNamespaceConstraint_call method_call = new addNamespaceConstraint_call(login, namespaceName, constraintClassName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class addNamespaceConstraint_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private String constraintClassName;
+      public addNamespaceConstraint_call(ByteBuffer login, String namespaceName, String constraintClassName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.constraintClassName = constraintClassName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("addNamespaceConstraint", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        addNamespaceConstraint_args args = new addNamespaceConstraint_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setConstraintClassName(constraintClassName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public int getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_addNamespaceConstraint();
+      }
+    }
+
+    public void removeNamespaceConstraint(ByteBuffer login, String namespaceName, int id, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      removeNamespaceConstraint_call method_call = new removeNamespaceConstraint_call(login, namespaceName, id, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class removeNamespaceConstraint_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private int id;
+      public removeNamespaceConstraint_call(ByteBuffer login, String namespaceName, int id, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.id = id;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("removeNamespaceConstraint", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        removeNamespaceConstraint_args args = new removeNamespaceConstraint_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setId(id);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public void getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        (new Client(prot)).recv_removeNamespaceConstraint();
+      }
+    }
+
+    public void listNamespaceConstraints(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      listNamespaceConstraints_call method_call = new listNamespaceConstraints_call(login, namespaceName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class listNamespaceConstraints_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      public listNamespaceConstraints_call(ByteBuffer login, String namespaceName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("listNamespaceConstraints", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        listNamespaceConstraints_args args = new listNamespaceConstraints_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public Map<String,Integer> getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_listNamespaceConstraints();
+      }
+    }
+
+    public void testNamespaceClassLoad(ByteBuffer login, String namespaceName, String className, String asTypeName, org.apache.thrift.async.AsyncMethodCallback resultHandler) throws org.apache.thrift.TException {
+      checkReady();
+      testNamespaceClassLoad_call method_call = new testNamespaceClassLoad_call(login, namespaceName, className, asTypeName, resultHandler, this, ___protocolFactory, ___transport);
+      this.___currentMethod = method_call;
+      ___manager.call(method_call);
+    }
+
+    public static class testNamespaceClassLoad_call extends org.apache.thrift.async.TAsyncMethodCall {
+      private ByteBuffer login;
+      private String namespaceName;
+      private String className;
+      private String asTypeName;
+      public testNamespaceClassLoad_call(ByteBuffer login, String namespaceName, String className, String asTypeName, org.apache.thrift.async.AsyncMethodCallback resultHandler, org.apache.thrift.async.TAsyncClient client, org.apache.thrift.protocol.TProtocolFactory protocolFactory, org.apache.thrift.transport.TNonblockingTransport transport) throws org.apache.thrift.TException {
+        super(client, protocolFactory, transport, resultHandler, false);
+        this.login = login;
+        this.namespaceName = namespaceName;
+        this.className = className;
+        this.asTypeName = asTypeName;
+      }
+
+      public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("testNamespaceClassLoad", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        testNamespaceClassLoad_args args = new testNamespaceClassLoad_args();
+        args.setLogin(login);
+        args.setNamespaceName(namespaceName);
+        args.setClassName(className);
+        args.setAsTypeName(asTypeName);
+        args.write(prot);
+        prot.writeMessageEnd();
+      }
+
+      public boolean getResult() throws AccumuloException, AccumuloSecurityException, NamespaceNotFoundException, org.apache.thrift.TException {
+        if (getState() != org.apache.thrift.async.TAsyncMethodCall.State.RESPONSE_READ) {
+          throw new IllegalStateException("Method call not finished!");
+        }
+        org.apache.thrift.transport.TMemoryInputTransport memoryTransport = new org.apache.thrift.transport.TMemoryInputTransport(getFrameBuffer().array());
+        org.apache.thrift.protocol.TProtocol prot = client.getProtocolFactory().getProtocol(memoryTransport);
+        return (new Client(prot)).recv_testNamespaceClassLoad();
+      }
+    }
+
   }
 
   public static class Processor<I extends Iface> extends org.apache.thrift.TBaseProcessor<I> implements org.apache.thrift.TProcessor {
@@ -5684,6 +7344,9 @@
       processMap.put("listLocalUsers", new listLocalUsers());
       processMap.put("revokeSystemPermission", new revokeSystemPermission());
       processMap.put("revokeTablePermission", new revokeTablePermission());
+      processMap.put("grantNamespacePermission", new grantNamespacePermission());
+      processMap.put("hasNamespacePermission", new hasNamespacePermission());
+      processMap.put("revokeNamespacePermission", new revokeNamespacePermission());
       processMap.put("createBatchScanner", new createBatchScanner());
       processMap.put("createScanner", new createScanner());
       processMap.put("hasNext", new hasNext());
@@ -5701,6 +7364,26 @@
       processMap.put("closeConditionalWriter", new closeConditionalWriter());
       processMap.put("getRowRange", new getRowRange());
       processMap.put("getFollowing", new getFollowing());
+      processMap.put("systemNamespace", new systemNamespace());
+      processMap.put("defaultNamespace", new defaultNamespace());
+      processMap.put("listNamespaces", new listNamespaces());
+      processMap.put("namespaceExists", new namespaceExists());
+      processMap.put("createNamespace", new createNamespace());
+      processMap.put("deleteNamespace", new deleteNamespace());
+      processMap.put("renameNamespace", new renameNamespace());
+      processMap.put("setNamespaceProperty", new setNamespaceProperty());
+      processMap.put("removeNamespaceProperty", new removeNamespaceProperty());
+      processMap.put("getNamespaceProperties", new getNamespaceProperties());
+      processMap.put("namespaceIdMap", new namespaceIdMap());
+      processMap.put("attachNamespaceIterator", new attachNamespaceIterator());
+      processMap.put("removeNamespaceIterator", new removeNamespaceIterator());
+      processMap.put("getNamespaceIteratorSetting", new getNamespaceIteratorSetting());
+      processMap.put("listNamespaceIterators", new listNamespaceIterators());
+      processMap.put("checkNamespaceIteratorConflicts", new checkNamespaceIteratorConflicts());
+      processMap.put("addNamespaceConstraint", new addNamespaceConstraint());
+      processMap.put("removeNamespaceConstraint", new removeNamespaceConstraint());
+      processMap.put("listNamespaceConstraints", new listNamespaceConstraints());
+      processMap.put("testNamespaceClassLoad", new testNamespaceClassLoad());
       return processMap;
     }
 
@@ -7321,6 +9004,85 @@
       }
     }
 
+    public static class grantNamespacePermission<I extends Iface> extends org.apache.thrift.ProcessFunction<I, grantNamespacePermission_args> {
+      public grantNamespacePermission() {
+        super("grantNamespacePermission");
+      }
+
+      public grantNamespacePermission_args getEmptyArgsInstance() {
+        return new grantNamespacePermission_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public grantNamespacePermission_result getResult(I iface, grantNamespacePermission_args args) throws org.apache.thrift.TException {
+        grantNamespacePermission_result result = new grantNamespacePermission_result();
+        try {
+          iface.grantNamespacePermission(args.login, args.user, args.namespaceName, args.perm);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        }
+        return result;
+      }
+    }
+
+    public static class hasNamespacePermission<I extends Iface> extends org.apache.thrift.ProcessFunction<I, hasNamespacePermission_args> {
+      public hasNamespacePermission() {
+        super("hasNamespacePermission");
+      }
+
+      public hasNamespacePermission_args getEmptyArgsInstance() {
+        return new hasNamespacePermission_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public hasNamespacePermission_result getResult(I iface, hasNamespacePermission_args args) throws org.apache.thrift.TException {
+        hasNamespacePermission_result result = new hasNamespacePermission_result();
+        try {
+          result.success = iface.hasNamespacePermission(args.login, args.user, args.namespaceName, args.perm);
+          result.setSuccessIsSet(true);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        }
+        return result;
+      }
+    }
+
+    public static class revokeNamespacePermission<I extends Iface> extends org.apache.thrift.ProcessFunction<I, revokeNamespacePermission_args> {
+      public revokeNamespacePermission() {
+        super("revokeNamespacePermission");
+      }
+
+      public revokeNamespacePermission_args getEmptyArgsInstance() {
+        return new revokeNamespacePermission_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public revokeNamespacePermission_result getResult(I iface, revokeNamespacePermission_args args) throws org.apache.thrift.TException {
+        revokeNamespacePermission_result result = new revokeNamespacePermission_result();
+        try {
+          iface.revokeNamespacePermission(args.login, args.user, args.namespaceName, args.perm);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        }
+        return result;
+      }
+    }
+
     public static class createBatchScanner<I extends Iface> extends org.apache.thrift.ProcessFunction<I, createBatchScanner_args> {
       public createBatchScanner() {
         super("createBatchScanner");
@@ -7755,6 +9517,551 @@
       }
     }
 
+    public static class systemNamespace<I extends Iface> extends org.apache.thrift.ProcessFunction<I, systemNamespace_args> {
+      public systemNamespace() {
+        super("systemNamespace");
+      }
+
+      public systemNamespace_args getEmptyArgsInstance() {
+        return new systemNamespace_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public systemNamespace_result getResult(I iface, systemNamespace_args args) throws org.apache.thrift.TException {
+        systemNamespace_result result = new systemNamespace_result();
+        result.success = iface.systemNamespace();
+        return result;
+      }
+    }
+
+    public static class defaultNamespace<I extends Iface> extends org.apache.thrift.ProcessFunction<I, defaultNamespace_args> {
+      public defaultNamespace() {
+        super("defaultNamespace");
+      }
+
+      public defaultNamespace_args getEmptyArgsInstance() {
+        return new defaultNamespace_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public defaultNamespace_result getResult(I iface, defaultNamespace_args args) throws org.apache.thrift.TException {
+        defaultNamespace_result result = new defaultNamespace_result();
+        result.success = iface.defaultNamespace();
+        return result;
+      }
+    }
+
+    public static class listNamespaces<I extends Iface> extends org.apache.thrift.ProcessFunction<I, listNamespaces_args> {
+      public listNamespaces() {
+        super("listNamespaces");
+      }
+
+      public listNamespaces_args getEmptyArgsInstance() {
+        return new listNamespaces_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public listNamespaces_result getResult(I iface, listNamespaces_args args) throws org.apache.thrift.TException {
+        listNamespaces_result result = new listNamespaces_result();
+        try {
+          result.success = iface.listNamespaces(args.login);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        }
+        return result;
+      }
+    }
+
+    public static class namespaceExists<I extends Iface> extends org.apache.thrift.ProcessFunction<I, namespaceExists_args> {
+      public namespaceExists() {
+        super("namespaceExists");
+      }
+
+      public namespaceExists_args getEmptyArgsInstance() {
+        return new namespaceExists_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public namespaceExists_result getResult(I iface, namespaceExists_args args) throws org.apache.thrift.TException {
+        namespaceExists_result result = new namespaceExists_result();
+        try {
+          result.success = iface.namespaceExists(args.login, args.namespaceName);
+          result.setSuccessIsSet(true);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        }
+        return result;
+      }
+    }
+
+    public static class createNamespace<I extends Iface> extends org.apache.thrift.ProcessFunction<I, createNamespace_args> {
+      public createNamespace() {
+        super("createNamespace");
+      }
+
+      public createNamespace_args getEmptyArgsInstance() {
+        return new createNamespace_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public createNamespace_result getResult(I iface, createNamespace_args args) throws org.apache.thrift.TException {
+        createNamespace_result result = new createNamespace_result();
+        try {
+          iface.createNamespace(args.login, args.namespaceName);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceExistsException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class deleteNamespace<I extends Iface> extends org.apache.thrift.ProcessFunction<I, deleteNamespace_args> {
+      public deleteNamespace() {
+        super("deleteNamespace");
+      }
+
+      public deleteNamespace_args getEmptyArgsInstance() {
+        return new deleteNamespace_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public deleteNamespace_result getResult(I iface, deleteNamespace_args args) throws org.apache.thrift.TException {
+        deleteNamespace_result result = new deleteNamespace_result();
+        try {
+          iface.deleteNamespace(args.login, args.namespaceName);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        } catch (NamespaceNotEmptyException ouch4) {
+          result.ouch4 = ouch4;
+        }
+        return result;
+      }
+    }
+
+    public static class renameNamespace<I extends Iface> extends org.apache.thrift.ProcessFunction<I, renameNamespace_args> {
+      public renameNamespace() {
+        super("renameNamespace");
+      }
+
+      public renameNamespace_args getEmptyArgsInstance() {
+        return new renameNamespace_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public renameNamespace_result getResult(I iface, renameNamespace_args args) throws org.apache.thrift.TException {
+        renameNamespace_result result = new renameNamespace_result();
+        try {
+          iface.renameNamespace(args.login, args.oldNamespaceName, args.newNamespaceName);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        } catch (NamespaceExistsException ouch4) {
+          result.ouch4 = ouch4;
+        }
+        return result;
+      }
+    }
+
+    public static class setNamespaceProperty<I extends Iface> extends org.apache.thrift.ProcessFunction<I, setNamespaceProperty_args> {
+      public setNamespaceProperty() {
+        super("setNamespaceProperty");
+      }
+
+      public setNamespaceProperty_args getEmptyArgsInstance() {
+        return new setNamespaceProperty_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public setNamespaceProperty_result getResult(I iface, setNamespaceProperty_args args) throws org.apache.thrift.TException {
+        setNamespaceProperty_result result = new setNamespaceProperty_result();
+        try {
+          iface.setNamespaceProperty(args.login, args.namespaceName, args.property, args.value);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class removeNamespaceProperty<I extends Iface> extends org.apache.thrift.ProcessFunction<I, removeNamespaceProperty_args> {
+      public removeNamespaceProperty() {
+        super("removeNamespaceProperty");
+      }
+
+      public removeNamespaceProperty_args getEmptyArgsInstance() {
+        return new removeNamespaceProperty_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public removeNamespaceProperty_result getResult(I iface, removeNamespaceProperty_args args) throws org.apache.thrift.TException {
+        removeNamespaceProperty_result result = new removeNamespaceProperty_result();
+        try {
+          iface.removeNamespaceProperty(args.login, args.namespaceName, args.property);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class getNamespaceProperties<I extends Iface> extends org.apache.thrift.ProcessFunction<I, getNamespaceProperties_args> {
+      public getNamespaceProperties() {
+        super("getNamespaceProperties");
+      }
+
+      public getNamespaceProperties_args getEmptyArgsInstance() {
+        return new getNamespaceProperties_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public getNamespaceProperties_result getResult(I iface, getNamespaceProperties_args args) throws org.apache.thrift.TException {
+        getNamespaceProperties_result result = new getNamespaceProperties_result();
+        try {
+          result.success = iface.getNamespaceProperties(args.login, args.namespaceName);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class namespaceIdMap<I extends Iface> extends org.apache.thrift.ProcessFunction<I, namespaceIdMap_args> {
+      public namespaceIdMap() {
+        super("namespaceIdMap");
+      }
+
+      public namespaceIdMap_args getEmptyArgsInstance() {
+        return new namespaceIdMap_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public namespaceIdMap_result getResult(I iface, namespaceIdMap_args args) throws org.apache.thrift.TException {
+        namespaceIdMap_result result = new namespaceIdMap_result();
+        try {
+          result.success = iface.namespaceIdMap(args.login);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        }
+        return result;
+      }
+    }
+
+    public static class attachNamespaceIterator<I extends Iface> extends org.apache.thrift.ProcessFunction<I, attachNamespaceIterator_args> {
+      public attachNamespaceIterator() {
+        super("attachNamespaceIterator");
+      }
+
+      public attachNamespaceIterator_args getEmptyArgsInstance() {
+        return new attachNamespaceIterator_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public attachNamespaceIterator_result getResult(I iface, attachNamespaceIterator_args args) throws org.apache.thrift.TException {
+        attachNamespaceIterator_result result = new attachNamespaceIterator_result();
+        try {
+          iface.attachNamespaceIterator(args.login, args.namespaceName, args.setting, args.scopes);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class removeNamespaceIterator<I extends Iface> extends org.apache.thrift.ProcessFunction<I, removeNamespaceIterator_args> {
+      public removeNamespaceIterator() {
+        super("removeNamespaceIterator");
+      }
+
+      public removeNamespaceIterator_args getEmptyArgsInstance() {
+        return new removeNamespaceIterator_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public removeNamespaceIterator_result getResult(I iface, removeNamespaceIterator_args args) throws org.apache.thrift.TException {
+        removeNamespaceIterator_result result = new removeNamespaceIterator_result();
+        try {
+          iface.removeNamespaceIterator(args.login, args.namespaceName, args.name, args.scopes);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class getNamespaceIteratorSetting<I extends Iface> extends org.apache.thrift.ProcessFunction<I, getNamespaceIteratorSetting_args> {
+      public getNamespaceIteratorSetting() {
+        super("getNamespaceIteratorSetting");
+      }
+
+      public getNamespaceIteratorSetting_args getEmptyArgsInstance() {
+        return new getNamespaceIteratorSetting_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public getNamespaceIteratorSetting_result getResult(I iface, getNamespaceIteratorSetting_args args) throws org.apache.thrift.TException {
+        getNamespaceIteratorSetting_result result = new getNamespaceIteratorSetting_result();
+        try {
+          result.success = iface.getNamespaceIteratorSetting(args.login, args.namespaceName, args.name, args.scope);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class listNamespaceIterators<I extends Iface> extends org.apache.thrift.ProcessFunction<I, listNamespaceIterators_args> {
+      public listNamespaceIterators() {
+        super("listNamespaceIterators");
+      }
+
+      public listNamespaceIterators_args getEmptyArgsInstance() {
+        return new listNamespaceIterators_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public listNamespaceIterators_result getResult(I iface, listNamespaceIterators_args args) throws org.apache.thrift.TException {
+        listNamespaceIterators_result result = new listNamespaceIterators_result();
+        try {
+          result.success = iface.listNamespaceIterators(args.login, args.namespaceName);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class checkNamespaceIteratorConflicts<I extends Iface> extends org.apache.thrift.ProcessFunction<I, checkNamespaceIteratorConflicts_args> {
+      public checkNamespaceIteratorConflicts() {
+        super("checkNamespaceIteratorConflicts");
+      }
+
+      public checkNamespaceIteratorConflicts_args getEmptyArgsInstance() {
+        return new checkNamespaceIteratorConflicts_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public checkNamespaceIteratorConflicts_result getResult(I iface, checkNamespaceIteratorConflicts_args args) throws org.apache.thrift.TException {
+        checkNamespaceIteratorConflicts_result result = new checkNamespaceIteratorConflicts_result();
+        try {
+          iface.checkNamespaceIteratorConflicts(args.login, args.namespaceName, args.setting, args.scopes);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class addNamespaceConstraint<I extends Iface> extends org.apache.thrift.ProcessFunction<I, addNamespaceConstraint_args> {
+      public addNamespaceConstraint() {
+        super("addNamespaceConstraint");
+      }
+
+      public addNamespaceConstraint_args getEmptyArgsInstance() {
+        return new addNamespaceConstraint_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public addNamespaceConstraint_result getResult(I iface, addNamespaceConstraint_args args) throws org.apache.thrift.TException {
+        addNamespaceConstraint_result result = new addNamespaceConstraint_result();
+        try {
+          result.success = iface.addNamespaceConstraint(args.login, args.namespaceName, args.constraintClassName);
+          result.setSuccessIsSet(true);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class removeNamespaceConstraint<I extends Iface> extends org.apache.thrift.ProcessFunction<I, removeNamespaceConstraint_args> {
+      public removeNamespaceConstraint() {
+        super("removeNamespaceConstraint");
+      }
+
+      public removeNamespaceConstraint_args getEmptyArgsInstance() {
+        return new removeNamespaceConstraint_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public removeNamespaceConstraint_result getResult(I iface, removeNamespaceConstraint_args args) throws org.apache.thrift.TException {
+        removeNamespaceConstraint_result result = new removeNamespaceConstraint_result();
+        try {
+          iface.removeNamespaceConstraint(args.login, args.namespaceName, args.id);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class listNamespaceConstraints<I extends Iface> extends org.apache.thrift.ProcessFunction<I, listNamespaceConstraints_args> {
+      public listNamespaceConstraints() {
+        super("listNamespaceConstraints");
+      }
+
+      public listNamespaceConstraints_args getEmptyArgsInstance() {
+        return new listNamespaceConstraints_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public listNamespaceConstraints_result getResult(I iface, listNamespaceConstraints_args args) throws org.apache.thrift.TException {
+        listNamespaceConstraints_result result = new listNamespaceConstraints_result();
+        try {
+          result.success = iface.listNamespaceConstraints(args.login, args.namespaceName);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
+    public static class testNamespaceClassLoad<I extends Iface> extends org.apache.thrift.ProcessFunction<I, testNamespaceClassLoad_args> {
+      public testNamespaceClassLoad() {
+        super("testNamespaceClassLoad");
+      }
+
+      public testNamespaceClassLoad_args getEmptyArgsInstance() {
+        return new testNamespaceClassLoad_args();
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public testNamespaceClassLoad_result getResult(I iface, testNamespaceClassLoad_args args) throws org.apache.thrift.TException {
+        testNamespaceClassLoad_result result = new testNamespaceClassLoad_result();
+        try {
+          result.success = iface.testNamespaceClassLoad(args.login, args.namespaceName, args.className, args.asTypeName);
+          result.setSuccessIsSet(true);
+        } catch (AccumuloException ouch1) {
+          result.ouch1 = ouch1;
+        } catch (AccumuloSecurityException ouch2) {
+          result.ouch2 = ouch2;
+        } catch (NamespaceNotFoundException ouch3) {
+          result.ouch3 = ouch3;
+        }
+        return result;
+      }
+    }
+
   }
 
   public static class AsyncProcessor<I extends AsyncIface> extends org.apache.thrift.TBaseAsyncProcessor<I> {
@@ -7828,6 +10135,9 @@
       processMap.put("listLocalUsers", new listLocalUsers());
       processMap.put("revokeSystemPermission", new revokeSystemPermission());
       processMap.put("revokeTablePermission", new revokeTablePermission());
+      processMap.put("grantNamespacePermission", new grantNamespacePermission());
+      processMap.put("hasNamespacePermission", new hasNamespacePermission());
+      processMap.put("revokeNamespacePermission", new revokeNamespacePermission());
       processMap.put("createBatchScanner", new createBatchScanner());
       processMap.put("createScanner", new createScanner());
       processMap.put("hasNext", new hasNext());
@@ -7845,6 +10155,26 @@
       processMap.put("closeConditionalWriter", new closeConditionalWriter());
       processMap.put("getRowRange", new getRowRange());
       processMap.put("getFollowing", new getFollowing());
+      processMap.put("systemNamespace", new systemNamespace());
+      processMap.put("defaultNamespace", new defaultNamespace());
+      processMap.put("listNamespaces", new listNamespaces());
+      processMap.put("namespaceExists", new namespaceExists());
+      processMap.put("createNamespace", new createNamespace());
+      processMap.put("deleteNamespace", new deleteNamespace());
+      processMap.put("renameNamespace", new renameNamespace());
+      processMap.put("setNamespaceProperty", new setNamespaceProperty());
+      processMap.put("removeNamespaceProperty", new removeNamespaceProperty());
+      processMap.put("getNamespaceProperties", new getNamespaceProperties());
+      processMap.put("namespaceIdMap", new namespaceIdMap());
+      processMap.put("attachNamespaceIterator", new attachNamespaceIterator());
+      processMap.put("removeNamespaceIterator", new removeNamespaceIterator());
+      processMap.put("getNamespaceIteratorSetting", new getNamespaceIteratorSetting());
+      processMap.put("listNamespaceIterators", new listNamespaceIterators());
+      processMap.put("checkNamespaceIteratorConflicts", new checkNamespaceIteratorConflicts());
+      processMap.put("addNamespaceConstraint", new addNamespaceConstraint());
+      processMap.put("removeNamespaceConstraint", new removeNamespaceConstraint());
+      processMap.put("listNamespaceConstraints", new listNamespaceConstraints());
+      processMap.put("testNamespaceClassLoad", new testNamespaceClassLoad());
       return processMap;
     }
 
@@ -11682,6 +14012,191 @@
       }
     }
 
+    public static class grantNamespacePermission<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, grantNamespacePermission_args, Void> {
+      public grantNamespacePermission() {
+        super("grantNamespacePermission");
+      }
+
+      public grantNamespacePermission_args getEmptyArgsInstance() {
+        return new grantNamespacePermission_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            grantNamespacePermission_result result = new grantNamespacePermission_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            grantNamespacePermission_result result = new grantNamespacePermission_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, grantNamespacePermission_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.grantNamespacePermission(args.login, args.user, args.namespaceName, args.perm,resultHandler);
+      }
+    }
+
+    public static class hasNamespacePermission<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, hasNamespacePermission_args, Boolean> {
+      public hasNamespacePermission() {
+        super("hasNamespacePermission");
+      }
+
+      public hasNamespacePermission_args getEmptyArgsInstance() {
+        return new hasNamespacePermission_args();
+      }
+
+      public AsyncMethodCallback<Boolean> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Boolean>() { 
+          public void onComplete(Boolean o) {
+            hasNamespacePermission_result result = new hasNamespacePermission_result();
+            result.success = o;
+            result.setSuccessIsSet(true);
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            hasNamespacePermission_result result = new hasNamespacePermission_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, hasNamespacePermission_args args, org.apache.thrift.async.AsyncMethodCallback<Boolean> resultHandler) throws TException {
+        iface.hasNamespacePermission(args.login, args.user, args.namespaceName, args.perm,resultHandler);
+      }
+    }
+
+    public static class revokeNamespacePermission<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, revokeNamespacePermission_args, Void> {
+      public revokeNamespacePermission() {
+        super("revokeNamespacePermission");
+      }
+
+      public revokeNamespacePermission_args getEmptyArgsInstance() {
+        return new revokeNamespacePermission_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            revokeNamespacePermission_result result = new revokeNamespacePermission_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            revokeNamespacePermission_result result = new revokeNamespacePermission_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, revokeNamespacePermission_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.revokeNamespacePermission(args.login, args.user, args.namespaceName, args.perm,resultHandler);
+      }
+    }
+
     public static class createBatchScanner<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, createBatchScanner_args, String> {
       public createBatchScanner() {
         super("createBatchScanner");
@@ -12705,6 +15220,1303 @@
       }
     }
 
+    public static class systemNamespace<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, systemNamespace_args, String> {
+      public systemNamespace() {
+        super("systemNamespace");
+      }
+
+      public systemNamespace_args getEmptyArgsInstance() {
+        return new systemNamespace_args();
+      }
+
+      public AsyncMethodCallback<String> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<String>() { 
+          public void onComplete(String o) {
+            systemNamespace_result result = new systemNamespace_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            systemNamespace_result result = new systemNamespace_result();
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, systemNamespace_args args, org.apache.thrift.async.AsyncMethodCallback<String> resultHandler) throws TException {
+        iface.systemNamespace(resultHandler);
+      }
+    }
+
+    public static class defaultNamespace<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, defaultNamespace_args, String> {
+      public defaultNamespace() {
+        super("defaultNamespace");
+      }
+
+      public defaultNamespace_args getEmptyArgsInstance() {
+        return new defaultNamespace_args();
+      }
+
+      public AsyncMethodCallback<String> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<String>() { 
+          public void onComplete(String o) {
+            defaultNamespace_result result = new defaultNamespace_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            defaultNamespace_result result = new defaultNamespace_result();
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, defaultNamespace_args args, org.apache.thrift.async.AsyncMethodCallback<String> resultHandler) throws TException {
+        iface.defaultNamespace(resultHandler);
+      }
+    }
+
+    public static class listNamespaces<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, listNamespaces_args, List<String>> {
+      public listNamespaces() {
+        super("listNamespaces");
+      }
+
+      public listNamespaces_args getEmptyArgsInstance() {
+        return new listNamespaces_args();
+      }
+
+      public AsyncMethodCallback<List<String>> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<List<String>>() { 
+          public void onComplete(List<String> o) {
+            listNamespaces_result result = new listNamespaces_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            listNamespaces_result result = new listNamespaces_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, listNamespaces_args args, org.apache.thrift.async.AsyncMethodCallback<List<String>> resultHandler) throws TException {
+        iface.listNamespaces(args.login,resultHandler);
+      }
+    }
+
+    public static class namespaceExists<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, namespaceExists_args, Boolean> {
+      public namespaceExists() {
+        super("namespaceExists");
+      }
+
+      public namespaceExists_args getEmptyArgsInstance() {
+        return new namespaceExists_args();
+      }
+
+      public AsyncMethodCallback<Boolean> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Boolean>() { 
+          public void onComplete(Boolean o) {
+            namespaceExists_result result = new namespaceExists_result();
+            result.success = o;
+            result.setSuccessIsSet(true);
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            namespaceExists_result result = new namespaceExists_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, namespaceExists_args args, org.apache.thrift.async.AsyncMethodCallback<Boolean> resultHandler) throws TException {
+        iface.namespaceExists(args.login, args.namespaceName,resultHandler);
+      }
+    }
+
+    public static class createNamespace<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, createNamespace_args, Void> {
+      public createNamespace() {
+        super("createNamespace");
+      }
+
+      public createNamespace_args getEmptyArgsInstance() {
+        return new createNamespace_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            createNamespace_result result = new createNamespace_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            createNamespace_result result = new createNamespace_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof NamespaceExistsException) {
+                        result.ouch3 = (NamespaceExistsException) e;
+                        result.setOuch3IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, createNamespace_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.createNamespace(args.login, args.namespaceName,resultHandler);
+      }
+    }
+
+    public static class deleteNamespace<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, deleteNamespace_args, Void> {
+      public deleteNamespace() {
+        super("deleteNamespace");
+      }
+
+      public deleteNamespace_args getEmptyArgsInstance() {
+        return new deleteNamespace_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            deleteNamespace_result result = new deleteNamespace_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            deleteNamespace_result result = new deleteNamespace_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotEmptyException) {
+              result.ouch4 = (NamespaceNotEmptyException) e;
+              result.setOuch4IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, deleteNamespace_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.deleteNamespace(args.login, args.namespaceName,resultHandler);
+      }
+    }
+
+    public static class renameNamespace<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, renameNamespace_args, Void> {
+      public renameNamespace() {
+        super("renameNamespace");
+      }
+
+      public renameNamespace_args getEmptyArgsInstance() {
+        return new renameNamespace_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            renameNamespace_result result = new renameNamespace_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            renameNamespace_result result = new renameNamespace_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceExistsException) {
+              result.ouch4 = (NamespaceExistsException) e;
+              result.setOuch4IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, renameNamespace_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.renameNamespace(args.login, args.oldNamespaceName, args.newNamespaceName,resultHandler);
+      }
+    }
+
+    public static class setNamespaceProperty<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, setNamespaceProperty_args, Void> {
+      public setNamespaceProperty() {
+        super("setNamespaceProperty");
+      }
+
+      public setNamespaceProperty_args getEmptyArgsInstance() {
+        return new setNamespaceProperty_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            setNamespaceProperty_result result = new setNamespaceProperty_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            setNamespaceProperty_result result = new setNamespaceProperty_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, setNamespaceProperty_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.setNamespaceProperty(args.login, args.namespaceName, args.property, args.value,resultHandler);
+      }
+    }
+
+    public static class removeNamespaceProperty<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, removeNamespaceProperty_args, Void> {
+      public removeNamespaceProperty() {
+        super("removeNamespaceProperty");
+      }
+
+      public removeNamespaceProperty_args getEmptyArgsInstance() {
+        return new removeNamespaceProperty_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            removeNamespaceProperty_result result = new removeNamespaceProperty_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            removeNamespaceProperty_result result = new removeNamespaceProperty_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, removeNamespaceProperty_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.removeNamespaceProperty(args.login, args.namespaceName, args.property,resultHandler);
+      }
+    }
+
+    public static class getNamespaceProperties<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, getNamespaceProperties_args, Map<String,String>> {
+      public getNamespaceProperties() {
+        super("getNamespaceProperties");
+      }
+
+      public getNamespaceProperties_args getEmptyArgsInstance() {
+        return new getNamespaceProperties_args();
+      }
+
+      public AsyncMethodCallback<Map<String,String>> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Map<String,String>>() { 
+          public void onComplete(Map<String,String> o) {
+            getNamespaceProperties_result result = new getNamespaceProperties_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            getNamespaceProperties_result result = new getNamespaceProperties_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, getNamespaceProperties_args args, org.apache.thrift.async.AsyncMethodCallback<Map<String,String>> resultHandler) throws TException {
+        iface.getNamespaceProperties(args.login, args.namespaceName,resultHandler);
+      }
+    }
+
+    public static class namespaceIdMap<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, namespaceIdMap_args, Map<String,String>> {
+      public namespaceIdMap() {
+        super("namespaceIdMap");
+      }
+
+      public namespaceIdMap_args getEmptyArgsInstance() {
+        return new namespaceIdMap_args();
+      }
+
+      public AsyncMethodCallback<Map<String,String>> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Map<String,String>>() { 
+          public void onComplete(Map<String,String> o) {
+            namespaceIdMap_result result = new namespaceIdMap_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            namespaceIdMap_result result = new namespaceIdMap_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, namespaceIdMap_args args, org.apache.thrift.async.AsyncMethodCallback<Map<String,String>> resultHandler) throws TException {
+        iface.namespaceIdMap(args.login,resultHandler);
+      }
+    }
+
+    public static class attachNamespaceIterator<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, attachNamespaceIterator_args, Void> {
+      public attachNamespaceIterator() {
+        super("attachNamespaceIterator");
+      }
+
+      public attachNamespaceIterator_args getEmptyArgsInstance() {
+        return new attachNamespaceIterator_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            attachNamespaceIterator_result result = new attachNamespaceIterator_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            attachNamespaceIterator_result result = new attachNamespaceIterator_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, attachNamespaceIterator_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.attachNamespaceIterator(args.login, args.namespaceName, args.setting, args.scopes,resultHandler);
+      }
+    }
+
+    public static class removeNamespaceIterator<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, removeNamespaceIterator_args, Void> {
+      public removeNamespaceIterator() {
+        super("removeNamespaceIterator");
+      }
+
+      public removeNamespaceIterator_args getEmptyArgsInstance() {
+        return new removeNamespaceIterator_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            removeNamespaceIterator_result result = new removeNamespaceIterator_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            removeNamespaceIterator_result result = new removeNamespaceIterator_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, removeNamespaceIterator_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.removeNamespaceIterator(args.login, args.namespaceName, args.name, args.scopes,resultHandler);
+      }
+    }
+
+    public static class getNamespaceIteratorSetting<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, getNamespaceIteratorSetting_args, IteratorSetting> {
+      public getNamespaceIteratorSetting() {
+        super("getNamespaceIteratorSetting");
+      }
+
+      public getNamespaceIteratorSetting_args getEmptyArgsInstance() {
+        return new getNamespaceIteratorSetting_args();
+      }
+
+      public AsyncMethodCallback<IteratorSetting> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<IteratorSetting>() { 
+          public void onComplete(IteratorSetting o) {
+            getNamespaceIteratorSetting_result result = new getNamespaceIteratorSetting_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            getNamespaceIteratorSetting_result result = new getNamespaceIteratorSetting_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, getNamespaceIteratorSetting_args args, org.apache.thrift.async.AsyncMethodCallback<IteratorSetting> resultHandler) throws TException {
+        iface.getNamespaceIteratorSetting(args.login, args.namespaceName, args.name, args.scope,resultHandler);
+      }
+    }
+
+    public static class listNamespaceIterators<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, listNamespaceIterators_args, Map<String,Set<IteratorScope>>> {
+      public listNamespaceIterators() {
+        super("listNamespaceIterators");
+      }
+
+      public listNamespaceIterators_args getEmptyArgsInstance() {
+        return new listNamespaceIterators_args();
+      }
+
+      public AsyncMethodCallback<Map<String,Set<IteratorScope>>> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Map<String,Set<IteratorScope>>>() { 
+          public void onComplete(Map<String,Set<IteratorScope>> o) {
+            listNamespaceIterators_result result = new listNamespaceIterators_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            listNamespaceIterators_result result = new listNamespaceIterators_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, listNamespaceIterators_args args, org.apache.thrift.async.AsyncMethodCallback<Map<String,Set<IteratorScope>>> resultHandler) throws TException {
+        iface.listNamespaceIterators(args.login, args.namespaceName,resultHandler);
+      }
+    }
+
+    public static class checkNamespaceIteratorConflicts<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, checkNamespaceIteratorConflicts_args, Void> {
+      public checkNamespaceIteratorConflicts() {
+        super("checkNamespaceIteratorConflicts");
+      }
+
+      public checkNamespaceIteratorConflicts_args getEmptyArgsInstance() {
+        return new checkNamespaceIteratorConflicts_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            checkNamespaceIteratorConflicts_result result = new checkNamespaceIteratorConflicts_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            checkNamespaceIteratorConflicts_result result = new checkNamespaceIteratorConflicts_result();
+            if (e instanceof AccumuloException) {
+              result.ouch1 = (AccumuloException) e;
+              result.setOuch1IsSet(true);
+              msg = result;
+            } else if (e instanceof AccumuloSecurityException) {
+              result.ouch2 = (AccumuloSecurityException) e;
+              result.setOuch2IsSet(true);
+              msg = result;
+            } else if (e instanceof NamespaceNotFoundException) {
+              result.ouch3 = (NamespaceNotFoundException) e;
+              result.setOuch3IsSet(true);
+              msg = result;
+            } else {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase) new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, checkNamespaceIteratorConflicts_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.checkNamespaceIteratorConflicts(args.login, args.namespaceName, args.setting, args.scopes,resultHandler);
+      }
+    }
+
+    public static class addNamespaceConstraint<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, addNamespaceConstraint_args, Integer> {
+      public addNamespaceConstraint() {
+        super("addNamespaceConstraint");
+      }
+
+      public addNamespaceConstraint_args getEmptyArgsInstance() {
+        return new addNamespaceConstraint_args();
+      }
+
+      public AsyncMethodCallback<Integer> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Integer>() { 
+          public void onComplete(Integer o) {
+            addNamespaceConstraint_result result = new addNamespaceConstraint_result();
+            result.success = o;
+            result.setSuccessIsSet(true);
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            addNamespaceConstraint_result result = new addNamespaceConstraint_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof NamespaceNotFoundException) {
+                        result.ouch3 = (NamespaceNotFoundException) e;
+                        result.setOuch3IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, addNamespaceConstraint_args args, org.apache.thrift.async.AsyncMethodCallback<Integer> resultHandler) throws TException {
+        iface.addNamespaceConstraint(args.login, args.namespaceName, args.constraintClassName,resultHandler);
+      }
+    }
+
+    public static class removeNamespaceConstraint<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, removeNamespaceConstraint_args, Void> {
+      public removeNamespaceConstraint() {
+        super("removeNamespaceConstraint");
+      }
+
+      public removeNamespaceConstraint_args getEmptyArgsInstance() {
+        return new removeNamespaceConstraint_args();
+      }
+
+      public AsyncMethodCallback<Void> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Void>() { 
+          public void onComplete(Void o) {
+            removeNamespaceConstraint_result result = new removeNamespaceConstraint_result();
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            removeNamespaceConstraint_result result = new removeNamespaceConstraint_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof NamespaceNotFoundException) {
+                        result.ouch3 = (NamespaceNotFoundException) e;
+                        result.setOuch3IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, removeNamespaceConstraint_args args, org.apache.thrift.async.AsyncMethodCallback<Void> resultHandler) throws TException {
+        iface.removeNamespaceConstraint(args.login, args.namespaceName, args.id,resultHandler);
+      }
+    }
+
+    public static class listNamespaceConstraints<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, listNamespaceConstraints_args, Map<String,Integer>> {
+      public listNamespaceConstraints() {
+        super("listNamespaceConstraints");
+      }
+
+      public listNamespaceConstraints_args getEmptyArgsInstance() {
+        return new listNamespaceConstraints_args();
+      }
+
+      public AsyncMethodCallback<Map<String,Integer>> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Map<String,Integer>>() { 
+          public void onComplete(Map<String,Integer> o) {
+            listNamespaceConstraints_result result = new listNamespaceConstraints_result();
+            result.success = o;
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            listNamespaceConstraints_result result = new listNamespaceConstraints_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof NamespaceNotFoundException) {
+                        result.ouch3 = (NamespaceNotFoundException) e;
+                        result.setOuch3IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, listNamespaceConstraints_args args, org.apache.thrift.async.AsyncMethodCallback<Map<String,Integer>> resultHandler) throws TException {
+        iface.listNamespaceConstraints(args.login, args.namespaceName,resultHandler);
+      }
+    }
+
+    public static class testNamespaceClassLoad<I extends AsyncIface> extends org.apache.thrift.AsyncProcessFunction<I, testNamespaceClassLoad_args, Boolean> {
+      public testNamespaceClassLoad() {
+        super("testNamespaceClassLoad");
+      }
+
+      public testNamespaceClassLoad_args getEmptyArgsInstance() {
+        return new testNamespaceClassLoad_args();
+      }
+
+      public AsyncMethodCallback<Boolean> getResultHandler(final AsyncFrameBuffer fb, final int seqid) {
+        final org.apache.thrift.AsyncProcessFunction fcall = this;
+        return new AsyncMethodCallback<Boolean>() { 
+          public void onComplete(Boolean o) {
+            testNamespaceClassLoad_result result = new testNamespaceClassLoad_result();
+            result.success = o;
+            result.setSuccessIsSet(true);
+            try {
+              fcall.sendResponse(fb,result, org.apache.thrift.protocol.TMessageType.REPLY,seqid);
+              return;
+            } catch (Exception e) {
+              LOGGER.error("Exception writing to internal frame buffer", e);
+            }
+            fb.close();
+          }
+          public void onError(Exception e) {
+            byte msgType = org.apache.thrift.protocol.TMessageType.REPLY;
+            org.apache.thrift.TBase msg;
+            testNamespaceClassLoad_result result = new testNamespaceClassLoad_result();
+            if (e instanceof AccumuloException) {
+                        result.ouch1 = (AccumuloException) e;
+                        result.setOuch1IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof AccumuloSecurityException) {
+                        result.ouch2 = (AccumuloSecurityException) e;
+                        result.setOuch2IsSet(true);
+                        msg = result;
+            }
+            else             if (e instanceof NamespaceNotFoundException) {
+                        result.ouch3 = (NamespaceNotFoundException) e;
+                        result.setOuch3IsSet(true);
+                        msg = result;
+            }
+             else 
+            {
+              msgType = org.apache.thrift.protocol.TMessageType.EXCEPTION;
+              msg = (org.apache.thrift.TBase)new org.apache.thrift.TApplicationException(org.apache.thrift.TApplicationException.INTERNAL_ERROR, e.getMessage());
+            }
+            try {
+              fcall.sendResponse(fb,msg,msgType,seqid);
+              return;
+            } catch (Exception ex) {
+              LOGGER.error("Exception writing to internal frame buffer", ex);
+            }
+            fb.close();
+          }
+        };
+      }
+
+      protected boolean isOneway() {
+        return false;
+      }
+
+      public void start(I iface, testNamespaceClassLoad_args args, org.apache.thrift.async.AsyncMethodCallback<Boolean> resultHandler) throws TException {
+        iface.testNamespaceClassLoad(args.login, args.namespaceName, args.className, args.asTypeName,resultHandler);
+      }
+    }
+
   }
 
   public static class login_args implements org.apache.thrift.TBase<login_args, login_args._Fields>, java.io.Serializable, Cloneable, Comparable<login_args>   {
@@ -12975,7 +16787,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_principal = true && (isSetPrincipal());
+      list.add(present_principal);
+      if (present_principal)
+        list.add(principal);
+
+      boolean present_loginProperties = true && (isSetLoginProperties());
+      list.add(present_loginProperties);
+      if (present_loginProperties)
+        list.add(loginProperties);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13097,13 +16921,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map164 = iprot.readMapBegin();
                   struct.loginProperties = new HashMap<String,String>(2*_map164.size);
-                  for (int _i165 = 0; _i165 < _map164.size; ++_i165)
+                  String _key165;
+                  String _val166;
+                  for (int _i167 = 0; _i167 < _map164.size; ++_i167)
                   {
-                    String _key166;
-                    String _val167;
-                    _key166 = iprot.readString();
-                    _val167 = iprot.readString();
-                    struct.loginProperties.put(_key166, _val167);
+                    _key165 = iprot.readString();
+                    _val166 = iprot.readString();
+                    struct.loginProperties.put(_key165, _val166);
                   }
                   iprot.readMapEnd();
                 }
@@ -13197,13 +17021,13 @@
           {
             org.apache.thrift.protocol.TMap _map170 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.loginProperties = new HashMap<String,String>(2*_map170.size);
-            for (int _i171 = 0; _i171 < _map170.size; ++_i171)
+            String _key171;
+            String _val172;
+            for (int _i173 = 0; _i173 < _map170.size; ++_i173)
             {
-              String _key172;
-              String _val173;
-              _key172 = iprot.readString();
-              _val173 = iprot.readString();
-              struct.loginProperties.put(_key172, _val173);
+              _key171 = iprot.readString();
+              _val172 = iprot.readString();
+              struct.loginProperties.put(_key171, _val172);
             }
           }
           struct.setLoginPropertiesIsSet(true);
@@ -13309,7 +17133,7 @@
       AccumuloSecurityException ouch2)
     {
       this();
-      this.success = success;
+      this.success = org.apache.thrift.TBaseHelper.copyBinary(success);
       this.ouch2 = ouch2;
     }
 
@@ -13319,7 +17143,6 @@
     public login_result(login_result other) {
       if (other.isSetSuccess()) {
         this.success = org.apache.thrift.TBaseHelper.copyBinary(other.success);
-;
       }
       if (other.isSetOuch2()) {
         this.ouch2 = new AccumuloSecurityException(other.ouch2);
@@ -13342,16 +17165,16 @@
     }
 
     public ByteBuffer bufferForSuccess() {
-      return success;
+      return org.apache.thrift.TBaseHelper.copyBinary(success);
     }
 
     public login_result setSuccess(byte[] success) {
-      setSuccess(success == null ? (ByteBuffer)null : ByteBuffer.wrap(success));
+      this.success = success == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(success, success.length));
       return this;
     }
 
     public login_result setSuccess(ByteBuffer success) {
-      this.success = success;
+      this.success = org.apache.thrift.TBaseHelper.copyBinary(success);
       return this;
     }
 
@@ -13478,7 +17301,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -13784,7 +17619,7 @@
       String constraintClassName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.constraintClassName = constraintClassName;
     }
@@ -13795,7 +17630,6 @@
     public addConstraint_args(addConstraint_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -13822,16 +17656,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public addConstraint_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public addConstraint_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -14004,7 +17838,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_constraintClassName = true && (isSetConstraintClassName());
+      list.add(present_constraintClassName);
+      if (present_constraintClassName)
+        list.add(constraintClassName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -14531,7 +18382,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Integer.valueOf(getSuccess());
+        return getSuccess();
 
       case OUCH1:
         return getOuch1();
@@ -14619,7 +18470,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -15010,7 +18883,7 @@
       Set<ByteBuffer> splits)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.splits = splits;
     }
@@ -15021,7 +18894,6 @@
     public addSplits_args(addSplits_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -15049,16 +18921,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public addSplits_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public addSplits_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -15246,7 +19118,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_splits = true && (isSetSplits());
+      list.add(present_splits);
+      if (present_splits)
+        list.add(splits);
+
+      return list.hashCode();
     }
 
     @Override
@@ -15327,7 +19216,7 @@
       if (this.splits == null) {
         sb.append("null");
       } else {
-        sb.append(this.splits);
+        org.apache.thrift.TBaseHelper.toString(this.splits, sb);
       }
       first = false;
       sb.append(")");
@@ -15394,11 +19283,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set174 = iprot.readSetBegin();
                   struct.splits = new HashSet<ByteBuffer>(2*_set174.size);
-                  for (int _i175 = 0; _i175 < _set174.size; ++_i175)
+                  ByteBuffer _elem175;
+                  for (int _i176 = 0; _i176 < _set174.size; ++_i176)
                   {
-                    ByteBuffer _elem176;
-                    _elem176 = iprot.readBinary();
-                    struct.splits.add(_elem176);
+                    _elem175 = iprot.readBinary();
+                    struct.splits.add(_elem175);
                   }
                   iprot.readSetEnd();
                 }
@@ -15505,11 +19394,11 @@
           {
             org.apache.thrift.protocol.TSet _set179 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.splits = new HashSet<ByteBuffer>(2*_set179.size);
-            for (int _i180 = 0; _i180 < _set179.size; ++_i180)
+            ByteBuffer _elem180;
+            for (int _i181 = 0; _i181 < _set179.size; ++_i181)
             {
-              ByteBuffer _elem181;
-              _elem181 = iprot.readBinary();
-              struct.splits.add(_elem181);
+              _elem180 = iprot.readBinary();
+              struct.splits.add(_elem180);
             }
           }
           struct.setSplitsIsSet(true);
@@ -15832,7 +19721,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16192,7 +20098,7 @@
       Set<IteratorScope> scopes)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.setting = setting;
       this.scopes = scopes;
@@ -16204,7 +20110,6 @@
     public attachIterator_args(attachIterator_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -16239,16 +20144,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public attachIterator_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public attachIterator_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -16482,7 +20387,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_setting = true && (isSetSetting());
+      list.add(present_setting);
+      if (present_setting)
+        list.add(setting);
+
+      boolean present_scopes = true && (isSetScopes());
+      list.add(present_scopes);
+      if (present_scopes)
+        list.add(scopes);
+
+      return list.hashCode();
     }
 
     @Override
@@ -16660,11 +20587,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set182 = iprot.readSetBegin();
                   struct.scopes = new HashSet<IteratorScope>(2*_set182.size);
-                  for (int _i183 = 0; _i183 < _set182.size; ++_i183)
+                  IteratorScope _elem183;
+                  for (int _i184 = 0; _i184 < _set182.size; ++_i184)
                   {
-                    IteratorScope _elem184;
-                    _elem184 = IteratorScope.findByValue(iprot.readI32());
-                    struct.scopes.add(_elem184);
+                    _elem183 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                    struct.scopes.add(_elem183);
                   }
                   iprot.readSetEnd();
                 }
@@ -16787,11 +20714,11 @@
           {
             org.apache.thrift.protocol.TSet _set187 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
             struct.scopes = new HashSet<IteratorScope>(2*_set187.size);
-            for (int _i188 = 0; _i188 < _set187.size; ++_i188)
+            IteratorScope _elem188;
+            for (int _i189 = 0; _i189 < _set187.size; ++_i189)
             {
-              IteratorScope _elem189;
-              _elem189 = IteratorScope.findByValue(iprot.readI32());
-              struct.scopes.add(_elem189);
+              _elem188 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+              struct.scopes.add(_elem188);
             }
           }
           struct.setScopesIsSet(true);
@@ -17114,7 +21041,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17474,7 +21418,7 @@
       Set<IteratorScope> scopes)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.setting = setting;
       this.scopes = scopes;
@@ -17486,7 +21430,6 @@
     public checkIteratorConflicts_args(checkIteratorConflicts_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -17521,16 +21464,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public checkIteratorConflicts_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public checkIteratorConflicts_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -17764,7 +21707,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_setting = true && (isSetSetting());
+      list.add(present_setting);
+      if (present_setting)
+        list.add(setting);
+
+      boolean present_scopes = true && (isSetScopes());
+      list.add(present_scopes);
+      if (present_scopes)
+        list.add(scopes);
+
+      return list.hashCode();
     }
 
     @Override
@@ -17942,11 +21907,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set190 = iprot.readSetBegin();
                   struct.scopes = new HashSet<IteratorScope>(2*_set190.size);
-                  for (int _i191 = 0; _i191 < _set190.size; ++_i191)
+                  IteratorScope _elem191;
+                  for (int _i192 = 0; _i192 < _set190.size; ++_i192)
                   {
-                    IteratorScope _elem192;
-                    _elem192 = IteratorScope.findByValue(iprot.readI32());
-                    struct.scopes.add(_elem192);
+                    _elem191 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                    struct.scopes.add(_elem191);
                   }
                   iprot.readSetEnd();
                 }
@@ -18069,11 +22034,11 @@
           {
             org.apache.thrift.protocol.TSet _set195 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
             struct.scopes = new HashSet<IteratorScope>(2*_set195.size);
-            for (int _i196 = 0; _i196 < _set195.size; ++_i196)
+            IteratorScope _elem196;
+            for (int _i197 = 0; _i197 < _set195.size; ++_i197)
             {
-              IteratorScope _elem197;
-              _elem197 = IteratorScope.findByValue(iprot.readI32());
-              struct.scopes.add(_elem197);
+              _elem196 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+              struct.scopes.add(_elem196);
             }
           }
           struct.setScopesIsSet(true);
@@ -18396,7 +22361,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -18739,7 +22721,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -18749,7 +22731,6 @@
     public clearLocatorCache_args(clearLocatorCache_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -18772,16 +22753,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public clearLocatorCache_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public clearLocatorCache_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -18908,7 +22889,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -19303,7 +23296,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      return list.hashCode();
     }
 
     @Override
@@ -19597,7 +23597,7 @@
       Set<String> propertiesToExclude)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.newTableName = newTableName;
       this.flush = flush;
@@ -19613,7 +23613,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -19653,16 +23652,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public cloneTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public cloneTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -19891,7 +23890,7 @@
         return getNewTableName();
 
       case FLUSH:
-        return Boolean.valueOf(isFlush());
+        return isFlush();
 
       case PROPERTIES_TO_SET:
         return getPropertiesToSet();
@@ -19998,7 +23997,39 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_newTableName = true && (isSetNewTableName());
+      list.add(present_newTableName);
+      if (present_newTableName)
+        list.add(newTableName);
+
+      boolean present_flush = true;
+      list.add(present_flush);
+      if (present_flush)
+        list.add(flush);
+
+      boolean present_propertiesToSet = true && (isSetPropertiesToSet());
+      list.add(present_propertiesToSet);
+      if (present_propertiesToSet)
+        list.add(propertiesToSet);
+
+      boolean present_propertiesToExclude = true && (isSetPropertiesToExclude());
+      list.add(present_propertiesToExclude);
+      if (present_propertiesToExclude)
+        list.add(propertiesToExclude);
+
+      return list.hashCode();
     }
 
     @Override
@@ -20214,13 +24245,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map198 = iprot.readMapBegin();
                   struct.propertiesToSet = new HashMap<String,String>(2*_map198.size);
-                  for (int _i199 = 0; _i199 < _map198.size; ++_i199)
+                  String _key199;
+                  String _val200;
+                  for (int _i201 = 0; _i201 < _map198.size; ++_i201)
                   {
-                    String _key200;
-                    String _val201;
-                    _key200 = iprot.readString();
-                    _val201 = iprot.readString();
-                    struct.propertiesToSet.put(_key200, _val201);
+                    _key199 = iprot.readString();
+                    _val200 = iprot.readString();
+                    struct.propertiesToSet.put(_key199, _val200);
                   }
                   iprot.readMapEnd();
                 }
@@ -20234,11 +24265,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set202 = iprot.readSetBegin();
                   struct.propertiesToExclude = new HashSet<String>(2*_set202.size);
-                  for (int _i203 = 0; _i203 < _set202.size; ++_i203)
+                  String _elem203;
+                  for (int _i204 = 0; _i204 < _set202.size; ++_i204)
                   {
-                    String _elem204;
-                    _elem204 = iprot.readString();
-                    struct.propertiesToExclude.add(_elem204);
+                    _elem203 = iprot.readString();
+                    struct.propertiesToExclude.add(_elem203);
                   }
                   iprot.readSetEnd();
                 }
@@ -20399,13 +24430,13 @@
           {
             org.apache.thrift.protocol.TMap _map209 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.propertiesToSet = new HashMap<String,String>(2*_map209.size);
-            for (int _i210 = 0; _i210 < _map209.size; ++_i210)
+            String _key210;
+            String _val211;
+            for (int _i212 = 0; _i212 < _map209.size; ++_i212)
             {
-              String _key211;
-              String _val212;
-              _key211 = iprot.readString();
-              _val212 = iprot.readString();
-              struct.propertiesToSet.put(_key211, _val212);
+              _key210 = iprot.readString();
+              _val211 = iprot.readString();
+              struct.propertiesToSet.put(_key210, _val211);
             }
           }
           struct.setPropertiesToSetIsSet(true);
@@ -20414,11 +24445,11 @@
           {
             org.apache.thrift.protocol.TSet _set213 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.propertiesToExclude = new HashSet<String>(2*_set213.size);
-            for (int _i214 = 0; _i214 < _set213.size; ++_i214)
+            String _elem214;
+            for (int _i215 = 0; _i215 < _set213.size; ++_i215)
             {
-              String _elem215;
-              _elem215 = iprot.readString();
-              struct.propertiesToExclude.add(_elem215);
+              _elem214 = iprot.readString();
+              struct.propertiesToExclude.add(_elem214);
             }
           }
           struct.setPropertiesToExcludeIsSet(true);
@@ -20800,7 +24831,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      boolean present_ouch4 = true && (isSetOuch4());
+      list.add(present_ouch4);
+      if (present_ouch4)
+        list.add(ouch4);
+
+      return list.hashCode();
     }
 
     @Override
@@ -21238,10 +25291,10 @@
       CompactionStrategyConfig compactionStrategy)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
-      this.startRow = startRow;
-      this.endRow = endRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       this.iterators = iterators;
       this.flush = flush;
       setFlushIsSet(true);
@@ -21257,18 +25310,15 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
       if (other.isSetIterators()) {
         List<IteratorSetting> __this__iterators = new ArrayList<IteratorSetting>(other.iterators.size());
@@ -21308,16 +25358,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public compactTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public compactTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -21366,16 +25416,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public compactTable_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public compactTable_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -21400,16 +25450,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public compactTable_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public compactTable_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -21624,10 +25674,10 @@
         return getIterators();
 
       case FLUSH:
-        return Boolean.valueOf(isFlush());
+        return isFlush();
 
       case WAIT:
-        return Boolean.valueOf(isWait());
+        return isWait();
 
       case COMPACTION_STRATEGY:
         return getCompactionStrategy();
@@ -21753,7 +25803,49 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      boolean present_iterators = true && (isSetIterators());
+      list.add(present_iterators);
+      if (present_iterators)
+        list.add(iterators);
+
+      boolean present_flush = true;
+      list.add(present_flush);
+      if (present_flush)
+        list.add(flush);
+
+      boolean present_wait = true;
+      list.add(present_wait);
+      if (present_wait)
+        list.add(wait);
+
+      boolean present_compactionStrategy = true && (isSetCompactionStrategy());
+      list.add(present_compactionStrategy);
+      if (present_compactionStrategy)
+        list.add(compactionStrategy);
+
+      return list.hashCode();
     }
 
     @Override
@@ -22004,12 +26096,12 @@
                 {
                   org.apache.thrift.protocol.TList _list216 = iprot.readListBegin();
                   struct.iterators = new ArrayList<IteratorSetting>(_list216.size);
-                  for (int _i217 = 0; _i217 < _list216.size; ++_i217)
+                  IteratorSetting _elem217;
+                  for (int _i218 = 0; _i218 < _list216.size; ++_i218)
                   {
-                    IteratorSetting _elem218;
-                    _elem218 = new IteratorSetting();
-                    _elem218.read(iprot);
-                    struct.iterators.add(_elem218);
+                    _elem217 = new IteratorSetting();
+                    _elem217.read(iprot);
+                    struct.iterators.add(_elem217);
                   }
                   iprot.readListEnd();
                 }
@@ -22200,12 +26292,12 @@
           {
             org.apache.thrift.protocol.TList _list221 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
             struct.iterators = new ArrayList<IteratorSetting>(_list221.size);
-            for (int _i222 = 0; _i222 < _list221.size; ++_i222)
+            IteratorSetting _elem222;
+            for (int _i223 = 0; _i223 < _list221.size; ++_i223)
             {
-              IteratorSetting _elem223;
-              _elem223 = new IteratorSetting();
-              _elem223.read(iprot);
-              struct.iterators.add(_elem223);
+              _elem222 = new IteratorSetting();
+              _elem222.read(iprot);
+              struct.iterators.add(_elem222);
             }
           }
           struct.setIteratorsIsSet(true);
@@ -22541,7 +26633,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -22884,7 +26993,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -22894,7 +27003,6 @@
     public cancelCompaction_args(cancelCompaction_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -22917,16 +27025,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public cancelCompaction_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public cancelCompaction_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -23053,7 +27161,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -23566,7 +27686,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -23935,7 +28072,7 @@
       TimeType type)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.versioningIter = versioningIter;
       setVersioningIterIsSet(true);
@@ -23949,7 +28086,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -23979,16 +28115,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public createTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public createTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -24132,7 +28268,7 @@
         return getTableName();
 
       case VERSIONING_ITER:
-        return Boolean.valueOf(isVersioningIter());
+        return isVersioningIter();
 
       case TYPE:
         return getType();
@@ -24214,7 +28350,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_versioningIter = true;
+      list.add(present_versioningIter);
+      if (present_versioningIter)
+        list.add(versioningIter);
+
+      boolean present_type = true && (isSetType());
+      list.add(present_type);
+      if (present_type)
+        list.add(type.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -24383,7 +28541,7 @@
               break;
             case 4: // TYPE
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.type = TimeType.findByValue(iprot.readI32());
+                struct.type = org.apache.accumulo.proxy.thrift.TimeType.findByValue(iprot.readI32());
                 struct.setTypeIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -24484,7 +28642,7 @@
           struct.setVersioningIterIsSet(true);
         }
         if (incoming.get(3)) {
-          struct.type = TimeType.findByValue(iprot.readI32());
+          struct.type = org.apache.accumulo.proxy.thrift.TimeType.findByValue(iprot.readI32());
           struct.setTypeIsSet(true);
         }
       }
@@ -24805,7 +28963,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -25148,7 +29323,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -25158,7 +29333,6 @@
     public deleteTable_args(deleteTable_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -25181,16 +29355,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public deleteTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public deleteTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -25317,7 +29491,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -25830,7 +30016,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -26189,10 +30392,10 @@
       ByteBuffer endRow)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
-      this.startRow = startRow;
-      this.endRow = endRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     /**
@@ -26201,18 +30404,15 @@
     public deleteRows_args(deleteRows_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
     }
 
@@ -26234,16 +30434,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public deleteRows_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public deleteRows_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -26292,16 +30492,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public deleteRows_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public deleteRows_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -26326,16 +30526,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public deleteRows_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public deleteRows_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -26482,7 +30682,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      return list.hashCode();
     }
 
     @Override
@@ -27077,7 +31299,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -27428,7 +31667,7 @@
       String exportDir)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.exportDir = exportDir;
     }
@@ -27439,7 +31678,6 @@
     public exportTable_args(exportTable_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -27466,16 +31704,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public exportTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public exportTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -27648,7 +31886,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_exportDir = true && (isSetExportDir());
+      list.add(present_exportDir);
+      if (present_exportDir)
+        list.add(exportDir);
+
+      return list.hashCode();
     }
 
     @Override
@@ -28202,7 +32457,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -28571,10 +32843,10 @@
       boolean wait)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
-      this.startRow = startRow;
-      this.endRow = endRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       this.wait = wait;
       setWaitIsSet(true);
     }
@@ -28586,18 +32858,15 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
       this.wait = other.wait;
     }
@@ -28622,16 +32891,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public flushTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public flushTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -28680,16 +32949,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public flushTable_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public flushTable_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -28714,16 +32983,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public flushTable_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public flushTable_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -28825,7 +33094,7 @@
         return getEndRow();
 
       case WAIT:
-        return Boolean.valueOf(isWait());
+        return isWait();
 
       }
       throw new IllegalStateException();
@@ -28915,7 +33184,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      boolean present_wait = true;
+      list.add(present_wait);
+      if (present_wait)
+        list.add(wait);
+
+      return list.hashCode();
     }
 
     @Override
@@ -29547,7 +33843,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -29891,7 +34204,7 @@
       Set<String> tables)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tables = tables;
     }
 
@@ -29901,7 +34214,6 @@
     public getDiskUsage_args(getDiskUsage_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTables()) {
         Set<String> __this__tables = new HashSet<String>(other.tables);
@@ -29925,16 +34237,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getDiskUsage_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getDiskUsage_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -30076,7 +34388,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tables = true && (isSetTables());
+      list.add(present_tables);
+      if (present_tables)
+        list.add(tables);
+
+      return list.hashCode();
     }
 
     @Override
@@ -30198,11 +34522,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set224 = iprot.readSetBegin();
                   struct.tables = new HashSet<String>(2*_set224.size);
-                  for (int _i225 = 0; _i225 < _set224.size; ++_i225)
+                  String _elem225;
+                  for (int _i226 = 0; _i226 < _set224.size; ++_i226)
                   {
-                    String _elem226;
-                    _elem226 = iprot.readString();
-                    struct.tables.add(_elem226);
+                    _elem225 = iprot.readString();
+                    struct.tables.add(_elem225);
                   }
                   iprot.readSetEnd();
                 }
@@ -30294,11 +34618,11 @@
           {
             org.apache.thrift.protocol.TSet _set229 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.tables = new HashSet<String>(2*_set229.size);
-            for (int _i230 = 0; _i230 < _set229.size; ++_i230)
+            String _elem230;
+            for (int _i231 = 0; _i231 < _set229.size; ++_i231)
             {
-              String _elem231;
-              _elem231 = iprot.readString();
-              struct.tables.add(_elem231);
+              _elem230 = iprot.readString();
+              struct.tables.add(_elem230);
             }
           }
           struct.setTablesIsSet(true);
@@ -30700,7 +35024,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -30850,12 +35196,12 @@
                 {
                   org.apache.thrift.protocol.TList _list232 = iprot.readListBegin();
                   struct.success = new ArrayList<DiskUsage>(_list232.size);
-                  for (int _i233 = 0; _i233 < _list232.size; ++_i233)
+                  DiskUsage _elem233;
+                  for (int _i234 = 0; _i234 < _list232.size; ++_i234)
                   {
-                    DiskUsage _elem234;
-                    _elem234 = new DiskUsage();
-                    _elem234.read(iprot);
-                    struct.success.add(_elem234);
+                    _elem233 = new DiskUsage();
+                    _elem233.read(iprot);
+                    struct.success.add(_elem233);
                   }
                   iprot.readListEnd();
                 }
@@ -30992,12 +35338,12 @@
           {
             org.apache.thrift.protocol.TList _list237 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
             struct.success = new ArrayList<DiskUsage>(_list237.size);
-            for (int _i238 = 0; _i238 < _list237.size; ++_i238)
+            DiskUsage _elem238;
+            for (int _i239 = 0; _i239 < _list237.size; ++_i239)
             {
-              DiskUsage _elem239;
-              _elem239 = new DiskUsage();
-              _elem239.read(iprot);
-              struct.success.add(_elem239);
+              _elem238 = new DiskUsage();
+              _elem238.read(iprot);
+              struct.success.add(_elem238);
             }
           }
           struct.setSuccessIsSet(true);
@@ -31118,7 +35464,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -31128,7 +35474,6 @@
     public getLocalityGroups_args(getLocalityGroups_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -31151,16 +35496,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getLocalityGroups_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getLocalityGroups_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -31287,7 +35632,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -31885,7 +36242,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -32035,23 +36414,23 @@
                 {
                   org.apache.thrift.protocol.TMap _map240 = iprot.readMapBegin();
                   struct.success = new HashMap<String,Set<String>>(2*_map240.size);
-                  for (int _i241 = 0; _i241 < _map240.size; ++_i241)
+                  String _key241;
+                  Set<String> _val242;
+                  for (int _i243 = 0; _i243 < _map240.size; ++_i243)
                   {
-                    String _key242;
-                    Set<String> _val243;
-                    _key242 = iprot.readString();
+                    _key241 = iprot.readString();
                     {
                       org.apache.thrift.protocol.TSet _set244 = iprot.readSetBegin();
-                      _val243 = new HashSet<String>(2*_set244.size);
-                      for (int _i245 = 0; _i245 < _set244.size; ++_i245)
+                      _val242 = new HashSet<String>(2*_set244.size);
+                      String _elem245;
+                      for (int _i246 = 0; _i246 < _set244.size; ++_i246)
                       {
-                        String _elem246;
-                        _elem246 = iprot.readString();
-                        _val243.add(_elem246);
+                        _elem245 = iprot.readString();
+                        _val242.add(_elem245);
                       }
                       iprot.readSetEnd();
                     }
-                    struct.success.put(_key242, _val243);
+                    struct.success.put(_key241, _val242);
                   }
                   iprot.readMapEnd();
                 }
@@ -32203,22 +36582,22 @@
           {
             org.apache.thrift.protocol.TMap _map251 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.SET, iprot.readI32());
             struct.success = new HashMap<String,Set<String>>(2*_map251.size);
-            for (int _i252 = 0; _i252 < _map251.size; ++_i252)
+            String _key252;
+            Set<String> _val253;
+            for (int _i254 = 0; _i254 < _map251.size; ++_i254)
             {
-              String _key253;
-              Set<String> _val254;
-              _key253 = iprot.readString();
+              _key252 = iprot.readString();
               {
                 org.apache.thrift.protocol.TSet _set255 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-                _val254 = new HashSet<String>(2*_set255.size);
-                for (int _i256 = 0; _i256 < _set255.size; ++_i256)
+                _val253 = new HashSet<String>(2*_set255.size);
+                String _elem256;
+                for (int _i257 = 0; _i257 < _set255.size; ++_i257)
                 {
-                  String _elem257;
-                  _elem257 = iprot.readString();
-                  _val254.add(_elem257);
+                  _elem256 = iprot.readString();
+                  _val253.add(_elem256);
                 }
               }
-              struct.success.put(_key253, _val254);
+              struct.success.put(_key252, _val253);
             }
           }
           struct.setSuccessIsSet(true);
@@ -32363,7 +36742,7 @@
       IteratorScope scope)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.iteratorName = iteratorName;
       this.scope = scope;
@@ -32375,7 +36754,6 @@
     public getIteratorSetting_args(getIteratorSetting_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -32406,16 +36784,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getIteratorSetting_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getIteratorSetting_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -32642,7 +37020,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_iteratorName = true && (isSetIteratorName());
+      list.add(present_iteratorName);
+      if (present_iteratorName)
+        list.add(iteratorName);
+
+      boolean present_scope = true && (isSetScope());
+      list.add(present_scope);
+      if (present_scope)
+        list.add(scope.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -32813,7 +37213,7 @@
               break;
             case 4: // SCOPE
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.scope = IteratorScope.findByValue(iprot.readI32());
+                struct.scope = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
                 struct.setScopeIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -32916,7 +37316,7 @@
           struct.setIteratorNameIsSet(true);
         }
         if (incoming.get(3)) {
-          struct.scope = IteratorScope.findByValue(iprot.readI32());
+          struct.scope = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
           struct.setScopeIsSet(true);
         }
       }
@@ -33296,7 +37696,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -33729,13 +38151,13 @@
       boolean endInclusive)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.auths = auths;
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       this.startInclusive = startInclusive;
       setStartInclusiveIsSet(true);
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       this.endInclusive = endInclusive;
       setEndInclusiveIsSet(true);
     }
@@ -33747,7 +38169,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -33758,12 +38179,10 @@
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       this.startInclusive = other.startInclusive;
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
       this.endInclusive = other.endInclusive;
     }
@@ -33791,16 +38210,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getMaxRow_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getMaxRow_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -33888,16 +38307,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public getMaxRow_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public getMaxRow_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -33945,16 +38364,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public getMaxRow_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public getMaxRow_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -34072,13 +38491,13 @@
         return getStartRow();
 
       case START_INCLUSIVE:
-        return Boolean.valueOf(isStartInclusive());
+        return isStartInclusive();
 
       case END_ROW:
         return getEndRow();
 
       case END_INCLUSIVE:
-        return Boolean.valueOf(isEndInclusive());
+        return isEndInclusive();
 
       }
       throw new IllegalStateException();
@@ -34190,7 +38609,44 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_auths = true && (isSetAuths());
+      list.add(present_auths);
+      if (present_auths)
+        list.add(auths);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_startInclusive = true;
+      list.add(present_startInclusive);
+      if (present_startInclusive)
+        list.add(startInclusive);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      boolean present_endInclusive = true;
+      list.add(present_endInclusive);
+      if (present_endInclusive)
+        list.add(endInclusive);
+
+      return list.hashCode();
     }
 
     @Override
@@ -34311,7 +38767,7 @@
       if (this.auths == null) {
         sb.append("null");
       } else {
-        sb.append(this.auths);
+        org.apache.thrift.TBaseHelper.toString(this.auths, sb);
       }
       first = false;
       if (!first) sb.append(", ");
@@ -34404,11 +38860,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set258 = iprot.readSetBegin();
                   struct.auths = new HashSet<ByteBuffer>(2*_set258.size);
-                  for (int _i259 = 0; _i259 < _set258.size; ++_i259)
+                  ByteBuffer _elem259;
+                  for (int _i260 = 0; _i260 < _set258.size; ++_i260)
                   {
-                    ByteBuffer _elem260;
-                    _elem260 = iprot.readBinary();
-                    struct.auths.add(_elem260);
+                    _elem259 = iprot.readBinary();
+                    struct.auths.add(_elem259);
                   }
                   iprot.readSetEnd();
                 }
@@ -34587,11 +39043,11 @@
           {
             org.apache.thrift.protocol.TSet _set263 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.auths = new HashSet<ByteBuffer>(2*_set263.size);
-            for (int _i264 = 0; _i264 < _set263.size; ++_i264)
+            ByteBuffer _elem264;
+            for (int _i265 = 0; _i265 < _set263.size; ++_i265)
             {
-              ByteBuffer _elem265;
-              _elem265 = iprot.readBinary();
-              struct.auths.add(_elem265);
+              _elem264 = iprot.readBinary();
+              struct.auths.add(_elem264);
             }
           }
           struct.setAuthsIsSet(true);
@@ -34729,7 +39185,7 @@
       TableNotFoundException ouch3)
     {
       this();
-      this.success = success;
+      this.success = org.apache.thrift.TBaseHelper.copyBinary(success);
       this.ouch1 = ouch1;
       this.ouch2 = ouch2;
       this.ouch3 = ouch3;
@@ -34741,7 +39197,6 @@
     public getMaxRow_result(getMaxRow_result other) {
       if (other.isSetSuccess()) {
         this.success = org.apache.thrift.TBaseHelper.copyBinary(other.success);
-;
       }
       if (other.isSetOuch1()) {
         this.ouch1 = new AccumuloException(other.ouch1);
@@ -34772,16 +39227,16 @@
     }
 
     public ByteBuffer bufferForSuccess() {
-      return success;
+      return org.apache.thrift.TBaseHelper.copyBinary(success);
     }
 
     public getMaxRow_result setSuccess(byte[] success) {
-      setSuccess(success == null ? (ByteBuffer)null : ByteBuffer.wrap(success));
+      this.success = success == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(success, success.length));
       return this;
     }
 
     public getMaxRow_result setSuccess(ByteBuffer success) {
-      this.success = success;
+      this.success = org.apache.thrift.TBaseHelper.copyBinary(success);
       return this;
     }
 
@@ -35000,7 +39455,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -35384,7 +39861,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -35394,7 +39871,6 @@
     public getTableProperties_args(getTableProperties_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -35417,16 +39893,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getTableProperties_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getTableProperties_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -35553,7 +40029,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -36139,7 +40627,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -36289,13 +40799,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map266 = iprot.readMapBegin();
                   struct.success = new HashMap<String,String>(2*_map266.size);
-                  for (int _i267 = 0; _i267 < _map266.size; ++_i267)
+                  String _key267;
+                  String _val268;
+                  for (int _i269 = 0; _i269 < _map266.size; ++_i269)
                   {
-                    String _key268;
-                    String _val269;
-                    _key268 = iprot.readString();
-                    _val269 = iprot.readString();
-                    struct.success.put(_key268, _val269);
+                    _key267 = iprot.readString();
+                    _val268 = iprot.readString();
+                    struct.success.put(_key267, _val268);
                   }
                   iprot.readMapEnd();
                 }
@@ -36434,13 +40944,13 @@
           {
             org.apache.thrift.protocol.TMap _map272 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashMap<String,String>(2*_map272.size);
-            for (int _i273 = 0; _i273 < _map272.size; ++_i273)
+            String _key273;
+            String _val274;
+            for (int _i275 = 0; _i275 < _map272.size; ++_i275)
             {
-              String _key274;
-              String _val275;
-              _key274 = iprot.readString();
-              _val275 = iprot.readString();
-              struct.success.put(_key274, _val275);
+              _key273 = iprot.readString();
+              _val274 = iprot.readString();
+              struct.success.put(_key273, _val274);
             }
           }
           struct.setSuccessIsSet(true);
@@ -36587,7 +41097,7 @@
       boolean setTime)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.importDir = importDir;
       this.failureDir = failureDir;
@@ -36602,7 +41112,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -36636,16 +41145,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public importDirectory_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public importDirectory_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -36819,7 +41328,7 @@
         return getFailureDir();
 
       case SET_TIME:
-        return Boolean.valueOf(isSetTime());
+        return isSetTime();
 
       }
       throw new IllegalStateException();
@@ -36909,7 +41418,34 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_importDir = true && (isSetImportDir());
+      list.add(present_importDir);
+      if (present_importDir)
+        list.add(importDir);
+
+      boolean present_failureDir = true && (isSetFailureDir());
+      list.add(present_failureDir);
+      if (present_failureDir)
+        list.add(failureDir);
+
+      boolean present_setTime = true;
+      list.add(present_setTime);
+      if (present_setTime)
+        list.add(setTime);
+
+      return list.hashCode();
     }
 
     @Override
@@ -37541,7 +42077,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      boolean present_ouch4 = true && (isSetOuch4());
+      list.add(present_ouch4);
+      if (present_ouch4)
+        list.add(ouch4);
+
+      return list.hashCode();
     }
 
     @Override
@@ -37892,7 +42445,7 @@
       String importDir)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.importDir = importDir;
     }
@@ -37903,7 +42456,6 @@
     public importTable_args(importTable_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -37930,16 +42482,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public importTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public importTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -38112,7 +42664,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_importDir = true && (isSetImportDir());
+      list.add(present_importDir);
+      if (present_importDir)
+        list.add(importDir);
+
+      return list.hashCode();
     }
 
     @Override
@@ -38666,7 +43235,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -39019,7 +43605,7 @@
       int maxSplits)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.maxSplits = maxSplits;
       setMaxSplitsIsSet(true);
@@ -39032,7 +43618,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -39058,16 +43643,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public listSplits_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public listSplits_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -39171,7 +43756,7 @@
         return getTableName();
 
       case MAX_SPLITS:
-        return Integer.valueOf(getMaxSplits());
+        return getMaxSplits();
 
       }
       throw new IllegalStateException();
@@ -39239,7 +43824,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_maxSplits = true;
+      list.add(present_maxSplits);
+      if (present_maxSplits)
+        list.add(maxSplits);
+
+      return list.hashCode();
     }
 
     @Override
@@ -39865,7 +44467,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -39940,7 +44564,7 @@
       if (this.success == null) {
         sb.append("null");
       } else {
-        sb.append(this.success);
+        org.apache.thrift.TBaseHelper.toString(this.success, sb);
       }
       first = false;
       if (!first) sb.append(", ");
@@ -40015,11 +44639,11 @@
                 {
                   org.apache.thrift.protocol.TList _list276 = iprot.readListBegin();
                   struct.success = new ArrayList<ByteBuffer>(_list276.size);
-                  for (int _i277 = 0; _i277 < _list276.size; ++_i277)
+                  ByteBuffer _elem277;
+                  for (int _i278 = 0; _i278 < _list276.size; ++_i278)
                   {
-                    ByteBuffer _elem278;
-                    _elem278 = iprot.readBinary();
-                    struct.success.add(_elem278);
+                    _elem277 = iprot.readBinary();
+                    struct.success.add(_elem277);
                   }
                   iprot.readListEnd();
                 }
@@ -40156,11 +44780,11 @@
           {
             org.apache.thrift.protocol.TList _list281 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new ArrayList<ByteBuffer>(_list281.size);
-            for (int _i282 = 0; _i282 < _list281.size; ++_i282)
+            ByteBuffer _elem282;
+            for (int _i283 = 0; _i283 < _list281.size; ++_i283)
             {
-              ByteBuffer _elem283;
-              _elem283 = iprot.readBinary();
-              struct.success.add(_elem283);
+              _elem282 = iprot.readBinary();
+              struct.success.add(_elem282);
             }
           }
           struct.setSuccessIsSet(true);
@@ -40273,7 +44897,7 @@
       ByteBuffer login)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     /**
@@ -40282,7 +44906,6 @@
     public listTables_args(listTables_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
     }
 
@@ -40301,16 +44924,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public listTables_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public listTables_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -40391,7 +45014,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
     }
 
     @Override
@@ -40762,7 +45392,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -40858,11 +45495,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set284 = iprot.readSetBegin();
                   struct.success = new HashSet<String>(2*_set284.size);
-                  for (int _i285 = 0; _i285 < _set284.size; ++_i285)
+                  String _elem285;
+                  for (int _i286 = 0; _i286 < _set284.size; ++_i286)
                   {
-                    String _elem286;
-                    _elem286 = iprot.readString();
-                    struct.success.add(_elem286);
+                    _elem285 = iprot.readString();
+                    struct.success.add(_elem285);
                   }
                   iprot.readSetEnd();
                 }
@@ -40939,11 +45576,11 @@
           {
             org.apache.thrift.protocol.TSet _set289 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashSet<String>(2*_set289.size);
-            for (int _i290 = 0; _i290 < _set289.size; ++_i290)
+            String _elem290;
+            for (int _i291 = 0; _i291 < _set289.size; ++_i291)
             {
-              String _elem291;
-              _elem291 = iprot.readString();
-              struct.success.add(_elem291);
+              _elem290 = iprot.readString();
+              struct.success.add(_elem290);
             }
           }
           struct.setSuccessIsSet(true);
@@ -41049,7 +45686,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -41059,7 +45696,6 @@
     public listIterators_args(listIterators_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -41082,16 +45718,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public listIterators_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public listIterators_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -41218,7 +45854,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -41819,7 +46467,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -41969,23 +46639,23 @@
                 {
                   org.apache.thrift.protocol.TMap _map292 = iprot.readMapBegin();
                   struct.success = new HashMap<String,Set<IteratorScope>>(2*_map292.size);
-                  for (int _i293 = 0; _i293 < _map292.size; ++_i293)
+                  String _key293;
+                  Set<IteratorScope> _val294;
+                  for (int _i295 = 0; _i295 < _map292.size; ++_i295)
                   {
-                    String _key294;
-                    Set<IteratorScope> _val295;
-                    _key294 = iprot.readString();
+                    _key293 = iprot.readString();
                     {
                       org.apache.thrift.protocol.TSet _set296 = iprot.readSetBegin();
-                      _val295 = new HashSet<IteratorScope>(2*_set296.size);
-                      for (int _i297 = 0; _i297 < _set296.size; ++_i297)
+                      _val294 = new HashSet<IteratorScope>(2*_set296.size);
+                      IteratorScope _elem297;
+                      for (int _i298 = 0; _i298 < _set296.size; ++_i298)
                       {
-                        IteratorScope _elem298;
-                        _elem298 = IteratorScope.findByValue(iprot.readI32());
-                        _val295.add(_elem298);
+                        _elem297 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                        _val294.add(_elem297);
                       }
                       iprot.readSetEnd();
                     }
-                    struct.success.put(_key294, _val295);
+                    struct.success.put(_key293, _val294);
                   }
                   iprot.readMapEnd();
                 }
@@ -42137,22 +46807,22 @@
           {
             org.apache.thrift.protocol.TMap _map303 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.SET, iprot.readI32());
             struct.success = new HashMap<String,Set<IteratorScope>>(2*_map303.size);
-            for (int _i304 = 0; _i304 < _map303.size; ++_i304)
+            String _key304;
+            Set<IteratorScope> _val305;
+            for (int _i306 = 0; _i306 < _map303.size; ++_i306)
             {
-              String _key305;
-              Set<IteratorScope> _val306;
-              _key305 = iprot.readString();
+              _key304 = iprot.readString();
               {
                 org.apache.thrift.protocol.TSet _set307 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
-                _val306 = new HashSet<IteratorScope>(2*_set307.size);
-                for (int _i308 = 0; _i308 < _set307.size; ++_i308)
+                _val305 = new HashSet<IteratorScope>(2*_set307.size);
+                IteratorScope _elem308;
+                for (int _i309 = 0; _i309 < _set307.size; ++_i309)
                 {
-                  IteratorScope _elem309;
-                  _elem309 = IteratorScope.findByValue(iprot.readI32());
-                  _val306.add(_elem309);
+                  _elem308 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                  _val305.add(_elem308);
                 }
               }
-              struct.success.put(_key305, _val306);
+              struct.success.put(_key304, _val305);
             }
           }
           struct.setSuccessIsSet(true);
@@ -42273,7 +46943,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -42283,7 +46953,6 @@
     public listConstraints_args(listConstraints_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -42306,16 +46975,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public listConstraints_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public listConstraints_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -42442,7 +47111,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -43028,7 +47709,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -43178,13 +47881,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map310 = iprot.readMapBegin();
                   struct.success = new HashMap<String,Integer>(2*_map310.size);
-                  for (int _i311 = 0; _i311 < _map310.size; ++_i311)
+                  String _key311;
+                  int _val312;
+                  for (int _i313 = 0; _i313 < _map310.size; ++_i313)
                   {
-                    String _key312;
-                    int _val313;
-                    _key312 = iprot.readString();
-                    _val313 = iprot.readI32();
-                    struct.success.put(_key312, _val313);
+                    _key311 = iprot.readString();
+                    _val312 = iprot.readI32();
+                    struct.success.put(_key311, _val312);
                   }
                   iprot.readMapEnd();
                 }
@@ -43323,13 +48026,13 @@
           {
             org.apache.thrift.protocol.TMap _map316 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
             struct.success = new HashMap<String,Integer>(2*_map316.size);
-            for (int _i317 = 0; _i317 < _map316.size; ++_i317)
+            String _key317;
+            int _val318;
+            for (int _i319 = 0; _i319 < _map316.size; ++_i319)
             {
-              String _key318;
-              int _val319;
-              _key318 = iprot.readString();
-              _val319 = iprot.readI32();
-              struct.success.put(_key318, _val319);
+              _key317 = iprot.readString();
+              _val318 = iprot.readI32();
+              struct.success.put(_key317, _val318);
             }
           }
           struct.setSuccessIsSet(true);
@@ -43466,10 +48169,10 @@
       ByteBuffer endRow)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
-      this.startRow = startRow;
-      this.endRow = endRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     /**
@@ -43478,18 +48181,15 @@
     public mergeTablets_args(mergeTablets_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
       }
       if (other.isSetStartRow()) {
         this.startRow = org.apache.thrift.TBaseHelper.copyBinary(other.startRow);
-;
       }
       if (other.isSetEndRow()) {
         this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
       }
     }
 
@@ -43511,16 +48211,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public mergeTablets_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public mergeTablets_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -43569,16 +48269,16 @@
     }
 
     public ByteBuffer bufferForStartRow() {
-      return startRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(startRow);
     }
 
     public mergeTablets_args setStartRow(byte[] startRow) {
-      setStartRow(startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(startRow));
+      this.startRow = startRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(startRow, startRow.length));
       return this;
     }
 
     public mergeTablets_args setStartRow(ByteBuffer startRow) {
-      this.startRow = startRow;
+      this.startRow = org.apache.thrift.TBaseHelper.copyBinary(startRow);
       return this;
     }
 
@@ -43603,16 +48303,16 @@
     }
 
     public ByteBuffer bufferForEndRow() {
-      return endRow;
+      return org.apache.thrift.TBaseHelper.copyBinary(endRow);
     }
 
     public mergeTablets_args setEndRow(byte[] endRow) {
-      setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+      this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
       return this;
     }
 
     public mergeTablets_args setEndRow(ByteBuffer endRow) {
-      this.endRow = endRow;
+      this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
       return this;
     }
 
@@ -43759,7 +48459,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      list.add(present_startRow);
+      if (present_startRow)
+        list.add(startRow);
+
+      boolean present_endRow = true && (isSetEndRow());
+      list.add(present_endRow);
+      if (present_endRow)
+        list.add(endRow);
+
+      return list.hashCode();
     }
 
     @Override
@@ -44354,7 +49076,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -44709,7 +49448,7 @@
       boolean wait)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.wait = wait;
       setWaitIsSet(true);
@@ -44722,7 +49461,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -44748,16 +49486,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public offlineTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public offlineTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -44861,7 +49599,7 @@
         return getTableName();
 
       case WAIT:
-        return Boolean.valueOf(isWait());
+        return isWait();
 
       }
       throw new IllegalStateException();
@@ -44929,7 +49667,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_wait = true;
+      list.add(present_wait);
+      if (present_wait)
+        list.add(wait);
+
+      return list.hashCode();
     }
 
     @Override
@@ -45479,7 +50234,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -45834,7 +50606,7 @@
       boolean wait)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.wait = wait;
       setWaitIsSet(true);
@@ -45847,7 +50619,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -45873,16 +50644,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public onlineTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public onlineTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -45986,7 +50757,7 @@
         return getTableName();
 
       case WAIT:
-        return Boolean.valueOf(isWait());
+        return isWait();
 
       }
       throw new IllegalStateException();
@@ -46054,7 +50825,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_wait = true;
+      list.add(present_wait);
+      if (present_wait)
+        list.add(wait);
+
+      return list.hashCode();
     }
 
     @Override
@@ -46604,7 +51392,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -46957,7 +51762,7 @@
       int constraint)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.constraint = constraint;
       setConstraintIsSet(true);
@@ -46970,7 +51775,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -46996,16 +51800,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public removeConstraint_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public removeConstraint_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -47109,7 +51913,7 @@
         return getTableName();
 
       case CONSTRAINT:
-        return Integer.valueOf(getConstraint());
+        return getConstraint();
 
       }
       throw new IllegalStateException();
@@ -47177,7 +51981,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_constraint = true;
+      list.add(present_constraint);
+      if (present_constraint)
+        list.add(constraint);
+
+      return list.hashCode();
     }
 
     @Override
@@ -47727,7 +52548,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -48087,7 +52925,7 @@
       Set<IteratorScope> scopes)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.iterName = iterName;
       this.scopes = scopes;
@@ -48099,7 +52937,6 @@
     public removeIterator_args(removeIterator_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -48134,16 +52971,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public removeIterator_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public removeIterator_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -48377,7 +53214,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_iterName = true && (isSetIterName());
+      list.add(present_iterName);
+      if (present_iterName)
+        list.add(iterName);
+
+      boolean present_scopes = true && (isSetScopes());
+      list.add(present_scopes);
+      if (present_scopes)
+        list.add(scopes);
+
+      return list.hashCode();
     }
 
     @Override
@@ -48551,11 +53410,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set320 = iprot.readSetBegin();
                   struct.scopes = new HashSet<IteratorScope>(2*_set320.size);
-                  for (int _i321 = 0; _i321 < _set320.size; ++_i321)
+                  IteratorScope _elem321;
+                  for (int _i322 = 0; _i322 < _set320.size; ++_i322)
                   {
-                    IteratorScope _elem322;
-                    _elem322 = IteratorScope.findByValue(iprot.readI32());
-                    struct.scopes.add(_elem322);
+                    _elem321 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                    struct.scopes.add(_elem321);
                   }
                   iprot.readSetEnd();
                 }
@@ -48677,11 +53536,11 @@
           {
             org.apache.thrift.protocol.TSet _set325 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
             struct.scopes = new HashSet<IteratorScope>(2*_set325.size);
-            for (int _i326 = 0; _i326 < _set325.size; ++_i326)
+            IteratorScope _elem326;
+            for (int _i327 = 0; _i327 < _set325.size; ++_i327)
             {
-              IteratorScope _elem327;
-              _elem327 = IteratorScope.findByValue(iprot.readI32());
-              struct.scopes.add(_elem327);
+              _elem326 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+              struct.scopes.add(_elem326);
             }
           }
           struct.setScopesIsSet(true);
@@ -49004,7 +53863,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -49355,7 +54231,7 @@
       String property)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.property = property;
     }
@@ -49366,7 +54242,6 @@
     public removeTableProperty_args(removeTableProperty_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -49393,16 +54268,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public removeTableProperty_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public removeTableProperty_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -49575,7 +54450,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      return list.hashCode();
     }
 
     @Override
@@ -50129,7 +55021,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -50480,7 +55389,7 @@
       String newTableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.oldTableName = oldTableName;
       this.newTableName = newTableName;
     }
@@ -50491,7 +55400,6 @@
     public renameTable_args(renameTable_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetOldTableName()) {
         this.oldTableName = other.oldTableName;
@@ -50518,16 +55426,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public renameTable_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public renameTable_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -50700,7 +55608,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_oldTableName = true && (isSetOldTableName());
+      list.add(present_oldTableName);
+      if (present_oldTableName)
+        list.add(oldTableName);
+
+      boolean present_newTableName = true && (isSetNewTableName());
+      list.add(present_newTableName);
+      if (present_newTableName)
+        list.add(newTableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -51313,7 +56238,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      boolean present_ouch4 = true && (isSetOuch4());
+      list.add(present_ouch4);
+      if (present_ouch4)
+        list.add(ouch4);
+
+      return list.hashCode();
     }
 
     @Override
@@ -51710,7 +56657,7 @@
       Map<String,Set<String>> groups)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.groups = groups;
     }
@@ -51721,7 +56668,6 @@
     public setLocalityGroups_args(setLocalityGroups_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -51760,16 +56706,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public setLocalityGroups_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public setLocalityGroups_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -51953,7 +56899,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_groups = true && (isSetGroups());
+      list.add(present_groups);
+      if (present_groups)
+        list.add(groups);
+
+      return list.hashCode();
     }
 
     @Override
@@ -52101,23 +57064,23 @@
                 {
                   org.apache.thrift.protocol.TMap _map328 = iprot.readMapBegin();
                   struct.groups = new HashMap<String,Set<String>>(2*_map328.size);
-                  for (int _i329 = 0; _i329 < _map328.size; ++_i329)
+                  String _key329;
+                  Set<String> _val330;
+                  for (int _i331 = 0; _i331 < _map328.size; ++_i331)
                   {
-                    String _key330;
-                    Set<String> _val331;
-                    _key330 = iprot.readString();
+                    _key329 = iprot.readString();
                     {
                       org.apache.thrift.protocol.TSet _set332 = iprot.readSetBegin();
-                      _val331 = new HashSet<String>(2*_set332.size);
-                      for (int _i333 = 0; _i333 < _set332.size; ++_i333)
+                      _val330 = new HashSet<String>(2*_set332.size);
+                      String _elem333;
+                      for (int _i334 = 0; _i334 < _set332.size; ++_i334)
                       {
-                        String _elem334;
-                        _elem334 = iprot.readString();
-                        _val331.add(_elem334);
+                        _elem333 = iprot.readString();
+                        _val330.add(_elem333);
                       }
                       iprot.readSetEnd();
                     }
-                    struct.groups.put(_key330, _val331);
+                    struct.groups.put(_key329, _val330);
                   }
                   iprot.readMapEnd();
                 }
@@ -52239,22 +57202,22 @@
           {
             org.apache.thrift.protocol.TMap _map339 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.SET, iprot.readI32());
             struct.groups = new HashMap<String,Set<String>>(2*_map339.size);
-            for (int _i340 = 0; _i340 < _map339.size; ++_i340)
+            String _key340;
+            Set<String> _val341;
+            for (int _i342 = 0; _i342 < _map339.size; ++_i342)
             {
-              String _key341;
-              Set<String> _val342;
-              _key341 = iprot.readString();
+              _key340 = iprot.readString();
               {
                 org.apache.thrift.protocol.TSet _set343 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
-                _val342 = new HashSet<String>(2*_set343.size);
-                for (int _i344 = 0; _i344 < _set343.size; ++_i344)
+                _val341 = new HashSet<String>(2*_set343.size);
+                String _elem344;
+                for (int _i345 = 0; _i345 < _set343.size; ++_i345)
                 {
-                  String _elem345;
-                  _elem345 = iprot.readString();
-                  _val342.add(_elem345);
+                  _elem344 = iprot.readString();
+                  _val341.add(_elem344);
                 }
               }
-              struct.groups.put(_key341, _val342);
+              struct.groups.put(_key340, _val341);
             }
           }
           struct.setGroupsIsSet(true);
@@ -52577,7 +57540,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -52936,7 +57916,7 @@
       String value)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.property = property;
       this.value = value;
@@ -52948,7 +57928,6 @@
     public setTableProperty_args(setTableProperty_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -52979,16 +57958,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public setTableProperty_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public setTableProperty_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -53207,7 +58186,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      boolean present_value = true && (isSetValue());
+      list.add(present_value);
+      if (present_value)
+        list.add(value);
+
+      return list.hashCode();
     }
 
     @Override
@@ -53802,7 +58803,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -54163,7 +59181,7 @@
       int maxSplits)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.range = range;
       this.maxSplits = maxSplits;
@@ -54177,7 +59195,6 @@
       __isset_bitfield = other.__isset_bitfield;
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -54207,16 +59224,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public splitRangeByTablets_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public splitRangeByTablets_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -54355,7 +59372,7 @@
         return getRange();
 
       case MAX_SPLITS:
-        return Integer.valueOf(getMaxSplits());
+        return getMaxSplits();
 
       }
       throw new IllegalStateException();
@@ -54434,7 +59451,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_range = true && (isSetRange());
+      list.add(present_range);
+      if (present_range)
+        list.add(range);
+
+      boolean present_maxSplits = true;
+      list.add(present_maxSplits);
+      if (present_maxSplits)
+        list.add(maxSplits);
+
+      return list.hashCode();
     }
 
     @Override
@@ -55109,7 +60148,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -55259,12 +60320,12 @@
                 {
                   org.apache.thrift.protocol.TSet _set346 = iprot.readSetBegin();
                   struct.success = new HashSet<Range>(2*_set346.size);
-                  for (int _i347 = 0; _i347 < _set346.size; ++_i347)
+                  Range _elem347;
+                  for (int _i348 = 0; _i348 < _set346.size; ++_i348)
                   {
-                    Range _elem348;
-                    _elem348 = new Range();
-                    _elem348.read(iprot);
-                    struct.success.add(_elem348);
+                    _elem347 = new Range();
+                    _elem347.read(iprot);
+                    struct.success.add(_elem347);
                   }
                   iprot.readSetEnd();
                 }
@@ -55401,12 +60462,12 @@
           {
             org.apache.thrift.protocol.TSet _set351 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
             struct.success = new HashSet<Range>(2*_set351.size);
-            for (int _i352 = 0; _i352 < _set351.size; ++_i352)
+            Range _elem352;
+            for (int _i353 = 0; _i353 < _set351.size; ++_i353)
             {
-              Range _elem353;
-              _elem353 = new Range();
-              _elem353.read(iprot);
-              struct.success.add(_elem353);
+              _elem352 = new Range();
+              _elem352.read(iprot);
+              struct.success.add(_elem352);
             }
           }
           struct.setSuccessIsSet(true);
@@ -55527,7 +60588,7 @@
       String tableName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
     }
 
@@ -55537,7 +60598,6 @@
     public tableExists_args(tableExists_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -55560,16 +60620,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public tableExists_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public tableExists_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -55696,7 +60756,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -56047,7 +61119,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       }
       throw new IllegalStateException();
@@ -56093,7 +61165,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -56338,7 +61417,7 @@
       ByteBuffer login)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     /**
@@ -56347,7 +61426,6 @@
     public tableIdMap_args(tableIdMap_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
     }
 
@@ -56366,16 +61444,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public tableIdMap_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public tableIdMap_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -56456,7 +61534,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
     }
 
     @Override
@@ -56824,7 +61909,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -56920,13 +62012,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map354 = iprot.readMapBegin();
                   struct.success = new HashMap<String,String>(2*_map354.size);
-                  for (int _i355 = 0; _i355 < _map354.size; ++_i355)
+                  String _key355;
+                  String _val356;
+                  for (int _i357 = 0; _i357 < _map354.size; ++_i357)
                   {
-                    String _key356;
-                    String _val357;
-                    _key356 = iprot.readString();
-                    _val357 = iprot.readString();
-                    struct.success.put(_key356, _val357);
+                    _key355 = iprot.readString();
+                    _val356 = iprot.readString();
+                    struct.success.put(_key355, _val356);
                   }
                   iprot.readMapEnd();
                 }
@@ -57005,13 +62097,13 @@
           {
             org.apache.thrift.protocol.TMap _map360 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashMap<String,String>(2*_map360.size);
-            for (int _i361 = 0; _i361 < _map360.size; ++_i361)
+            String _key361;
+            String _val362;
+            for (int _i363 = 0; _i363 < _map360.size; ++_i363)
             {
-              String _key362;
-              String _val363;
-              _key362 = iprot.readString();
-              _val363 = iprot.readString();
-              struct.success.put(_key362, _val363);
+              _key361 = iprot.readString();
+              _val362 = iprot.readString();
+              struct.success.put(_key361, _val362);
             }
           }
           struct.setSuccessIsSet(true);
@@ -57133,7 +62225,7 @@
       String asTypeName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.className = className;
       this.asTypeName = asTypeName;
@@ -57145,7 +62237,6 @@
     public testTableClassLoad_args(testTableClassLoad_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -57176,16 +62267,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public testTableClassLoad_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public testTableClassLoad_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
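The recurring `setLogin`/`bufferForLogin` changes above all apply one defensive-copy rule: never store or hand out the caller's own array or buffer. A small sketch of the idea, using a hypothetical holder class rather than the generated Thrift types (`TBaseHelper.copyBinary` is approximated here by a read-only duplicate):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Illustrative holder, not the generated class: shows why the setter
// now copies via Arrays.copyOf before wrapping.
class LoginHolder {
    private ByteBuffer login;

    // Copy the caller's array first, so mutating it later cannot
    // silently change the stored buffer.
    LoginHolder setLogin(byte[] login) {
        this.login = login == null
            ? null
            : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
        return this;
    }

    // Hand out a read-only view instead of the internal buffer,
    // standing in for what copyBinary achieves in the real code.
    ByteBuffer bufferForLogin() {
        return login == null ? null : login.asReadOnlyBuffer();
    }
}
```

With the old pass-through behavior, a caller mutating the array after `setLogin(byte[])` would have corrupted the stored credentials; the copy makes the struct own its bytes.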
@@ -57404,7 +62495,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_className = true && (isSetClassName());
+      list.add(present_className);
+      if (present_className)
+        list.add(className);
+
+      boolean present_asTypeName = true && (isSetAsTypeName());
+      list.add(present_asTypeName);
+      if (present_asTypeName)
+        list.add(asTypeName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -57972,7 +63085,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case OUCH1:
         return getOuch1();
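The `Boolean.valueOf(isSuccess())` to `isSuccess()` simplification above relies on autoboxing: returning a primitive `boolean` where `Object` is expected boxes it through `Boolean.valueOf` automatically, so the explicit call was redundant. A minimal illustration (class and method names are illustrative):

```java
// Sketch of the getFieldValue pattern: the Object-typed return
// auto-boxes the primitive, exactly as the removed explicit call did.
class Result {
    private boolean success = true;

    boolean isSuccess() {
        return success;
    }

    // Object-typed accessor, as in the generated getFieldValue(_Fields).
    Object getFieldValue() {
        return isSuccess(); // auto-boxed via Boolean.valueOf under the hood
    }
}
```

Because autoboxing goes through `Boolean.valueOf`, the boxed result is even the same cached `Boolean.TRUE`/`Boolean.FALSE` instance, so the change is purely cosmetic.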
@@ -58060,7 +63173,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
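Every `hashCode` hunk in this diff replaces the stub `return 0` with the same presence-flag scheme: each field contributes an is-set boolean, plus the value only when set, and the whole list is hashed. A compact stand-in with two fields (illustrative names, `!= null` standing in for the generated `isSetX()` checks):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative struct: equal field contents must yield equal hashes,
// and an unset field hashes differently from a set one because the
// presence flag itself is part of the hashed list.
class Args {
    String login;     // optional
    String tableName; // optional

    @Override
    public int hashCode() {
        List<Object> list = new ArrayList<Object>();

        boolean presentLogin = (login != null);
        list.add(presentLogin);
        if (presentLogin)
            list.add(login);

        boolean presentTableName = (tableName != null);
        list.add(presentTableName);
        if (presentTableName)
            list.add(tableName);

        return list.hashCode();
    }
}
```

The stub `return 0` was legal but degraded every hash-based collection holding these structs to a single bucket; the list-based version restores useful distribution while keeping the hashCode/equals contract.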
@@ -58442,7 +63577,7 @@
       String tserver)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tserver = tserver;
     }
 
@@ -58452,7 +63587,6 @@
     public pingTabletServer_args(pingTabletServer_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTserver()) {
         this.tserver = other.tserver;
@@ -58475,16 +63609,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public pingTabletServer_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public pingTabletServer_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -58611,7 +63745,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tserver = true && (isSetTserver());
+      list.add(present_tserver);
+      if (present_tserver)
+        list.add(tserver);
+
+      return list.hashCode();
     }
 
     @Override
@@ -59065,7 +64211,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -59365,7 +64523,7 @@
       String tserver)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tserver = tserver;
     }
 
@@ -59375,7 +64533,6 @@
     public getActiveScans_args(getActiveScans_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTserver()) {
         this.tserver = other.tserver;
@@ -59398,16 +64555,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getActiveScans_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getActiveScans_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -59534,7 +64691,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tserver = true && (isSetTserver());
+      list.add(present_tserver);
+      if (present_tserver)
+        list.add(tserver);
+
+      return list.hashCode();
     }
 
     @Override
@@ -60067,7 +65236,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -60199,12 +65385,12 @@
                 {
                   org.apache.thrift.protocol.TList _list364 = iprot.readListBegin();
                   struct.success = new ArrayList<ActiveScan>(_list364.size);
-                  for (int _i365 = 0; _i365 < _list364.size; ++_i365)
+                  ActiveScan _elem365;
+                  for (int _i366 = 0; _i366 < _list364.size; ++_i366)
                   {
-                    ActiveScan _elem366;
-                    _elem366 = new ActiveScan();
-                    _elem366.read(iprot);
-                    struct.success.add(_elem366);
+                    _elem365 = new ActiveScan();
+                    _elem365.read(iprot);
+                    struct.success.add(_elem365);
                   }
                   iprot.readListEnd();
                 }
@@ -60321,12 +65507,12 @@
           {
             org.apache.thrift.protocol.TList _list369 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
             struct.success = new ArrayList<ActiveScan>(_list369.size);
-            for (int _i370 = 0; _i370 < _list369.size; ++_i370)
+            ActiveScan _elem370;
+            for (int _i371 = 0; _i371 < _list369.size; ++_i371)
             {
-              ActiveScan _elem371;
-              _elem371 = new ActiveScan();
-              _elem371.read(iprot);
-              struct.success.add(_elem371);
+              _elem370 = new ActiveScan();
+              _elem370.read(iprot);
+              struct.success.add(_elem370);
             }
           }
           struct.setSuccessIsSet(true);
@@ -60442,7 +65628,7 @@
       String tserver)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tserver = tserver;
     }
 
@@ -60452,7 +65638,6 @@
     public getActiveCompactions_args(getActiveCompactions_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTserver()) {
         this.tserver = other.tserver;
@@ -60475,16 +65660,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getActiveCompactions_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getActiveCompactions_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -60611,7 +65796,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tserver = true && (isSetTserver());
+      list.add(present_tserver);
+      if (present_tserver)
+        list.add(tserver);
+
+      return list.hashCode();
     }
 
     @Override
@@ -61144,7 +66341,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -61276,12 +66490,12 @@
                 {
                   org.apache.thrift.protocol.TList _list372 = iprot.readListBegin();
                   struct.success = new ArrayList<ActiveCompaction>(_list372.size);
-                  for (int _i373 = 0; _i373 < _list372.size; ++_i373)
+                  ActiveCompaction _elem373;
+                  for (int _i374 = 0; _i374 < _list372.size; ++_i374)
                   {
-                    ActiveCompaction _elem374;
-                    _elem374 = new ActiveCompaction();
-                    _elem374.read(iprot);
-                    struct.success.add(_elem374);
+                    _elem373 = new ActiveCompaction();
+                    _elem373.read(iprot);
+                    struct.success.add(_elem373);
                   }
                   iprot.readListEnd();
                 }
@@ -61398,12 +66612,12 @@
           {
             org.apache.thrift.protocol.TList _list377 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
             struct.success = new ArrayList<ActiveCompaction>(_list377.size);
-            for (int _i378 = 0; _i378 < _list377.size; ++_i378)
+            ActiveCompaction _elem378;
+            for (int _i379 = 0; _i379 < _list377.size; ++_i379)
             {
-              ActiveCompaction _elem379;
-              _elem379 = new ActiveCompaction();
-              _elem379.read(iprot);
-              struct.success.add(_elem379);
+              _elem378 = new ActiveCompaction();
+              _elem378.read(iprot);
+              struct.success.add(_elem378);
             }
           }
           struct.setSuccessIsSet(true);
@@ -61511,7 +66725,7 @@
       ByteBuffer login)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     /**
@@ -61520,7 +66734,6 @@
     public getSiteConfiguration_args(getSiteConfiguration_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
     }
 
@@ -61539,16 +66752,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getSiteConfiguration_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getSiteConfiguration_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -61629,7 +66842,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
     }
 
     @Override
@@ -62115,7 +67335,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -62247,13 +67484,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map380 = iprot.readMapBegin();
                   struct.success = new HashMap<String,String>(2*_map380.size);
-                  for (int _i381 = 0; _i381 < _map380.size; ++_i381)
+                  String _key381;
+                  String _val382;
+                  for (int _i383 = 0; _i383 < _map380.size; ++_i383)
                   {
-                    String _key382;
-                    String _val383;
-                    _key382 = iprot.readString();
-                    _val383 = iprot.readString();
-                    struct.success.put(_key382, _val383);
+                    _key381 = iprot.readString();
+                    _val382 = iprot.readString();
+                    struct.success.put(_key381, _val382);
                   }
                   iprot.readMapEnd();
                 }
@@ -62372,13 +67609,13 @@
           {
             org.apache.thrift.protocol.TMap _map386 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashMap<String,String>(2*_map386.size);
-            for (int _i387 = 0; _i387 < _map386.size; ++_i387)
+            String _key387;
+            String _val388;
+            for (int _i389 = 0; _i389 < _map386.size; ++_i389)
             {
-              String _key388;
-              String _val389;
-              _key388 = iprot.readString();
-              _val389 = iprot.readString();
-              struct.success.put(_key388, _val389);
+              _key387 = iprot.readString();
+              _val388 = iprot.readString();
+              struct.success.put(_key387, _val388);
             }
           }
           struct.setSuccessIsSet(true);
@@ -62486,7 +67723,7 @@
       ByteBuffer login)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     /**
@@ -62495,7 +67732,6 @@
     public getSystemConfiguration_args(getSystemConfiguration_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
     }
 
@@ -62514,16 +67750,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getSystemConfiguration_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getSystemConfiguration_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -62604,7 +67840,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
     }
 
     @Override
@@ -63090,7 +68333,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -63222,13 +68482,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map390 = iprot.readMapBegin();
                   struct.success = new HashMap<String,String>(2*_map390.size);
-                  for (int _i391 = 0; _i391 < _map390.size; ++_i391)
+                  String _key391;
+                  String _val392;
+                  for (int _i393 = 0; _i393 < _map390.size; ++_i393)
                   {
-                    String _key392;
-                    String _val393;
-                    _key392 = iprot.readString();
-                    _val393 = iprot.readString();
-                    struct.success.put(_key392, _val393);
+                    _key391 = iprot.readString();
+                    _val392 = iprot.readString();
+                    struct.success.put(_key391, _val392);
                   }
                   iprot.readMapEnd();
                 }
@@ -63347,13 +68607,13 @@
           {
             org.apache.thrift.protocol.TMap _map396 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashMap<String,String>(2*_map396.size);
-            for (int _i397 = 0; _i397 < _map396.size; ++_i397)
+            String _key397;
+            String _val398;
+            for (int _i399 = 0; _i399 < _map396.size; ++_i399)
             {
-              String _key398;
-              String _val399;
-              _key398 = iprot.readString();
-              _val399 = iprot.readString();
-              struct.success.put(_key398, _val399);
+              _key397 = iprot.readString();
+              _val398 = iprot.readString();
+              struct.success.put(_key397, _val398);
             }
           }
           struct.setSuccessIsSet(true);
@@ -63461,7 +68721,7 @@
       ByteBuffer login)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     /**
@@ -63470,7 +68730,6 @@
     public getTabletServers_args(getTabletServers_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
     }
 
@@ -63489,16 +68748,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getTabletServers_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getTabletServers_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -63579,7 +68838,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
     }
 
     @Override
@@ -63950,7 +69216,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -64046,11 +69319,11 @@
                 {
                   org.apache.thrift.protocol.TList _list400 = iprot.readListBegin();
                   struct.success = new ArrayList<String>(_list400.size);
-                  for (int _i401 = 0; _i401 < _list400.size; ++_i401)
+                  String _elem401;
+                  for (int _i402 = 0; _i402 < _list400.size; ++_i402)
                   {
-                    String _elem402;
-                    _elem402 = iprot.readString();
-                    struct.success.add(_elem402);
+                    _elem401 = iprot.readString();
+                    struct.success.add(_elem401);
                   }
                   iprot.readListEnd();
                 }
@@ -64127,11 +69400,11 @@
           {
             org.apache.thrift.protocol.TList _list405 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new ArrayList<String>(_list405.size);
-            for (int _i406 = 0; _i406 < _list405.size; ++_i406)
+            String _elem406;
+            for (int _i407 = 0; _i407 < _list405.size; ++_i407)
             {
-              String _elem407;
-              _elem407 = iprot.readString();
-              struct.success.add(_elem407);
+              _elem406 = iprot.readString();
+              struct.success.add(_elem406);
             }
           }
           struct.setSuccessIsSet(true);
@@ -64237,7 +69510,7 @@
       String property)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.property = property;
     }
 
@@ -64247,7 +69520,6 @@
     public removeProperty_args(removeProperty_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetProperty()) {
         this.property = other.property;
@@ -64270,16 +69542,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public removeProperty_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public removeProperty_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -64406,7 +69678,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      return list.hashCode();
     }
 
     @Override
@@ -64860,7 +70144,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -65168,7 +70464,7 @@
       String value)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.property = property;
       this.value = value;
     }
@@ -65179,7 +70475,6 @@
     public setProperty_args(setProperty_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetProperty()) {
         this.property = other.property;
@@ -65206,16 +70501,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public setProperty_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public setProperty_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -65388,7 +70683,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      boolean present_value = true && (isSetValue());
+      list.add(present_value);
+      if (present_value)
+        list.add(value);
+
+      return list.hashCode();
     }
 
     @Override
@@ -65883,7 +71195,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -66191,7 +71515,7 @@
       String asTypeName)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.className = className;
       this.asTypeName = asTypeName;
     }
@@ -66202,7 +71526,6 @@
     public testClassLoad_args(testClassLoad_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetClassName()) {
         this.className = other.className;
@@ -66229,16 +71552,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public testClassLoad_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public testClassLoad_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -66411,7 +71734,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_className = true && (isSetClassName());
+      list.add(present_className);
+      if (present_className)
+        list.add(className);
+
+      boolean present_asTypeName = true && (isSetAsTypeName());
+      list.add(present_asTypeName);
+      if (present_asTypeName)
+        list.add(asTypeName);
+
+      return list.hashCode();
     }
 
     @Override
@@ -66893,7 +72233,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case OUCH1:
         return getOuch1();
@@ -66967,7 +72307,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -67316,7 +72673,7 @@
       Map<String,String> properties)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.properties = properties;
     }
@@ -67327,7 +72684,6 @@
     public authenticateUser_args(authenticateUser_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -67355,16 +72711,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public authenticateUser_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public authenticateUser_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -67548,7 +72904,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_properties = true && (isSetProperties());
+      list.add(present_properties);
+      if (present_properties)
+        list.add(properties);
+
+      return list.hashCode();
     }
 
     @Override
@@ -67696,13 +73069,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map408 = iprot.readMapBegin();
                   struct.properties = new HashMap<String,String>(2*_map408.size);
-                  for (int _i409 = 0; _i409 < _map408.size; ++_i409)
+                  String _key409;
+                  String _val410;
+                  for (int _i411 = 0; _i411 < _map408.size; ++_i411)
                   {
-                    String _key410;
-                    String _val411;
-                    _key410 = iprot.readString();
-                    _val411 = iprot.readString();
-                    struct.properties.put(_key410, _val411);
+                    _key409 = iprot.readString();
+                    _val410 = iprot.readString();
+                    struct.properties.put(_key409, _val410);
                   }
                   iprot.readMapEnd();
                 }
@@ -67811,13 +73184,13 @@
           {
             org.apache.thrift.protocol.TMap _map414 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.properties = new HashMap<String,String>(2*_map414.size);
-            for (int _i415 = 0; _i415 < _map414.size; ++_i415)
+            String _key415;
+            String _val416;
+            for (int _i417 = 0; _i417 < _map414.size; ++_i417)
             {
-              String _key416;
-              String _val417;
-              _key416 = iprot.readString();
-              _val417 = iprot.readString();
-              struct.properties.put(_key416, _val417);
+              _key415 = iprot.readString();
+              _val416 = iprot.readString();
+              struct.properties.put(_key415, _val416);
             }
           }
           struct.setPropertiesIsSet(true);
@@ -68068,7 +73441,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case OUCH1:
         return getOuch1();
@@ -68142,7 +73515,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -68490,7 +73880,7 @@
       Set<ByteBuffer> authorizations)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.authorizations = authorizations;
     }
@@ -68501,7 +73891,6 @@
     public changeUserAuthorizations_args(changeUserAuthorizations_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -68529,16 +73918,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public changeUserAuthorizations_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public changeUserAuthorizations_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -68726,7 +74115,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_authorizations = true && (isSetAuthorizations());
+      list.add(present_authorizations);
+      if (present_authorizations)
+        list.add(authorizations);
+
+      return list.hashCode();
     }
 
     @Override
@@ -68807,7 +74213,7 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
       sb.append(")");
@@ -68874,11 +74280,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set418 = iprot.readSetBegin();
                   struct.authorizations = new HashSet<ByteBuffer>(2*_set418.size);
-                  for (int _i419 = 0; _i419 < _set418.size; ++_i419)
+                  ByteBuffer _elem419;
+                  for (int _i420 = 0; _i420 < _set418.size; ++_i420)
                   {
-                    ByteBuffer _elem420;
-                    _elem420 = iprot.readBinary();
-                    struct.authorizations.add(_elem420);
+                    _elem419 = iprot.readBinary();
+                    struct.authorizations.add(_elem419);
                   }
                   iprot.readSetEnd();
                 }
@@ -68985,11 +74391,11 @@
           {
             org.apache.thrift.protocol.TSet _set423 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.authorizations = new HashSet<ByteBuffer>(2*_set423.size);
-            for (int _i424 = 0; _i424 < _set423.size; ++_i424)
+            ByteBuffer _elem424;
+            for (int _i425 = 0; _i425 < _set423.size; ++_i425)
             {
-              ByteBuffer _elem425;
-              _elem425 = iprot.readBinary();
-              struct.authorizations.add(_elem425);
+              _elem424 = iprot.readBinary();
+              struct.authorizations.add(_elem424);
             }
           }
           struct.setAuthorizationsIsSet(true);
@@ -69253,7 +74659,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -69561,9 +74979,9 @@
       ByteBuffer password)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     /**
@@ -69572,14 +74990,12 @@
     public changeLocalUserPassword_args(changeLocalUserPassword_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
       }
       if (other.isSetPassword()) {
         this.password = org.apache.thrift.TBaseHelper.copyBinary(other.password);
-;
       }
     }
 
@@ -69600,16 +75016,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public changeLocalUserPassword_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public changeLocalUserPassword_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -69658,16 +75074,16 @@
     }
 
     public ByteBuffer bufferForPassword() {
-      return password;
+      return org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     public changeLocalUserPassword_args setPassword(byte[] password) {
-      setPassword(password == null ? (ByteBuffer)null : ByteBuffer.wrap(password));
+      this.password = password == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(password, password.length));
       return this;
     }
 
     public changeLocalUserPassword_args setPassword(ByteBuffer password) {
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
       return this;
     }
 
@@ -69792,7 +75208,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_password = true && (isSetPassword());
+      list.add(present_password);
+      if (present_password)
+        list.add(password);
+
+      return list.hashCode();
     }
 
     @Override
@@ -70287,7 +75720,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -70595,9 +76040,9 @@
       ByteBuffer password)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     /**
@@ -70606,14 +76051,12 @@
     public createLocalUser_args(createLocalUser_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
       }
       if (other.isSetPassword()) {
         this.password = org.apache.thrift.TBaseHelper.copyBinary(other.password);
-;
       }
     }
 
@@ -70634,16 +76077,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public createLocalUser_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public createLocalUser_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -70692,16 +76135,16 @@
     }
 
     public ByteBuffer bufferForPassword() {
-      return password;
+      return org.apache.thrift.TBaseHelper.copyBinary(password);
     }
 
     public createLocalUser_args setPassword(byte[] password) {
-      setPassword(password == null ? (ByteBuffer)null : ByteBuffer.wrap(password));
+      this.password = password == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(password, password.length));
       return this;
     }
 
     public createLocalUser_args setPassword(ByteBuffer password) {
-      this.password = password;
+      this.password = org.apache.thrift.TBaseHelper.copyBinary(password);
       return this;
     }
 
@@ -70826,7 +76269,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_password = true && (isSetPassword());
+      list.add(present_password);
+      if (present_password)
+        list.add(password);
+
+      return list.hashCode();
     }
 
     @Override
@@ -71321,7 +76781,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -71621,7 +77093,7 @@
       String user)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
     }
 
@@ -71631,7 +77103,6 @@
     public dropLocalUser_args(dropLocalUser_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -71654,16 +77125,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public dropLocalUser_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public dropLocalUser_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -71790,7 +77261,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      return list.hashCode();
     }
 
     @Override
@@ -72244,7 +77727,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -72544,7 +78039,7 @@
       String user)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
     }
 
@@ -72554,7 +78049,6 @@
     public getUserAuthorizations_args(getUserAuthorizations_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -72577,16 +78071,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public getUserAuthorizations_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public getUserAuthorizations_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -72713,7 +78207,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      return list.hashCode();
     }
 
     @Override
@@ -73243,7 +78749,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -73308,7 +78831,7 @@
       if (this.success == null) {
         sb.append("null");
       } else {
-        sb.append(this.success);
+        org.apache.thrift.TBaseHelper.toString(this.success, sb);
       }
       first = false;
       if (!first) sb.append(", ");
@@ -73375,11 +78898,11 @@
                 {
                   org.apache.thrift.protocol.TList _list426 = iprot.readListBegin();
                   struct.success = new ArrayList<ByteBuffer>(_list426.size);
-                  for (int _i427 = 0; _i427 < _list426.size; ++_i427)
+                  ByteBuffer _elem427;
+                  for (int _i428 = 0; _i428 < _list426.size; ++_i428)
                   {
-                    ByteBuffer _elem428;
-                    _elem428 = iprot.readBinary();
-                    struct.success.add(_elem428);
+                    _elem427 = iprot.readBinary();
+                    struct.success.add(_elem427);
                   }
                   iprot.readListEnd();
                 }
@@ -73496,11 +79019,11 @@
           {
             org.apache.thrift.protocol.TList _list431 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new ArrayList<ByteBuffer>(_list431.size);
-            for (int _i432 = 0; _i432 < _list431.size; ++_i432)
+            ByteBuffer _elem432;
+            for (int _i433 = 0; _i433 < _list431.size; ++_i433)
             {
-              ByteBuffer _elem433;
-              _elem433 = iprot.readBinary();
-              struct.success.add(_elem433);
+              _elem432 = iprot.readBinary();
+              struct.success.add(_elem432);
             }
           }
           struct.setSuccessIsSet(true);
@@ -73632,7 +79155,7 @@
       SystemPermission perm)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.perm = perm;
     }
@@ -73643,7 +79166,6 @@
     public grantSystemPermission_args(grantSystemPermission_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -73670,16 +79192,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public grantSystemPermission_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public grantSystemPermission_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -73860,7 +79382,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -74005,7 +79544,7 @@
               break;
             case 3: // PERM
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.perm = SystemPermission.findByValue(iprot.readI32());
+                struct.perm = org.apache.accumulo.proxy.thrift.SystemPermission.findByValue(iprot.readI32());
                 struct.setPermIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -74093,7 +79632,7 @@
           struct.setUserIsSet(true);
         }
         if (incoming.get(2)) {
-          struct.perm = SystemPermission.findByValue(iprot.readI32());
+          struct.perm = org.apache.accumulo.proxy.thrift.SystemPermission.findByValue(iprot.readI32());
           struct.setPermIsSet(true);
         }
       }
@@ -74355,7 +79894,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -74679,7 +80230,7 @@
       TablePermission perm)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.table = table;
       this.perm = perm;
@@ -74691,7 +80242,6 @@
     public grantTablePermission_args(grantTablePermission_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -74722,16 +80272,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public grantTablePermission_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public grantTablePermission_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -74958,7 +80508,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_table = true && (isSetTable());
+      list.add(present_table);
+      if (present_table)
+        list.add(table);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -75129,7 +80701,7 @@
               break;
             case 4: // PERM
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.perm = TablePermission.findByValue(iprot.readI32());
+                struct.perm = org.apache.accumulo.proxy.thrift.TablePermission.findByValue(iprot.readI32());
                 struct.setPermIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -75232,7 +80804,7 @@
           struct.setTableIsSet(true);
         }
         if (incoming.get(3)) {
-          struct.perm = TablePermission.findByValue(iprot.readI32());
+          struct.perm = org.apache.accumulo.proxy.thrift.TablePermission.findByValue(iprot.readI32());
           struct.setPermIsSet(true);
         }
       }
@@ -75553,7 +81125,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -75912,7 +81501,7 @@
       SystemPermission perm)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.perm = perm;
     }
@@ -75923,7 +81512,6 @@
     public hasSystemPermission_args(hasSystemPermission_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -75950,16 +81538,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public hasSystemPermission_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public hasSystemPermission_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -76140,7 +81728,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -76285,7 +81890,7 @@
               break;
             case 3: // PERM
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.perm = SystemPermission.findByValue(iprot.readI32());
+                struct.perm = org.apache.accumulo.proxy.thrift.SystemPermission.findByValue(iprot.readI32());
                 struct.setPermIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -76373,7 +81978,7 @@
           struct.setUserIsSet(true);
         }
         if (incoming.get(2)) {
-          struct.perm = SystemPermission.findByValue(iprot.readI32());
+          struct.perm = org.apache.accumulo.proxy.thrift.SystemPermission.findByValue(iprot.readI32());
           struct.setPermIsSet(true);
         }
       }
@@ -76622,7 +82227,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case OUCH1:
         return getOuch1();
@@ -76696,7 +82301,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -77059,7 +82681,7 @@
       TablePermission perm)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.table = table;
       this.perm = perm;
@@ -77071,7 +82693,6 @@
     public hasTablePermission_args(hasTablePermission_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -77102,16 +82723,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public hasTablePermission_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public hasTablePermission_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -77338,7 +82959,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_table = true && (isSetTable());
+      list.add(present_table);
+      if (present_table)
+        list.add(table);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -77509,7 +83152,7 @@
               break;
             case 4: // PERM
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.perm = TablePermission.findByValue(iprot.readI32());
+                struct.perm = org.apache.accumulo.proxy.thrift.TablePermission.findByValue(iprot.readI32());
                 struct.setPermIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -77612,7 +83255,7 @@
           struct.setTableIsSet(true);
         }
         if (incoming.get(3)) {
-          struct.perm = TablePermission.findByValue(iprot.readI32());
+          struct.perm = org.apache.accumulo.proxy.thrift.TablePermission.findByValue(iprot.readI32());
           struct.setPermIsSet(true);
         }
       }
@@ -77906,7 +83549,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case OUCH1:
         return getOuch1();
@@ -77994,7 +83637,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -78368,7 +84033,7 @@
       ByteBuffer login)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     /**
@@ -78377,7 +84042,6 @@
     public listLocalUsers_args(listLocalUsers_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
     }
 
@@ -78396,16 +84060,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public listLocalUsers_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public listLocalUsers_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -78486,7 +84150,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
     }
 
     @Override
@@ -79034,7 +84705,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -79184,11 +84877,11 @@
                 {
                   org.apache.thrift.protocol.TSet _set434 = iprot.readSetBegin();
                   struct.success = new HashSet<String>(2*_set434.size);
-                  for (int _i435 = 0; _i435 < _set434.size; ++_i435)
+                  String _elem435;
+                  for (int _i436 = 0; _i436 < _set434.size; ++_i436)
                   {
-                    String _elem436;
-                    _elem436 = iprot.readString();
-                    struct.success.add(_elem436);
+                    _elem435 = iprot.readString();
+                    struct.success.add(_elem435);
                   }
                   iprot.readSetEnd();
                 }
@@ -79325,11 +85018,11 @@
           {
             org.apache.thrift.protocol.TSet _set439 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
             struct.success = new HashSet<String>(2*_set439.size);
-            for (int _i440 = 0; _i440 < _set439.size; ++_i440)
+            String _elem440;
+            for (int _i441 = 0; _i441 < _set439.size; ++_i441)
             {
-              String _elem441;
-              _elem441 = iprot.readString();
-              struct.success.add(_elem441);
+              _elem440 = iprot.readString();
+              struct.success.add(_elem440);
             }
           }
           struct.setSuccessIsSet(true);
@@ -79466,7 +85159,7 @@
       SystemPermission perm)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.perm = perm;
     }
@@ -79477,7 +85170,6 @@
     public revokeSystemPermission_args(revokeSystemPermission_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -79504,16 +85196,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public revokeSystemPermission_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public revokeSystemPermission_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -79694,7 +85386,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -79839,7 +85548,7 @@
               break;
             case 3: // PERM
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.perm = SystemPermission.findByValue(iprot.readI32());
+                struct.perm = org.apache.accumulo.proxy.thrift.SystemPermission.findByValue(iprot.readI32());
                 struct.setPermIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -79927,7 +85636,7 @@
           struct.setUserIsSet(true);
         }
         if (incoming.get(2)) {
-          struct.perm = SystemPermission.findByValue(iprot.readI32());
+          struct.perm = org.apache.accumulo.proxy.thrift.SystemPermission.findByValue(iprot.readI32());
           struct.setPermIsSet(true);
         }
       }
@@ -80189,7 +85898,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -80513,7 +86234,7 @@
       TablePermission perm)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.user = user;
       this.table = table;
       this.perm = perm;
@@ -80525,7 +86246,6 @@
     public revokeTablePermission_args(revokeTablePermission_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetUser()) {
         this.user = other.user;
@@ -80556,16 +86276,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public revokeTablePermission_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public revokeTablePermission_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -80792,7 +86512,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_table = true && (isSetTable());
+      list.add(present_table);
+      if (present_table)
+        list.add(table);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -80963,7 +86705,7 @@
               break;
             case 4: // PERM
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.perm = TablePermission.findByValue(iprot.readI32());
+                struct.perm = org.apache.accumulo.proxy.thrift.TablePermission.findByValue(iprot.readI32());
                 struct.setPermIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -81066,7 +86808,7 @@
           struct.setTableIsSet(true);
         }
         if (incoming.get(3)) {
-          struct.perm = TablePermission.findByValue(iprot.readI32());
+          struct.perm = org.apache.accumulo.proxy.thrift.TablePermission.findByValue(iprot.readI32());
           struct.setPermIsSet(true);
         }
       }
@@ -81387,7 +87129,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -81634,6 +87393,3627 @@
 
   }
 
+  public static class grantNamespacePermission_args implements org.apache.thrift.TBase<grantNamespacePermission_args, grantNamespacePermission_args._Fields>, java.io.Serializable, Cloneable, Comparable<grantNamespacePermission_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("grantNamespacePermission_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField USER_FIELD_DESC = new org.apache.thrift.protocol.TField("user", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)3);
+    private static final org.apache.thrift.protocol.TField PERM_FIELD_DESC = new org.apache.thrift.protocol.TField("perm", org.apache.thrift.protocol.TType.I32, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new grantNamespacePermission_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new grantNamespacePermission_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String user; // required
+    public String namespaceName; // required
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public NamespacePermission perm; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      USER((short)2, "user"),
+      NAMESPACE_NAME((short)3, "namespaceName"),
+      /**
+       * 
+       * @see NamespacePermission
+       */
+      PERM((short)4, "perm");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if its not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // USER
+            return USER;
+          case 3: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 4: // PERM
+            return PERM;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if its not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.USER, new org.apache.thrift.meta_data.FieldMetaData("user", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.PERM, new org.apache.thrift.meta_data.FieldMetaData("perm", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, NamespacePermission.class)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(grantNamespacePermission_args.class, metaDataMap);
+    }
+
+    public grantNamespacePermission_args() {
+    }
+
+    public grantNamespacePermission_args(
+      ByteBuffer login,
+      String user,
+      String namespaceName,
+      NamespacePermission perm)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.user = user;
+      this.namespaceName = namespaceName;
+      this.perm = perm;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public grantNamespacePermission_args(grantNamespacePermission_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetUser()) {
+        this.user = other.user;
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetPerm()) {
+        this.perm = other.perm;
+      }
+    }
+
+    public grantNamespacePermission_args deepCopy() {
+      return new grantNamespacePermission_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.user = null;
+      this.namespaceName = null;
+      this.perm = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public grantNamespacePermission_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public grantNamespacePermission_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getUser() {
+      return this.user;
+    }
+
+    public grantNamespacePermission_args setUser(String user) {
+      this.user = user;
+      return this;
+    }
+
+    public void unsetUser() {
+      this.user = null;
+    }
+
+    /** Returns true if field user is set (has been assigned a value) and false otherwise */
+    public boolean isSetUser() {
+      return this.user != null;
+    }
+
+    public void setUserIsSet(boolean value) {
+      if (!value) {
+        this.user = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public grantNamespacePermission_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public NamespacePermission getPerm() {
+      return this.perm;
+    }
+
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public grantNamespacePermission_args setPerm(NamespacePermission perm) {
+      this.perm = perm;
+      return this;
+    }
+
+    public void unsetPerm() {
+      this.perm = null;
+    }
+
+    /** Returns true if field perm is set (has been assigned a value) and false otherwise */
+    public boolean isSetPerm() {
+      return this.perm != null;
+    }
+
+    public void setPermIsSet(boolean value) {
+      if (!value) {
+        this.perm = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case USER:
+        if (value == null) {
+          unsetUser();
+        } else {
+          setUser((String)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case PERM:
+        if (value == null) {
+          unsetPerm();
+        } else {
+          setPerm((NamespacePermission)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case USER:
+        return getUser();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case PERM:
+        return getPerm();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case USER:
+        return isSetUser();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case PERM:
+        return isSetPerm();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof grantNamespacePermission_args)
+        return this.equals((grantNamespacePermission_args)that);
+      return false;
+    }
+
+    public boolean equals(grantNamespacePermission_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_user = true && this.isSetUser();
+      boolean that_present_user = true && that.isSetUser();
+      if (this_present_user || that_present_user) {
+        if (!(this_present_user && that_present_user))
+          return false;
+        if (!this.user.equals(that.user))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_perm = true && this.isSetPerm();
+      boolean that_present_perm = true && that.isSetPerm();
+      if (this_present_perm || that_present_perm) {
+        if (!(this_present_perm && that_present_perm))
+          return false;
+        if (!this.perm.equals(that.perm))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(grantNamespacePermission_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetUser()).compareTo(other.isSetUser());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetUser()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.user, other.user);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetPerm()).compareTo(other.isSetPerm());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetPerm()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.perm, other.perm);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("grantNamespacePermission_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("user:");
+      if (this.user == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.user);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("perm:");
+      if (this.perm == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.perm);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class grantNamespacePermission_argsStandardSchemeFactory implements SchemeFactory {
+      public grantNamespacePermission_argsStandardScheme getScheme() {
+        return new grantNamespacePermission_argsStandardScheme();
+      }
+    }
+
+    private static class grantNamespacePermission_argsStandardScheme extends StandardScheme<grantNamespacePermission_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, grantNamespacePermission_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // USER
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.user = iprot.readString();
+                struct.setUserIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // PERM
+              if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+                struct.perm = org.apache.accumulo.proxy.thrift.NamespacePermission.findByValue(iprot.readI32());
+                struct.setPermIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, grantNamespacePermission_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.user != null) {
+          oprot.writeFieldBegin(USER_FIELD_DESC);
+          oprot.writeString(struct.user);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.perm != null) {
+          oprot.writeFieldBegin(PERM_FIELD_DESC);
+          oprot.writeI32(struct.perm.getValue());
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class grantNamespacePermission_argsTupleSchemeFactory implements SchemeFactory {
+      public grantNamespacePermission_argsTupleScheme getScheme() {
+        return new grantNamespacePermission_argsTupleScheme();
+      }
+    }
+
+    private static class grantNamespacePermission_argsTupleScheme extends TupleScheme<grantNamespacePermission_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, grantNamespacePermission_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetUser()) {
+          optionals.set(1);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(2);
+        }
+        if (struct.isSetPerm()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetUser()) {
+          oprot.writeString(struct.user);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetPerm()) {
+          oprot.writeI32(struct.perm.getValue());
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, grantNamespacePermission_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.user = iprot.readString();
+          struct.setUserIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.perm = org.apache.accumulo.proxy.thrift.NamespacePermission.findByValue(iprot.readI32());
+          struct.setPermIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class grantNamespacePermission_result implements org.apache.thrift.TBase<grantNamespacePermission_result, grantNamespacePermission_result._Fields>, java.io.Serializable, Cloneable, Comparable<grantNamespacePermission_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("grantNamespacePermission_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new grantNamespacePermission_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new grantNamespacePermission_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(grantNamespacePermission_result.class, metaDataMap);
+    }
+
+    public grantNamespacePermission_result() {
+    }
+
+    public grantNamespacePermission_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public grantNamespacePermission_result(grantNamespacePermission_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+    }
+
+    public grantNamespacePermission_result deepCopy() {
+      return new grantNamespacePermission_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public grantNamespacePermission_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public grantNamespacePermission_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof grantNamespacePermission_result)
+        return this.equals((grantNamespacePermission_result)that);
+      return false;
+    }
+
+    public boolean equals(grantNamespacePermission_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(grantNamespacePermission_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("grantNamespacePermission_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class grantNamespacePermission_resultStandardSchemeFactory implements SchemeFactory {
+      public grantNamespacePermission_resultStandardScheme getScheme() {
+        return new grantNamespacePermission_resultStandardScheme();
+      }
+    }
+
+    private static class grantNamespacePermission_resultStandardScheme extends StandardScheme<grantNamespacePermission_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, grantNamespacePermission_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, grantNamespacePermission_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class grantNamespacePermission_resultTupleSchemeFactory implements SchemeFactory {
+      public grantNamespacePermission_resultTupleScheme getScheme() {
+        return new grantNamespacePermission_resultTupleScheme();
+      }
+    }
+
+    private static class grantNamespacePermission_resultTupleScheme extends TupleScheme<grantNamespacePermission_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, grantNamespacePermission_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, grantNamespacePermission_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class hasNamespacePermission_args implements org.apache.thrift.TBase<hasNamespacePermission_args, hasNamespacePermission_args._Fields>, java.io.Serializable, Cloneable, Comparable<hasNamespacePermission_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("hasNamespacePermission_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField USER_FIELD_DESC = new org.apache.thrift.protocol.TField("user", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)3);
+    private static final org.apache.thrift.protocol.TField PERM_FIELD_DESC = new org.apache.thrift.protocol.TField("perm", org.apache.thrift.protocol.TType.I32, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new hasNamespacePermission_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new hasNamespacePermission_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String user; // required
+    public String namespaceName; // required
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public NamespacePermission perm; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      USER((short)2, "user"),
+      NAMESPACE_NAME((short)3, "namespaceName"),
+      /**
+       * 
+       * @see NamespacePermission
+       */
+      PERM((short)4, "perm");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // USER
+            return USER;
+          case 3: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 4: // PERM
+            return PERM;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT,
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING, true)));
+      tmpMap.put(_Fields.USER, new org.apache.thrift.meta_data.FieldMetaData("user", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.PERM, new org.apache.thrift.meta_data.FieldMetaData("perm", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, NamespacePermission.class)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(hasNamespacePermission_args.class, metaDataMap);
+    }
+
+    public hasNamespacePermission_args() {
+    }
+
+    public hasNamespacePermission_args(
+      ByteBuffer login,
+      String user,
+      String namespaceName,
+      NamespacePermission perm)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.user = user;
+      this.namespaceName = namespaceName;
+      this.perm = perm;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public hasNamespacePermission_args(hasNamespacePermission_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetUser()) {
+        this.user = other.user;
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetPerm()) {
+        this.perm = other.perm;
+      }
+    }
+
+    public hasNamespacePermission_args deepCopy() {
+      return new hasNamespacePermission_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.user = null;
+      this.namespaceName = null;
+      this.perm = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public hasNamespacePermission_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public hasNamespacePermission_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getUser() {
+      return this.user;
+    }
+
+    public hasNamespacePermission_args setUser(String user) {
+      this.user = user;
+      return this;
+    }
+
+    public void unsetUser() {
+      this.user = null;
+    }
+
+    /** Returns true if field user is set (has been assigned a value) and false otherwise */
+    public boolean isSetUser() {
+      return this.user != null;
+    }
+
+    public void setUserIsSet(boolean value) {
+      if (!value) {
+        this.user = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public hasNamespacePermission_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public NamespacePermission getPerm() {
+      return this.perm;
+    }
+
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public hasNamespacePermission_args setPerm(NamespacePermission perm) {
+      this.perm = perm;
+      return this;
+    }
+
+    public void unsetPerm() {
+      this.perm = null;
+    }
+
+    /** Returns true if field perm is set (has been assigned a value) and false otherwise */
+    public boolean isSetPerm() {
+      return this.perm != null;
+    }
+
+    public void setPermIsSet(boolean value) {
+      if (!value) {
+        this.perm = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case USER:
+        if (value == null) {
+          unsetUser();
+        } else {
+          setUser((String)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case PERM:
+        if (value == null) {
+          unsetPerm();
+        } else {
+          setPerm((NamespacePermission)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case USER:
+        return getUser();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case PERM:
+        return getPerm();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case USER:
+        return isSetUser();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case PERM:
+        return isSetPerm();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof hasNamespacePermission_args)
+        return this.equals((hasNamespacePermission_args)that);
+      return false;
+    }
+
+    public boolean equals(hasNamespacePermission_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_user = true && this.isSetUser();
+      boolean that_present_user = true && that.isSetUser();
+      if (this_present_user || that_present_user) {
+        if (!(this_present_user && that_present_user))
+          return false;
+        if (!this.user.equals(that.user))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_perm = true && this.isSetPerm();
+      boolean that_present_perm = true && that.isSetPerm();
+      if (this_present_perm || that_present_perm) {
+        if (!(this_present_perm && that_present_perm))
+          return false;
+        if (!this.perm.equals(that.perm))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(hasNamespacePermission_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetUser()).compareTo(other.isSetUser());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetUser()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.user, other.user);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetPerm()).compareTo(other.isSetPerm());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetPerm()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.perm, other.perm);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("hasNamespacePermission_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("user:");
+      if (this.user == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.user);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("perm:");
+      if (this.perm == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.perm);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class hasNamespacePermission_argsStandardSchemeFactory implements SchemeFactory {
+      public hasNamespacePermission_argsStandardScheme getScheme() {
+        return new hasNamespacePermission_argsStandardScheme();
+      }
+    }
+
+    private static class hasNamespacePermission_argsStandardScheme extends StandardScheme<hasNamespacePermission_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, hasNamespacePermission_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // USER
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.user = iprot.readString();
+                struct.setUserIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // PERM
+              if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+                struct.perm = org.apache.accumulo.proxy.thrift.NamespacePermission.findByValue(iprot.readI32());
+                struct.setPermIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, hasNamespacePermission_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.user != null) {
+          oprot.writeFieldBegin(USER_FIELD_DESC);
+          oprot.writeString(struct.user);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.perm != null) {
+          oprot.writeFieldBegin(PERM_FIELD_DESC);
+          oprot.writeI32(struct.perm.getValue());
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class hasNamespacePermission_argsTupleSchemeFactory implements SchemeFactory {
+      public hasNamespacePermission_argsTupleScheme getScheme() {
+        return new hasNamespacePermission_argsTupleScheme();
+      }
+    }
+
+    private static class hasNamespacePermission_argsTupleScheme extends TupleScheme<hasNamespacePermission_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, hasNamespacePermission_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetUser()) {
+          optionals.set(1);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(2);
+        }
+        if (struct.isSetPerm()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetUser()) {
+          oprot.writeString(struct.user);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetPerm()) {
+          oprot.writeI32(struct.perm.getValue());
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, hasNamespacePermission_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.user = iprot.readString();
+          struct.setUserIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.perm = org.apache.accumulo.proxy.thrift.NamespacePermission.findByValue(iprot.readI32());
+          struct.setPermIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class hasNamespacePermission_result implements org.apache.thrift.TBase<hasNamespacePermission_result, hasNamespacePermission_result._Fields>, java.io.Serializable, Cloneable, Comparable<hasNamespacePermission_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("hasNamespacePermission_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.BOOL, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new hasNamespacePermission_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new hasNamespacePermission_resultTupleSchemeFactory());
+    }
+
+    public boolean success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private byte __isset_bitfield = 0;
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.BOOL)));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(hasNamespacePermission_result.class, metaDataMap);
+    }
+
+    public hasNamespacePermission_result() {
+    }
+
+    public hasNamespacePermission_result(
+      boolean success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public hasNamespacePermission_result(hasNamespacePermission_result other) {
+      __isset_bitfield = other.__isset_bitfield;
+      this.success = other.success;
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+    }
+
+    public hasNamespacePermission_result deepCopy() {
+      return new hasNamespacePermission_result(this);
+    }
+
+    @Override
+    public void clear() {
+      setSuccessIsSet(false);
+      this.success = false;
+      this.ouch1 = null;
+      this.ouch2 = null;
+    }
+
+    public boolean isSuccess() {
+      return this.success;
+    }
+
+    public hasNamespacePermission_result setSuccess(boolean success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return EncodingUtils.testBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __SUCCESS_ISSET_ID, value);
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public hasNamespacePermission_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public hasNamespacePermission_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Boolean)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof hasNamespacePermission_result)
+        return this.equals((hasNamespacePermission_result)that);
+      return false;
+    }
+
+    public boolean equals(hasNamespacePermission_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(hasNamespacePermission_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("hasNamespacePermission_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        // it doesn't seem like you should have to do this, but java serialization is wacky, and doesn't call the default constructor.
+        __isset_bitfield = 0;
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class hasNamespacePermission_resultStandardSchemeFactory implements SchemeFactory {
+      public hasNamespacePermission_resultStandardScheme getScheme() {
+        return new hasNamespacePermission_resultStandardScheme();
+      }
+    }
+
+    private static class hasNamespacePermission_resultStandardScheme extends StandardScheme<hasNamespacePermission_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, hasNamespacePermission_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.BOOL) {
+                struct.success = iprot.readBool();
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, hasNamespacePermission_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.isSetSuccess()) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          oprot.writeBool(struct.success);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class hasNamespacePermission_resultTupleSchemeFactory implements SchemeFactory {
+      public hasNamespacePermission_resultTupleScheme getScheme() {
+        return new hasNamespacePermission_resultTupleScheme();
+      }
+    }
+
+    private static class hasNamespacePermission_resultTupleScheme extends TupleScheme<hasNamespacePermission_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, hasNamespacePermission_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetSuccess()) {
+          oprot.writeBool(struct.success);
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, hasNamespacePermission_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.success = iprot.readBool();
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class revokeNamespacePermission_args implements org.apache.thrift.TBase<revokeNamespacePermission_args, revokeNamespacePermission_args._Fields>, java.io.Serializable, Cloneable, Comparable<revokeNamespacePermission_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("revokeNamespacePermission_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField USER_FIELD_DESC = new org.apache.thrift.protocol.TField("user", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)3);
+    private static final org.apache.thrift.protocol.TField PERM_FIELD_DESC = new org.apache.thrift.protocol.TField("perm", org.apache.thrift.protocol.TType.I32, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new revokeNamespacePermission_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new revokeNamespacePermission_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String user; // required
+    public String namespaceName; // required
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public NamespacePermission perm; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      USER((short)2, "user"),
+      NAMESPACE_NAME((short)3, "namespaceName"),
+      /**
+       * 
+       * @see NamespacePermission
+       */
+      PERM((short)4, "perm");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // USER
+            return USER;
+          case 3: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 4: // PERM
+            return PERM;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING, true)));
+      tmpMap.put(_Fields.USER, new org.apache.thrift.meta_data.FieldMetaData("user", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.PERM, new org.apache.thrift.meta_data.FieldMetaData("perm", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, NamespacePermission.class)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(revokeNamespacePermission_args.class, metaDataMap);
+    }
+
+    public revokeNamespacePermission_args() {
+    }
+
+    public revokeNamespacePermission_args(
+      ByteBuffer login,
+      String user,
+      String namespaceName,
+      NamespacePermission perm)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.user = user;
+      this.namespaceName = namespaceName;
+      this.perm = perm;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public revokeNamespacePermission_args(revokeNamespacePermission_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetUser()) {
+        this.user = other.user;
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetPerm()) {
+        this.perm = other.perm;
+      }
+    }
+
+    public revokeNamespacePermission_args deepCopy() {
+      return new revokeNamespacePermission_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.user = null;
+      this.namespaceName = null;
+      this.perm = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public revokeNamespacePermission_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public revokeNamespacePermission_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getUser() {
+      return this.user;
+    }
+
+    public revokeNamespacePermission_args setUser(String user) {
+      this.user = user;
+      return this;
+    }
+
+    public void unsetUser() {
+      this.user = null;
+    }
+
+    /** Returns true if field user is set (has been assigned a value) and false otherwise */
+    public boolean isSetUser() {
+      return this.user != null;
+    }
+
+    public void setUserIsSet(boolean value) {
+      if (!value) {
+        this.user = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public revokeNamespacePermission_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public NamespacePermission getPerm() {
+      return this.perm;
+    }
+
+    /**
+     * 
+     * @see NamespacePermission
+     */
+    public revokeNamespacePermission_args setPerm(NamespacePermission perm) {
+      this.perm = perm;
+      return this;
+    }
+
+    public void unsetPerm() {
+      this.perm = null;
+    }
+
+    /** Returns true if field perm is set (has been assigned a value) and false otherwise */
+    public boolean isSetPerm() {
+      return this.perm != null;
+    }
+
+    public void setPermIsSet(boolean value) {
+      if (!value) {
+        this.perm = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case USER:
+        if (value == null) {
+          unsetUser();
+        } else {
+          setUser((String)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case PERM:
+        if (value == null) {
+          unsetPerm();
+        } else {
+          setPerm((NamespacePermission)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case USER:
+        return getUser();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case PERM:
+        return getPerm();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case USER:
+        return isSetUser();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case PERM:
+        return isSetPerm();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof revokeNamespacePermission_args)
+        return this.equals((revokeNamespacePermission_args)that);
+      return false;
+    }
+
+    public boolean equals(revokeNamespacePermission_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_user = true && this.isSetUser();
+      boolean that_present_user = true && that.isSetUser();
+      if (this_present_user || that_present_user) {
+        if (!(this_present_user && that_present_user))
+          return false;
+        if (!this.user.equals(that.user))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_perm = true && this.isSetPerm();
+      boolean that_present_perm = true && that.isSetPerm();
+      if (this_present_perm || that_present_perm) {
+        if (!(this_present_perm && that_present_perm))
+          return false;
+        if (!this.perm.equals(that.perm))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_user = true && (isSetUser());
+      list.add(present_user);
+      if (present_user)
+        list.add(user);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_perm = true && (isSetPerm());
+      list.add(present_perm);
+      if (present_perm)
+        list.add(perm.getValue());
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(revokeNamespacePermission_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetUser()).compareTo(other.isSetUser());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetUser()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.user, other.user);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetPerm()).compareTo(other.isSetPerm());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetPerm()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.perm, other.perm);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("revokeNamespacePermission_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("user:");
+      if (this.user == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.user);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("perm:");
+      if (this.perm == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.perm);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class revokeNamespacePermission_argsStandardSchemeFactory implements SchemeFactory {
+      public revokeNamespacePermission_argsStandardScheme getScheme() {
+        return new revokeNamespacePermission_argsStandardScheme();
+      }
+    }
+
+    private static class revokeNamespacePermission_argsStandardScheme extends StandardScheme<revokeNamespacePermission_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, revokeNamespacePermission_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // USER
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.user = iprot.readString();
+                struct.setUserIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // PERM
+              if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+                struct.perm = org.apache.accumulo.proxy.thrift.NamespacePermission.findByValue(iprot.readI32());
+                struct.setPermIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, revokeNamespacePermission_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.user != null) {
+          oprot.writeFieldBegin(USER_FIELD_DESC);
+          oprot.writeString(struct.user);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.perm != null) {
+          oprot.writeFieldBegin(PERM_FIELD_DESC);
+          oprot.writeI32(struct.perm.getValue());
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class revokeNamespacePermission_argsTupleSchemeFactory implements SchemeFactory {
+      public revokeNamespacePermission_argsTupleScheme getScheme() {
+        return new revokeNamespacePermission_argsTupleScheme();
+      }
+    }
+
+    private static class revokeNamespacePermission_argsTupleScheme extends TupleScheme<revokeNamespacePermission_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, revokeNamespacePermission_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetUser()) {
+          optionals.set(1);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(2);
+        }
+        if (struct.isSetPerm()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetUser()) {
+          oprot.writeString(struct.user);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetPerm()) {
+          oprot.writeI32(struct.perm.getValue());
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, revokeNamespacePermission_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.user = iprot.readString();
+          struct.setUserIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.perm = org.apache.accumulo.proxy.thrift.NamespacePermission.findByValue(iprot.readI32());
+          struct.setPermIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class revokeNamespacePermission_result implements org.apache.thrift.TBase<revokeNamespacePermission_result, revokeNamespacePermission_result._Fields>, java.io.Serializable, Cloneable, Comparable<revokeNamespacePermission_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("revokeNamespacePermission_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new revokeNamespacePermission_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new revokeNamespacePermission_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(revokeNamespacePermission_result.class, metaDataMap);
+    }
+
+    public revokeNamespacePermission_result() {
+    }
+
+    public revokeNamespacePermission_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public revokeNamespacePermission_result(revokeNamespacePermission_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+    }
+
+    public revokeNamespacePermission_result deepCopy() {
+      return new revokeNamespacePermission_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public revokeNamespacePermission_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public revokeNamespacePermission_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof revokeNamespacePermission_result)
+        return this.equals((revokeNamespacePermission_result)that);
+      return false;
+    }
+
+    public boolean equals(revokeNamespacePermission_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(revokeNamespacePermission_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("revokeNamespacePermission_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class revokeNamespacePermission_resultStandardSchemeFactory implements SchemeFactory {
+      public revokeNamespacePermission_resultStandardScheme getScheme() {
+        return new revokeNamespacePermission_resultStandardScheme();
+      }
+    }
+
+    private static class revokeNamespacePermission_resultStandardScheme extends StandardScheme<revokeNamespacePermission_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, revokeNamespacePermission_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, revokeNamespacePermission_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class revokeNamespacePermission_resultTupleSchemeFactory implements SchemeFactory {
+      public revokeNamespacePermission_resultTupleScheme getScheme() {
+        return new revokeNamespacePermission_resultTupleScheme();
+      }
+    }
+
+    private static class revokeNamespacePermission_resultTupleScheme extends TupleScheme<revokeNamespacePermission_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, revokeNamespacePermission_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, revokeNamespacePermission_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+      }
+    }
+
+  }
+
   public static class createBatchScanner_args implements org.apache.thrift.TBase<createBatchScanner_args, createBatchScanner_args._Fields>, java.io.Serializable, Cloneable, Comparable<createBatchScanner_args>   {
     private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("createBatchScanner_args");
 
@@ -81738,7 +91118,7 @@
       BatchScanOptions options)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.options = options;
     }
@@ -81749,7 +91129,6 @@
     public createBatchScanner_args(createBatchScanner_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -81776,16 +91155,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public createBatchScanner_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public createBatchScanner_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -81958,7 +91337,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_options = true && (isSetOptions());
+      list.add(present_options);
+      if (present_options)
+        list.add(options);
+
+      return list.hashCode();
     }
 
     @Override
@@ -82576,7 +91972,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -82968,7 +92386,7 @@
       ScanOptions options)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.options = options;
     }
@@ -82979,7 +92397,6 @@
     public createScanner_args(createScanner_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -83006,16 +92423,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public createScanner_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public createScanner_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -83188,7 +92605,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_options = true && (isSetOptions());
+      list.add(present_options);
+      if (present_options)
+        list.add(options);
+
+      return list.hashCode();
     }
 
     @Override
@@ -83806,7 +93240,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -84289,7 +93745,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_scanner = true && (isSetScanner());
+      list.add(present_scanner);
+      if (present_scanner)
+        list.add(scanner);
+
+      return list.hashCode();
     }
 
     @Override
@@ -84644,7 +94107,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       case OUCH1:
         return getOuch1();
@@ -84704,7 +94167,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      return list.hashCode();
     }
 
     @Override
@@ -85099,7 +94574,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_scanner = true && (isSetScanner());
+      list.add(present_scanner);
+      if (present_scanner)
+        list.add(scanner);
+
+      return list.hashCode();
     }
 
     @Override
@@ -85630,7 +95112,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -86122,7 +95626,7 @@
         return getScanner();
 
       case K:
-        return Integer.valueOf(getK());
+        return getK();
 
       }
       throw new IllegalStateException();
@@ -86179,7 +95683,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_scanner = true && (isSetScanner());
+      list.add(present_scanner);
+      if (present_scanner)
+        list.add(scanner);
+
+      boolean present_k = true;
+      list.add(present_k);
+      if (present_k)
+        list.add(k);
+
+      return list.hashCode();
     }
 
     @Override
@@ -86747,7 +96263,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -87235,7 +96773,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_scanner = true && (isSetScanner());
+      list.add(present_scanner);
+      if (present_scanner)
+        list.add(scanner);
+
+      return list.hashCode();
     }
 
     @Override
@@ -87589,7 +97134,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      return list.hashCode();
     }
 
     @Override
@@ -87857,7 +97409,7 @@
       Map<ByteBuffer,List<ColumnUpdate>> cells)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.cells = cells;
     }
@@ -87868,7 +97420,6 @@
     public updateAndFlush_args(updateAndFlush_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -87881,7 +97432,6 @@
           List<ColumnUpdate> other_element_value = other_element.getValue();
 
           ByteBuffer __this__cells_copy_key = org.apache.thrift.TBaseHelper.copyBinary(other_element_key);
-;
 
           List<ColumnUpdate> __this__cells_copy_value = new ArrayList<ColumnUpdate>(other_element_value.size());
           for (ColumnUpdate other_element_value_element : other_element_value) {
@@ -87911,16 +97461,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public updateAndFlush_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public updateAndFlush_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -88104,7 +97654,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_cells = true && (isSetCells());
+      list.add(present_cells);
+      if (present_cells)
+        list.add(cells);
+
+      return list.hashCode();
     }
 
     @Override
@@ -88252,24 +97819,24 @@
                 {
                   org.apache.thrift.protocol.TMap _map442 = iprot.readMapBegin();
                   struct.cells = new HashMap<ByteBuffer,List<ColumnUpdate>>(2*_map442.size);
-                  for (int _i443 = 0; _i443 < _map442.size; ++_i443)
+                  ByteBuffer _key443;
+                  List<ColumnUpdate> _val444;
+                  for (int _i445 = 0; _i445 < _map442.size; ++_i445)
                   {
-                    ByteBuffer _key444;
-                    List<ColumnUpdate> _val445;
-                    _key444 = iprot.readBinary();
+                    _key443 = iprot.readBinary();
                     {
                       org.apache.thrift.protocol.TList _list446 = iprot.readListBegin();
-                      _val445 = new ArrayList<ColumnUpdate>(_list446.size);
-                      for (int _i447 = 0; _i447 < _list446.size; ++_i447)
+                      _val444 = new ArrayList<ColumnUpdate>(_list446.size);
+                      ColumnUpdate _elem447;
+                      for (int _i448 = 0; _i448 < _list446.size; ++_i448)
                       {
-                        ColumnUpdate _elem448;
-                        _elem448 = new ColumnUpdate();
-                        _elem448.read(iprot);
-                        _val445.add(_elem448);
+                        _elem447 = new ColumnUpdate();
+                        _elem447.read(iprot);
+                        _val444.add(_elem447);
                       }
                       iprot.readListEnd();
                     }
-                    struct.cells.put(_key444, _val445);
+                    struct.cells.put(_key443, _val444);
                   }
                   iprot.readMapEnd();
                 }
@@ -88391,23 +97958,23 @@
           {
             org.apache.thrift.protocol.TMap _map453 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
             struct.cells = new HashMap<ByteBuffer,List<ColumnUpdate>>(2*_map453.size);
-            for (int _i454 = 0; _i454 < _map453.size; ++_i454)
+            ByteBuffer _key454;
+            List<ColumnUpdate> _val455;
+            for (int _i456 = 0; _i456 < _map453.size; ++_i456)
             {
-              ByteBuffer _key455;
-              List<ColumnUpdate> _val456;
-              _key455 = iprot.readBinary();
+              _key454 = iprot.readBinary();
               {
                 org.apache.thrift.protocol.TList _list457 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-                _val456 = new ArrayList<ColumnUpdate>(_list457.size);
-                for (int _i458 = 0; _i458 < _list457.size; ++_i458)
+                _val455 = new ArrayList<ColumnUpdate>(_list457.size);
+                ColumnUpdate _elem458;
+                for (int _i459 = 0; _i459 < _list457.size; ++_i459)
                 {
-                  ColumnUpdate _elem459;
-                  _elem459 = new ColumnUpdate();
-                  _elem459.read(iprot);
-                  _val456.add(_elem459);
+                  _elem458 = new ColumnUpdate();
+                  _elem458.read(iprot);
+                  _val455.add(_elem458);
                 }
               }
-              struct.cells.put(_key455, _val456);
+              struct.cells.put(_key454, _val455);
             }
           }
           struct.setCellsIsSet(true);
@@ -88789,7 +98356,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_outch1 = true && (isSetOutch1());
+      list.add(present_outch1);
+      if (present_outch1)
+        list.add(outch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      boolean present_ouch4 = true && (isSetOuch4());
+      list.add(present_ouch4);
+      if (present_ouch4)
+        list.add(ouch4);
+
+      return list.hashCode();
     }
 
     @Override
@@ -89183,7 +98772,7 @@
       WriterOptions opts)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.opts = opts;
     }
@@ -89194,7 +98783,6 @@
     public createWriter_args(createWriter_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -89221,16 +98809,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public createWriter_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public createWriter_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -89403,7 +98991,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_opts = true && (isSetOpts());
+      list.add(present_opts);
+      if (present_opts)
+        list.add(opts);
+
+      return list.hashCode();
     }
 
     @Override
@@ -90021,7 +99626,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_outch1 = true && (isSetOutch1());
+      list.add(present_outch1);
+      if (present_outch1)
+        list.add(outch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -90427,7 +100054,6 @@
           List<ColumnUpdate> other_element_value = other_element.getValue();
 
           ByteBuffer __this__cells_copy_key = org.apache.thrift.TBaseHelper.copyBinary(other_element_key);
-;
 
           List<ColumnUpdate> __this__cells_copy_value = new ArrayList<ColumnUpdate>(other_element_value.size());
           for (ColumnUpdate other_element_value_element : other_element_value) {
@@ -90593,7 +100219,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_writer = true && (isSetWriter());
+      list.add(present_writer);
+      if (present_writer)
+        list.add(writer);
+
+      boolean present_cells = true && (isSetCells());
+      list.add(present_cells);
+      if (present_cells)
+        list.add(cells);
+
+      return list.hashCode();
     }
 
     @Override
@@ -90715,24 +100353,24 @@
                 {
                   org.apache.thrift.protocol.TMap _map460 = iprot.readMapBegin();
                   struct.cells = new HashMap<ByteBuffer,List<ColumnUpdate>>(2*_map460.size);
-                  for (int _i461 = 0; _i461 < _map460.size; ++_i461)
+                  ByteBuffer _key461;
+                  List<ColumnUpdate> _val462;
+                  for (int _i463 = 0; _i463 < _map460.size; ++_i463)
                   {
-                    ByteBuffer _key462;
-                    List<ColumnUpdate> _val463;
-                    _key462 = iprot.readBinary();
+                    _key461 = iprot.readBinary();
                     {
                       org.apache.thrift.protocol.TList _list464 = iprot.readListBegin();
-                      _val463 = new ArrayList<ColumnUpdate>(_list464.size);
-                      for (int _i465 = 0; _i465 < _list464.size; ++_i465)
+                      _val462 = new ArrayList<ColumnUpdate>(_list464.size);
+                      ColumnUpdate _elem465;
+                      for (int _i466 = 0; _i466 < _list464.size; ++_i466)
                       {
-                        ColumnUpdate _elem466;
-                        _elem466 = new ColumnUpdate();
-                        _elem466.read(iprot);
-                        _val463.add(_elem466);
+                        _elem465 = new ColumnUpdate();
+                        _elem465.read(iprot);
+                        _val462.add(_elem465);
                       }
                       iprot.readListEnd();
                     }
-                    struct.cells.put(_key462, _val463);
+                    struct.cells.put(_key461, _val462);
                   }
                   iprot.readMapEnd();
                 }
@@ -90839,23 +100477,23 @@
           {
             org.apache.thrift.protocol.TMap _map471 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.LIST, iprot.readI32());
             struct.cells = new HashMap<ByteBuffer,List<ColumnUpdate>>(2*_map471.size);
-            for (int _i472 = 0; _i472 < _map471.size; ++_i472)
+            ByteBuffer _key472;
+            List<ColumnUpdate> _val473;
+            for (int _i474 = 0; _i474 < _map471.size; ++_i474)
             {
-              ByteBuffer _key473;
-              List<ColumnUpdate> _val474;
-              _key473 = iprot.readBinary();
+              _key472 = iprot.readBinary();
               {
                 org.apache.thrift.protocol.TList _list475 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
-                _val474 = new ArrayList<ColumnUpdate>(_list475.size);
-                for (int _i476 = 0; _i476 < _list475.size; ++_i476)
+                _val473 = new ArrayList<ColumnUpdate>(_list475.size);
+                ColumnUpdate _elem476;
+                for (int _i477 = 0; _i477 < _list475.size; ++_i477)
                 {
-                  ColumnUpdate _elem477;
-                  _elem477 = new ColumnUpdate();
-                  _elem477.read(iprot);
-                  _val474.add(_elem477);
+                  _elem476 = new ColumnUpdate();
+                  _elem476.read(iprot);
+                  _val473.add(_elem476);
                 }
               }
-              struct.cells.put(_key473, _val474);
+              struct.cells.put(_key472, _val473);
             }
           }
           struct.setCellsIsSet(true);
@@ -91060,7 +100698,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_writer = true && (isSetWriter());
+      list.add(present_writer);
+      if (present_writer)
+        list.add(writer);
+
+      return list.hashCode();
     }
 
     @Override
@@ -91473,7 +101118,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -91872,7 +101529,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_writer = true && (isSetWriter());
+      list.add(present_writer);
+      if (present_writer)
+        list.add(writer);
+
+      return list.hashCode();
     }
 
     @Override
@@ -92285,7 +101949,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
     }
 
     @Override
@@ -92601,9 +102277,9 @@
       ConditionalUpdates updates)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
-      this.row = row;
+      this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
       this.updates = updates;
     }
 
@@ -92613,14 +102289,12 @@
     public updateRowConditionally_args(updateRowConditionally_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
       }
       if (other.isSetRow()) {
         this.row = org.apache.thrift.TBaseHelper.copyBinary(other.row);
-;
       }
       if (other.isSetUpdates()) {
         this.updates = new ConditionalUpdates(other.updates);
@@ -92645,16 +102319,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public updateRowConditionally_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public updateRowConditionally_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -92703,16 +102377,16 @@
     }
 
     public ByteBuffer bufferForRow() {
-      return row;
+      return org.apache.thrift.TBaseHelper.copyBinary(row);
     }
 
     public updateRowConditionally_args setRow(byte[] row) {
-      setRow(row == null ? (ByteBuffer)null : ByteBuffer.wrap(row));
+      this.row = row == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(row, row.length));
       return this;
     }
 
     public updateRowConditionally_args setRow(ByteBuffer row) {
-      this.row = row;
+      this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
       return this;
     }
 
@@ -92883,7 +102557,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_row = true && (isSetRow());
+      list.add(present_row);
+      if (present_row)
+        list.add(row);
+
+      boolean present_updates = true && (isSetUpdates());
+      list.add(present_updates);
+      if (present_updates)
+        list.add(updates);
+
+      return list.hashCode();
     }
 
     @Override
@@ -93558,7 +103254,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success.getValue());
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -93705,7 +103423,7 @@
           switch (schemeField.id) {
             case 0: // SUCCESS
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.success = ConditionalStatus.findByValue(iprot.readI32());
+                struct.success = org.apache.accumulo.proxy.thrift.ConditionalStatus.findByValue(iprot.readI32());
                 struct.setSuccessIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -93823,7 +103541,7 @@
         TTupleProtocol iprot = (TTupleProtocol) prot;
         BitSet incoming = iprot.readBitSet(4);
         if (incoming.get(0)) {
-          struct.success = ConditionalStatus.findByValue(iprot.readI32());
+          struct.success = org.apache.accumulo.proxy.thrift.ConditionalStatus.findByValue(iprot.readI32());
           struct.setSuccessIsSet(true);
         }
         if (incoming.get(1)) {
@@ -93950,7 +103668,7 @@
       ConditionalWriterOptions options)
     {
       this();
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       this.tableName = tableName;
       this.options = options;
     }
@@ -93961,7 +103679,6 @@
     public createConditionalWriter_args(createConditionalWriter_args other) {
       if (other.isSetLogin()) {
         this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
-;
       }
       if (other.isSetTableName()) {
         this.tableName = other.tableName;
@@ -93988,16 +103705,16 @@
     }
 
     public ByteBuffer bufferForLogin() {
-      return login;
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
     }
 
     public createConditionalWriter_args setLogin(byte[] login) {
-      setLogin(login == null ? (ByteBuffer)null : ByteBuffer.wrap(login));
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
       return this;
     }
 
     public createConditionalWriter_args setLogin(ByteBuffer login) {
-      this.login = login;
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
       return this;
     }
 
@@ -94170,7 +103887,24 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_tableName = true && (isSetTableName());
+      list.add(present_tableName);
+      if (present_tableName)
+        list.add(tableName);
+
+      boolean present_options = true && (isSetOptions());
+      list.add(present_options);
+      if (present_options)
+        list.add(options);
+
+      return list.hashCode();
     }
 
     @Override
@@ -94788,7 +104522,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -95193,7 +104949,6 @@
           ConditionalUpdates other_element_value = other_element.getValue();
 
           ByteBuffer __this__updates_copy_key = org.apache.thrift.TBaseHelper.copyBinary(other_element_key);
-;
 
           ConditionalUpdates __this__updates_copy_value = new ConditionalUpdates(other_element_value);
 
@@ -95356,7 +105111,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_conditionalWriter = true && (isSetConditionalWriter());
+      list.add(present_conditionalWriter);
+      if (present_conditionalWriter)
+        list.add(conditionalWriter);
+
+      boolean present_updates = true && (isSetUpdates());
+      list.add(present_updates);
+      if (present_updates)
+        list.add(updates);
+
+      return list.hashCode();
     }
 
     @Override
@@ -95478,14 +105245,14 @@
                 {
                   org.apache.thrift.protocol.TMap _map478 = iprot.readMapBegin();
                   struct.updates = new HashMap<ByteBuffer,ConditionalUpdates>(2*_map478.size);
-                  for (int _i479 = 0; _i479 < _map478.size; ++_i479)
+                  ByteBuffer _key479;
+                  ConditionalUpdates _val480;
+                  for (int _i481 = 0; _i481 < _map478.size; ++_i481)
                   {
-                    ByteBuffer _key480;
-                    ConditionalUpdates _val481;
-                    _key480 = iprot.readBinary();
-                    _val481 = new ConditionalUpdates();
-                    _val481.read(iprot);
-                    struct.updates.put(_key480, _val481);
+                    _key479 = iprot.readBinary();
+                    _val480 = new ConditionalUpdates();
+                    _val480.read(iprot);
+                    struct.updates.put(_key479, _val480);
                   }
                   iprot.readMapEnd();
                 }
@@ -95579,14 +105346,14 @@
           {
             org.apache.thrift.protocol.TMap _map484 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
             struct.updates = new HashMap<ByteBuffer,ConditionalUpdates>(2*_map484.size);
-            for (int _i485 = 0; _i485 < _map484.size; ++_i485)
+            ByteBuffer _key485;
+            ConditionalUpdates _val486;
+            for (int _i487 = 0; _i487 < _map484.size; ++_i487)
             {
-              ByteBuffer _key486;
-              ConditionalUpdates _val487;
-              _key486 = iprot.readBinary();
-              _val487 = new ConditionalUpdates();
-              _val487.read(iprot);
-              struct.updates.put(_key486, _val487);
+              _key485 = iprot.readBinary();
+              _val486 = new ConditionalUpdates();
+              _val486.read(iprot);
+              struct.updates.put(_key485, _val486);
             }
           }
           struct.setUpdatesIsSet(true);
@@ -95728,7 +105495,6 @@
           ConditionalStatus other_element_value = other_element.getValue();
 
           ByteBuffer __this__success_copy_key = org.apache.thrift.TBaseHelper.copyBinary(other_element_key);
-;
 
           ConditionalStatus __this__success_copy_value = other_element_value;
 
@@ -95994,7 +105760,29 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
     }
 
     @Override
@@ -96144,13 +105932,13 @@
                 {
                   org.apache.thrift.protocol.TMap _map488 = iprot.readMapBegin();
                   struct.success = new HashMap<ByteBuffer,ConditionalStatus>(2*_map488.size);
-                  for (int _i489 = 0; _i489 < _map488.size; ++_i489)
+                  ByteBuffer _key489;
+                  ConditionalStatus _val490;
+                  for (int _i491 = 0; _i491 < _map488.size; ++_i491)
                   {
-                    ByteBuffer _key490;
-                    ConditionalStatus _val491;
-                    _key490 = iprot.readBinary();
-                    _val491 = ConditionalStatus.findByValue(iprot.readI32());
-                    struct.success.put(_key490, _val491);
+                    _key489 = iprot.readBinary();
+                    _val490 = org.apache.accumulo.proxy.thrift.ConditionalStatus.findByValue(iprot.readI32());
+                    struct.success.put(_key489, _val490);
                   }
                   iprot.readMapEnd();
                 }
@@ -96289,13 +106077,13 @@
           {
             org.apache.thrift.protocol.TMap _map494 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
             struct.success = new HashMap<ByteBuffer,ConditionalStatus>(2*_map494.size);
-            for (int _i495 = 0; _i495 < _map494.size; ++_i495)
+            ByteBuffer _key495;
+            ConditionalStatus _val496;
+            for (int _i497 = 0; _i497 < _map494.size; ++_i497)
             {
-              ByteBuffer _key496;
-              ConditionalStatus _val497;
-              _key496 = iprot.readBinary();
-              _val497 = ConditionalStatus.findByValue(iprot.readI32());
-              struct.success.put(_key496, _val497);
+              _key495 = iprot.readBinary();
+              _val496 = org.apache.accumulo.proxy.thrift.ConditionalStatus.findByValue(iprot.readI32());
+              struct.success.put(_key495, _val496);
             }
           }
           struct.setSuccessIsSet(true);
@@ -96515,7 +106303,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_conditionalWriter = true && (isSetConditionalWriter());
+      list.add(present_conditionalWriter);
+      if (present_conditionalWriter)
+        list.add(conditionalWriter);
+
+      return list.hashCode();
     }
 
     @Override
@@ -96804,7 +106599,9 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
     }
 
     @Override
@@ -97008,7 +106805,7 @@
       ByteBuffer row)
     {
       this();
-      this.row = row;
+      this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
     }
 
     /**
@@ -97017,7 +106814,6 @@
     public getRowRange_args(getRowRange_args other) {
       if (other.isSetRow()) {
         this.row = org.apache.thrift.TBaseHelper.copyBinary(other.row);
-;
       }
     }
 
@@ -97036,16 +106832,16 @@
     }
 
     public ByteBuffer bufferForRow() {
-      return row;
+      return org.apache.thrift.TBaseHelper.copyBinary(row);
     }
 
     public getRowRange_args setRow(byte[] row) {
-      setRow(row == null ? (ByteBuffer)null : ByteBuffer.wrap(row));
+      this.row = row == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(row, row.length));
       return this;
     }
 
     public getRowRange_args setRow(ByteBuffer row) {
-      this.row = row;
+      this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
       return this;
     }
 
@@ -97126,7 +106922,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_row = true && (isSetRow());
+      list.add(present_row);
+      if (present_row)
+        list.add(row);
+
+      return list.hashCode();
     }
 
     @Override
@@ -97480,7 +107283,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -97914,7 +107724,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_key = true && (isSetKey());
+      list.add(present_key);
+      if (present_key)
+        list.add(key);
+
+      boolean present_part = true && (isSetPart());
+      list.add(present_part);
+      if (present_part)
+        list.add(part.getValue());
+
+      return list.hashCode();
     }
 
     @Override
@@ -98037,7 +107859,7 @@
               break;
             case 2: // PART
               if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-                struct.part = PartialKey.findByValue(iprot.readI32());
+                struct.part = org.apache.accumulo.proxy.thrift.PartialKey.findByValue(iprot.readI32());
                 struct.setPartIsSet(true);
               } else { 
                 org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -98111,7 +107933,7 @@
           struct.setKeyIsSet(true);
         }
         if (incoming.get(1)) {
-          struct.part = PartialKey.findByValue(iprot.readI32());
+          struct.part = org.apache.accumulo.proxy.thrift.PartialKey.findByValue(iprot.readI32());
           struct.setPartIsSet(true);
         }
       }
@@ -98314,7 +108136,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
@@ -98478,4 +108307,22973 @@
 
   }
 
+  public static class systemNamespace_args implements org.apache.thrift.TBase<systemNamespace_args, systemNamespace_args._Fields>, java.io.Serializable, Cloneable, Comparable<systemNamespace_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("systemNamespace_args");
+
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new systemNamespace_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new systemNamespace_argsTupleSchemeFactory());
+    }
+
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+;
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if its not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if its not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(systemNamespace_args.class, metaDataMap);
+    }
+
+    public systemNamespace_args() {
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public systemNamespace_args(systemNamespace_args other) {
+    }
+
+    public systemNamespace_args deepCopy() {
+      return new systemNamespace_args(this);
+    }
+
+    @Override
+    public void clear() {
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof systemNamespace_args)
+        return this.equals((systemNamespace_args)that);
+      return false;
+    }
+
+    public boolean equals(systemNamespace_args that) {
+      if (that == null)
+        return false;
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(systemNamespace_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("systemNamespace_args(");
+      boolean first = true;
+
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class systemNamespace_argsStandardSchemeFactory implements SchemeFactory {
+      public systemNamespace_argsStandardScheme getScheme() {
+        return new systemNamespace_argsStandardScheme();
+      }
+    }
+
+    private static class systemNamespace_argsStandardScheme extends StandardScheme<systemNamespace_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, systemNamespace_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, systemNamespace_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class systemNamespace_argsTupleSchemeFactory implements SchemeFactory {
+      public systemNamespace_argsTupleScheme getScheme() {
+        return new systemNamespace_argsTupleScheme();
+      }
+    }
+
+    private static class systemNamespace_argsTupleScheme extends TupleScheme<systemNamespace_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, systemNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, systemNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+      }
+    }
+
+  }
+
+  public static class systemNamespace_result implements org.apache.thrift.TBase<systemNamespace_result, systemNamespace_result._Fields>, java.io.Serializable, Cloneable, Comparable<systemNamespace_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("systemNamespace_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.STRING, (short)0);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new systemNamespace_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new systemNamespace_resultTupleSchemeFactory());
+    }
+
+    public String success; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if its not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if its not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(systemNamespace_result.class, metaDataMap);
+    }
+
+    public systemNamespace_result() {
+    }
+
+    public systemNamespace_result(
+      String success)
+    {
+      this();
+      this.success = success;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public systemNamespace_result(systemNamespace_result other) {
+      if (other.isSetSuccess()) {
+        this.success = other.success;
+      }
+    }
+
+    public systemNamespace_result deepCopy() {
+      return new systemNamespace_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+    }
+
+    public String getSuccess() {
+      return this.success;
+    }
+
+    public systemNamespace_result setSuccess(String success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof systemNamespace_result)
+        return this.equals((systemNamespace_result)that);
+      return false;
+    }
+
+    public boolean equals(systemNamespace_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(systemNamespace_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("systemNamespace_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class systemNamespace_resultStandardSchemeFactory implements SchemeFactory {
+      public systemNamespace_resultStandardScheme getScheme() {
+        return new systemNamespace_resultStandardScheme();
+      }
+    }
+
+    private static class systemNamespace_resultStandardScheme extends StandardScheme<systemNamespace_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, systemNamespace_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.success = iprot.readString();
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, systemNamespace_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          oprot.writeString(struct.success);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class systemNamespace_resultTupleSchemeFactory implements SchemeFactory {
+      public systemNamespace_resultTupleScheme getScheme() {
+        return new systemNamespace_resultTupleScheme();
+      }
+    }
+
+    private static class systemNamespace_resultTupleScheme extends TupleScheme<systemNamespace_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, systemNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        oprot.writeBitSet(optionals, 1);
+        if (struct.isSetSuccess()) {
+          oprot.writeString(struct.success);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, systemNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(1);
+        if (incoming.get(0)) {
+          struct.success = iprot.readString();
+          struct.setSuccessIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class defaultNamespace_args implements org.apache.thrift.TBase<defaultNamespace_args, defaultNamespace_args._Fields>, java.io.Serializable, Cloneable, Comparable<defaultNamespace_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("defaultNamespace_args");
+
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new defaultNamespace_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new defaultNamespace_argsTupleSchemeFactory());
+    }
+
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+;
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(defaultNamespace_args.class, metaDataMap);
+    }
+
+    public defaultNamespace_args() {
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public defaultNamespace_args(defaultNamespace_args other) {
+    }
+
+    public defaultNamespace_args deepCopy() {
+      return new defaultNamespace_args(this);
+    }
+
+    @Override
+    public void clear() {
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof defaultNamespace_args)
+        return this.equals((defaultNamespace_args)that);
+      return false;
+    }
+
+    public boolean equals(defaultNamespace_args that) {
+      if (that == null)
+        return false;
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(defaultNamespace_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("defaultNamespace_args(");
+      boolean first = true;
+
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class defaultNamespace_argsStandardSchemeFactory implements SchemeFactory {
+      public defaultNamespace_argsStandardScheme getScheme() {
+        return new defaultNamespace_argsStandardScheme();
+      }
+    }
+
+    private static class defaultNamespace_argsStandardScheme extends StandardScheme<defaultNamespace_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, defaultNamespace_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, defaultNamespace_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class defaultNamespace_argsTupleSchemeFactory implements SchemeFactory {
+      public defaultNamespace_argsTupleScheme getScheme() {
+        return new defaultNamespace_argsTupleScheme();
+      }
+    }
+
+    private static class defaultNamespace_argsTupleScheme extends TupleScheme<defaultNamespace_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, defaultNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, defaultNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+      }
+    }
+
+  }
+
+  public static class defaultNamespace_result implements org.apache.thrift.TBase<defaultNamespace_result, defaultNamespace_result._Fields>, java.io.Serializable, Cloneable, Comparable<defaultNamespace_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("defaultNamespace_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.STRING, (short)0);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new defaultNamespace_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new defaultNamespace_resultTupleSchemeFactory());
+    }
+
+    public String success; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(defaultNamespace_result.class, metaDataMap);
+    }
+
+    public defaultNamespace_result() {
+    }
+
+    public defaultNamespace_result(
+      String success)
+    {
+      this();
+      this.success = success;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public defaultNamespace_result(defaultNamespace_result other) {
+      if (other.isSetSuccess()) {
+        this.success = other.success;
+      }
+    }
+
+    public defaultNamespace_result deepCopy() {
+      return new defaultNamespace_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+    }
+
+    public String getSuccess() {
+      return this.success;
+    }
+
+    public defaultNamespace_result setSuccess(String success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof defaultNamespace_result)
+        return this.equals((defaultNamespace_result)that);
+      return false;
+    }
+
+    public boolean equals(defaultNamespace_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(defaultNamespace_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("defaultNamespace_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class defaultNamespace_resultStandardSchemeFactory implements SchemeFactory {
+      public defaultNamespace_resultStandardScheme getScheme() {
+        return new defaultNamespace_resultStandardScheme();
+      }
+    }
+
+    private static class defaultNamespace_resultStandardScheme extends StandardScheme<defaultNamespace_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, defaultNamespace_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.success = iprot.readString();
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, defaultNamespace_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          oprot.writeString(struct.success);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class defaultNamespace_resultTupleSchemeFactory implements SchemeFactory {
+      public defaultNamespace_resultTupleScheme getScheme() {
+        return new defaultNamespace_resultTupleScheme();
+      }
+    }
+
+    private static class defaultNamespace_resultTupleScheme extends TupleScheme<defaultNamespace_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, defaultNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        oprot.writeBitSet(optionals, 1);
+        if (struct.isSetSuccess()) {
+          oprot.writeString(struct.success);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, defaultNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(1);
+        if (incoming.get(0)) {
+          struct.success = iprot.readString();
+          struct.setSuccessIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class listNamespaces_args implements org.apache.thrift.TBase<listNamespaces_args, listNamespaces_args._Fields>, java.io.Serializable, Cloneable, Comparable<listNamespaces_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("listNamespaces_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new listNamespaces_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new listNamespaces_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(listNamespaces_args.class, metaDataMap);
+    }
+
+    public listNamespaces_args() {
+    }
+
+    public listNamespaces_args(
+      ByteBuffer login)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public listNamespaces_args(listNamespaces_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+    }
+
+    public listNamespaces_args deepCopy() {
+      return new listNamespaces_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public listNamespaces_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public listNamespaces_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof listNamespaces_args)
+        return this.equals((listNamespaces_args)that);
+      return false;
+    }
+
+    public boolean equals(listNamespaces_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(listNamespaces_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("listNamespaces_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class listNamespaces_argsStandardSchemeFactory implements SchemeFactory {
+      public listNamespaces_argsStandardScheme getScheme() {
+        return new listNamespaces_argsStandardScheme();
+      }
+    }
+
+    private static class listNamespaces_argsStandardScheme extends StandardScheme<listNamespaces_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, listNamespaces_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, listNamespaces_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class listNamespaces_argsTupleSchemeFactory implements SchemeFactory {
+      public listNamespaces_argsTupleScheme getScheme() {
+        return new listNamespaces_argsTupleScheme();
+      }
+    }
+
+    private static class listNamespaces_argsTupleScheme extends TupleScheme<listNamespaces_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, listNamespaces_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        oprot.writeBitSet(optionals, 1);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, listNamespaces_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(1);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class listNamespaces_result implements org.apache.thrift.TBase<listNamespaces_result, listNamespaces_result._Fields>, java.io.Serializable, Cloneable, Comparable<listNamespaces_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("listNamespaces_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.LIST, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new listNamespaces_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new listNamespaces_resultTupleSchemeFactory());
+    }
+
+    public List<String> success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.ListMetaData(org.apache.thrift.protocol.TType.LIST, 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING))));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(listNamespaces_result.class, metaDataMap);
+    }
+
+    public listNamespaces_result() {
+    }
+
+    public listNamespaces_result(
+      List<String> success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2)
+    {
+      this();
+      this.success = success;
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public listNamespaces_result(listNamespaces_result other) {
+      if (other.isSetSuccess()) {
+        List<String> __this__success = new ArrayList<String>(other.success);
+        this.success = __this__success;
+      }
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+    }
+
+    public listNamespaces_result deepCopy() {
+      return new listNamespaces_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+      this.ouch1 = null;
+      this.ouch2 = null;
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<String> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(String elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<String>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<String> getSuccess() {
+      return this.success;
+    }
+
+    public listNamespaces_result setSuccess(List<String> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public listNamespaces_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public listNamespaces_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<String>)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof listNamespaces_result)
+        return this.equals((listNamespaces_result)that);
+      return false;
+    }
+
+    public boolean equals(listNamespaces_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(listNamespaces_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("listNamespaces_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class listNamespaces_resultStandardSchemeFactory implements SchemeFactory {
+      public listNamespaces_resultStandardScheme getScheme() {
+        return new listNamespaces_resultStandardScheme();
+      }
+    }
+
+    private static class listNamespaces_resultStandardScheme extends StandardScheme<listNamespaces_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, listNamespaces_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.LIST) {
+                {
+                  org.apache.thrift.protocol.TList _list498 = iprot.readListBegin();
+                  struct.success = new ArrayList<String>(_list498.size);
+                  String _elem499;
+                  for (int _i500 = 0; _i500 < _list498.size; ++_i500)
+                  {
+                    _elem499 = iprot.readString();
+                    struct.success.add(_elem499);
+                  }
+                  iprot.readListEnd();
+                }
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, listNamespaces_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          {
+            oprot.writeListBegin(new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, struct.success.size()));
+            for (String _iter501 : struct.success)
+            {
+              oprot.writeString(_iter501);
+            }
+            oprot.writeListEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class listNamespaces_resultTupleSchemeFactory implements SchemeFactory {
+      public listNamespaces_resultTupleScheme getScheme() {
+        return new listNamespaces_resultTupleScheme();
+      }
+    }
+
+    private static class listNamespaces_resultTupleScheme extends TupleScheme<listNamespaces_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, listNamespaces_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetSuccess()) {
+          {
+            oprot.writeI32(struct.success.size());
+            for (String _iter502 : struct.success)
+            {
+              oprot.writeString(_iter502);
+            }
+          }
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, listNamespaces_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          {
+            org.apache.thrift.protocol.TList _list503 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.success = new ArrayList<String>(_list503.size);
+            String _elem504;
+            for (int _i505 = 0; _i505 < _list503.size; ++_i505)
+            {
+              _elem504 = iprot.readString();
+              struct.success.add(_elem504);
+            }
+          }
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class namespaceExists_args implements org.apache.thrift.TBase<namespaceExists_args, namespaceExists_args._Fields>, java.io.Serializable, Cloneable, Comparable<namespaceExists_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("namespaceExists_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new namespaceExists_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new namespaceExists_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(namespaceExists_args.class, metaDataMap);
+    }
+
+    public namespaceExists_args() {
+    }
+
+    public namespaceExists_args(
+      ByteBuffer login,
+      String namespaceName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public namespaceExists_args(namespaceExists_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+    }
+
+    public namespaceExists_args deepCopy() {
+      return new namespaceExists_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public namespaceExists_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public namespaceExists_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public namespaceExists_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof namespaceExists_args)
+        return this.equals((namespaceExists_args)that);
+      return false;
+    }
+
+    public boolean equals(namespaceExists_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(namespaceExists_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("namespaceExists_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class namespaceExists_argsStandardSchemeFactory implements SchemeFactory {
+      public namespaceExists_argsStandardScheme getScheme() {
+        return new namespaceExists_argsStandardScheme();
+      }
+    }
+
+    private static class namespaceExists_argsStandardScheme extends StandardScheme<namespaceExists_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, namespaceExists_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, namespaceExists_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class namespaceExists_argsTupleSchemeFactory implements SchemeFactory {
+      public namespaceExists_argsTupleScheme getScheme() {
+        return new namespaceExists_argsTupleScheme();
+      }
+    }
+
+    private static class namespaceExists_argsTupleScheme extends TupleScheme<namespaceExists_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, namespaceExists_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, namespaceExists_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class namespaceExists_result implements org.apache.thrift.TBase<namespaceExists_result, namespaceExists_result._Fields>, java.io.Serializable, Cloneable, Comparable<namespaceExists_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("namespaceExists_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.BOOL, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new namespaceExists_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new namespaceExists_resultTupleSchemeFactory());
+    }
+
+    public boolean success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private byte __isset_bitfield = 0;
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.BOOL)));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(namespaceExists_result.class, metaDataMap);
+    }
+
+    public namespaceExists_result() {
+    }
+
+    public namespaceExists_result(
+      boolean success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public namespaceExists_result(namespaceExists_result other) {
+      __isset_bitfield = other.__isset_bitfield;
+      this.success = other.success;
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+    }
+
+    public namespaceExists_result deepCopy() {
+      return new namespaceExists_result(this);
+    }
+
+    @Override
+    public void clear() {
+      setSuccessIsSet(false);
+      this.success = false;
+      this.ouch1 = null;
+      this.ouch2 = null;
+    }
+
+    public boolean isSuccess() {
+      return this.success;
+    }
+
+    public namespaceExists_result setSuccess(boolean success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return EncodingUtils.testBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __SUCCESS_ISSET_ID, value);
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public namespaceExists_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public namespaceExists_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Boolean)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof namespaceExists_result)
+        return this.equals((namespaceExists_result)that);
+      return false;
+    }
+
+    public boolean equals(namespaceExists_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(namespaceExists_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("namespaceExists_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        // it doesn't seem like you should have to do this, but java serialization is wacky, and doesn't call the default constructor.
+        __isset_bitfield = 0;
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class namespaceExists_resultStandardSchemeFactory implements SchemeFactory {
+      public namespaceExists_resultStandardScheme getScheme() {
+        return new namespaceExists_resultStandardScheme();
+      }
+    }
+
+    private static class namespaceExists_resultStandardScheme extends StandardScheme<namespaceExists_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, namespaceExists_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.BOOL) {
+                struct.success = iprot.readBool();
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, namespaceExists_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.isSetSuccess()) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          oprot.writeBool(struct.success);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class namespaceExists_resultTupleSchemeFactory implements SchemeFactory {
+      public namespaceExists_resultTupleScheme getScheme() {
+        return new namespaceExists_resultTupleScheme();
+      }
+    }
+
+    private static class namespaceExists_resultTupleScheme extends TupleScheme<namespaceExists_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, namespaceExists_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetSuccess()) {
+          oprot.writeBool(struct.success);
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, namespaceExists_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.success = iprot.readBool();
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class createNamespace_args implements org.apache.thrift.TBase<createNamespace_args, createNamespace_args._Fields>, java.io.Serializable, Cloneable, Comparable<createNamespace_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("createNamespace_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new createNamespace_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new createNamespace_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(createNamespace_args.class, metaDataMap);
+    }
+
+    public createNamespace_args() {
+    }
+
+    public createNamespace_args(
+      ByteBuffer login,
+      String namespaceName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public createNamespace_args(createNamespace_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+    }
+
+    public createNamespace_args deepCopy() {
+      return new createNamespace_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public createNamespace_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public createNamespace_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public createNamespace_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof createNamespace_args)
+        return this.equals((createNamespace_args)that);
+      return false;
+    }
+
+    public boolean equals(createNamespace_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(createNamespace_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("createNamespace_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class createNamespace_argsStandardSchemeFactory implements SchemeFactory {
+      public createNamespace_argsStandardScheme getScheme() {
+        return new createNamespace_argsStandardScheme();
+      }
+    }
+
+    private static class createNamespace_argsStandardScheme extends StandardScheme<createNamespace_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, createNamespace_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, createNamespace_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class createNamespace_argsTupleSchemeFactory implements SchemeFactory {
+      public createNamespace_argsTupleScheme getScheme() {
+        return new createNamespace_argsTupleScheme();
+      }
+    }
+
+    private static class createNamespace_argsTupleScheme extends TupleScheme<createNamespace_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, createNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, createNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class createNamespace_result implements org.apache.thrift.TBase<createNamespace_result, createNamespace_result._Fields>, java.io.Serializable, Cloneable, Comparable<createNamespace_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("createNamespace_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new createNamespace_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new createNamespace_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceExistsException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(createNamespace_result.class, metaDataMap);
+    }
+
+    public createNamespace_result() {
+    }
+
+    public createNamespace_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceExistsException ouch3)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public createNamespace_result(createNamespace_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceExistsException(other.ouch3);
+      }
+    }
+
+    public createNamespace_result deepCopy() {
+      return new createNamespace_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public createNamespace_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public createNamespace_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceExistsException getOuch3() {
+      return this.ouch3;
+    }
+
+    public createNamespace_result setOuch3(NamespaceExistsException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceExistsException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof createNamespace_result)
+        return this.equals((createNamespace_result)that);
+      return false;
+    }
+
+    public boolean equals(createNamespace_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(createNamespace_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("createNamespace_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class createNamespace_resultStandardSchemeFactory implements SchemeFactory {
+      public createNamespace_resultStandardScheme getScheme() {
+        return new createNamespace_resultStandardScheme();
+      }
+    }
+
+    private static class createNamespace_resultStandardScheme extends StandardScheme<createNamespace_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, createNamespace_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceExistsException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, createNamespace_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class createNamespace_resultTupleSchemeFactory implements SchemeFactory {
+      public createNamespace_resultTupleScheme getScheme() {
+        return new createNamespace_resultTupleScheme();
+      }
+    }
+
+    private static class createNamespace_resultTupleScheme extends TupleScheme<createNamespace_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, createNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, createNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceExistsException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class deleteNamespace_args implements org.apache.thrift.TBase<deleteNamespace_args, deleteNamespace_args._Fields>, java.io.Serializable, Cloneable, Comparable<deleteNamespace_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("deleteNamespace_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new deleteNamespace_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new deleteNamespace_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(deleteNamespace_args.class, metaDataMap);
+    }
+
+    public deleteNamespace_args() {
+    }
+
+    public deleteNamespace_args(
+      ByteBuffer login,
+      String namespaceName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteNamespace_args(deleteNamespace_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+    }
+
+    public deleteNamespace_args deepCopy() {
+      return new deleteNamespace_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public deleteNamespace_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public deleteNamespace_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public deleteNamespace_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteNamespace_args)
+        return this.equals((deleteNamespace_args)that);
+      return false;
+    }
+
+    public boolean equals(deleteNamespace_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(deleteNamespace_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteNamespace_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class deleteNamespace_argsStandardSchemeFactory implements SchemeFactory {
+      public deleteNamespace_argsStandardScheme getScheme() {
+        return new deleteNamespace_argsStandardScheme();
+      }
+    }
+
+    private static class deleteNamespace_argsStandardScheme extends StandardScheme<deleteNamespace_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, deleteNamespace_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, deleteNamespace_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class deleteNamespace_argsTupleSchemeFactory implements SchemeFactory {
+      public deleteNamespace_argsTupleScheme getScheme() {
+        return new deleteNamespace_argsTupleScheme();
+      }
+    }
+
+    private static class deleteNamespace_argsTupleScheme extends TupleScheme<deleteNamespace_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, deleteNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, deleteNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class deleteNamespace_result implements org.apache.thrift.TBase<deleteNamespace_result, deleteNamespace_result._Fields>, java.io.Serializable, Cloneable, Comparable<deleteNamespace_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("deleteNamespace_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+    private static final org.apache.thrift.protocol.TField OUCH4_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch4", org.apache.thrift.protocol.TType.STRUCT, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new deleteNamespace_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new deleteNamespace_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+    public NamespaceNotEmptyException ouch4; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3"),
+      OUCH4((short)4, "ouch4");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          case 4: // OUCH4
+            return OUCH4;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH4, new org.apache.thrift.meta_data.FieldMetaData("ouch4", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(deleteNamespace_result.class, metaDataMap);
+    }
+
+    public deleteNamespace_result() {
+    }
+
+    public deleteNamespace_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3,
+      NamespaceNotEmptyException ouch4)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+      this.ouch4 = ouch4;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteNamespace_result(deleteNamespace_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+      if (other.isSetOuch4()) {
+        this.ouch4 = new NamespaceNotEmptyException(other.ouch4);
+      }
+    }
+
+    public deleteNamespace_result deepCopy() {
+      return new deleteNamespace_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+      this.ouch4 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public deleteNamespace_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public deleteNamespace_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public deleteNamespace_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public NamespaceNotEmptyException getOuch4() {
+      return this.ouch4;
+    }
+
+    public deleteNamespace_result setOuch4(NamespaceNotEmptyException ouch4) {
+      this.ouch4 = ouch4;
+      return this;
+    }
+
+    public void unsetOuch4() {
+      this.ouch4 = null;
+    }
+
+    /** Returns true if field ouch4 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch4() {
+      return this.ouch4 != null;
+    }
+
+    public void setOuch4IsSet(boolean value) {
+      if (!value) {
+        this.ouch4 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      case OUCH4:
+        if (value == null) {
+          unsetOuch4();
+        } else {
+          setOuch4((NamespaceNotEmptyException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      case OUCH4:
+        return getOuch4();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      case OUCH4:
+        return isSetOuch4();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteNamespace_result)
+        return this.equals((deleteNamespace_result)that);
+      return false;
+    }
+
+    public boolean equals(deleteNamespace_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      boolean this_present_ouch4 = true && this.isSetOuch4();
+      boolean that_present_ouch4 = true && that.isSetOuch4();
+      if (this_present_ouch4 || that_present_ouch4) {
+        if (!(this_present_ouch4 && that_present_ouch4))
+          return false;
+        if (!this.ouch4.equals(that.ouch4))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      boolean present_ouch4 = true && (isSetOuch4());
+      list.add(present_ouch4);
+      if (present_ouch4)
+        list.add(ouch4);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(deleteNamespace_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch4()).compareTo(other.isSetOuch4());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch4()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch4, other.ouch4);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteNamespace_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch4:");
+      if (this.ouch4 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch4);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class deleteNamespace_resultStandardSchemeFactory implements SchemeFactory {
+      public deleteNamespace_resultStandardScheme getScheme() {
+        return new deleteNamespace_resultStandardScheme();
+      }
+    }
+
+    private static class deleteNamespace_resultStandardScheme extends StandardScheme<deleteNamespace_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, deleteNamespace_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // OUCH4
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch4 = new NamespaceNotEmptyException();
+                struct.ouch4.read(iprot);
+                struct.setOuch4IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, deleteNamespace_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch4 != null) {
+          oprot.writeFieldBegin(OUCH4_FIELD_DESC);
+          struct.ouch4.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class deleteNamespace_resultTupleSchemeFactory implements SchemeFactory {
+      public deleteNamespace_resultTupleScheme getScheme() {
+        return new deleteNamespace_resultTupleScheme();
+      }
+    }
+
+    private static class deleteNamespace_resultTupleScheme extends TupleScheme<deleteNamespace_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, deleteNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch4()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+        if (struct.isSetOuch4()) {
+          struct.ouch4.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, deleteNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch4 = new NamespaceNotEmptyException();
+          struct.ouch4.read(iprot);
+          struct.setOuch4IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class renameNamespace_args implements org.apache.thrift.TBase<renameNamespace_args, renameNamespace_args._Fields>, java.io.Serializable, Cloneable, Comparable<renameNamespace_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("renameNamespace_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField OLD_NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("oldNamespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField NEW_NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("newNamespaceName", org.apache.thrift.protocol.TType.STRING, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new renameNamespace_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new renameNamespace_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String oldNamespaceName; // required
+    public String newNamespaceName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      OLD_NAMESPACE_NAME((short)2, "oldNamespaceName"),
+      NEW_NAMESPACE_NAME((short)3, "newNamespaceName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if its not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // OLD_NAMESPACE_NAME
+            return OLD_NAMESPACE_NAME;
+          case 3: // NEW_NAMESPACE_NAME
+            return NEW_NAMESPACE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if its not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.OLD_NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("oldNamespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.NEW_NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("newNamespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(renameNamespace_args.class, metaDataMap);
+    }
+
+    public renameNamespace_args() {
+    }
+
+    public renameNamespace_args(
+      ByteBuffer login,
+      String oldNamespaceName,
+      String newNamespaceName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.oldNamespaceName = oldNamespaceName;
+      this.newNamespaceName = newNamespaceName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public renameNamespace_args(renameNamespace_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetOldNamespaceName()) {
+        this.oldNamespaceName = other.oldNamespaceName;
+      }
+      if (other.isSetNewNamespaceName()) {
+        this.newNamespaceName = other.newNamespaceName;
+      }
+    }
+
+    public renameNamespace_args deepCopy() {
+      return new renameNamespace_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.oldNamespaceName = null;
+      this.newNamespaceName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public renameNamespace_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public renameNamespace_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getOldNamespaceName() {
+      return this.oldNamespaceName;
+    }
+
+    public renameNamespace_args setOldNamespaceName(String oldNamespaceName) {
+      this.oldNamespaceName = oldNamespaceName;
+      return this;
+    }
+
+    public void unsetOldNamespaceName() {
+      this.oldNamespaceName = null;
+    }
+
+    /** Returns true if field oldNamespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetOldNamespaceName() {
+      return this.oldNamespaceName != null;
+    }
+
+    public void setOldNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.oldNamespaceName = null;
+      }
+    }
+
+    public String getNewNamespaceName() {
+      return this.newNamespaceName;
+    }
+
+    public renameNamespace_args setNewNamespaceName(String newNamespaceName) {
+      this.newNamespaceName = newNamespaceName;
+      return this;
+    }
+
+    public void unsetNewNamespaceName() {
+      this.newNamespaceName = null;
+    }
+
+    /** Returns true if field newNamespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNewNamespaceName() {
+      return this.newNamespaceName != null;
+    }
+
+    public void setNewNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.newNamespaceName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case OLD_NAMESPACE_NAME:
+        if (value == null) {
+          unsetOldNamespaceName();
+        } else {
+          setOldNamespaceName((String)value);
+        }
+        break;
+
+      case NEW_NAMESPACE_NAME:
+        if (value == null) {
+          unsetNewNamespaceName();
+        } else {
+          setNewNamespaceName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case OLD_NAMESPACE_NAME:
+        return getOldNamespaceName();
+
+      case NEW_NAMESPACE_NAME:
+        return getNewNamespaceName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case OLD_NAMESPACE_NAME:
+        return isSetOldNamespaceName();
+      case NEW_NAMESPACE_NAME:
+        return isSetNewNamespaceName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof renameNamespace_args)
+        return this.equals((renameNamespace_args)that);
+      return false;
+    }
+
+    public boolean equals(renameNamespace_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_oldNamespaceName = true && this.isSetOldNamespaceName();
+      boolean that_present_oldNamespaceName = true && that.isSetOldNamespaceName();
+      if (this_present_oldNamespaceName || that_present_oldNamespaceName) {
+        if (!(this_present_oldNamespaceName && that_present_oldNamespaceName))
+          return false;
+        if (!this.oldNamespaceName.equals(that.oldNamespaceName))
+          return false;
+      }
+
+      boolean this_present_newNamespaceName = true && this.isSetNewNamespaceName();
+      boolean that_present_newNamespaceName = true && that.isSetNewNamespaceName();
+      if (this_present_newNamespaceName || that_present_newNamespaceName) {
+        if (!(this_present_newNamespaceName && that_present_newNamespaceName))
+          return false;
+        if (!this.newNamespaceName.equals(that.newNamespaceName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_oldNamespaceName = true && (isSetOldNamespaceName());
+      list.add(present_oldNamespaceName);
+      if (present_oldNamespaceName)
+        list.add(oldNamespaceName);
+
+      boolean present_newNamespaceName = true && (isSetNewNamespaceName());
+      list.add(present_newNamespaceName);
+      if (present_newNamespaceName)
+        list.add(newNamespaceName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(renameNamespace_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOldNamespaceName()).compareTo(other.isSetOldNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOldNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.oldNamespaceName, other.oldNamespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNewNamespaceName()).compareTo(other.isSetNewNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNewNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.newNamespaceName, other.newNamespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("renameNamespace_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("oldNamespaceName:");
+      if (this.oldNamespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.oldNamespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("newNamespaceName:");
+      if (this.newNamespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.newNamespaceName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class renameNamespace_argsStandardSchemeFactory implements SchemeFactory {
+      public renameNamespace_argsStandardScheme getScheme() {
+        return new renameNamespace_argsStandardScheme();
+      }
+    }
+
+    private static class renameNamespace_argsStandardScheme extends StandardScheme<renameNamespace_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, renameNamespace_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OLD_NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.oldNamespaceName = iprot.readString();
+                struct.setOldNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // NEW_NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.newNamespaceName = iprot.readString();
+                struct.setNewNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, renameNamespace_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.oldNamespaceName != null) {
+          oprot.writeFieldBegin(OLD_NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.oldNamespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.newNamespaceName != null) {
+          oprot.writeFieldBegin(NEW_NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.newNamespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class renameNamespace_argsTupleSchemeFactory implements SchemeFactory {
+      public renameNamespace_argsTupleScheme getScheme() {
+        return new renameNamespace_argsTupleScheme();
+      }
+    }
+
+    private static class renameNamespace_argsTupleScheme extends TupleScheme<renameNamespace_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, renameNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOldNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetNewNamespaceName()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetOldNamespaceName()) {
+          oprot.writeString(struct.oldNamespaceName);
+        }
+        if (struct.isSetNewNamespaceName()) {
+          oprot.writeString(struct.newNamespaceName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, renameNamespace_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.oldNamespaceName = iprot.readString();
+          struct.setOldNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.newNamespaceName = iprot.readString();
+          struct.setNewNamespaceNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class renameNamespace_result implements org.apache.thrift.TBase<renameNamespace_result, renameNamespace_result._Fields>, java.io.Serializable, Cloneable, Comparable<renameNamespace_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("renameNamespace_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+    private static final org.apache.thrift.protocol.TField OUCH4_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch4", org.apache.thrift.protocol.TType.STRUCT, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new renameNamespace_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new renameNamespace_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+    public NamespaceExistsException ouch4; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3"),
+      OUCH4((short)4, "ouch4");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if its not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          case 4: // OUCH4
+            return OUCH4;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if its not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH4, new org.apache.thrift.meta_data.FieldMetaData("ouch4", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(renameNamespace_result.class, metaDataMap);
+    }
+
+    public renameNamespace_result() {
+    }
+
+    public renameNamespace_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3,
+      NamespaceExistsException ouch4)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+      this.ouch4 = ouch4;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public renameNamespace_result(renameNamespace_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+      if (other.isSetOuch4()) {
+        this.ouch4 = new NamespaceExistsException(other.ouch4);
+      }
+    }
+
+    public renameNamespace_result deepCopy() {
+      return new renameNamespace_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+      this.ouch4 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public renameNamespace_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public renameNamespace_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public renameNamespace_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public NamespaceExistsException getOuch4() {
+      return this.ouch4;
+    }
+
+    public renameNamespace_result setOuch4(NamespaceExistsException ouch4) {
+      this.ouch4 = ouch4;
+      return this;
+    }
+
+    public void unsetOuch4() {
+      this.ouch4 = null;
+    }
+
+    /** Returns true if field ouch4 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch4() {
+      return this.ouch4 != null;
+    }
+
+    public void setOuch4IsSet(boolean value) {
+      if (!value) {
+        this.ouch4 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      case OUCH4:
+        if (value == null) {
+          unsetOuch4();
+        } else {
+          setOuch4((NamespaceExistsException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      case OUCH4:
+        return getOuch4();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      case OUCH4:
+        return isSetOuch4();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof renameNamespace_result)
+        return this.equals((renameNamespace_result)that);
+      return false;
+    }
+
+    public boolean equals(renameNamespace_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      boolean this_present_ouch4 = true && this.isSetOuch4();
+      boolean that_present_ouch4 = true && that.isSetOuch4();
+      if (this_present_ouch4 || that_present_ouch4) {
+        if (!(this_present_ouch4 && that_present_ouch4))
+          return false;
+        if (!this.ouch4.equals(that.ouch4))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      boolean present_ouch4 = true && (isSetOuch4());
+      list.add(present_ouch4);
+      if (present_ouch4)
+        list.add(ouch4);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(renameNamespace_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch4()).compareTo(other.isSetOuch4());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch4()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch4, other.ouch4);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("renameNamespace_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch4:");
+      if (this.ouch4 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch4);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class renameNamespace_resultStandardSchemeFactory implements SchemeFactory {
+      public renameNamespace_resultStandardScheme getScheme() {
+        return new renameNamespace_resultStandardScheme();
+      }
+    }
+
+    private static class renameNamespace_resultStandardScheme extends StandardScheme<renameNamespace_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, renameNamespace_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // OUCH4
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch4 = new NamespaceExistsException();
+                struct.ouch4.read(iprot);
+                struct.setOuch4IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, renameNamespace_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch4 != null) {
+          oprot.writeFieldBegin(OUCH4_FIELD_DESC);
+          struct.ouch4.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class renameNamespace_resultTupleSchemeFactory implements SchemeFactory {
+      public renameNamespace_resultTupleScheme getScheme() {
+        return new renameNamespace_resultTupleScheme();
+      }
+    }
+
+    private static class renameNamespace_resultTupleScheme extends TupleScheme<renameNamespace_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, renameNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch4()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+        if (struct.isSetOuch4()) {
+          struct.ouch4.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, renameNamespace_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch4 = new NamespaceExistsException();
+          struct.ouch4.read(iprot);
+          struct.setOuch4IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class setNamespaceProperty_args implements org.apache.thrift.TBase<setNamespaceProperty_args, setNamespaceProperty_args._Fields>, java.io.Serializable, Cloneable, Comparable<setNamespaceProperty_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("setNamespaceProperty_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField PROPERTY_FIELD_DESC = new org.apache.thrift.protocol.TField("property", org.apache.thrift.protocol.TType.STRING, (short)3);
+    private static final org.apache.thrift.protocol.TField VALUE_FIELD_DESC = new org.apache.thrift.protocol.TField("value", org.apache.thrift.protocol.TType.STRING, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new setNamespaceProperty_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new setNamespaceProperty_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public String property; // required
+    public String value; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      PROPERTY((short)3, "property"),
+      VALUE((short)4, "value");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // PROPERTY
+            return PROPERTY;
+          case 4: // VALUE
+            return VALUE;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.PROPERTY, new org.apache.thrift.meta_data.FieldMetaData("property", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.VALUE, new org.apache.thrift.meta_data.FieldMetaData("value", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(setNamespaceProperty_args.class, metaDataMap);
+    }
+
+    public setNamespaceProperty_args() {
+    }
+
+    public setNamespaceProperty_args(
+      ByteBuffer login,
+      String namespaceName,
+      String property,
+      String value)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.property = property;
+      this.value = value;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public setNamespaceProperty_args(setNamespaceProperty_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetProperty()) {
+        this.property = other.property;
+      }
+      if (other.isSetValue()) {
+        this.value = other.value;
+      }
+    }
+
+    public setNamespaceProperty_args deepCopy() {
+      return new setNamespaceProperty_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.property = null;
+      this.value = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public setNamespaceProperty_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public setNamespaceProperty_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public setNamespaceProperty_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public String getProperty() {
+      return this.property;
+    }
+
+    public setNamespaceProperty_args setProperty(String property) {
+      this.property = property;
+      return this;
+    }
+
+    public void unsetProperty() {
+      this.property = null;
+    }
+
+    /** Returns true if field property is set (has been assigned a value) and false otherwise */
+    public boolean isSetProperty() {
+      return this.property != null;
+    }
+
+    public void setPropertyIsSet(boolean value) {
+      if (!value) {
+        this.property = null;
+      }
+    }
+
+    public String getValue() {
+      return this.value;
+    }
+
+    public setNamespaceProperty_args setValue(String value) {
+      this.value = value;
+      return this;
+    }
+
+    public void unsetValue() {
+      this.value = null;
+    }
+
+    /** Returns true if field value is set (has been assigned a value) and false otherwise */
+    public boolean isSetValue() {
+      return this.value != null;
+    }
+
+    public void setValueIsSet(boolean value) {
+      if (!value) {
+        this.value = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case PROPERTY:
+        if (value == null) {
+          unsetProperty();
+        } else {
+          setProperty((String)value);
+        }
+        break;
+
+      case VALUE:
+        if (value == null) {
+          unsetValue();
+        } else {
+          setValue((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case PROPERTY:
+        return getProperty();
+
+      case VALUE:
+        return getValue();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case PROPERTY:
+        return isSetProperty();
+      case VALUE:
+        return isSetValue();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof setNamespaceProperty_args)
+        return this.equals((setNamespaceProperty_args)that);
+      return false;
+    }
+
+    public boolean equals(setNamespaceProperty_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_property = true && this.isSetProperty();
+      boolean that_present_property = true && that.isSetProperty();
+      if (this_present_property || that_present_property) {
+        if (!(this_present_property && that_present_property))
+          return false;
+        if (!this.property.equals(that.property))
+          return false;
+      }
+
+      boolean this_present_value = true && this.isSetValue();
+      boolean that_present_value = true && that.isSetValue();
+      if (this_present_value || that_present_value) {
+        if (!(this_present_value && that_present_value))
+          return false;
+        if (!this.value.equals(that.value))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      boolean present_value = true && (isSetValue());
+      list.add(present_value);
+      if (present_value)
+        list.add(value);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(setNamespaceProperty_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetProperty()).compareTo(other.isSetProperty());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetProperty()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.property, other.property);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetValue()).compareTo(other.isSetValue());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetValue()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.value, other.value);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("setNamespaceProperty_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("property:");
+      if (this.property == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.property);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("value:");
+      if (this.value == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.value);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class setNamespaceProperty_argsStandardSchemeFactory implements SchemeFactory {
+      public setNamespaceProperty_argsStandardScheme getScheme() {
+        return new setNamespaceProperty_argsStandardScheme();
+      }
+    }
+
+    private static class setNamespaceProperty_argsStandardScheme extends StandardScheme<setNamespaceProperty_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, setNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // PROPERTY
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.property = iprot.readString();
+                struct.setPropertyIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // VALUE
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.value = iprot.readString();
+                struct.setValueIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, setNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.property != null) {
+          oprot.writeFieldBegin(PROPERTY_FIELD_DESC);
+          oprot.writeString(struct.property);
+          oprot.writeFieldEnd();
+        }
+        if (struct.value != null) {
+          oprot.writeFieldBegin(VALUE_FIELD_DESC);
+          oprot.writeString(struct.value);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class setNamespaceProperty_argsTupleSchemeFactory implements SchemeFactory {
+      public setNamespaceProperty_argsTupleScheme getScheme() {
+        return new setNamespaceProperty_argsTupleScheme();
+      }
+    }
+
+    private static class setNamespaceProperty_argsTupleScheme extends TupleScheme<setNamespaceProperty_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, setNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetProperty()) {
+          optionals.set(2);
+        }
+        if (struct.isSetValue()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetProperty()) {
+          oprot.writeString(struct.property);
+        }
+        if (struct.isSetValue()) {
+          oprot.writeString(struct.value);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, setNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.property = iprot.readString();
+          struct.setPropertyIsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.value = iprot.readString();
+          struct.setValueIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class setNamespaceProperty_result implements org.apache.thrift.TBase<setNamespaceProperty_result, setNamespaceProperty_result._Fields>, java.io.Serializable, Cloneable, Comparable<setNamespaceProperty_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("setNamespaceProperty_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new setNamespaceProperty_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new setNamespaceProperty_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(setNamespaceProperty_result.class, metaDataMap);
+    }
+
+    public setNamespaceProperty_result() {
+    }
+
+    public setNamespaceProperty_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public setNamespaceProperty_result(setNamespaceProperty_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public setNamespaceProperty_result deepCopy() {
+      return new setNamespaceProperty_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public setNamespaceProperty_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public setNamespaceProperty_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public setNamespaceProperty_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof setNamespaceProperty_result)
+        return this.equals((setNamespaceProperty_result)that);
+      return false;
+    }
+
+    public boolean equals(setNamespaceProperty_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(setNamespaceProperty_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("setNamespaceProperty_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class setNamespaceProperty_resultStandardSchemeFactory implements SchemeFactory {
+      public setNamespaceProperty_resultStandardScheme getScheme() {
+        return new setNamespaceProperty_resultStandardScheme();
+      }
+    }
+
+    private static class setNamespaceProperty_resultStandardScheme extends StandardScheme<setNamespaceProperty_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, setNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, setNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class setNamespaceProperty_resultTupleSchemeFactory implements SchemeFactory {
+      public setNamespaceProperty_resultTupleScheme getScheme() {
+        return new setNamespaceProperty_resultTupleScheme();
+      }
+    }
+
+    private static class setNamespaceProperty_resultTupleScheme extends TupleScheme<setNamespaceProperty_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, setNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, setNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class removeNamespaceProperty_args implements org.apache.thrift.TBase<removeNamespaceProperty_args, removeNamespaceProperty_args._Fields>, java.io.Serializable, Cloneable, Comparable<removeNamespaceProperty_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("removeNamespaceProperty_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField PROPERTY_FIELD_DESC = new org.apache.thrift.protocol.TField("property", org.apache.thrift.protocol.TType.STRING, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new removeNamespaceProperty_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new removeNamespaceProperty_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public String property; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      PROPERTY((short)3, "property");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // PROPERTY
+            return PROPERTY;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.PROPERTY, new org.apache.thrift.meta_data.FieldMetaData("property", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(removeNamespaceProperty_args.class, metaDataMap);
+    }
+
+    public removeNamespaceProperty_args() {
+    }
+
+    public removeNamespaceProperty_args(
+      ByteBuffer login,
+      String namespaceName,
+      String property)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.property = property;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public removeNamespaceProperty_args(removeNamespaceProperty_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetProperty()) {
+        this.property = other.property;
+      }
+    }
+
+    public removeNamespaceProperty_args deepCopy() {
+      return new removeNamespaceProperty_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.property = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public removeNamespaceProperty_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public removeNamespaceProperty_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public removeNamespaceProperty_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public String getProperty() {
+      return this.property;
+    }
+
+    public removeNamespaceProperty_args setProperty(String property) {
+      this.property = property;
+      return this;
+    }
+
+    public void unsetProperty() {
+      this.property = null;
+    }
+
+    /** Returns true if field property is set (has been assigned a value) and false otherwise */
+    public boolean isSetProperty() {
+      return this.property != null;
+    }
+
+    public void setPropertyIsSet(boolean value) {
+      if (!value) {
+        this.property = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case PROPERTY:
+        if (value == null) {
+          unsetProperty();
+        } else {
+          setProperty((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case PROPERTY:
+        return getProperty();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case PROPERTY:
+        return isSetProperty();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof removeNamespaceProperty_args)
+        return this.equals((removeNamespaceProperty_args)that);
+      return false;
+    }
+
+    public boolean equals(removeNamespaceProperty_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_property = true && this.isSetProperty();
+      boolean that_present_property = true && that.isSetProperty();
+      if (this_present_property || that_present_property) {
+        if (!(this_present_property && that_present_property))
+          return false;
+        if (!this.property.equals(that.property))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_property = true && (isSetProperty());
+      list.add(present_property);
+      if (present_property)
+        list.add(property);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(removeNamespaceProperty_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetProperty()).compareTo(other.isSetProperty());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetProperty()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.property, other.property);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("removeNamespaceProperty_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("property:");
+      if (this.property == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.property);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class removeNamespaceProperty_argsStandardSchemeFactory implements SchemeFactory {
+      public removeNamespaceProperty_argsStandardScheme getScheme() {
+        return new removeNamespaceProperty_argsStandardScheme();
+      }
+    }
+
+    private static class removeNamespaceProperty_argsStandardScheme extends StandardScheme<removeNamespaceProperty_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, removeNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // PROPERTY
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.property = iprot.readString();
+                struct.setPropertyIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, removeNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.property != null) {
+          oprot.writeFieldBegin(PROPERTY_FIELD_DESC);
+          oprot.writeString(struct.property);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class removeNamespaceProperty_argsTupleSchemeFactory implements SchemeFactory {
+      public removeNamespaceProperty_argsTupleScheme getScheme() {
+        return new removeNamespaceProperty_argsTupleScheme();
+      }
+    }
+
+    private static class removeNamespaceProperty_argsTupleScheme extends TupleScheme<removeNamespaceProperty_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, removeNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetProperty()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetProperty()) {
+          oprot.writeString(struct.property);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, removeNamespaceProperty_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.property = iprot.readString();
+          struct.setPropertyIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class removeNamespaceProperty_result implements org.apache.thrift.TBase<removeNamespaceProperty_result, removeNamespaceProperty_result._Fields>, java.io.Serializable, Cloneable, Comparable<removeNamespaceProperty_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("removeNamespaceProperty_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new removeNamespaceProperty_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new removeNamespaceProperty_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(removeNamespaceProperty_result.class, metaDataMap);
+    }
+
+    public removeNamespaceProperty_result() {
+    }
+
+    public removeNamespaceProperty_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public removeNamespaceProperty_result(removeNamespaceProperty_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public removeNamespaceProperty_result deepCopy() {
+      return new removeNamespaceProperty_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public removeNamespaceProperty_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public removeNamespaceProperty_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public removeNamespaceProperty_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof removeNamespaceProperty_result)
+        return this.equals((removeNamespaceProperty_result)that);
+      return false;
+    }
+
+    public boolean equals(removeNamespaceProperty_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(removeNamespaceProperty_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("removeNamespaceProperty_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class removeNamespaceProperty_resultStandardSchemeFactory implements SchemeFactory {
+      public removeNamespaceProperty_resultStandardScheme getScheme() {
+        return new removeNamespaceProperty_resultStandardScheme();
+      }
+    }
+
+    private static class removeNamespaceProperty_resultStandardScheme extends StandardScheme<removeNamespaceProperty_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, removeNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, removeNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class removeNamespaceProperty_resultTupleSchemeFactory implements SchemeFactory {
+      public removeNamespaceProperty_resultTupleScheme getScheme() {
+        return new removeNamespaceProperty_resultTupleScheme();
+      }
+    }
+
+    private static class removeNamespaceProperty_resultTupleScheme extends TupleScheme<removeNamespaceProperty_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, removeNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, removeNamespaceProperty_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class getNamespaceProperties_args implements org.apache.thrift.TBase<getNamespaceProperties_args, getNamespaceProperties_args._Fields>, java.io.Serializable, Cloneable, Comparable<getNamespaceProperties_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("getNamespaceProperties_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new getNamespaceProperties_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new getNamespaceProperties_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(getNamespaceProperties_args.class, metaDataMap);
+    }
+
+    public getNamespaceProperties_args() {
+    }
+
+    public getNamespaceProperties_args(
+      ByteBuffer login,
+      String namespaceName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getNamespaceProperties_args(getNamespaceProperties_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+    }
+
+    public getNamespaceProperties_args deepCopy() {
+      return new getNamespaceProperties_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public getNamespaceProperties_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public getNamespaceProperties_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public getNamespaceProperties_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getNamespaceProperties_args)
+        return this.equals((getNamespaceProperties_args)that);
+      return false;
+    }
+
+    public boolean equals(getNamespaceProperties_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(getNamespaceProperties_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getNamespaceProperties_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class getNamespaceProperties_argsStandardSchemeFactory implements SchemeFactory {
+      public getNamespaceProperties_argsStandardScheme getScheme() {
+        return new getNamespaceProperties_argsStandardScheme();
+      }
+    }
+
+    private static class getNamespaceProperties_argsStandardScheme extends StandardScheme<getNamespaceProperties_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, getNamespaceProperties_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, getNamespaceProperties_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class getNamespaceProperties_argsTupleSchemeFactory implements SchemeFactory {
+      public getNamespaceProperties_argsTupleScheme getScheme() {
+        return new getNamespaceProperties_argsTupleScheme();
+      }
+    }
+
+    private static class getNamespaceProperties_argsTupleScheme extends TupleScheme<getNamespaceProperties_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, getNamespaceProperties_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, getNamespaceProperties_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class getNamespaceProperties_result implements org.apache.thrift.TBase<getNamespaceProperties_result, getNamespaceProperties_result._Fields>, java.io.Serializable, Cloneable, Comparable<getNamespaceProperties_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("getNamespaceProperties_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.MAP, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new getNamespaceProperties_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new getNamespaceProperties_resultTupleSchemeFactory());
+    }
+
+    public Map<String,String> success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.MapMetaData(org.apache.thrift.protocol.TType.MAP, 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING), 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING))));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(getNamespaceProperties_result.class, metaDataMap);
+    }
+
+    public getNamespaceProperties_result() {
+    }
+
+    public getNamespaceProperties_result(
+      Map<String,String> success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.success = success;
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getNamespaceProperties_result(getNamespaceProperties_result other) {
+      if (other.isSetSuccess()) {
+        Map<String,String> __this__success = new HashMap<String,String>(other.success);
+        this.success = __this__success;
+      }
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public getNamespaceProperties_result deepCopy() {
+      return new getNamespaceProperties_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public void putToSuccess(String key, String val) {
+      if (this.success == null) {
+        this.success = new HashMap<String,String>();
+      }
+      this.success.put(key, val);
+    }
+
+    public Map<String,String> getSuccess() {
+      return this.success;
+    }
+
+    public getNamespaceProperties_result setSuccess(Map<String,String> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public getNamespaceProperties_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public getNamespaceProperties_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public getNamespaceProperties_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Map<String,String>)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getNamespaceProperties_result)
+        return this.equals((getNamespaceProperties_result)that);
+      return false;
+    }
+
+    public boolean equals(getNamespaceProperties_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(getNamespaceProperties_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getNamespaceProperties_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class getNamespaceProperties_resultStandardSchemeFactory implements SchemeFactory {
+      public getNamespaceProperties_resultStandardScheme getScheme() {
+        return new getNamespaceProperties_resultStandardScheme();
+      }
+    }
+
+    private static class getNamespaceProperties_resultStandardScheme extends StandardScheme<getNamespaceProperties_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, getNamespaceProperties_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
+                {
+                  org.apache.thrift.protocol.TMap _map506 = iprot.readMapBegin();
+                  struct.success = new HashMap<String,String>(2*_map506.size);
+                  String _key507;
+                  String _val508;
+                  for (int _i509 = 0; _i509 < _map506.size; ++_i509)
+                  {
+                    _key507 = iprot.readString();
+                    _val508 = iprot.readString();
+                    struct.success.put(_key507, _val508);
+                  }
+                  iprot.readMapEnd();
+                }
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, getNamespaceProperties_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, struct.success.size()));
+            for (Map.Entry<String, String> _iter510 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter510.getKey());
+              oprot.writeString(_iter510.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class getNamespaceProperties_resultTupleSchemeFactory implements SchemeFactory {
+      public getNamespaceProperties_resultTupleScheme getScheme() {
+        return new getNamespaceProperties_resultTupleScheme();
+      }
+    }
+
+    private static class getNamespaceProperties_resultTupleScheme extends TupleScheme<getNamespaceProperties_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, getNamespaceProperties_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetSuccess()) {
+          {
+            oprot.writeI32(struct.success.size());
+            for (Map.Entry<String, String> _iter511 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter511.getKey());
+              oprot.writeString(_iter511.getValue());
+            }
+          }
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, getNamespaceProperties_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          {
+            org.apache.thrift.protocol.TMap _map512 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.success = new HashMap<String,String>(2*_map512.size);
+            String _key513;
+            String _val514;
+            for (int _i515 = 0; _i515 < _map512.size; ++_i515)
+            {
+              _key513 = iprot.readString();
+              _val514 = iprot.readString();
+              struct.success.put(_key513, _val514);
+            }
+          }
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class namespaceIdMap_args implements org.apache.thrift.TBase<namespaceIdMap_args, namespaceIdMap_args._Fields>, java.io.Serializable, Cloneable, Comparable<namespaceIdMap_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("namespaceIdMap_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new namespaceIdMap_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new namespaceIdMap_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(namespaceIdMap_args.class, metaDataMap);
+    }
+
+    public namespaceIdMap_args() {
+    }
+
+    public namespaceIdMap_args(
+      ByteBuffer login)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public namespaceIdMap_args(namespaceIdMap_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+    }
+
+    public namespaceIdMap_args deepCopy() {
+      return new namespaceIdMap_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public namespaceIdMap_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public namespaceIdMap_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof namespaceIdMap_args)
+        return this.equals((namespaceIdMap_args)that);
+      return false;
+    }
+
+    public boolean equals(namespaceIdMap_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(namespaceIdMap_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("namespaceIdMap_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class namespaceIdMap_argsStandardSchemeFactory implements SchemeFactory {
+      public namespaceIdMap_argsStandardScheme getScheme() {
+        return new namespaceIdMap_argsStandardScheme();
+      }
+    }
+
+    private static class namespaceIdMap_argsStandardScheme extends StandardScheme<namespaceIdMap_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, namespaceIdMap_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, namespaceIdMap_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class namespaceIdMap_argsTupleSchemeFactory implements SchemeFactory {
+      public namespaceIdMap_argsTupleScheme getScheme() {
+        return new namespaceIdMap_argsTupleScheme();
+      }
+    }
+
+    private static class namespaceIdMap_argsTupleScheme extends TupleScheme<namespaceIdMap_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, namespaceIdMap_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        oprot.writeBitSet(optionals, 1);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, namespaceIdMap_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(1);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class namespaceIdMap_result implements org.apache.thrift.TBase<namespaceIdMap_result, namespaceIdMap_result._Fields>, java.io.Serializable, Cloneable, Comparable<namespaceIdMap_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("namespaceIdMap_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.MAP, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new namespaceIdMap_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new namespaceIdMap_resultTupleSchemeFactory());
+    }
+
+    public Map<String,String> success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.MapMetaData(org.apache.thrift.protocol.TType.MAP, 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING), 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING))));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(namespaceIdMap_result.class, metaDataMap);
+    }
+
+    public namespaceIdMap_result() {
+    }
+
+    public namespaceIdMap_result(
+      Map<String,String> success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2)
+    {
+      this();
+      this.success = success;
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public namespaceIdMap_result(namespaceIdMap_result other) {
+      if (other.isSetSuccess()) {
+        Map<String,String> __this__success = new HashMap<String,String>(other.success);
+        this.success = __this__success;
+      }
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+    }
+
+    public namespaceIdMap_result deepCopy() {
+      return new namespaceIdMap_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+      this.ouch1 = null;
+      this.ouch2 = null;
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public void putToSuccess(String key, String val) {
+      if (this.success == null) {
+        this.success = new HashMap<String,String>();
+      }
+      this.success.put(key, val);
+    }
+
+    public Map<String,String> getSuccess() {
+      return this.success;
+    }
+
+    public namespaceIdMap_result setSuccess(Map<String,String> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public namespaceIdMap_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public namespaceIdMap_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Map<String,String>)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof namespaceIdMap_result)
+        return this.equals((namespaceIdMap_result)that);
+      return false;
+    }
+
+    public boolean equals(namespaceIdMap_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(namespaceIdMap_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("namespaceIdMap_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class namespaceIdMap_resultStandardSchemeFactory implements SchemeFactory {
+      public namespaceIdMap_resultStandardScheme getScheme() {
+        return new namespaceIdMap_resultStandardScheme();
+      }
+    }
+
+    private static class namespaceIdMap_resultStandardScheme extends StandardScheme<namespaceIdMap_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, namespaceIdMap_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
+                {
+                  org.apache.thrift.protocol.TMap _map516 = iprot.readMapBegin();
+                  struct.success = new HashMap<String,String>(2*_map516.size);
+                  String _key517;
+                  String _val518;
+                  for (int _i519 = 0; _i519 < _map516.size; ++_i519)
+                  {
+                    _key517 = iprot.readString();
+                    _val518 = iprot.readString();
+                    struct.success.put(_key517, _val518);
+                  }
+                  iprot.readMapEnd();
+                }
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, namespaceIdMap_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, struct.success.size()));
+            for (Map.Entry<String, String> _iter520 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter520.getKey());
+              oprot.writeString(_iter520.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class namespaceIdMap_resultTupleSchemeFactory implements SchemeFactory {
+      public namespaceIdMap_resultTupleScheme getScheme() {
+        return new namespaceIdMap_resultTupleScheme();
+      }
+    }
+
+    private static class namespaceIdMap_resultTupleScheme extends TupleScheme<namespaceIdMap_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, namespaceIdMap_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetSuccess()) {
+          {
+            oprot.writeI32(struct.success.size());
+            for (Map.Entry<String, String> _iter521 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter521.getKey());
+              oprot.writeString(_iter521.getValue());
+            }
+          }
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, namespaceIdMap_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          {
+            org.apache.thrift.protocol.TMap _map522 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
+            struct.success = new HashMap<String,String>(2*_map522.size);
+            String _key523;
+            String _val524;
+            for (int _i525 = 0; _i525 < _map522.size; ++_i525)
+            {
+              _key523 = iprot.readString();
+              _val524 = iprot.readString();
+              struct.success.put(_key523, _val524);
+            }
+          }
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class attachNamespaceIterator_args implements org.apache.thrift.TBase<attachNamespaceIterator_args, attachNamespaceIterator_args._Fields>, java.io.Serializable, Cloneable, Comparable<attachNamespaceIterator_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("attachNamespaceIterator_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField SETTING_FIELD_DESC = new org.apache.thrift.protocol.TField("setting", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+    private static final org.apache.thrift.protocol.TField SCOPES_FIELD_DESC = new org.apache.thrift.protocol.TField("scopes", org.apache.thrift.protocol.TType.SET, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new attachNamespaceIterator_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new attachNamespaceIterator_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public IteratorSetting setting; // required
+    public Set<IteratorScope> scopes; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      SETTING((short)3, "setting"),
+      SCOPES((short)4, "scopes");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // SETTING
+            return SETTING;
+          case 4: // SCOPES
+            return SCOPES;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.SETTING, new org.apache.thrift.meta_data.FieldMetaData("setting", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, IteratorSetting.class)));
+      tmpMap.put(_Fields.SCOPES, new org.apache.thrift.meta_data.FieldMetaData("scopes", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.SetMetaData(org.apache.thrift.protocol.TType.SET, 
+              new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, IteratorScope.class))));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(attachNamespaceIterator_args.class, metaDataMap);
+    }
+
+    public attachNamespaceIterator_args() {
+    }
+
+    public attachNamespaceIterator_args(
+      ByteBuffer login,
+      String namespaceName,
+      IteratorSetting setting,
+      Set<IteratorScope> scopes)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.setting = setting;
+      this.scopes = scopes;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public attachNamespaceIterator_args(attachNamespaceIterator_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetSetting()) {
+        this.setting = new IteratorSetting(other.setting);
+      }
+      if (other.isSetScopes()) {
+        Set<IteratorScope> __this__scopes = new HashSet<IteratorScope>(other.scopes.size());
+        for (IteratorScope other_element : other.scopes) {
+          __this__scopes.add(other_element);
+        }
+        this.scopes = __this__scopes;
+      }
+    }
+
+    public attachNamespaceIterator_args deepCopy() {
+      return new attachNamespaceIterator_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.setting = null;
+      this.scopes = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public attachNamespaceIterator_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public attachNamespaceIterator_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public attachNamespaceIterator_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public IteratorSetting getSetting() {
+      return this.setting;
+    }
+
+    public attachNamespaceIterator_args setSetting(IteratorSetting setting) {
+      this.setting = setting;
+      return this;
+    }
+
+    public void unsetSetting() {
+      this.setting = null;
+    }
+
+    /** Returns true if field setting is set (has been assigned a value) and false otherwise */
+    public boolean isSetSetting() {
+      return this.setting != null;
+    }
+
+    public void setSettingIsSet(boolean value) {
+      if (!value) {
+        this.setting = null;
+      }
+    }
+
+    public int getScopesSize() {
+      return (this.scopes == null) ? 0 : this.scopes.size();
+    }
+
+    public java.util.Iterator<IteratorScope> getScopesIterator() {
+      return (this.scopes == null) ? null : this.scopes.iterator();
+    }
+
+    public void addToScopes(IteratorScope elem) {
+      if (this.scopes == null) {
+        this.scopes = new HashSet<IteratorScope>();
+      }
+      this.scopes.add(elem);
+    }
+
+    public Set<IteratorScope> getScopes() {
+      return this.scopes;
+    }
+
+    public attachNamespaceIterator_args setScopes(Set<IteratorScope> scopes) {
+      this.scopes = scopes;
+      return this;
+    }
+
+    public void unsetScopes() {
+      this.scopes = null;
+    }
+
+    /** Returns true if field scopes is set (has been assigned a value) and false otherwise */
+    public boolean isSetScopes() {
+      return this.scopes != null;
+    }
+
+    public void setScopesIsSet(boolean value) {
+      if (!value) {
+        this.scopes = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case SETTING:
+        if (value == null) {
+          unsetSetting();
+        } else {
+          setSetting((IteratorSetting)value);
+        }
+        break;
+
+      case SCOPES:
+        if (value == null) {
+          unsetScopes();
+        } else {
+          setScopes((Set<IteratorScope>)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case SETTING:
+        return getSetting();
+
+      case SCOPES:
+        return getScopes();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case SETTING:
+        return isSetSetting();
+      case SCOPES:
+        return isSetScopes();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof attachNamespaceIterator_args)
+        return this.equals((attachNamespaceIterator_args)that);
+      return false;
+    }
+
+    public boolean equals(attachNamespaceIterator_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_setting = true && this.isSetSetting();
+      boolean that_present_setting = true && that.isSetSetting();
+      if (this_present_setting || that_present_setting) {
+        if (!(this_present_setting && that_present_setting))
+          return false;
+        if (!this.setting.equals(that.setting))
+          return false;
+      }
+
+      boolean this_present_scopes = true && this.isSetScopes();
+      boolean that_present_scopes = true && that.isSetScopes();
+      if (this_present_scopes || that_present_scopes) {
+        if (!(this_present_scopes && that_present_scopes))
+          return false;
+        if (!this.scopes.equals(that.scopes))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_setting = true && (isSetSetting());
+      list.add(present_setting);
+      if (present_setting)
+        list.add(setting);
+
+      boolean present_scopes = true && (isSetScopes());
+      list.add(present_scopes);
+      if (present_scopes)
+        list.add(scopes);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(attachNamespaceIterator_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetSetting()).compareTo(other.isSetSetting());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSetting()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.setting, other.setting);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetScopes()).compareTo(other.isSetScopes());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetScopes()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.scopes, other.scopes);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("attachNamespaceIterator_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("setting:");
+      if (this.setting == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.setting);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("scopes:");
+      if (this.scopes == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.scopes);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+      if (setting != null) {
+        setting.validate();
+      }
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class attachNamespaceIterator_argsStandardSchemeFactory implements SchemeFactory {
+      public attachNamespaceIterator_argsStandardScheme getScheme() {
+        return new attachNamespaceIterator_argsStandardScheme();
+      }
+    }
+
+    private static class attachNamespaceIterator_argsStandardScheme extends StandardScheme<attachNamespaceIterator_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, attachNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // SETTING
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.setting = new IteratorSetting();
+                struct.setting.read(iprot);
+                struct.setSettingIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // SCOPES
+              if (schemeField.type == org.apache.thrift.protocol.TType.SET) {
+                {
+                  org.apache.thrift.protocol.TSet _set526 = iprot.readSetBegin();
+                  struct.scopes = new HashSet<IteratorScope>(2*_set526.size);
+                  IteratorScope _elem527;
+                  for (int _i528 = 0; _i528 < _set526.size; ++_i528)
+                  {
+                    _elem527 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                    struct.scopes.add(_elem527);
+                  }
+                  iprot.readSetEnd();
+                }
+                struct.setScopesIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, attachNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.setting != null) {
+          oprot.writeFieldBegin(SETTING_FIELD_DESC);
+          struct.setting.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.scopes != null) {
+          oprot.writeFieldBegin(SCOPES_FIELD_DESC);
+          {
+            oprot.writeSetBegin(new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, struct.scopes.size()));
+            for (IteratorScope _iter529 : struct.scopes)
+            {
+              oprot.writeI32(_iter529.getValue());
+            }
+            oprot.writeSetEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class attachNamespaceIterator_argsTupleSchemeFactory implements SchemeFactory {
+      public attachNamespaceIterator_argsTupleScheme getScheme() {
+        return new attachNamespaceIterator_argsTupleScheme();
+      }
+    }
+
+    private static class attachNamespaceIterator_argsTupleScheme extends TupleScheme<attachNamespaceIterator_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, attachNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetSetting()) {
+          optionals.set(2);
+        }
+        if (struct.isSetScopes()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetSetting()) {
+          struct.setting.write(oprot);
+        }
+        if (struct.isSetScopes()) {
+          {
+            oprot.writeI32(struct.scopes.size());
+            for (IteratorScope _iter530 : struct.scopes)
+            {
+              oprot.writeI32(_iter530.getValue());
+            }
+          }
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, attachNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.setting = new IteratorSetting();
+          struct.setting.read(iprot);
+          struct.setSettingIsSet(true);
+        }
+        if (incoming.get(3)) {
+          {
+            org.apache.thrift.protocol.TSet _set531 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
+            struct.scopes = new HashSet<IteratorScope>(2*_set531.size);
+            IteratorScope _elem532;
+            for (int _i533 = 0; _i533 < _set531.size; ++_i533)
+            {
+              _elem532 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+              struct.scopes.add(_elem532);
+            }
+          }
+          struct.setScopesIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class attachNamespaceIterator_result implements org.apache.thrift.TBase<attachNamespaceIterator_result, attachNamespaceIterator_result._Fields>, java.io.Serializable, Cloneable, Comparable<attachNamespaceIterator_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("attachNamespaceIterator_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new attachNamespaceIterator_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new attachNamespaceIterator_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(attachNamespaceIterator_result.class, metaDataMap);
+    }
+
+    public attachNamespaceIterator_result() {
+    }
+
+    public attachNamespaceIterator_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public attachNamespaceIterator_result(attachNamespaceIterator_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public attachNamespaceIterator_result deepCopy() {
+      return new attachNamespaceIterator_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public attachNamespaceIterator_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public attachNamespaceIterator_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public attachNamespaceIterator_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof attachNamespaceIterator_result)
+        return this.equals((attachNamespaceIterator_result)that);
+      return false;
+    }
+
+    public boolean equals(attachNamespaceIterator_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(attachNamespaceIterator_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("attachNamespaceIterator_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class attachNamespaceIterator_resultStandardSchemeFactory implements SchemeFactory {
+      public attachNamespaceIterator_resultStandardScheme getScheme() {
+        return new attachNamespaceIterator_resultStandardScheme();
+      }
+    }
+
+    private static class attachNamespaceIterator_resultStandardScheme extends StandardScheme<attachNamespaceIterator_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, attachNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, attachNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class attachNamespaceIterator_resultTupleSchemeFactory implements SchemeFactory {
+      public attachNamespaceIterator_resultTupleScheme getScheme() {
+        return new attachNamespaceIterator_resultTupleScheme();
+      }
+    }
+
+    private static class attachNamespaceIterator_resultTupleScheme extends TupleScheme<attachNamespaceIterator_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, attachNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, attachNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class removeNamespaceIterator_args implements org.apache.thrift.TBase<removeNamespaceIterator_args, removeNamespaceIterator_args._Fields>, java.io.Serializable, Cloneable, Comparable<removeNamespaceIterator_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("removeNamespaceIterator_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("name", org.apache.thrift.protocol.TType.STRING, (short)3);
+    private static final org.apache.thrift.protocol.TField SCOPES_FIELD_DESC = new org.apache.thrift.protocol.TField("scopes", org.apache.thrift.protocol.TType.SET, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new removeNamespaceIterator_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new removeNamespaceIterator_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public String name; // required
+    public Set<IteratorScope> scopes; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      NAME((short)3, "name"),
+      SCOPES((short)4, "scopes");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // NAME
+            return NAME;
+          case 4: // SCOPES
+            return SCOPES;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.NAME, new org.apache.thrift.meta_data.FieldMetaData("name", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.SCOPES, new org.apache.thrift.meta_data.FieldMetaData("scopes", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.SetMetaData(org.apache.thrift.protocol.TType.SET, 
+              new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, IteratorScope.class))));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(removeNamespaceIterator_args.class, metaDataMap);
+    }
+
+    public removeNamespaceIterator_args() {
+    }
+
+    public removeNamespaceIterator_args(
+      ByteBuffer login,
+      String namespaceName,
+      String name,
+      Set<IteratorScope> scopes)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.name = name;
+      this.scopes = scopes;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public removeNamespaceIterator_args(removeNamespaceIterator_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetName()) {
+        this.name = other.name;
+      }
+      if (other.isSetScopes()) {
+        Set<IteratorScope> __this__scopes = new HashSet<IteratorScope>(other.scopes.size());
+        for (IteratorScope other_element : other.scopes) {
+          __this__scopes.add(other_element);
+        }
+        this.scopes = __this__scopes;
+      }
+    }
+
+    public removeNamespaceIterator_args deepCopy() {
+      return new removeNamespaceIterator_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.name = null;
+      this.scopes = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public removeNamespaceIterator_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public removeNamespaceIterator_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public removeNamespaceIterator_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public String getName() {
+      return this.name;
+    }
+
+    public removeNamespaceIterator_args setName(String name) {
+      this.name = name;
+      return this;
+    }
+
+    public void unsetName() {
+      this.name = null;
+    }
+
+    /** Returns true if field name is set (has been assigned a value) and false otherwise */
+    public boolean isSetName() {
+      return this.name != null;
+    }
+
+    public void setNameIsSet(boolean value) {
+      if (!value) {
+        this.name = null;
+      }
+    }
+
+    public int getScopesSize() {
+      return (this.scopes == null) ? 0 : this.scopes.size();
+    }
+
+    public java.util.Iterator<IteratorScope> getScopesIterator() {
+      return (this.scopes == null) ? null : this.scopes.iterator();
+    }
+
+    public void addToScopes(IteratorScope elem) {
+      if (this.scopes == null) {
+        this.scopes = new HashSet<IteratorScope>();
+      }
+      this.scopes.add(elem);
+    }
+
+    public Set<IteratorScope> getScopes() {
+      return this.scopes;
+    }
+
+    public removeNamespaceIterator_args setScopes(Set<IteratorScope> scopes) {
+      this.scopes = scopes;
+      return this;
+    }
+
+    public void unsetScopes() {
+      this.scopes = null;
+    }
+
+    /** Returns true if field scopes is set (has been assigned a value) and false otherwise */
+    public boolean isSetScopes() {
+      return this.scopes != null;
+    }
+
+    public void setScopesIsSet(boolean value) {
+      if (!value) {
+        this.scopes = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case NAME:
+        if (value == null) {
+          unsetName();
+        } else {
+          setName((String)value);
+        }
+        break;
+
+      case SCOPES:
+        if (value == null) {
+          unsetScopes();
+        } else {
+          setScopes((Set<IteratorScope>)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case NAME:
+        return getName();
+
+      case SCOPES:
+        return getScopes();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case NAME:
+        return isSetName();
+      case SCOPES:
+        return isSetScopes();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof removeNamespaceIterator_args)
+        return this.equals((removeNamespaceIterator_args)that);
+      return false;
+    }
+
+    public boolean equals(removeNamespaceIterator_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_name = true && this.isSetName();
+      boolean that_present_name = true && that.isSetName();
+      if (this_present_name || that_present_name) {
+        if (!(this_present_name && that_present_name))
+          return false;
+        if (!this.name.equals(that.name))
+          return false;
+      }
+
+      boolean this_present_scopes = true && this.isSetScopes();
+      boolean that_present_scopes = true && that.isSetScopes();
+      if (this_present_scopes || that_present_scopes) {
+        if (!(this_present_scopes && that_present_scopes))
+          return false;
+        if (!this.scopes.equals(that.scopes))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_name = true && (isSetName());
+      list.add(present_name);
+      if (present_name)
+        list.add(name);
+
+      boolean present_scopes = true && (isSetScopes());
+      list.add(present_scopes);
+      if (present_scopes)
+        list.add(scopes);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(removeNamespaceIterator_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetName()).compareTo(other.isSetName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.name, other.name);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetScopes()).compareTo(other.isSetScopes());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetScopes()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.scopes, other.scopes);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("removeNamespaceIterator_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("name:");
+      if (this.name == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.name);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("scopes:");
+      if (this.scopes == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.scopes);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class removeNamespaceIterator_argsStandardSchemeFactory implements SchemeFactory {
+      public removeNamespaceIterator_argsStandardScheme getScheme() {
+        return new removeNamespaceIterator_argsStandardScheme();
+      }
+    }
+
+    private static class removeNamespaceIterator_argsStandardScheme extends StandardScheme<removeNamespaceIterator_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, removeNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.name = iprot.readString();
+                struct.setNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // SCOPES
+              if (schemeField.type == org.apache.thrift.protocol.TType.SET) {
+                {
+                  org.apache.thrift.protocol.TSet _set534 = iprot.readSetBegin();
+                  struct.scopes = new HashSet<IteratorScope>(2*_set534.size);
+                  IteratorScope _elem535;
+                  for (int _i536 = 0; _i536 < _set534.size; ++_i536)
+                  {
+                    _elem535 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                    struct.scopes.add(_elem535);
+                  }
+                  iprot.readSetEnd();
+                }
+                struct.setScopesIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, removeNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.name != null) {
+          oprot.writeFieldBegin(NAME_FIELD_DESC);
+          oprot.writeString(struct.name);
+          oprot.writeFieldEnd();
+        }
+        if (struct.scopes != null) {
+          oprot.writeFieldBegin(SCOPES_FIELD_DESC);
+          {
+            oprot.writeSetBegin(new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, struct.scopes.size()));
+            for (IteratorScope _iter537 : struct.scopes)
+            {
+              oprot.writeI32(_iter537.getValue());
+            }
+            oprot.writeSetEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class removeNamespaceIterator_argsTupleSchemeFactory implements SchemeFactory {
+      public removeNamespaceIterator_argsTupleScheme getScheme() {
+        return new removeNamespaceIterator_argsTupleScheme();
+      }
+    }
+
+    private static class removeNamespaceIterator_argsTupleScheme extends TupleScheme<removeNamespaceIterator_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, removeNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetName()) {
+          optionals.set(2);
+        }
+        if (struct.isSetScopes()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetName()) {
+          oprot.writeString(struct.name);
+        }
+        if (struct.isSetScopes()) {
+          {
+            oprot.writeI32(struct.scopes.size());
+            for (IteratorScope _iter538 : struct.scopes)
+            {
+              oprot.writeI32(_iter538.getValue());
+            }
+          }
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, removeNamespaceIterator_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.name = iprot.readString();
+          struct.setNameIsSet(true);
+        }
+        if (incoming.get(3)) {
+          {
+            org.apache.thrift.protocol.TSet _set539 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
+            struct.scopes = new HashSet<IteratorScope>(2*_set539.size);
+            IteratorScope _elem540;
+            for (int _i541 = 0; _i541 < _set539.size; ++_i541)
+            {
+              _elem540 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+              struct.scopes.add(_elem540);
+            }
+          }
+          struct.setScopesIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class removeNamespaceIterator_result implements org.apache.thrift.TBase<removeNamespaceIterator_result, removeNamespaceIterator_result._Fields>, java.io.Serializable, Cloneable, Comparable<removeNamespaceIterator_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("removeNamespaceIterator_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new removeNamespaceIterator_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new removeNamespaceIterator_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(removeNamespaceIterator_result.class, metaDataMap);
+    }
+
+    public removeNamespaceIterator_result() {
+    }
+
+    public removeNamespaceIterator_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public removeNamespaceIterator_result(removeNamespaceIterator_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public removeNamespaceIterator_result deepCopy() {
+      return new removeNamespaceIterator_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public removeNamespaceIterator_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public removeNamespaceIterator_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public removeNamespaceIterator_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof removeNamespaceIterator_result)
+        return this.equals((removeNamespaceIterator_result)that);
+      return false;
+    }
+
+    public boolean equals(removeNamespaceIterator_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(removeNamespaceIterator_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("removeNamespaceIterator_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class removeNamespaceIterator_resultStandardSchemeFactory implements SchemeFactory {
+      public removeNamespaceIterator_resultStandardScheme getScheme() {
+        return new removeNamespaceIterator_resultStandardScheme();
+      }
+    }
+
+    private static class removeNamespaceIterator_resultStandardScheme extends StandardScheme<removeNamespaceIterator_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, removeNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, removeNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class removeNamespaceIterator_resultTupleSchemeFactory implements SchemeFactory {
+      public removeNamespaceIterator_resultTupleScheme getScheme() {
+        return new removeNamespaceIterator_resultTupleScheme();
+      }
+    }
+
+    private static class removeNamespaceIterator_resultTupleScheme extends TupleScheme<removeNamespaceIterator_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, removeNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, removeNamespaceIterator_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class getNamespaceIteratorSetting_args implements org.apache.thrift.TBase<getNamespaceIteratorSetting_args, getNamespaceIteratorSetting_args._Fields>, java.io.Serializable, Cloneable, Comparable<getNamespaceIteratorSetting_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("getNamespaceIteratorSetting_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("name", org.apache.thrift.protocol.TType.STRING, (short)3);
+    private static final org.apache.thrift.protocol.TField SCOPE_FIELD_DESC = new org.apache.thrift.protocol.TField("scope", org.apache.thrift.protocol.TType.I32, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new getNamespaceIteratorSetting_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new getNamespaceIteratorSetting_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public String name; // required
+    /**
+     * 
+     * @see IteratorScope
+     */
+    public IteratorScope scope; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      NAME((short)3, "name"),
+      /**
+       * 
+       * @see IteratorScope
+       */
+      SCOPE((short)4, "scope");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // NAME
+            return NAME;
+          case 4: // SCOPE
+            return SCOPE;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.NAME, new org.apache.thrift.meta_data.FieldMetaData("name", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.SCOPE, new org.apache.thrift.meta_data.FieldMetaData("scope", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, IteratorScope.class)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(getNamespaceIteratorSetting_args.class, metaDataMap);
+    }
+
+    public getNamespaceIteratorSetting_args() {
+    }
+
+    public getNamespaceIteratorSetting_args(
+      ByteBuffer login,
+      String namespaceName,
+      String name,
+      IteratorScope scope)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.name = name;
+      this.scope = scope;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getNamespaceIteratorSetting_args(getNamespaceIteratorSetting_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetName()) {
+        this.name = other.name;
+      }
+      if (other.isSetScope()) {
+        this.scope = other.scope;
+      }
+    }
+
+    public getNamespaceIteratorSetting_args deepCopy() {
+      return new getNamespaceIteratorSetting_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.name = null;
+      this.scope = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public getNamespaceIteratorSetting_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public getNamespaceIteratorSetting_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public getNamespaceIteratorSetting_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public String getName() {
+      return this.name;
+    }
+
+    public getNamespaceIteratorSetting_args setName(String name) {
+      this.name = name;
+      return this;
+    }
+
+    public void unsetName() {
+      this.name = null;
+    }
+
+    /** Returns true if field name is set (has been assigned a value) and false otherwise */
+    public boolean isSetName() {
+      return this.name != null;
+    }
+
+    public void setNameIsSet(boolean value) {
+      if (!value) {
+        this.name = null;
+      }
+    }
+
+    /**
+     * 
+     * @see IteratorScope
+     */
+    public IteratorScope getScope() {
+      return this.scope;
+    }
+
+    /**
+     * 
+     * @see IteratorScope
+     */
+    public getNamespaceIteratorSetting_args setScope(IteratorScope scope) {
+      this.scope = scope;
+      return this;
+    }
+
+    public void unsetScope() {
+      this.scope = null;
+    }
+
+    /** Returns true if field scope is set (has been assigned a value) and false otherwise */
+    public boolean isSetScope() {
+      return this.scope != null;
+    }
+
+    public void setScopeIsSet(boolean value) {
+      if (!value) {
+        this.scope = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case NAME:
+        if (value == null) {
+          unsetName();
+        } else {
+          setName((String)value);
+        }
+        break;
+
+      case SCOPE:
+        if (value == null) {
+          unsetScope();
+        } else {
+          setScope((IteratorScope)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case NAME:
+        return getName();
+
+      case SCOPE:
+        return getScope();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case NAME:
+        return isSetName();
+      case SCOPE:
+        return isSetScope();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getNamespaceIteratorSetting_args)
+        return this.equals((getNamespaceIteratorSetting_args)that);
+      return false;
+    }
+
+    public boolean equals(getNamespaceIteratorSetting_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_name = true && this.isSetName();
+      boolean that_present_name = true && that.isSetName();
+      if (this_present_name || that_present_name) {
+        if (!(this_present_name && that_present_name))
+          return false;
+        if (!this.name.equals(that.name))
+          return false;
+      }
+
+      boolean this_present_scope = true && this.isSetScope();
+      boolean that_present_scope = true && that.isSetScope();
+      if (this_present_scope || that_present_scope) {
+        if (!(this_present_scope && that_present_scope))
+          return false;
+        if (!this.scope.equals(that.scope))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_name = true && (isSetName());
+      list.add(present_name);
+      if (present_name)
+        list.add(name);
+
+      boolean present_scope = true && (isSetScope());
+      list.add(present_scope);
+      if (present_scope)
+        list.add(scope.getValue());
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(getNamespaceIteratorSetting_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetName()).compareTo(other.isSetName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.name, other.name);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetScope()).compareTo(other.isSetScope());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetScope()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.scope, other.scope);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getNamespaceIteratorSetting_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("name:");
+      if (this.name == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.name);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("scope:");
+      if (this.scope == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.scope);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class getNamespaceIteratorSetting_argsStandardSchemeFactory implements SchemeFactory {
+      public getNamespaceIteratorSetting_argsStandardScheme getScheme() {
+        return new getNamespaceIteratorSetting_argsStandardScheme();
+      }
+    }
+
+    private static class getNamespaceIteratorSetting_argsStandardScheme extends StandardScheme<getNamespaceIteratorSetting_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, getNamespaceIteratorSetting_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.name = iprot.readString();
+                struct.setNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // SCOPE
+              if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+                struct.scope = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                struct.setScopeIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, getNamespaceIteratorSetting_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.name != null) {
+          oprot.writeFieldBegin(NAME_FIELD_DESC);
+          oprot.writeString(struct.name);
+          oprot.writeFieldEnd();
+        }
+        if (struct.scope != null) {
+          oprot.writeFieldBegin(SCOPE_FIELD_DESC);
+          oprot.writeI32(struct.scope.getValue());
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class getNamespaceIteratorSetting_argsTupleSchemeFactory implements SchemeFactory {
+      public getNamespaceIteratorSetting_argsTupleScheme getScheme() {
+        return new getNamespaceIteratorSetting_argsTupleScheme();
+      }
+    }
+
+    private static class getNamespaceIteratorSetting_argsTupleScheme extends TupleScheme<getNamespaceIteratorSetting_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, getNamespaceIteratorSetting_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetName()) {
+          optionals.set(2);
+        }
+        if (struct.isSetScope()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetName()) {
+          oprot.writeString(struct.name);
+        }
+        if (struct.isSetScope()) {
+          oprot.writeI32(struct.scope.getValue());
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, getNamespaceIteratorSetting_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.name = iprot.readString();
+          struct.setNameIsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.scope = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+          struct.setScopeIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class getNamespaceIteratorSetting_result implements org.apache.thrift.TBase<getNamespaceIteratorSetting_result, getNamespaceIteratorSetting_result._Fields>, java.io.Serializable, Cloneable, Comparable<getNamespaceIteratorSetting_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("getNamespaceIteratorSetting_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.STRUCT, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new getNamespaceIteratorSetting_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new getNamespaceIteratorSetting_resultTupleSchemeFactory());
+    }
+
+    public IteratorSetting success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, IteratorSetting.class)));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(getNamespaceIteratorSetting_result.class, metaDataMap);
+    }
+
+    public getNamespaceIteratorSetting_result() {
+    }
+
+    public getNamespaceIteratorSetting_result(
+      IteratorSetting success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.success = success;
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getNamespaceIteratorSetting_result(getNamespaceIteratorSetting_result other) {
+      if (other.isSetSuccess()) {
+        this.success = new IteratorSetting(other.success);
+      }
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public getNamespaceIteratorSetting_result deepCopy() {
+      return new getNamespaceIteratorSetting_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public IteratorSetting getSuccess() {
+      return this.success;
+    }
+
+    public getNamespaceIteratorSetting_result setSuccess(IteratorSetting success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public getNamespaceIteratorSetting_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public getNamespaceIteratorSetting_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public getNamespaceIteratorSetting_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((IteratorSetting)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getNamespaceIteratorSetting_result)
+        return this.equals((getNamespaceIteratorSetting_result)that);
+      return false;
+    }
+
+    public boolean equals(getNamespaceIteratorSetting_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(getNamespaceIteratorSetting_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getNamespaceIteratorSetting_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+      if (success != null) {
+        success.validate();
+      }
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class getNamespaceIteratorSetting_resultStandardSchemeFactory implements SchemeFactory {
+      public getNamespaceIteratorSetting_resultStandardScheme getScheme() {
+        return new getNamespaceIteratorSetting_resultStandardScheme();
+      }
+    }
+
+    private static class getNamespaceIteratorSetting_resultStandardScheme extends StandardScheme<getNamespaceIteratorSetting_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, getNamespaceIteratorSetting_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.success = new IteratorSetting();
+                struct.success.read(iprot);
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, getNamespaceIteratorSetting_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          struct.success.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class getNamespaceIteratorSetting_resultTupleSchemeFactory implements SchemeFactory {
+      public getNamespaceIteratorSetting_resultTupleScheme getScheme() {
+        return new getNamespaceIteratorSetting_resultTupleScheme();
+      }
+    }
+
+    private static class getNamespaceIteratorSetting_resultTupleScheme extends TupleScheme<getNamespaceIteratorSetting_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, getNamespaceIteratorSetting_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetSuccess()) {
+          struct.success.write(oprot);
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, getNamespaceIteratorSetting_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.success = new IteratorSetting();
+          struct.success.read(iprot);
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class listNamespaceIterators_args implements org.apache.thrift.TBase<listNamespaceIterators_args, listNamespaceIterators_args._Fields>, java.io.Serializable, Cloneable, Comparable<listNamespaceIterators_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("listNamespaceIterators_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new listNamespaceIterators_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new listNamespaceIterators_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(listNamespaceIterators_args.class, metaDataMap);
+    }
+
+    public listNamespaceIterators_args() {
+    }
+
+    public listNamespaceIterators_args(
+      ByteBuffer login,
+      String namespaceName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public listNamespaceIterators_args(listNamespaceIterators_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+    }
+
+    public listNamespaceIterators_args deepCopy() {
+      return new listNamespaceIterators_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public listNamespaceIterators_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public listNamespaceIterators_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public listNamespaceIterators_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof listNamespaceIterators_args)
+        return this.equals((listNamespaceIterators_args)that);
+      return false;
+    }
+
+    public boolean equals(listNamespaceIterators_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(listNamespaceIterators_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("listNamespaceIterators_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class listNamespaceIterators_argsStandardSchemeFactory implements SchemeFactory {
+      public listNamespaceIterators_argsStandardScheme getScheme() {
+        return new listNamespaceIterators_argsStandardScheme();
+      }
+    }
+
+    private static class listNamespaceIterators_argsStandardScheme extends StandardScheme<listNamespaceIterators_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, listNamespaceIterators_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, listNamespaceIterators_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class listNamespaceIterators_argsTupleSchemeFactory implements SchemeFactory {
+      public listNamespaceIterators_argsTupleScheme getScheme() {
+        return new listNamespaceIterators_argsTupleScheme();
+      }
+    }
+
+    private static class listNamespaceIterators_argsTupleScheme extends TupleScheme<listNamespaceIterators_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, listNamespaceIterators_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, listNamespaceIterators_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class listNamespaceIterators_result implements org.apache.thrift.TBase<listNamespaceIterators_result, listNamespaceIterators_result._Fields>, java.io.Serializable, Cloneable, Comparable<listNamespaceIterators_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("listNamespaceIterators_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.MAP, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new listNamespaceIterators_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new listNamespaceIterators_resultTupleSchemeFactory());
+    }
+
+    public Map<String,Set<IteratorScope>> success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.MapMetaData(org.apache.thrift.protocol.TType.MAP, 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING), 
+              new org.apache.thrift.meta_data.SetMetaData(org.apache.thrift.protocol.TType.SET, 
+                  new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, IteratorScope.class)))));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(listNamespaceIterators_result.class, metaDataMap);
+    }
+
+    public listNamespaceIterators_result() {
+    }
+
+    public listNamespaceIterators_result(
+      Map<String,Set<IteratorScope>> success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.success = success;
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public listNamespaceIterators_result(listNamespaceIterators_result other) {
+      if (other.isSetSuccess()) {
+        Map<String,Set<IteratorScope>> __this__success = new HashMap<String,Set<IteratorScope>>(other.success.size());
+        for (Map.Entry<String, Set<IteratorScope>> other_element : other.success.entrySet()) {
+
+          String other_element_key = other_element.getKey();
+          Set<IteratorScope> other_element_value = other_element.getValue();
+
+          String __this__success_copy_key = other_element_key;
+
+          Set<IteratorScope> __this__success_copy_value = new HashSet<IteratorScope>(other_element_value.size());
+          for (IteratorScope other_element_value_element : other_element_value) {
+            __this__success_copy_value.add(other_element_value_element);
+          }
+
+          __this__success.put(__this__success_copy_key, __this__success_copy_value);
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public listNamespaceIterators_result deepCopy() {
+      return new listNamespaceIterators_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public void putToSuccess(String key, Set<IteratorScope> val) {
+      if (this.success == null) {
+        this.success = new HashMap<String,Set<IteratorScope>>();
+      }
+      this.success.put(key, val);
+    }
+
+    public Map<String,Set<IteratorScope>> getSuccess() {
+      return this.success;
+    }
+
+    public listNamespaceIterators_result setSuccess(Map<String,Set<IteratorScope>> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public listNamespaceIterators_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public listNamespaceIterators_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public listNamespaceIterators_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Map<String,Set<IteratorScope>>)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof listNamespaceIterators_result)
+        return this.equals((listNamespaceIterators_result)that);
+      return false;
+    }
+
+    public boolean equals(listNamespaceIterators_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(listNamespaceIterators_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("listNamespaceIterators_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class listNamespaceIterators_resultStandardSchemeFactory implements SchemeFactory {
+      public listNamespaceIterators_resultStandardScheme getScheme() {
+        return new listNamespaceIterators_resultStandardScheme();
+      }
+    }
+
+    private static class listNamespaceIterators_resultStandardScheme extends StandardScheme<listNamespaceIterators_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, listNamespaceIterators_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
+                {
+                  org.apache.thrift.protocol.TMap _map542 = iprot.readMapBegin();
+                  struct.success = new HashMap<String,Set<IteratorScope>>(2*_map542.size);
+                  String _key543;
+                  Set<IteratorScope> _val544;
+                  for (int _i545 = 0; _i545 < _map542.size; ++_i545)
+                  {
+                    _key543 = iprot.readString();
+                    {
+                      org.apache.thrift.protocol.TSet _set546 = iprot.readSetBegin();
+                      _val544 = new HashSet<IteratorScope>(2*_set546.size);
+                      IteratorScope _elem547;
+                      for (int _i548 = 0; _i548 < _set546.size; ++_i548)
+                      {
+                        _elem547 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                        _val544.add(_elem547);
+                      }
+                      iprot.readSetEnd();
+                    }
+                    struct.success.put(_key543, _val544);
+                  }
+                  iprot.readMapEnd();
+                }
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, listNamespaceIterators_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.SET, struct.success.size()));
+            for (Map.Entry<String, Set<IteratorScope>> _iter549 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter549.getKey());
+              {
+                oprot.writeSetBegin(new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, _iter549.getValue().size()));
+                for (IteratorScope _iter550 : _iter549.getValue())
+                {
+                  oprot.writeI32(_iter550.getValue());
+                }
+                oprot.writeSetEnd();
+              }
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class listNamespaceIterators_resultTupleSchemeFactory implements SchemeFactory {
+      public listNamespaceIterators_resultTupleScheme getScheme() {
+        return new listNamespaceIterators_resultTupleScheme();
+      }
+    }
+
+    private static class listNamespaceIterators_resultTupleScheme extends TupleScheme<listNamespaceIterators_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, listNamespaceIterators_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetSuccess()) {
+          {
+            oprot.writeI32(struct.success.size());
+            for (Map.Entry<String, Set<IteratorScope>> _iter551 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter551.getKey());
+              {
+                oprot.writeI32(_iter551.getValue().size());
+                for (IteratorScope _iter552 : _iter551.getValue())
+                {
+                  oprot.writeI32(_iter552.getValue());
+                }
+              }
+            }
+          }
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, listNamespaceIterators_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          {
+            org.apache.thrift.protocol.TMap _map553 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.SET, iprot.readI32());
+            struct.success = new HashMap<String,Set<IteratorScope>>(2*_map553.size);
+            String _key554;
+            Set<IteratorScope> _val555;
+            for (int _i556 = 0; _i556 < _map553.size; ++_i556)
+            {
+              _key554 = iprot.readString();
+              {
+                org.apache.thrift.protocol.TSet _set557 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
+                _val555 = new HashSet<IteratorScope>(2*_set557.size);
+                IteratorScope _elem558;
+                for (int _i559 = 0; _i559 < _set557.size; ++_i559)
+                {
+                  _elem558 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                  _val555.add(_elem558);
+                }
+              }
+              struct.success.put(_key554, _val555);
+            }
+          }
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class checkNamespaceIteratorConflicts_args implements org.apache.thrift.TBase<checkNamespaceIteratorConflicts_args, checkNamespaceIteratorConflicts_args._Fields>, java.io.Serializable, Cloneable, Comparable<checkNamespaceIteratorConflicts_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("checkNamespaceIteratorConflicts_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField SETTING_FIELD_DESC = new org.apache.thrift.protocol.TField("setting", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+    private static final org.apache.thrift.protocol.TField SCOPES_FIELD_DESC = new org.apache.thrift.protocol.TField("scopes", org.apache.thrift.protocol.TType.SET, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new checkNamespaceIteratorConflicts_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new checkNamespaceIteratorConflicts_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public IteratorSetting setting; // required
+    public Set<IteratorScope> scopes; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      SETTING((short)3, "setting"),
+      SCOPES((short)4, "scopes");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // SETTING
+            return SETTING;
+          case 4: // SCOPES
+            return SCOPES;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.SETTING, new org.apache.thrift.meta_data.FieldMetaData("setting", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.StructMetaData(org.apache.thrift.protocol.TType.STRUCT, IteratorSetting.class)));
+      tmpMap.put(_Fields.SCOPES, new org.apache.thrift.meta_data.FieldMetaData("scopes", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.SetMetaData(org.apache.thrift.protocol.TType.SET, 
+              new org.apache.thrift.meta_data.EnumMetaData(org.apache.thrift.protocol.TType.ENUM, IteratorScope.class))));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(checkNamespaceIteratorConflicts_args.class, metaDataMap);
+    }
+
+    public checkNamespaceIteratorConflicts_args() {
+    }
+
+    public checkNamespaceIteratorConflicts_args(
+      ByteBuffer login,
+      String namespaceName,
+      IteratorSetting setting,
+      Set<IteratorScope> scopes)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.setting = setting;
+      this.scopes = scopes;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public checkNamespaceIteratorConflicts_args(checkNamespaceIteratorConflicts_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetSetting()) {
+        this.setting = new IteratorSetting(other.setting);
+      }
+      if (other.isSetScopes()) {
+        Set<IteratorScope> __this__scopes = new HashSet<IteratorScope>(other.scopes.size());
+        for (IteratorScope other_element : other.scopes) {
+          __this__scopes.add(other_element);
+        }
+        this.scopes = __this__scopes;
+      }
+    }
+
+    public checkNamespaceIteratorConflicts_args deepCopy() {
+      return new checkNamespaceIteratorConflicts_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.setting = null;
+      this.scopes = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public checkNamespaceIteratorConflicts_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public checkNamespaceIteratorConflicts_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public checkNamespaceIteratorConflicts_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public IteratorSetting getSetting() {
+      return this.setting;
+    }
+
+    public checkNamespaceIteratorConflicts_args setSetting(IteratorSetting setting) {
+      this.setting = setting;
+      return this;
+    }
+
+    public void unsetSetting() {
+      this.setting = null;
+    }
+
+    /** Returns true if field setting is set (has been assigned a value) and false otherwise */
+    public boolean isSetSetting() {
+      return this.setting != null;
+    }
+
+    public void setSettingIsSet(boolean value) {
+      if (!value) {
+        this.setting = null;
+      }
+    }
+
+    public int getScopesSize() {
+      return (this.scopes == null) ? 0 : this.scopes.size();
+    }
+
+    public java.util.Iterator<IteratorScope> getScopesIterator() {
+      return (this.scopes == null) ? null : this.scopes.iterator();
+    }
+
+    public void addToScopes(IteratorScope elem) {
+      if (this.scopes == null) {
+        this.scopes = new HashSet<IteratorScope>();
+      }
+      this.scopes.add(elem);
+    }
+
+    public Set<IteratorScope> getScopes() {
+      return this.scopes;
+    }
+
+    public checkNamespaceIteratorConflicts_args setScopes(Set<IteratorScope> scopes) {
+      this.scopes = scopes;
+      return this;
+    }
+
+    public void unsetScopes() {
+      this.scopes = null;
+    }
+
+    /** Returns true if field scopes is set (has been assigned a value) and false otherwise */
+    public boolean isSetScopes() {
+      return this.scopes != null;
+    }
+
+    public void setScopesIsSet(boolean value) {
+      if (!value) {
+        this.scopes = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case SETTING:
+        if (value == null) {
+          unsetSetting();
+        } else {
+          setSetting((IteratorSetting)value);
+        }
+        break;
+
+      case SCOPES:
+        if (value == null) {
+          unsetScopes();
+        } else {
+          setScopes((Set<IteratorScope>)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case SETTING:
+        return getSetting();
+
+      case SCOPES:
+        return getScopes();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case SETTING:
+        return isSetSetting();
+      case SCOPES:
+        return isSetScopes();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof checkNamespaceIteratorConflicts_args)
+        return this.equals((checkNamespaceIteratorConflicts_args)that);
+      return false;
+    }
+
+    public boolean equals(checkNamespaceIteratorConflicts_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_setting = true && this.isSetSetting();
+      boolean that_present_setting = true && that.isSetSetting();
+      if (this_present_setting || that_present_setting) {
+        if (!(this_present_setting && that_present_setting))
+          return false;
+        if (!this.setting.equals(that.setting))
+          return false;
+      }
+
+      boolean this_present_scopes = true && this.isSetScopes();
+      boolean that_present_scopes = true && that.isSetScopes();
+      if (this_present_scopes || that_present_scopes) {
+        if (!(this_present_scopes && that_present_scopes))
+          return false;
+        if (!this.scopes.equals(that.scopes))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_setting = true && (isSetSetting());
+      list.add(present_setting);
+      if (present_setting)
+        list.add(setting);
+
+      boolean present_scopes = true && (isSetScopes());
+      list.add(present_scopes);
+      if (present_scopes)
+        list.add(scopes);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(checkNamespaceIteratorConflicts_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetSetting()).compareTo(other.isSetSetting());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSetting()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.setting, other.setting);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetScopes()).compareTo(other.isSetScopes());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetScopes()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.scopes, other.scopes);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("checkNamespaceIteratorConflicts_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("setting:");
+      if (this.setting == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.setting);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("scopes:");
+      if (this.scopes == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.scopes);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+      if (setting != null) {
+        setting.validate();
+      }
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class checkNamespaceIteratorConflicts_argsStandardSchemeFactory implements SchemeFactory {
+      public checkNamespaceIteratorConflicts_argsStandardScheme getScheme() {
+        return new checkNamespaceIteratorConflicts_argsStandardScheme();
+      }
+    }
+
+    private static class checkNamespaceIteratorConflicts_argsStandardScheme extends StandardScheme<checkNamespaceIteratorConflicts_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, checkNamespaceIteratorConflicts_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // SETTING
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.setting = new IteratorSetting();
+                struct.setting.read(iprot);
+                struct.setSettingIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // SCOPES
+              if (schemeField.type == org.apache.thrift.protocol.TType.SET) {
+                {
+                  org.apache.thrift.protocol.TSet _set560 = iprot.readSetBegin();
+                  struct.scopes = new HashSet<IteratorScope>(2*_set560.size);
+                  IteratorScope _elem561;
+                  for (int _i562 = 0; _i562 < _set560.size; ++_i562)
+                  {
+                    _elem561 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+                    struct.scopes.add(_elem561);
+                  }
+                  iprot.readSetEnd();
+                }
+                struct.setScopesIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, checkNamespaceIteratorConflicts_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.setting != null) {
+          oprot.writeFieldBegin(SETTING_FIELD_DESC);
+          struct.setting.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.scopes != null) {
+          oprot.writeFieldBegin(SCOPES_FIELD_DESC);
+          {
+            oprot.writeSetBegin(new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, struct.scopes.size()));
+            for (IteratorScope _iter563 : struct.scopes)
+            {
+              oprot.writeI32(_iter563.getValue());
+            }
+            oprot.writeSetEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class checkNamespaceIteratorConflicts_argsTupleSchemeFactory implements SchemeFactory {
+      public checkNamespaceIteratorConflicts_argsTupleScheme getScheme() {
+        return new checkNamespaceIteratorConflicts_argsTupleScheme();
+      }
+    }
+
+    private static class checkNamespaceIteratorConflicts_argsTupleScheme extends TupleScheme<checkNamespaceIteratorConflicts_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, checkNamespaceIteratorConflicts_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetSetting()) {
+          optionals.set(2);
+        }
+        if (struct.isSetScopes()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetSetting()) {
+          struct.setting.write(oprot);
+        }
+        if (struct.isSetScopes()) {
+          {
+            oprot.writeI32(struct.scopes.size());
+            for (IteratorScope _iter564 : struct.scopes)
+            {
+              oprot.writeI32(_iter564.getValue());
+            }
+          }
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, checkNamespaceIteratorConflicts_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.setting = new IteratorSetting();
+          struct.setting.read(iprot);
+          struct.setSettingIsSet(true);
+        }
+        if (incoming.get(3)) {
+          {
+            org.apache.thrift.protocol.TSet _set565 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.I32, iprot.readI32());
+            struct.scopes = new HashSet<IteratorScope>(2*_set565.size);
+            IteratorScope _elem566;
+            for (int _i567 = 0; _i567 < _set565.size; ++_i567)
+            {
+              _elem566 = org.apache.accumulo.proxy.thrift.IteratorScope.findByValue(iprot.readI32());
+              struct.scopes.add(_elem566);
+            }
+          }
+          struct.setScopesIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class checkNamespaceIteratorConflicts_result implements org.apache.thrift.TBase<checkNamespaceIteratorConflicts_result, checkNamespaceIteratorConflicts_result._Fields>, java.io.Serializable, Cloneable, Comparable<checkNamespaceIteratorConflicts_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("checkNamespaceIteratorConflicts_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new checkNamespaceIteratorConflicts_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new checkNamespaceIteratorConflicts_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(checkNamespaceIteratorConflicts_result.class, metaDataMap);
+    }
+
+    public checkNamespaceIteratorConflicts_result() {
+    }
+
+    public checkNamespaceIteratorConflicts_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public checkNamespaceIteratorConflicts_result(checkNamespaceIteratorConflicts_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public checkNamespaceIteratorConflicts_result deepCopy() {
+      return new checkNamespaceIteratorConflicts_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public checkNamespaceIteratorConflicts_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public checkNamespaceIteratorConflicts_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public checkNamespaceIteratorConflicts_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof checkNamespaceIteratorConflicts_result)
+        return this.equals((checkNamespaceIteratorConflicts_result)that);
+      return false;
+    }
+
+    public boolean equals(checkNamespaceIteratorConflicts_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(checkNamespaceIteratorConflicts_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("checkNamespaceIteratorConflicts_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class checkNamespaceIteratorConflicts_resultStandardSchemeFactory implements SchemeFactory {
+      public checkNamespaceIteratorConflicts_resultStandardScheme getScheme() {
+        return new checkNamespaceIteratorConflicts_resultStandardScheme();
+      }
+    }
+
+    private static class checkNamespaceIteratorConflicts_resultStandardScheme extends StandardScheme<checkNamespaceIteratorConflicts_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, checkNamespaceIteratorConflicts_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, checkNamespaceIteratorConflicts_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class checkNamespaceIteratorConflicts_resultTupleSchemeFactory implements SchemeFactory {
+      public checkNamespaceIteratorConflicts_resultTupleScheme getScheme() {
+        return new checkNamespaceIteratorConflicts_resultTupleScheme();
+      }
+    }
+
+    private static class checkNamespaceIteratorConflicts_resultTupleScheme extends TupleScheme<checkNamespaceIteratorConflicts_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, checkNamespaceIteratorConflicts_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, checkNamespaceIteratorConflicts_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class addNamespaceConstraint_args implements org.apache.thrift.TBase<addNamespaceConstraint_args, addNamespaceConstraint_args._Fields>, java.io.Serializable, Cloneable, Comparable<addNamespaceConstraint_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("addNamespaceConstraint_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField CONSTRAINT_CLASS_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("constraintClassName", org.apache.thrift.protocol.TType.STRING, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new addNamespaceConstraint_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new addNamespaceConstraint_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public String constraintClassName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      CONSTRAINT_CLASS_NAME((short)3, "constraintClassName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // CONSTRAINT_CLASS_NAME
+            return CONSTRAINT_CLASS_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.CONSTRAINT_CLASS_NAME, new org.apache.thrift.meta_data.FieldMetaData("constraintClassName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(addNamespaceConstraint_args.class, metaDataMap);
+    }
+
+    public addNamespaceConstraint_args() {
+    }
+
+    public addNamespaceConstraint_args(
+      ByteBuffer login,
+      String namespaceName,
+      String constraintClassName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.constraintClassName = constraintClassName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public addNamespaceConstraint_args(addNamespaceConstraint_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetConstraintClassName()) {
+        this.constraintClassName = other.constraintClassName;
+      }
+    }
+
+    public addNamespaceConstraint_args deepCopy() {
+      return new addNamespaceConstraint_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.constraintClassName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public addNamespaceConstraint_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public addNamespaceConstraint_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public addNamespaceConstraint_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public String getConstraintClassName() {
+      return this.constraintClassName;
+    }
+
+    public addNamespaceConstraint_args setConstraintClassName(String constraintClassName) {
+      this.constraintClassName = constraintClassName;
+      return this;
+    }
+
+    public void unsetConstraintClassName() {
+      this.constraintClassName = null;
+    }
+
+    /** Returns true if field constraintClassName is set (has been assigned a value) and false otherwise */
+    public boolean isSetConstraintClassName() {
+      return this.constraintClassName != null;
+    }
+
+    public void setConstraintClassNameIsSet(boolean value) {
+      if (!value) {
+        this.constraintClassName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case CONSTRAINT_CLASS_NAME:
+        if (value == null) {
+          unsetConstraintClassName();
+        } else {
+          setConstraintClassName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case CONSTRAINT_CLASS_NAME:
+        return getConstraintClassName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case CONSTRAINT_CLASS_NAME:
+        return isSetConstraintClassName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof addNamespaceConstraint_args)
+        return this.equals((addNamespaceConstraint_args)that);
+      return false;
+    }
+
+    public boolean equals(addNamespaceConstraint_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_constraintClassName = true && this.isSetConstraintClassName();
+      boolean that_present_constraintClassName = true && that.isSetConstraintClassName();
+      if (this_present_constraintClassName || that_present_constraintClassName) {
+        if (!(this_present_constraintClassName && that_present_constraintClassName))
+          return false;
+        if (!this.constraintClassName.equals(that.constraintClassName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_constraintClassName = true && (isSetConstraintClassName());
+      list.add(present_constraintClassName);
+      if (present_constraintClassName)
+        list.add(constraintClassName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(addNamespaceConstraint_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetConstraintClassName()).compareTo(other.isSetConstraintClassName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetConstraintClassName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.constraintClassName, other.constraintClassName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("addNamespaceConstraint_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("constraintClassName:");
+      if (this.constraintClassName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.constraintClassName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class addNamespaceConstraint_argsStandardSchemeFactory implements SchemeFactory {
+      public addNamespaceConstraint_argsStandardScheme getScheme() {
+        return new addNamespaceConstraint_argsStandardScheme();
+      }
+    }
+
+    private static class addNamespaceConstraint_argsStandardScheme extends StandardScheme<addNamespaceConstraint_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, addNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // CONSTRAINT_CLASS_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.constraintClassName = iprot.readString();
+                struct.setConstraintClassNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, addNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.constraintClassName != null) {
+          oprot.writeFieldBegin(CONSTRAINT_CLASS_NAME_FIELD_DESC);
+          oprot.writeString(struct.constraintClassName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class addNamespaceConstraint_argsTupleSchemeFactory implements SchemeFactory {
+      public addNamespaceConstraint_argsTupleScheme getScheme() {
+        return new addNamespaceConstraint_argsTupleScheme();
+      }
+    }
+
+    private static class addNamespaceConstraint_argsTupleScheme extends TupleScheme<addNamespaceConstraint_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, addNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetConstraintClassName()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetConstraintClassName()) {
+          oprot.writeString(struct.constraintClassName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, addNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.constraintClassName = iprot.readString();
+          struct.setConstraintClassNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class addNamespaceConstraint_result implements org.apache.thrift.TBase<addNamespaceConstraint_result, addNamespaceConstraint_result._Fields>, java.io.Serializable, Cloneable, Comparable<addNamespaceConstraint_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("addNamespaceConstraint_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.I32, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new addNamespaceConstraint_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new addNamespaceConstraint_resultTupleSchemeFactory());
+    }
+
+    public int success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private byte __isset_bitfield = 0;
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I32)));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(addNamespaceConstraint_result.class, metaDataMap);
+    }
+
+    public addNamespaceConstraint_result() {
+    }
+
+    public addNamespaceConstraint_result(
+      int success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public addNamespaceConstraint_result(addNamespaceConstraint_result other) {
+      __isset_bitfield = other.__isset_bitfield;
+      this.success = other.success;
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public addNamespaceConstraint_result deepCopy() {
+      return new addNamespaceConstraint_result(this);
+    }
+
+    @Override
+    public void clear() {
+      setSuccessIsSet(false);
+      this.success = 0;
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public int getSuccess() {
+      return this.success;
+    }
+
+    public addNamespaceConstraint_result setSuccess(int success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return EncodingUtils.testBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __SUCCESS_ISSET_ID, value);
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public addNamespaceConstraint_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public addNamespaceConstraint_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public addNamespaceConstraint_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Integer)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof addNamespaceConstraint_result)
+        return this.equals((addNamespaceConstraint_result)that);
+      return false;
+    }
+
+    public boolean equals(addNamespaceConstraint_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(addNamespaceConstraint_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("addNamespaceConstraint_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        // it doesn't seem like you should have to do this, but java serialization is wacky, and doesn't call the default constructor.
+        __isset_bitfield = 0;
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class addNamespaceConstraint_resultStandardSchemeFactory implements SchemeFactory {
+      public addNamespaceConstraint_resultStandardScheme getScheme() {
+        return new addNamespaceConstraint_resultStandardScheme();
+      }
+    }
+
+    private static class addNamespaceConstraint_resultStandardScheme extends StandardScheme<addNamespaceConstraint_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, addNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+                struct.success = iprot.readI32();
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, addNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.isSetSuccess()) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          oprot.writeI32(struct.success);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class addNamespaceConstraint_resultTupleSchemeFactory implements SchemeFactory {
+      public addNamespaceConstraint_resultTupleScheme getScheme() {
+        return new addNamespaceConstraint_resultTupleScheme();
+      }
+    }
+
+    private static class addNamespaceConstraint_resultTupleScheme extends TupleScheme<addNamespaceConstraint_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, addNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetSuccess()) {
+          oprot.writeI32(struct.success);
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, addNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.success = iprot.readI32();
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class removeNamespaceConstraint_args implements org.apache.thrift.TBase<removeNamespaceConstraint_args, removeNamespaceConstraint_args._Fields>, java.io.Serializable, Cloneable, Comparable<removeNamespaceConstraint_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("removeNamespaceConstraint_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField ID_FIELD_DESC = new org.apache.thrift.protocol.TField("id", org.apache.thrift.protocol.TType.I32, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new removeNamespaceConstraint_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new removeNamespaceConstraint_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public int id; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      ID((short)3, "id");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // ID
+            return ID;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __ID_ISSET_ID = 0;
+    private byte __isset_bitfield = 0;
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.ID, new org.apache.thrift.meta_data.FieldMetaData("id", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I32)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(removeNamespaceConstraint_args.class, metaDataMap);
+    }
+
+    public removeNamespaceConstraint_args() {
+    }
+
+    public removeNamespaceConstraint_args(
+      ByteBuffer login,
+      String namespaceName,
+      int id)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.id = id;
+      setIdIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public removeNamespaceConstraint_args(removeNamespaceConstraint_args other) {
+      __isset_bitfield = other.__isset_bitfield;
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      this.id = other.id;
+    }
+
+    public removeNamespaceConstraint_args deepCopy() {
+      return new removeNamespaceConstraint_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      setIdIsSet(false);
+      this.id = 0;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public removeNamespaceConstraint_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public removeNamespaceConstraint_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public removeNamespaceConstraint_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public int getId() {
+      return this.id;
+    }
+
+    public removeNamespaceConstraint_args setId(int id) {
+      this.id = id;
+      setIdIsSet(true);
+      return this;
+    }
+
+    public void unsetId() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __ID_ISSET_ID);
+    }
+
+    /** Returns true if field id is set (has been assigned a value) and false otherwise */
+    public boolean isSetId() {
+      return EncodingUtils.testBit(__isset_bitfield, __ID_ISSET_ID);
+    }
+
+    public void setIdIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __ID_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case ID:
+        if (value == null) {
+          unsetId();
+        } else {
+          setId((Integer)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case ID:
+        return getId();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case ID:
+        return isSetId();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof removeNamespaceConstraint_args)
+        return this.equals((removeNamespaceConstraint_args)that);
+      return false;
+    }
+
+    public boolean equals(removeNamespaceConstraint_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_id = true;
+      boolean that_present_id = true;
+      if (this_present_id || that_present_id) {
+        if (!(this_present_id && that_present_id))
+          return false;
+        if (this.id != that.id)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_id = true;
+      list.add(present_id);
+      if (present_id)
+        list.add(id);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(removeNamespaceConstraint_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetId()).compareTo(other.isSetId());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetId()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.id, other.id);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("removeNamespaceConstraint_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("id:");
+      sb.append(this.id);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        // it doesn't seem like you should have to do this, but java serialization is wacky, and doesn't call the default constructor.
+        __isset_bitfield = 0;
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class removeNamespaceConstraint_argsStandardSchemeFactory implements SchemeFactory {
+      public removeNamespaceConstraint_argsStandardScheme getScheme() {
+        return new removeNamespaceConstraint_argsStandardScheme();
+      }
+    }
+
+    private static class removeNamespaceConstraint_argsStandardScheme extends StandardScheme<removeNamespaceConstraint_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, removeNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // ID
+              if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
+                struct.id = iprot.readI32();
+                struct.setIdIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, removeNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldBegin(ID_FIELD_DESC);
+        oprot.writeI32(struct.id);
+        oprot.writeFieldEnd();
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class removeNamespaceConstraint_argsTupleSchemeFactory implements SchemeFactory {
+      public removeNamespaceConstraint_argsTupleScheme getScheme() {
+        return new removeNamespaceConstraint_argsTupleScheme();
+      }
+    }
+
+    private static class removeNamespaceConstraint_argsTupleScheme extends TupleScheme<removeNamespaceConstraint_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, removeNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetId()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetId()) {
+          oprot.writeI32(struct.id);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, removeNamespaceConstraint_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.id = iprot.readI32();
+          struct.setIdIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class removeNamespaceConstraint_result implements org.apache.thrift.TBase<removeNamespaceConstraint_result, removeNamespaceConstraint_result._Fields>, java.io.Serializable, Cloneable, Comparable<removeNamespaceConstraint_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("removeNamespaceConstraint_result");
+
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new removeNamespaceConstraint_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new removeNamespaceConstraint_resultTupleSchemeFactory());
+    }
+
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(removeNamespaceConstraint_result.class, metaDataMap);
+    }
+
+    public removeNamespaceConstraint_result() {
+    }
+
+    public removeNamespaceConstraint_result(
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public removeNamespaceConstraint_result(removeNamespaceConstraint_result other) {
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public removeNamespaceConstraint_result deepCopy() {
+      return new removeNamespaceConstraint_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public removeNamespaceConstraint_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public removeNamespaceConstraint_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public removeNamespaceConstraint_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof removeNamespaceConstraint_result)
+        return this.equals((removeNamespaceConstraint_result)that);
+      return false;
+    }
+
+    public boolean equals(removeNamespaceConstraint_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(removeNamespaceConstraint_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("removeNamespaceConstraint_result(");
+      boolean first = true;
+
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class removeNamespaceConstraint_resultStandardSchemeFactory implements SchemeFactory {
+      public removeNamespaceConstraint_resultStandardScheme getScheme() {
+        return new removeNamespaceConstraint_resultStandardScheme();
+      }
+    }
+
+    private static class removeNamespaceConstraint_resultStandardScheme extends StandardScheme<removeNamespaceConstraint_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, removeNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, removeNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class removeNamespaceConstraint_resultTupleSchemeFactory implements SchemeFactory {
+      public removeNamespaceConstraint_resultTupleScheme getScheme() {
+        return new removeNamespaceConstraint_resultTupleScheme();
+      }
+    }
+
+    private static class removeNamespaceConstraint_resultTupleScheme extends TupleScheme<removeNamespaceConstraint_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, removeNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetOuch1()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(2);
+        }
+        oprot.writeBitSet(optionals, 3);
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, removeNamespaceConstraint_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(3);
+        if (incoming.get(0)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class listNamespaceConstraints_args implements org.apache.thrift.TBase<listNamespaceConstraints_args, listNamespaceConstraints_args._Fields>, java.io.Serializable, Cloneable, Comparable<listNamespaceConstraints_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("listNamespaceConstraints_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new listNamespaceConstraints_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new listNamespaceConstraints_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(listNamespaceConstraints_args.class, metaDataMap);
+    }
+
+    public listNamespaceConstraints_args() {
+    }
+
+    public listNamespaceConstraints_args(
+      ByteBuffer login,
+      String namespaceName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public listNamespaceConstraints_args(listNamespaceConstraints_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+    }
+
+    public listNamespaceConstraints_args deepCopy() {
+      return new listNamespaceConstraints_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public listNamespaceConstraints_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public listNamespaceConstraints_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public listNamespaceConstraints_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof listNamespaceConstraints_args)
+        return this.equals((listNamespaceConstraints_args)that);
+      return false;
+    }
+
+    public boolean equals(listNamespaceConstraints_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(listNamespaceConstraints_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("listNamespaceConstraints_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class listNamespaceConstraints_argsStandardSchemeFactory implements SchemeFactory {
+      public listNamespaceConstraints_argsStandardScheme getScheme() {
+        return new listNamespaceConstraints_argsStandardScheme();
+      }
+    }
+
+    private static class listNamespaceConstraints_argsStandardScheme extends StandardScheme<listNamespaceConstraints_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, listNamespaceConstraints_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, listNamespaceConstraints_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class listNamespaceConstraints_argsTupleSchemeFactory implements SchemeFactory {
+      public listNamespaceConstraints_argsTupleScheme getScheme() {
+        return new listNamespaceConstraints_argsTupleScheme();
+      }
+    }
+
+    private static class listNamespaceConstraints_argsTupleScheme extends TupleScheme<listNamespaceConstraints_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, listNamespaceConstraints_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        oprot.writeBitSet(optionals, 2);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, listNamespaceConstraints_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(2);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class listNamespaceConstraints_result implements org.apache.thrift.TBase<listNamespaceConstraints_result, listNamespaceConstraints_result._Fields>, java.io.Serializable, Cloneable, Comparable<listNamespaceConstraints_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("listNamespaceConstraints_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.MAP, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new listNamespaceConstraints_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new listNamespaceConstraints_resultTupleSchemeFactory());
+    }
+
+    public Map<String,Integer> success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.MapMetaData(org.apache.thrift.protocol.TType.MAP, 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING), 
+              new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.I32))));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(listNamespaceConstraints_result.class, metaDataMap);
+    }
+
+    public listNamespaceConstraints_result() {
+    }
+
+    public listNamespaceConstraints_result(
+      Map<String,Integer> success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.success = success;
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public listNamespaceConstraints_result(listNamespaceConstraints_result other) {
+      if (other.isSetSuccess()) {
+        Map<String,Integer> __this__success = new HashMap<String,Integer>(other.success);
+        this.success = __this__success;
+      }
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public listNamespaceConstraints_result deepCopy() {
+      return new listNamespaceConstraints_result(this);
+    }
+
+    @Override
+    public void clear() {
+      this.success = null;
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public void putToSuccess(String key, int val) {
+      if (this.success == null) {
+        this.success = new HashMap<String,Integer>();
+      }
+      this.success.put(key, val);
+    }
+
+    public Map<String,Integer> getSuccess() {
+      return this.success;
+    }
+
+    public listNamespaceConstraints_result setSuccess(Map<String,Integer> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public listNamespaceConstraints_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public listNamespaceConstraints_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public listNamespaceConstraints_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Map<String,Integer>)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof listNamespaceConstraints_result)
+        return this.equals((listNamespaceConstraints_result)that);
+      return false;
+    }
+
+    public boolean equals(listNamespaceConstraints_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true && (isSetSuccess());
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(listNamespaceConstraints_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("listNamespaceConstraints_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class listNamespaceConstraints_resultStandardSchemeFactory implements SchemeFactory {
+      public listNamespaceConstraints_resultStandardScheme getScheme() {
+        return new listNamespaceConstraints_resultStandardScheme();
+      }
+    }
+
+    private static class listNamespaceConstraints_resultStandardScheme extends StandardScheme<listNamespaceConstraints_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, listNamespaceConstraints_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.MAP) {
+                {
+                  org.apache.thrift.protocol.TMap _map568 = iprot.readMapBegin();
+                  struct.success = new HashMap<String,Integer>(2*_map568.size);
+                  String _key569;
+                  int _val570;
+                  for (int _i571 = 0; _i571 < _map568.size; ++_i571)
+                  {
+                    _key569 = iprot.readString();
+                    _val570 = iprot.readI32();
+                    struct.success.put(_key569, _val570);
+                  }
+                  iprot.readMapEnd();
+                }
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, listNamespaceConstraints_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.success != null) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, struct.success.size()));
+            for (Map.Entry<String, Integer> _iter572 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter572.getKey());
+              oprot.writeI32(_iter572.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class listNamespaceConstraints_resultTupleSchemeFactory implements SchemeFactory {
+      public listNamespaceConstraints_resultTupleScheme getScheme() {
+        return new listNamespaceConstraints_resultTupleScheme();
+      }
+    }
+
+    private static class listNamespaceConstraints_resultTupleScheme extends TupleScheme<listNamespaceConstraints_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, listNamespaceConstraints_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetSuccess()) {
+          {
+            oprot.writeI32(struct.success.size());
+            for (Map.Entry<String, Integer> _iter573 : struct.success.entrySet())
+            {
+              oprot.writeString(_iter573.getKey());
+              oprot.writeI32(_iter573.getValue());
+            }
+          }
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, listNamespaceConstraints_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          {
+            org.apache.thrift.protocol.TMap _map574 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.I32, iprot.readI32());
+            struct.success = new HashMap<String,Integer>(2*_map574.size);
+            String _key575;
+            int _val576;
+            for (int _i577 = 0; _i577 < _map574.size; ++_i577)
+            {
+              _key575 = iprot.readString();
+              _val576 = iprot.readI32();
+              struct.success.put(_key575, _val576);
+            }
+          }
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class testNamespaceClassLoad_args implements org.apache.thrift.TBase<testNamespaceClassLoad_args, testNamespaceClassLoad_args._Fields>, java.io.Serializable, Cloneable, Comparable<testNamespaceClassLoad_args>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("testNamespaceClassLoad_args");
+
+    private static final org.apache.thrift.protocol.TField LOGIN_FIELD_DESC = new org.apache.thrift.protocol.TField("login", org.apache.thrift.protocol.TType.STRING, (short)1);
+    private static final org.apache.thrift.protocol.TField NAMESPACE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("namespaceName", org.apache.thrift.protocol.TType.STRING, (short)2);
+    private static final org.apache.thrift.protocol.TField CLASS_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("className", org.apache.thrift.protocol.TType.STRING, (short)3);
+    private static final org.apache.thrift.protocol.TField AS_TYPE_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("asTypeName", org.apache.thrift.protocol.TType.STRING, (short)4);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new testNamespaceClassLoad_argsStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new testNamespaceClassLoad_argsTupleSchemeFactory());
+    }
+
+    public ByteBuffer login; // required
+    public String namespaceName; // required
+    public String className; // required
+    public String asTypeName; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      LOGIN((short)1, "login"),
+      NAMESPACE_NAME((short)2, "namespaceName"),
+      CLASS_NAME((short)3, "className"),
+      AS_TYPE_NAME((short)4, "asTypeName");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 1: // LOGIN
+            return LOGIN;
+          case 2: // NAMESPACE_NAME
+            return NAMESPACE_NAME;
+          case 3: // CLASS_NAME
+            return CLASS_NAME;
+          case 4: // AS_TYPE_NAME
+            return AS_TYPE_NAME;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.LOGIN, new org.apache.thrift.meta_data.FieldMetaData("login", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING          , true)));
+      tmpMap.put(_Fields.NAMESPACE_NAME, new org.apache.thrift.meta_data.FieldMetaData("namespaceName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.CLASS_NAME, new org.apache.thrift.meta_data.FieldMetaData("className", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      tmpMap.put(_Fields.AS_TYPE_NAME, new org.apache.thrift.meta_data.FieldMetaData("asTypeName", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(testNamespaceClassLoad_args.class, metaDataMap);
+    }
+
+    public testNamespaceClassLoad_args() {
+    }
+
+    public testNamespaceClassLoad_args(
+      ByteBuffer login,
+      String namespaceName,
+      String className,
+      String asTypeName)
+    {
+      this();
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      this.namespaceName = namespaceName;
+      this.className = className;
+      this.asTypeName = asTypeName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public testNamespaceClassLoad_args(testNamespaceClassLoad_args other) {
+      if (other.isSetLogin()) {
+        this.login = org.apache.thrift.TBaseHelper.copyBinary(other.login);
+      }
+      if (other.isSetNamespaceName()) {
+        this.namespaceName = other.namespaceName;
+      }
+      if (other.isSetClassName()) {
+        this.className = other.className;
+      }
+      if (other.isSetAsTypeName()) {
+        this.asTypeName = other.asTypeName;
+      }
+    }
+
+    public testNamespaceClassLoad_args deepCopy() {
+      return new testNamespaceClassLoad_args(this);
+    }
+
+    @Override
+    public void clear() {
+      this.login = null;
+      this.namespaceName = null;
+      this.className = null;
+      this.asTypeName = null;
+    }
+
+    public byte[] getLogin() {
+      setLogin(org.apache.thrift.TBaseHelper.rightSize(login));
+      return login == null ? null : login.array();
+    }
+
+    public ByteBuffer bufferForLogin() {
+      return org.apache.thrift.TBaseHelper.copyBinary(login);
+    }
+
+    public testNamespaceClassLoad_args setLogin(byte[] login) {
+      this.login = login == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(login, login.length));
+      return this;
+    }
+
+    public testNamespaceClassLoad_args setLogin(ByteBuffer login) {
+      this.login = org.apache.thrift.TBaseHelper.copyBinary(login);
+      return this;
+    }
+
+    public void unsetLogin() {
+      this.login = null;
+    }
+
+    /** Returns true if field login is set (has been assigned a value) and false otherwise */
+    public boolean isSetLogin() {
+      return this.login != null;
+    }
+
+    public void setLoginIsSet(boolean value) {
+      if (!value) {
+        this.login = null;
+      }
+    }
+
+    public String getNamespaceName() {
+      return this.namespaceName;
+    }
+
+    public testNamespaceClassLoad_args setNamespaceName(String namespaceName) {
+      this.namespaceName = namespaceName;
+      return this;
+    }
+
+    public void unsetNamespaceName() {
+      this.namespaceName = null;
+    }
+
+    /** Returns true if field namespaceName is set (has been assigned a value) and false otherwise */
+    public boolean isSetNamespaceName() {
+      return this.namespaceName != null;
+    }
+
+    public void setNamespaceNameIsSet(boolean value) {
+      if (!value) {
+        this.namespaceName = null;
+      }
+    }
+
+    public String getClassName() {
+      return this.className;
+    }
+
+    public testNamespaceClassLoad_args setClassName(String className) {
+      this.className = className;
+      return this;
+    }
+
+    public void unsetClassName() {
+      this.className = null;
+    }
+
+    /** Returns true if field className is set (has been assigned a value) and false otherwise */
+    public boolean isSetClassName() {
+      return this.className != null;
+    }
+
+    public void setClassNameIsSet(boolean value) {
+      if (!value) {
+        this.className = null;
+      }
+    }
+
+    public String getAsTypeName() {
+      return this.asTypeName;
+    }
+
+    public testNamespaceClassLoad_args setAsTypeName(String asTypeName) {
+      this.asTypeName = asTypeName;
+      return this;
+    }
+
+    public void unsetAsTypeName() {
+      this.asTypeName = null;
+    }
+
+    /** Returns true if field asTypeName is set (has been assigned a value) and false otherwise */
+    public boolean isSetAsTypeName() {
+      return this.asTypeName != null;
+    }
+
+    public void setAsTypeNameIsSet(boolean value) {
+      if (!value) {
+        this.asTypeName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case LOGIN:
+        if (value == null) {
+          unsetLogin();
+        } else {
+          setLogin((ByteBuffer)value);
+        }
+        break;
+
+      case NAMESPACE_NAME:
+        if (value == null) {
+          unsetNamespaceName();
+        } else {
+          setNamespaceName((String)value);
+        }
+        break;
+
+      case CLASS_NAME:
+        if (value == null) {
+          unsetClassName();
+        } else {
+          setClassName((String)value);
+        }
+        break;
+
+      case AS_TYPE_NAME:
+        if (value == null) {
+          unsetAsTypeName();
+        } else {
+          setAsTypeName((String)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case LOGIN:
+        return getLogin();
+
+      case NAMESPACE_NAME:
+        return getNamespaceName();
+
+      case CLASS_NAME:
+        return getClassName();
+
+      case AS_TYPE_NAME:
+        return getAsTypeName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case LOGIN:
+        return isSetLogin();
+      case NAMESPACE_NAME:
+        return isSetNamespaceName();
+      case CLASS_NAME:
+        return isSetClassName();
+      case AS_TYPE_NAME:
+        return isSetAsTypeName();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof testNamespaceClassLoad_args)
+        return this.equals((testNamespaceClassLoad_args)that);
+      return false;
+    }
+
+    public boolean equals(testNamespaceClassLoad_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_login = true && this.isSetLogin();
+      boolean that_present_login = true && that.isSetLogin();
+      if (this_present_login || that_present_login) {
+        if (!(this_present_login && that_present_login))
+          return false;
+        if (!this.login.equals(that.login))
+          return false;
+      }
+
+      boolean this_present_namespaceName = true && this.isSetNamespaceName();
+      boolean that_present_namespaceName = true && that.isSetNamespaceName();
+      if (this_present_namespaceName || that_present_namespaceName) {
+        if (!(this_present_namespaceName && that_present_namespaceName))
+          return false;
+        if (!this.namespaceName.equals(that.namespaceName))
+          return false;
+      }
+
+      boolean this_present_className = true && this.isSetClassName();
+      boolean that_present_className = true && that.isSetClassName();
+      if (this_present_className || that_present_className) {
+        if (!(this_present_className && that_present_className))
+          return false;
+        if (!this.className.equals(that.className))
+          return false;
+      }
+
+      boolean this_present_asTypeName = true && this.isSetAsTypeName();
+      boolean that_present_asTypeName = true && that.isSetAsTypeName();
+      if (this_present_asTypeName || that_present_asTypeName) {
+        if (!(this_present_asTypeName && that_present_asTypeName))
+          return false;
+        if (!this.asTypeName.equals(that.asTypeName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_login = true && (isSetLogin());
+      list.add(present_login);
+      if (present_login)
+        list.add(login);
+
+      boolean present_namespaceName = true && (isSetNamespaceName());
+      list.add(present_namespaceName);
+      if (present_namespaceName)
+        list.add(namespaceName);
+
+      boolean present_className = true && (isSetClassName());
+      list.add(present_className);
+      if (present_className)
+        list.add(className);
+
+      boolean present_asTypeName = true && (isSetAsTypeName());
+      list.add(present_asTypeName);
+      if (present_asTypeName)
+        list.add(asTypeName);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(testNamespaceClassLoad_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetLogin()).compareTo(other.isSetLogin());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetLogin()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.login, other.login);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetNamespaceName()).compareTo(other.isSetNamespaceName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetNamespaceName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.namespaceName, other.namespaceName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetClassName()).compareTo(other.isSetClassName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetClassName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.className, other.className);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetAsTypeName()).compareTo(other.isSetAsTypeName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetAsTypeName()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.asTypeName, other.asTypeName);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("testNamespaceClassLoad_args(");
+      boolean first = true;
+
+      sb.append("login:");
+      if (this.login == null) {
+        sb.append("null");
+      } else {
+        org.apache.thrift.TBaseHelper.toString(this.login, sb);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("namespaceName:");
+      if (this.namespaceName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.namespaceName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("className:");
+      if (this.className == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.className);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("asTypeName:");
+      if (this.asTypeName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.asTypeName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class testNamespaceClassLoad_argsStandardSchemeFactory implements SchemeFactory {
+      public testNamespaceClassLoad_argsStandardScheme getScheme() {
+        return new testNamespaceClassLoad_argsStandardScheme();
+      }
+    }
+
+    private static class testNamespaceClassLoad_argsStandardScheme extends StandardScheme<testNamespaceClassLoad_args> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, testNamespaceClassLoad_args struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 1: // LOGIN
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.login = iprot.readBinary();
+                struct.setLoginIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // NAMESPACE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.namespaceName = iprot.readString();
+                struct.setNamespaceNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // CLASS_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.className = iprot.readString();
+                struct.setClassNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 4: // AS_TYPE_NAME
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+                struct.asTypeName = iprot.readString();
+                struct.setAsTypeNameIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, testNamespaceClassLoad_args struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.login != null) {
+          oprot.writeFieldBegin(LOGIN_FIELD_DESC);
+          oprot.writeBinary(struct.login);
+          oprot.writeFieldEnd();
+        }
+        if (struct.namespaceName != null) {
+          oprot.writeFieldBegin(NAMESPACE_NAME_FIELD_DESC);
+          oprot.writeString(struct.namespaceName);
+          oprot.writeFieldEnd();
+        }
+        if (struct.className != null) {
+          oprot.writeFieldBegin(CLASS_NAME_FIELD_DESC);
+          oprot.writeString(struct.className);
+          oprot.writeFieldEnd();
+        }
+        if (struct.asTypeName != null) {
+          oprot.writeFieldBegin(AS_TYPE_NAME_FIELD_DESC);
+          oprot.writeString(struct.asTypeName);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class testNamespaceClassLoad_argsTupleSchemeFactory implements SchemeFactory {
+      public testNamespaceClassLoad_argsTupleScheme getScheme() {
+        return new testNamespaceClassLoad_argsTupleScheme();
+      }
+    }
+
+    private static class testNamespaceClassLoad_argsTupleScheme extends TupleScheme<testNamespaceClassLoad_args> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, testNamespaceClassLoad_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetLogin()) {
+          optionals.set(0);
+        }
+        if (struct.isSetNamespaceName()) {
+          optionals.set(1);
+        }
+        if (struct.isSetClassName()) {
+          optionals.set(2);
+        }
+        if (struct.isSetAsTypeName()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetLogin()) {
+          oprot.writeBinary(struct.login);
+        }
+        if (struct.isSetNamespaceName()) {
+          oprot.writeString(struct.namespaceName);
+        }
+        if (struct.isSetClassName()) {
+          oprot.writeString(struct.className);
+        }
+        if (struct.isSetAsTypeName()) {
+          oprot.writeString(struct.asTypeName);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, testNamespaceClassLoad_args struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.login = iprot.readBinary();
+          struct.setLoginIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.namespaceName = iprot.readString();
+          struct.setNamespaceNameIsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.className = iprot.readString();
+          struct.setClassNameIsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.asTypeName = iprot.readString();
+          struct.setAsTypeNameIsSet(true);
+        }
+      }
+    }
+
+  }
+
+  public static class testNamespaceClassLoad_result implements org.apache.thrift.TBase<testNamespaceClassLoad_result, testNamespaceClassLoad_result._Fields>, java.io.Serializable, Cloneable, Comparable<testNamespaceClassLoad_result>   {
+    private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("testNamespaceClassLoad_result");
+
+    private static final org.apache.thrift.protocol.TField SUCCESS_FIELD_DESC = new org.apache.thrift.protocol.TField("success", org.apache.thrift.protocol.TType.BOOL, (short)0);
+    private static final org.apache.thrift.protocol.TField OUCH1_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch1", org.apache.thrift.protocol.TType.STRUCT, (short)1);
+    private static final org.apache.thrift.protocol.TField OUCH2_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch2", org.apache.thrift.protocol.TType.STRUCT, (short)2);
+    private static final org.apache.thrift.protocol.TField OUCH3_FIELD_DESC = new org.apache.thrift.protocol.TField("ouch3", org.apache.thrift.protocol.TType.STRUCT, (short)3);
+
+    private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+    static {
+      schemes.put(StandardScheme.class, new testNamespaceClassLoad_resultStandardSchemeFactory());
+      schemes.put(TupleScheme.class, new testNamespaceClassLoad_resultTupleSchemeFactory());
+    }
+
+    public boolean success; // required
+    public AccumuloException ouch1; // required
+    public AccumuloSecurityException ouch2; // required
+    public NamespaceNotFoundException ouch3; // required
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      OUCH1((short)1, "ouch1"),
+      OUCH2((short)2, "ouch2"),
+      OUCH3((short)3, "ouch3");
+
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        switch(fieldId) {
+          case 0: // SUCCESS
+            return SUCCESS;
+          case 1: // OUCH1
+            return OUCH1;
+          case 2: // OUCH2
+            return OUCH2;
+          case 3: // OUCH3
+            return OUCH3;
+          default:
+            return null;
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private byte __isset_bitfield = 0;
+    public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+    static {
+      Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+      tmpMap.put(_Fields.SUCCESS, new org.apache.thrift.meta_data.FieldMetaData("success", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.BOOL)));
+      tmpMap.put(_Fields.OUCH1, new org.apache.thrift.meta_data.FieldMetaData("ouch1", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH2, new org.apache.thrift.meta_data.FieldMetaData("ouch2", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      tmpMap.put(_Fields.OUCH3, new org.apache.thrift.meta_data.FieldMetaData("ouch3", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+          new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRUCT)));
+      metaDataMap = Collections.unmodifiableMap(tmpMap);
+      org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(testNamespaceClassLoad_result.class, metaDataMap);
+    }
+
+    public testNamespaceClassLoad_result() {
+    }
+
+    public testNamespaceClassLoad_result(
+      boolean success,
+      AccumuloException ouch1,
+      AccumuloSecurityException ouch2,
+      NamespaceNotFoundException ouch3)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.ouch1 = ouch1;
+      this.ouch2 = ouch2;
+      this.ouch3 = ouch3;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public testNamespaceClassLoad_result(testNamespaceClassLoad_result other) {
+      __isset_bitfield = other.__isset_bitfield;
+      this.success = other.success;
+      if (other.isSetOuch1()) {
+        this.ouch1 = new AccumuloException(other.ouch1);
+      }
+      if (other.isSetOuch2()) {
+        this.ouch2 = new AccumuloSecurityException(other.ouch2);
+      }
+      if (other.isSetOuch3()) {
+        this.ouch3 = new NamespaceNotFoundException(other.ouch3);
+      }
+    }
+
+    public testNamespaceClassLoad_result deepCopy() {
+      return new testNamespaceClassLoad_result(this);
+    }
+
+    @Override
+    public void clear() {
+      setSuccessIsSet(false);
+      this.success = false;
+      this.ouch1 = null;
+      this.ouch2 = null;
+      this.ouch3 = null;
+    }
+
+    public boolean isSuccess() {
+      return this.success;
+    }
+
+    public testNamespaceClassLoad_result setSuccess(boolean success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bitfield = EncodingUtils.clearBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return EncodingUtils.testBit(__isset_bitfield, __SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bitfield = EncodingUtils.setBit(__isset_bitfield, __SUCCESS_ISSET_ID, value);
+    }
+
+    public AccumuloException getOuch1() {
+      return this.ouch1;
+    }
+
+    public testNamespaceClassLoad_result setOuch1(AccumuloException ouch1) {
+      this.ouch1 = ouch1;
+      return this;
+    }
+
+    public void unsetOuch1() {
+      this.ouch1 = null;
+    }
+
+    /** Returns true if field ouch1 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch1() {
+      return this.ouch1 != null;
+    }
+
+    public void setOuch1IsSet(boolean value) {
+      if (!value) {
+        this.ouch1 = null;
+      }
+    }
+
+    public AccumuloSecurityException getOuch2() {
+      return this.ouch2;
+    }
+
+    public testNamespaceClassLoad_result setOuch2(AccumuloSecurityException ouch2) {
+      this.ouch2 = ouch2;
+      return this;
+    }
+
+    public void unsetOuch2() {
+      this.ouch2 = null;
+    }
+
+    /** Returns true if field ouch2 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch2() {
+      return this.ouch2 != null;
+    }
+
+    public void setOuch2IsSet(boolean value) {
+      if (!value) {
+        this.ouch2 = null;
+      }
+    }
+
+    public NamespaceNotFoundException getOuch3() {
+      return this.ouch3;
+    }
+
+    public testNamespaceClassLoad_result setOuch3(NamespaceNotFoundException ouch3) {
+      this.ouch3 = ouch3;
+      return this;
+    }
+
+    public void unsetOuch3() {
+      this.ouch3 = null;
+    }
+
+    /** Returns true if field ouch3 is set (has been assigned a value) and false otherwise */
+    public boolean isSetOuch3() {
+      return this.ouch3 != null;
+    }
+
+    public void setOuch3IsSet(boolean value) {
+      if (!value) {
+        this.ouch3 = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Boolean)value);
+        }
+        break;
+
+      case OUCH1:
+        if (value == null) {
+          unsetOuch1();
+        } else {
+          setOuch1((AccumuloException)value);
+        }
+        break;
+
+      case OUCH2:
+        if (value == null) {
+          unsetOuch2();
+        } else {
+          setOuch2((AccumuloSecurityException)value);
+        }
+        break;
+
+      case OUCH3:
+        if (value == null) {
+          unsetOuch3();
+        } else {
+          setOuch3((NamespaceNotFoundException)value);
+        }
+        break;
+
+      }
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSuccess();
+
+      case OUCH1:
+        return getOuch1();
+
+      case OUCH2:
+        return getOuch2();
+
+      case OUCH3:
+        return getOuch3();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      if (field == null) {
+        throw new IllegalArgumentException();
+      }
+
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case OUCH1:
+        return isSetOuch1();
+      case OUCH2:
+        return isSetOuch2();
+      case OUCH3:
+        return isSetOuch3();
+      }
+      throw new IllegalStateException();
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof testNamespaceClassLoad_result)
+        return this.equals((testNamespaceClassLoad_result)that);
+      return false;
+    }
+
+    public boolean equals(testNamespaceClassLoad_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_ouch1 = true && this.isSetOuch1();
+      boolean that_present_ouch1 = true && that.isSetOuch1();
+      if (this_present_ouch1 || that_present_ouch1) {
+        if (!(this_present_ouch1 && that_present_ouch1))
+          return false;
+        if (!this.ouch1.equals(that.ouch1))
+          return false;
+      }
+
+      boolean this_present_ouch2 = true && this.isSetOuch2();
+      boolean that_present_ouch2 = true && that.isSetOuch2();
+      if (this_present_ouch2 || that_present_ouch2) {
+        if (!(this_present_ouch2 && that_present_ouch2))
+          return false;
+        if (!this.ouch2.equals(that.ouch2))
+          return false;
+      }
+
+      boolean this_present_ouch3 = true && this.isSetOuch3();
+      boolean that_present_ouch3 = true && that.isSetOuch3();
+      if (this_present_ouch3 || that_present_ouch3) {
+        if (!(this_present_ouch3 && that_present_ouch3))
+          return false;
+        if (!this.ouch3.equals(that.ouch3))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      boolean present_ouch1 = true && (isSetOuch1());
+      list.add(present_ouch1);
+      if (present_ouch1)
+        list.add(ouch1);
+
+      boolean present_ouch2 = true && (isSetOuch2());
+      list.add(present_ouch2);
+      if (present_ouch2)
+        list.add(ouch2);
+
+      boolean present_ouch3 = true && (isSetOuch3());
+      list.add(present_ouch3);
+      if (present_ouch3)
+        list.add(ouch3);
+
+      return list.hashCode();
+    }
+
+    @Override
+    public int compareTo(testNamespaceClassLoad_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(other.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetSuccess()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.success, other.success);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch1()).compareTo(other.isSetOuch1());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch1()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch1, other.ouch1);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch2()).compareTo(other.isSetOuch2());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch2()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch2, other.ouch2);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      lastComparison = Boolean.valueOf(isSetOuch3()).compareTo(other.isSetOuch3());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      if (isSetOuch3()) {
+        lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.ouch3, other.ouch3);
+        if (lastComparison != 0) {
+          return lastComparison;
+        }
+      }
+      return 0;
+    }
+
+    public _Fields fieldForId(int fieldId) {
+      return _Fields.findByThriftId(fieldId);
+    }
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+      schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+      schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("testNamespaceClassLoad_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch1:");
+      if (this.ouch1 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch1);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch2:");
+      if (this.ouch2 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch2);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ouch3:");
+      if (this.ouch3 == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ouch3);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws org.apache.thrift.TException {
+      // check for required fields
+      // check for sub-struct validity
+    }
+
+    private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+      try {
+        write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+      try {
+        // it doesn't seem like you should have to do this, but java serialization is wacky, and doesn't call the default constructor.
+        __isset_bitfield = 0;
+        read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+      } catch (org.apache.thrift.TException te) {
+        throw new java.io.IOException(te);
+      }
+    }
+
+    private static class testNamespaceClassLoad_resultStandardSchemeFactory implements SchemeFactory {
+      public testNamespaceClassLoad_resultStandardScheme getScheme() {
+        return new testNamespaceClassLoad_resultStandardScheme();
+      }
+    }
+
+    private static class testNamespaceClassLoad_resultStandardScheme extends StandardScheme<testNamespaceClassLoad_result> {
+
+      public void read(org.apache.thrift.protocol.TProtocol iprot, testNamespaceClassLoad_result struct) throws org.apache.thrift.TException {
+        org.apache.thrift.protocol.TField schemeField;
+        iprot.readStructBegin();
+        while (true)
+        {
+          schemeField = iprot.readFieldBegin();
+          if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+            break;
+          }
+          switch (schemeField.id) {
+            case 0: // SUCCESS
+              if (schemeField.type == org.apache.thrift.protocol.TType.BOOL) {
+                struct.success = iprot.readBool();
+                struct.setSuccessIsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 1: // OUCH1
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch1 = new AccumuloException();
+                struct.ouch1.read(iprot);
+                struct.setOuch1IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 2: // OUCH2
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch2 = new AccumuloSecurityException();
+                struct.ouch2.read(iprot);
+                struct.setOuch2IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            case 3: // OUCH3
+              if (schemeField.type == org.apache.thrift.protocol.TType.STRUCT) {
+                struct.ouch3 = new NamespaceNotFoundException();
+                struct.ouch3.read(iprot);
+                struct.setOuch3IsSet(true);
+              } else { 
+                org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+              }
+              break;
+            default:
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+          }
+          iprot.readFieldEnd();
+        }
+        iprot.readStructEnd();
+
+        // check for required fields of primitive type, which can't be checked in the validate method
+        struct.validate();
+      }
+
+      public void write(org.apache.thrift.protocol.TProtocol oprot, testNamespaceClassLoad_result struct) throws org.apache.thrift.TException {
+        struct.validate();
+
+        oprot.writeStructBegin(STRUCT_DESC);
+        if (struct.isSetSuccess()) {
+          oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+          oprot.writeBool(struct.success);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch1 != null) {
+          oprot.writeFieldBegin(OUCH1_FIELD_DESC);
+          struct.ouch1.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch2 != null) {
+          oprot.writeFieldBegin(OUCH2_FIELD_DESC);
+          struct.ouch2.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        if (struct.ouch3 != null) {
+          oprot.writeFieldBegin(OUCH3_FIELD_DESC);
+          struct.ouch3.write(oprot);
+          oprot.writeFieldEnd();
+        }
+        oprot.writeFieldStop();
+        oprot.writeStructEnd();
+      }
+
+    }
+
+    private static class testNamespaceClassLoad_resultTupleSchemeFactory implements SchemeFactory {
+      public testNamespaceClassLoad_resultTupleScheme getScheme() {
+        return new testNamespaceClassLoad_resultTupleScheme();
+      }
+    }
+
+    private static class testNamespaceClassLoad_resultTupleScheme extends TupleScheme<testNamespaceClassLoad_result> {
+
+      @Override
+      public void write(org.apache.thrift.protocol.TProtocol prot, testNamespaceClassLoad_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol oprot = (TTupleProtocol) prot;
+        BitSet optionals = new BitSet();
+        if (struct.isSetSuccess()) {
+          optionals.set(0);
+        }
+        if (struct.isSetOuch1()) {
+          optionals.set(1);
+        }
+        if (struct.isSetOuch2()) {
+          optionals.set(2);
+        }
+        if (struct.isSetOuch3()) {
+          optionals.set(3);
+        }
+        oprot.writeBitSet(optionals, 4);
+        if (struct.isSetSuccess()) {
+          oprot.writeBool(struct.success);
+        }
+        if (struct.isSetOuch1()) {
+          struct.ouch1.write(oprot);
+        }
+        if (struct.isSetOuch2()) {
+          struct.ouch2.write(oprot);
+        }
+        if (struct.isSetOuch3()) {
+          struct.ouch3.write(oprot);
+        }
+      }
+
+      @Override
+      public void read(org.apache.thrift.protocol.TProtocol prot, testNamespaceClassLoad_result struct) throws org.apache.thrift.TException {
+        TTupleProtocol iprot = (TTupleProtocol) prot;
+        BitSet incoming = iprot.readBitSet(4);
+        if (incoming.get(0)) {
+          struct.success = iprot.readBool();
+          struct.setSuccessIsSet(true);
+        }
+        if (incoming.get(1)) {
+          struct.ouch1 = new AccumuloException();
+          struct.ouch1.read(iprot);
+          struct.setOuch1IsSet(true);
+        }
+        if (incoming.get(2)) {
+          struct.ouch2 = new AccumuloSecurityException();
+          struct.ouch2.read(iprot);
+          struct.setOuch2IsSet(true);
+        }
+        if (incoming.get(3)) {
+          struct.ouch3 = new NamespaceNotFoundException();
+          struct.ouch3.read(iprot);
+          struct.setOuch3IsSet(true);
+        }
+      }
+    }
+
+  }
+
 }
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloSecurityException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloSecurityException.java
index 28b1e6c..f77a908 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloSecurityException.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/AccumuloSecurityException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class AccumuloSecurityException extends TException implements org.apache.thrift.TBase<AccumuloSecurityException, AccumuloSecurityException._Fields>, java.io.Serializable, Cloneable, Comparable<AccumuloSecurityException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class AccumuloSecurityException extends TException implements org.apache.thrift.TBase<AccumuloSecurityException, AccumuloSecurityException._Fields>, java.io.Serializable, Cloneable, Comparable<AccumuloSecurityException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("AccumuloSecurityException");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveCompaction.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveCompaction.java
index 58e1321..986b68c 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveCompaction.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveCompaction.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ActiveCompaction implements org.apache.thrift.TBase<ActiveCompaction, ActiveCompaction._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveCompaction> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ActiveCompaction implements org.apache.thrift.TBase<ActiveCompaction, ActiveCompaction._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveCompaction> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ActiveCompaction");
 
   private static final org.apache.thrift.protocol.TField EXTENT_FIELD_DESC = new org.apache.thrift.protocol.TField("extent", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -676,7 +679,7 @@
       return getExtent();
 
     case AGE:
-      return Long.valueOf(getAge());
+      return getAge();
 
     case INPUT_FILES:
       return getInputFiles();
@@ -694,10 +697,10 @@
       return getLocalityGroup();
 
     case ENTRIES_READ:
-      return Long.valueOf(getEntriesRead());
+      return getEntriesRead();
 
     case ENTRIES_WRITTEN:
-      return Long.valueOf(getEntriesWritten());
+      return getEntriesWritten();
 
     case ITERATORS:
       return getIterators();
@@ -845,7 +848,59 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    boolean present_age = true;
+    list.add(present_age);
+    if (present_age)
+      list.add(age);
+
+    boolean present_inputFiles = true && (isSetInputFiles());
+    list.add(present_inputFiles);
+    if (present_inputFiles)
+      list.add(inputFiles);
+
+    boolean present_outputFile = true && (isSetOutputFile());
+    list.add(present_outputFile);
+    if (present_outputFile)
+      list.add(outputFile);
+
+    boolean present_type = true && (isSetType());
+    list.add(present_type);
+    if (present_type)
+      list.add(type.getValue());
+
+    boolean present_reason = true && (isSetReason());
+    list.add(present_reason);
+    if (present_reason)
+      list.add(reason.getValue());
+
+    boolean present_localityGroup = true && (isSetLocalityGroup());
+    list.add(present_localityGroup);
+    if (present_localityGroup)
+      list.add(localityGroup);
+
+    boolean present_entriesRead = true;
+    list.add(present_entriesRead);
+    if (present_entriesRead)
+      list.add(entriesRead);
+
+    boolean present_entriesWritten = true;
+    list.add(present_entriesWritten);
+    if (present_entriesWritten)
+      list.add(entriesWritten);
+
+    boolean present_iterators = true && (isSetIterators());
+    list.add(present_iterators);
+    if (present_iterators)
+      list.add(iterators);
+
+    return list.hashCode();
   }
 
   @Override
@@ -1113,11 +1168,11 @@
               {
                 org.apache.thrift.protocol.TList _list138 = iprot.readListBegin();
                 struct.inputFiles = new ArrayList<String>(_list138.size);
-                for (int _i139 = 0; _i139 < _list138.size; ++_i139)
+                String _elem139;
+                for (int _i140 = 0; _i140 < _list138.size; ++_i140)
                 {
-                  String _elem140;
-                  _elem140 = iprot.readString();
-                  struct.inputFiles.add(_elem140);
+                  _elem139 = iprot.readString();
+                  struct.inputFiles.add(_elem139);
                 }
                 iprot.readListEnd();
               }
@@ -1136,7 +1191,7 @@
             break;
           case 5: // TYPE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.type = CompactionType.findByValue(iprot.readI32());
+              struct.type = org.apache.accumulo.proxy.thrift.CompactionType.findByValue(iprot.readI32());
               struct.setTypeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1144,7 +1199,7 @@
             break;
           case 6: // REASON
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.reason = CompactionReason.findByValue(iprot.readI32());
+              struct.reason = org.apache.accumulo.proxy.thrift.CompactionReason.findByValue(iprot.readI32());
               struct.setReasonIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1179,12 +1234,12 @@
               {
                 org.apache.thrift.protocol.TList _list141 = iprot.readListBegin();
                 struct.iterators = new ArrayList<IteratorSetting>(_list141.size);
-                for (int _i142 = 0; _i142 < _list141.size; ++_i142)
+                IteratorSetting _elem142;
+                for (int _i143 = 0; _i143 < _list141.size; ++_i143)
                 {
-                  IteratorSetting _elem143;
-                  _elem143 = new IteratorSetting();
-                  _elem143.read(iprot);
-                  struct.iterators.add(_elem143);
+                  _elem142 = new IteratorSetting();
+                  _elem142.read(iprot);
+                  struct.iterators.add(_elem142);
                 }
                 iprot.readListEnd();
               }
@@ -1376,11 +1431,11 @@
         {
           org.apache.thrift.protocol.TList _list148 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.inputFiles = new ArrayList<String>(_list148.size);
-          for (int _i149 = 0; _i149 < _list148.size; ++_i149)
+          String _elem149;
+          for (int _i150 = 0; _i150 < _list148.size; ++_i150)
           {
-            String _elem150;
-            _elem150 = iprot.readString();
-            struct.inputFiles.add(_elem150);
+            _elem149 = iprot.readString();
+            struct.inputFiles.add(_elem149);
           }
         }
         struct.setInputFilesIsSet(true);
@@ -1390,11 +1445,11 @@
         struct.setOutputFileIsSet(true);
       }
       if (incoming.get(4)) {
-        struct.type = CompactionType.findByValue(iprot.readI32());
+        struct.type = org.apache.accumulo.proxy.thrift.CompactionType.findByValue(iprot.readI32());
         struct.setTypeIsSet(true);
       }
       if (incoming.get(5)) {
-        struct.reason = CompactionReason.findByValue(iprot.readI32());
+        struct.reason = org.apache.accumulo.proxy.thrift.CompactionReason.findByValue(iprot.readI32());
         struct.setReasonIsSet(true);
       }
       if (incoming.get(6)) {
@@ -1413,12 +1468,12 @@
         {
           org.apache.thrift.protocol.TList _list151 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.iterators = new ArrayList<IteratorSetting>(_list151.size);
-          for (int _i152 = 0; _i152 < _list151.size; ++_i152)
+          IteratorSetting _elem152;
+          for (int _i153 = 0; _i153 < _list151.size; ++_i153)
           {
-            IteratorSetting _elem153;
-            _elem153 = new IteratorSetting();
-            _elem153.read(iprot);
-            struct.iterators.add(_elem153);
+            _elem152 = new IteratorSetting();
+            _elem152.read(iprot);
+            struct.iterators.add(_elem152);
           }
         }
         struct.setIteratorsIsSet(true);
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveScan.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveScan.java
index bc9ad51..9f4d892 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveScan.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ActiveScan.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ActiveScan implements org.apache.thrift.TBase<ActiveScan, ActiveScan._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveScan> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ActiveScan implements org.apache.thrift.TBase<ActiveScan, ActiveScan._Fields>, java.io.Serializable, Cloneable, Comparable<ActiveScan> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ActiveScan");
 
   private static final org.apache.thrift.protocol.TField CLIENT_FIELD_DESC = new org.apache.thrift.protocol.TField("client", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -747,10 +750,10 @@
       return getTable();
 
     case AGE:
-      return Long.valueOf(getAge());
+      return getAge();
 
     case IDLE_TIME:
-      return Long.valueOf(getIdleTime());
+      return getIdleTime();
 
     case TYPE:
       return getType();
@@ -924,7 +927,64 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_client = true && (isSetClient());
+    list.add(present_client);
+    if (present_client)
+      list.add(client);
+
+    boolean present_user = true && (isSetUser());
+    list.add(present_user);
+    if (present_user)
+      list.add(user);
+
+    boolean present_table = true && (isSetTable());
+    list.add(present_table);
+    if (present_table)
+      list.add(table);
+
+    boolean present_age = true;
+    list.add(present_age);
+    if (present_age)
+      list.add(age);
+
+    boolean present_idleTime = true;
+    list.add(present_idleTime);
+    if (present_idleTime)
+      list.add(idleTime);
+
+    boolean present_type = true && (isSetType());
+    list.add(present_type);
+    if (present_type)
+      list.add(type.getValue());
+
+    boolean present_state = true && (isSetState());
+    list.add(present_state);
+    if (present_state)
+      list.add(state.getValue());
+
+    boolean present_extent = true && (isSetExtent());
+    list.add(present_extent);
+    if (present_extent)
+      list.add(extent);
+
+    boolean present_columns = true && (isSetColumns());
+    list.add(present_columns);
+    if (present_columns)
+      list.add(columns);
+
+    boolean present_iterators = true && (isSetIterators());
+    list.add(present_iterators);
+    if (present_iterators)
+      list.add(iterators);
+
+    boolean present_authorizations = true && (isSetAuthorizations());
+    list.add(present_authorizations);
+    if (present_authorizations)
+      list.add(authorizations);
+
+    return list.hashCode();
   }
 
   @Override
@@ -1141,7 +1201,7 @@
     if (this.authorizations == null) {
       sb.append("null");
     } else {
-      sb.append(this.authorizations);
+      org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
     }
     first = false;
     sb.append(")");
@@ -1234,7 +1294,7 @@
             break;
           case 6: // TYPE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.type = ScanType.findByValue(iprot.readI32());
+              struct.type = org.apache.accumulo.proxy.thrift.ScanType.findByValue(iprot.readI32());
               struct.setTypeIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1242,7 +1302,7 @@
             break;
           case 7: // STATE
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.state = ScanState.findByValue(iprot.readI32());
+              struct.state = org.apache.accumulo.proxy.thrift.ScanState.findByValue(iprot.readI32());
               struct.setStateIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -1262,12 +1322,12 @@
               {
                 org.apache.thrift.protocol.TList _list114 = iprot.readListBegin();
                 struct.columns = new ArrayList<Column>(_list114.size);
-                for (int _i115 = 0; _i115 < _list114.size; ++_i115)
+                Column _elem115;
+                for (int _i116 = 0; _i116 < _list114.size; ++_i116)
                 {
-                  Column _elem116;
-                  _elem116 = new Column();
-                  _elem116.read(iprot);
-                  struct.columns.add(_elem116);
+                  _elem115 = new Column();
+                  _elem115.read(iprot);
+                  struct.columns.add(_elem115);
                 }
                 iprot.readListEnd();
               }
@@ -1281,12 +1341,12 @@
               {
                 org.apache.thrift.protocol.TList _list117 = iprot.readListBegin();
                 struct.iterators = new ArrayList<IteratorSetting>(_list117.size);
-                for (int _i118 = 0; _i118 < _list117.size; ++_i118)
+                IteratorSetting _elem118;
+                for (int _i119 = 0; _i119 < _list117.size; ++_i119)
                 {
-                  IteratorSetting _elem119;
-                  _elem119 = new IteratorSetting();
-                  _elem119.read(iprot);
-                  struct.iterators.add(_elem119);
+                  _elem118 = new IteratorSetting();
+                  _elem118.read(iprot);
+                  struct.iterators.add(_elem118);
                 }
                 iprot.readListEnd();
               }
@@ -1300,11 +1360,11 @@
               {
                 org.apache.thrift.protocol.TList _list120 = iprot.readListBegin();
                 struct.authorizations = new ArrayList<ByteBuffer>(_list120.size);
-                for (int _i121 = 0; _i121 < _list120.size; ++_i121)
+                ByteBuffer _elem121;
+                for (int _i122 = 0; _i122 < _list120.size; ++_i122)
                 {
-                  ByteBuffer _elem122;
-                  _elem122 = iprot.readBinary();
-                  struct.authorizations.add(_elem122);
+                  _elem121 = iprot.readBinary();
+                  struct.authorizations.add(_elem121);
                 }
                 iprot.readListEnd();
               }
@@ -1530,11 +1590,11 @@
         struct.setIdleTimeIsSet(true);
       }
       if (incoming.get(5)) {
-        struct.type = ScanType.findByValue(iprot.readI32());
+        struct.type = org.apache.accumulo.proxy.thrift.ScanType.findByValue(iprot.readI32());
         struct.setTypeIsSet(true);
       }
       if (incoming.get(6)) {
-        struct.state = ScanState.findByValue(iprot.readI32());
+        struct.state = org.apache.accumulo.proxy.thrift.ScanState.findByValue(iprot.readI32());
         struct.setStateIsSet(true);
       }
       if (incoming.get(7)) {
@@ -1546,12 +1606,12 @@
         {
           org.apache.thrift.protocol.TList _list129 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.columns = new ArrayList<Column>(_list129.size);
-          for (int _i130 = 0; _i130 < _list129.size; ++_i130)
+          Column _elem130;
+          for (int _i131 = 0; _i131 < _list129.size; ++_i131)
           {
-            Column _elem131;
-            _elem131 = new Column();
-            _elem131.read(iprot);
-            struct.columns.add(_elem131);
+            _elem130 = new Column();
+            _elem130.read(iprot);
+            struct.columns.add(_elem130);
           }
         }
         struct.setColumnsIsSet(true);
@@ -1560,12 +1620,12 @@
         {
           org.apache.thrift.protocol.TList _list132 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.iterators = new ArrayList<IteratorSetting>(_list132.size);
-          for (int _i133 = 0; _i133 < _list132.size; ++_i133)
+          IteratorSetting _elem133;
+          for (int _i134 = 0; _i134 < _list132.size; ++_i134)
           {
-            IteratorSetting _elem134;
-            _elem134 = new IteratorSetting();
-            _elem134.read(iprot);
-            struct.iterators.add(_elem134);
+            _elem133 = new IteratorSetting();
+            _elem133.read(iprot);
+            struct.iterators.add(_elem133);
           }
         }
         struct.setIteratorsIsSet(true);
@@ -1574,11 +1634,11 @@
         {
           org.apache.thrift.protocol.TList _list135 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.authorizations = new ArrayList<ByteBuffer>(_list135.size);
-          for (int _i136 = 0; _i136 < _list135.size; ++_i136)
+          ByteBuffer _elem136;
+          for (int _i137 = 0; _i137 < _list135.size; ++_i137)
           {
-            ByteBuffer _elem137;
-            _elem137 = iprot.readBinary();
-            struct.authorizations.add(_elem137);
+            _elem136 = iprot.readBinary();
+            struct.authorizations.add(_elem136);
           }
         }
         struct.setAuthorizationsIsSet(true);
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/BatchScanOptions.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/BatchScanOptions.java
index 948d822..777e075 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/BatchScanOptions.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/BatchScanOptions.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class BatchScanOptions implements org.apache.thrift.TBase<BatchScanOptions, BatchScanOptions._Fields>, java.io.Serializable, Cloneable, Comparable<BatchScanOptions> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class BatchScanOptions implements org.apache.thrift.TBase<BatchScanOptions, BatchScanOptions._Fields>, java.io.Serializable, Cloneable, Comparable<BatchScanOptions> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("BatchScanOptions");
 
   private static final org.apache.thrift.protocol.TField AUTHORIZATIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("authorizations", org.apache.thrift.protocol.TType.SET, (short)1);
@@ -142,7 +145,7 @@
   // isset id assignments
   private static final int __THREADS_ISSET_ID = 0;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.AUTHORIZATIONS,_Fields.RANGES,_Fields.COLUMNS,_Fields.ITERATORS,_Fields.THREADS};
+  private static final _Fields optionals[] = {_Fields.AUTHORIZATIONS,_Fields.RANGES,_Fields.COLUMNS,_Fields.ITERATORS,_Fields.THREADS};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -453,7 +456,7 @@
       return getIterators();
 
     case THREADS:
-      return Integer.valueOf(getThreads());
+      return getThreads();
 
     }
     throw new IllegalStateException();
@@ -543,7 +546,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_authorizations = true && (isSetAuthorizations());
+    list.add(present_authorizations);
+    if (present_authorizations)
+      list.add(authorizations);
+
+    boolean present_ranges = true && (isSetRanges());
+    list.add(present_ranges);
+    if (present_ranges)
+      list.add(ranges);
+
+    boolean present_columns = true && (isSetColumns());
+    list.add(present_columns);
+    if (present_columns)
+      list.add(columns);
+
+    boolean present_iterators = true && (isSetIterators());
+    list.add(present_iterators);
+    if (present_iterators)
+      list.add(iterators);
+
+    boolean present_threads = true && (isSetThreads());
+    list.add(present_threads);
+    if (present_threads)
+      list.add(threads);
+
+    return list.hashCode();
   }
 
   @Override
@@ -629,7 +659,7 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
     }
@@ -719,11 +749,11 @@
               {
                 org.apache.thrift.protocol.TSet _set50 = iprot.readSetBegin();
                 struct.authorizations = new HashSet<ByteBuffer>(2*_set50.size);
-                for (int _i51 = 0; _i51 < _set50.size; ++_i51)
+                ByteBuffer _elem51;
+                for (int _i52 = 0; _i52 < _set50.size; ++_i52)
                 {
-                  ByteBuffer _elem52;
-                  _elem52 = iprot.readBinary();
-                  struct.authorizations.add(_elem52);
+                  _elem51 = iprot.readBinary();
+                  struct.authorizations.add(_elem51);
                 }
                 iprot.readSetEnd();
               }
@@ -737,12 +767,12 @@
               {
                 org.apache.thrift.protocol.TList _list53 = iprot.readListBegin();
                 struct.ranges = new ArrayList<Range>(_list53.size);
-                for (int _i54 = 0; _i54 < _list53.size; ++_i54)
+                Range _elem54;
+                for (int _i55 = 0; _i55 < _list53.size; ++_i55)
                 {
-                  Range _elem55;
-                  _elem55 = new Range();
-                  _elem55.read(iprot);
-                  struct.ranges.add(_elem55);
+                  _elem54 = new Range();
+                  _elem54.read(iprot);
+                  struct.ranges.add(_elem54);
                 }
                 iprot.readListEnd();
               }
@@ -756,12 +786,12 @@
               {
                 org.apache.thrift.protocol.TList _list56 = iprot.readListBegin();
                 struct.columns = new ArrayList<ScanColumn>(_list56.size);
-                for (int _i57 = 0; _i57 < _list56.size; ++_i57)
+                ScanColumn _elem57;
+                for (int _i58 = 0; _i58 < _list56.size; ++_i58)
                 {
-                  ScanColumn _elem58;
-                  _elem58 = new ScanColumn();
-                  _elem58.read(iprot);
-                  struct.columns.add(_elem58);
+                  _elem57 = new ScanColumn();
+                  _elem57.read(iprot);
+                  struct.columns.add(_elem57);
                 }
                 iprot.readListEnd();
               }
@@ -775,12 +805,12 @@
               {
                 org.apache.thrift.protocol.TList _list59 = iprot.readListBegin();
                 struct.iterators = new ArrayList<IteratorSetting>(_list59.size);
-                for (int _i60 = 0; _i60 < _list59.size; ++_i60)
+                IteratorSetting _elem60;
+                for (int _i61 = 0; _i61 < _list59.size; ++_i61)
                 {
-                  IteratorSetting _elem61;
-                  _elem61 = new IteratorSetting();
-                  _elem61.read(iprot);
-                  struct.iterators.add(_elem61);
+                  _elem60 = new IteratorSetting();
+                  _elem60.read(iprot);
+                  struct.iterators.add(_elem60);
                 }
                 iprot.readListEnd();
               }
@@ -956,11 +986,11 @@
         {
           org.apache.thrift.protocol.TSet _set70 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.authorizations = new HashSet<ByteBuffer>(2*_set70.size);
-          for (int _i71 = 0; _i71 < _set70.size; ++_i71)
+          ByteBuffer _elem71;
+          for (int _i72 = 0; _i72 < _set70.size; ++_i72)
           {
-            ByteBuffer _elem72;
-            _elem72 = iprot.readBinary();
-            struct.authorizations.add(_elem72);
+            _elem71 = iprot.readBinary();
+            struct.authorizations.add(_elem71);
           }
         }
         struct.setAuthorizationsIsSet(true);
@@ -969,12 +999,12 @@
         {
           org.apache.thrift.protocol.TList _list73 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.ranges = new ArrayList<Range>(_list73.size);
-          for (int _i74 = 0; _i74 < _list73.size; ++_i74)
+          Range _elem74;
+          for (int _i75 = 0; _i75 < _list73.size; ++_i75)
           {
-            Range _elem75;
-            _elem75 = new Range();
-            _elem75.read(iprot);
-            struct.ranges.add(_elem75);
+            _elem74 = new Range();
+            _elem74.read(iprot);
+            struct.ranges.add(_elem74);
           }
         }
         struct.setRangesIsSet(true);
@@ -983,12 +1013,12 @@
         {
           org.apache.thrift.protocol.TList _list76 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.columns = new ArrayList<ScanColumn>(_list76.size);
-          for (int _i77 = 0; _i77 < _list76.size; ++_i77)
+          ScanColumn _elem77;
+          for (int _i78 = 0; _i78 < _list76.size; ++_i78)
           {
-            ScanColumn _elem78;
-            _elem78 = new ScanColumn();
-            _elem78.read(iprot);
-            struct.columns.add(_elem78);
+            _elem77 = new ScanColumn();
+            _elem77.read(iprot);
+            struct.columns.add(_elem77);
           }
         }
         struct.setColumnsIsSet(true);
@@ -997,12 +1027,12 @@
         {
           org.apache.thrift.protocol.TList _list79 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.iterators = new ArrayList<IteratorSetting>(_list79.size);
-          for (int _i80 = 0; _i80 < _list79.size; ++_i80)
+          IteratorSetting _elem80;
+          for (int _i81 = 0; _i81 < _list79.size; ++_i81)
           {
-            IteratorSetting _elem81;
-            _elem81 = new IteratorSetting();
-            _elem81.read(iprot);
-            struct.iterators.add(_elem81);
+            _elem80 = new IteratorSetting();
+            _elem80.read(iprot);
+            struct.iterators.add(_elem80);
           }
         }
         struct.setIteratorsIsSet(true);
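The deserialization hunks above all make the same mechanical change: the 0.9.3 compiler hoists the element variable out of the read loop (declaring it once before the loop instead of once per iteration) and renumbers the `_elemNN`/`_iNN` temporaries accordingly. A minimal sketch of the new loop shape, with `Item` and `readItem` as hypothetical stand-ins for the generated struct and the protocol read call:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Thrift 0.9.3 deserialization-loop shape: the element
// variable is declared once before the loop rather than inside it.
// Item and readItem() are hypothetical stand-ins, not the generated API.
public class LoopShape {
    static class Item {
        final int v;
        Item(int v) { this.v = v; }
    }

    // stand-in for "_elem = new X(); _elem.read(iprot);"
    static Item readItem(int i) { return new Item(i); }

    static List<Item> readList(int size) {
        List<Item> items = new ArrayList<>(size);
        Item elem;                        // hoisted declaration (0.9.3 style)
        for (int i = 0; i < size; ++i) {
            elem = readItem(i);
            items.add(elem);
        }
        return items;
    }
}
```

The behavior is identical to the 0.9.1 form; only the declaration site and the temporary numbering change, which is why these hunks touch so many files without altering semantics.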
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Column.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Column.java
index 007eb53..96ae162 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Column.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Column.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class Column implements org.apache.thrift.TBase<Column, Column._Fields>, java.io.Serializable, Cloneable, Comparable<Column> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class Column implements org.apache.thrift.TBase<Column, Column._Fields>, java.io.Serializable, Cloneable, Comparable<Column> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("Column");
 
   private static final org.apache.thrift.protocol.TField COL_FAMILY_FIELD_DESC = new org.apache.thrift.protocol.TField("colFamily", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -152,9 +155,9 @@
     ByteBuffer colVisibility)
   {
     this();
-    this.colFamily = colFamily;
-    this.colQualifier = colQualifier;
-    this.colVisibility = colVisibility;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
+    this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
   }
 
   /**
@@ -163,15 +166,12 @@
   public Column(Column other) {
     if (other.isSetColFamily()) {
       this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(other.colFamily);
-;
     }
     if (other.isSetColQualifier()) {
       this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(other.colQualifier);
-;
     }
     if (other.isSetColVisibility()) {
       this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(other.colVisibility);
-;
     }
   }
 
@@ -192,16 +192,16 @@
   }
 
   public ByteBuffer bufferForColFamily() {
-    return colFamily;
+    return org.apache.thrift.TBaseHelper.copyBinary(colFamily);
   }
 
   public Column setColFamily(byte[] colFamily) {
-    setColFamily(colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(colFamily));
+    this.colFamily = colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colFamily, colFamily.length));
     return this;
   }
 
   public Column setColFamily(ByteBuffer colFamily) {
-    this.colFamily = colFamily;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
     return this;
   }
 
@@ -226,16 +226,16 @@
   }
 
   public ByteBuffer bufferForColQualifier() {
-    return colQualifier;
+    return org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
   }
 
   public Column setColQualifier(byte[] colQualifier) {
-    setColQualifier(colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(colQualifier));
+    this.colQualifier = colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colQualifier, colQualifier.length));
     return this;
   }
 
   public Column setColQualifier(ByteBuffer colQualifier) {
-    this.colQualifier = colQualifier;
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
     return this;
   }
 
@@ -260,16 +260,16 @@
   }
 
   public ByteBuffer bufferForColVisibility() {
-    return colVisibility;
+    return org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
   }
 
   public Column setColVisibility(byte[] colVisibility) {
-    setColVisibility(colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(colVisibility));
+    this.colVisibility = colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colVisibility, colVisibility.length));
     return this;
   }
 
   public Column setColVisibility(ByteBuffer colVisibility) {
-    this.colVisibility = colVisibility;
+    this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
     return this;
   }
 
@@ -394,7 +394,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_colFamily = true && (isSetColFamily());
+    list.add(present_colFamily);
+    if (present_colFamily)
+      list.add(colFamily);
+
+    boolean present_colQualifier = true && (isSetColQualifier());
+    list.add(present_colQualifier);
+    if (present_colQualifier)
+      list.add(colQualifier);
+
+    boolean present_colVisibility = true && (isSetColVisibility());
+    list.add(present_colVisibility);
+    if (present_colVisibility)
+      list.add(colVisibility);
+
+    return list.hashCode();
   }
 
   @Override
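The `Column.java` setter/getter hunks above share one pattern: 0.9.3 copies binary fields defensively on the way in (`Arrays.copyOf` for `byte[]`, `TBaseHelper.copyBinary` for `ByteBuffer`) and on the way out (`bufferForX` returns a copy), so callers can no longer mutate a struct's state through a shared array. A rough sketch of the effect, using a hypothetical `ColumnSketch` class rather than the generated one:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch of the 0.9.3 defensive-copy setter pattern: the byte[] overload
// copies the caller's array before wrapping it, so later mutation of the
// caller's array cannot change the stored field. ColumnSketch is a
// hypothetical stand-in for the generated Column class.
public class ColumnSketch {
    private ByteBuffer colFamily;

    public ColumnSketch setColFamily(byte[] colFamily) {
        this.colFamily = colFamily == null
            ? null
            : ByteBuffer.wrap(Arrays.copyOf(colFamily, colFamily.length));
        return this;
    }

    public byte[] getColFamily() {
        // the wrapped array is the private copy made in the setter
        return colFamily == null ? null : colFamily.array();
    }
}
```

This is a behavioral change worth noting for proxy clients: code that relied on the old aliasing (mutating a buffer after passing it in) will see different results after the regeneration.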
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ColumnUpdate.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ColumnUpdate.java
index 97d9542..33dfdbe 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ColumnUpdate.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ColumnUpdate.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ColumnUpdate implements org.apache.thrift.TBase<ColumnUpdate, ColumnUpdate._Fields>, java.io.Serializable, Cloneable, Comparable<ColumnUpdate> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ColumnUpdate implements org.apache.thrift.TBase<ColumnUpdate, ColumnUpdate._Fields>, java.io.Serializable, Cloneable, Comparable<ColumnUpdate> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ColumnUpdate");
 
   private static final org.apache.thrift.protocol.TField COL_FAMILY_FIELD_DESC = new org.apache.thrift.protocol.TField("colFamily", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -148,7 +151,7 @@
   private static final int __TIMESTAMP_ISSET_ID = 0;
   private static final int __DELETECELL_ISSET_ID = 1;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.COL_VISIBILITY,_Fields.TIMESTAMP,_Fields.VALUE,_Fields.DELETE_CELL};
+  private static final _Fields optionals[] = {_Fields.COL_VISIBILITY,_Fields.TIMESTAMP,_Fields.VALUE,_Fields.DELETE_CELL};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -176,8 +179,8 @@
     ByteBuffer colQualifier)
   {
     this();
-    this.colFamily = colFamily;
-    this.colQualifier = colQualifier;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
   }
 
   /**
@@ -187,20 +190,16 @@
     __isset_bitfield = other.__isset_bitfield;
     if (other.isSetColFamily()) {
       this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(other.colFamily);
-;
     }
     if (other.isSetColQualifier()) {
       this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(other.colQualifier);
-;
     }
     if (other.isSetColVisibility()) {
       this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(other.colVisibility);
-;
     }
     this.timestamp = other.timestamp;
     if (other.isSetValue()) {
       this.value = org.apache.thrift.TBaseHelper.copyBinary(other.value);
-;
     }
     this.deleteCell = other.deleteCell;
   }
@@ -227,16 +226,16 @@
   }
 
   public ByteBuffer bufferForColFamily() {
-    return colFamily;
+    return org.apache.thrift.TBaseHelper.copyBinary(colFamily);
   }
 
   public ColumnUpdate setColFamily(byte[] colFamily) {
-    setColFamily(colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(colFamily));
+    this.colFamily = colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colFamily, colFamily.length));
     return this;
   }
 
   public ColumnUpdate setColFamily(ByteBuffer colFamily) {
-    this.colFamily = colFamily;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
     return this;
   }
 
@@ -261,16 +260,16 @@
   }
 
   public ByteBuffer bufferForColQualifier() {
-    return colQualifier;
+    return org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
   }
 
   public ColumnUpdate setColQualifier(byte[] colQualifier) {
-    setColQualifier(colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(colQualifier));
+    this.colQualifier = colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colQualifier, colQualifier.length));
     return this;
   }
 
   public ColumnUpdate setColQualifier(ByteBuffer colQualifier) {
-    this.colQualifier = colQualifier;
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
     return this;
   }
 
@@ -295,16 +294,16 @@
   }
 
   public ByteBuffer bufferForColVisibility() {
-    return colVisibility;
+    return org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
   }
 
   public ColumnUpdate setColVisibility(byte[] colVisibility) {
-    setColVisibility(colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(colVisibility));
+    this.colVisibility = colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colVisibility, colVisibility.length));
     return this;
   }
 
   public ColumnUpdate setColVisibility(ByteBuffer colVisibility) {
-    this.colVisibility = colVisibility;
+    this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
     return this;
   }
 
@@ -352,16 +351,16 @@
   }
 
   public ByteBuffer bufferForValue() {
-    return value;
+    return org.apache.thrift.TBaseHelper.copyBinary(value);
   }
 
   public ColumnUpdate setValue(byte[] value) {
-    setValue(value == null ? (ByteBuffer)null : ByteBuffer.wrap(value));
+    this.value = value == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(value, value.length));
     return this;
   }
 
   public ColumnUpdate setValue(ByteBuffer value) {
-    this.value = value;
+    this.value = org.apache.thrift.TBaseHelper.copyBinary(value);
     return this;
   }
 
@@ -468,13 +467,13 @@
       return getColVisibility();
 
     case TIMESTAMP:
-      return Long.valueOf(getTimestamp());
+      return getTimestamp();
 
     case VALUE:
       return getValue();
 
     case DELETE_CELL:
-      return Boolean.valueOf(isDeleteCell());
+      return isDeleteCell();
 
     }
     throw new IllegalStateException();
@@ -575,7 +574,39 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_colFamily = true && (isSetColFamily());
+    list.add(present_colFamily);
+    if (present_colFamily)
+      list.add(colFamily);
+
+    boolean present_colQualifier = true && (isSetColQualifier());
+    list.add(present_colQualifier);
+    if (present_colQualifier)
+      list.add(colQualifier);
+
+    boolean present_colVisibility = true && (isSetColVisibility());
+    list.add(present_colVisibility);
+    if (present_colVisibility)
+      list.add(colVisibility);
+
+    boolean present_timestamp = true && (isSetTimestamp());
+    list.add(present_timestamp);
+    if (present_timestamp)
+      list.add(timestamp);
+
+    boolean present_value = true && (isSetValue());
+    list.add(present_value);
+    if (present_value)
+      list.add(value);
+
+    boolean present_deleteCell = true && (isSetDeleteCell());
+    list.add(present_deleteCell);
+    if (present_deleteCell)
+      list.add(deleteCell);
+
+    return list.hashCode();
   }
 
   @Override
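The `hashCode()` hunks above replace the 0.9.1 stub `return 0;` with a real implementation: for each field, the generated code adds a "present" flag and, when present, the value itself to a list, then delegates to `List.hashCode()`. This keeps `hashCode` consistent with `equals` and distinguishes an unset field from a set one. A reduced sketch with a hypothetical two-field struct (null standing in for the generated `isSetX()` checks):

```java
import java.util.ArrayList;
import java.util.List;

// Reduced sketch of the Thrift 0.9.3 generated hashCode pattern:
// collect (present, value) pairs and delegate to List.hashCode().
// PairSketch is a hypothetical stand-in for a generated struct.
public class PairSketch {
    String name;      // null means unset, mirroring isSetName()
    Long timestamp;   // boxed so null can mean unset

    @Override
    public int hashCode() {
        List<Object> list = new ArrayList<Object>();

        boolean present_name = (name != null);
        list.add(present_name);
        if (present_name)
            list.add(name);

        boolean present_timestamp = (timestamp != null);
        list.add(present_timestamp);
        if (present_timestamp)
            list.add(timestamp);

        return list.hashCode();
    }
}
```

Practically, this fixes the degenerate behavior of the 0.9.1 output, where every instance hashed to 0 and hash-based collections of these structs devolved into linear scans.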
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionReason.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionReason.java
index 1875275..77cf53d 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionReason.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionReason.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionStrategyConfig.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionStrategyConfig.java
index 2e74e39..e8f7f88 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionStrategyConfig.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionStrategyConfig.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class CompactionStrategyConfig implements org.apache.thrift.TBase<CompactionStrategyConfig, CompactionStrategyConfig._Fields>, java.io.Serializable, Cloneable, Comparable<CompactionStrategyConfig> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class CompactionStrategyConfig implements org.apache.thrift.TBase<CompactionStrategyConfig, CompactionStrategyConfig._Fields>, java.io.Serializable, Cloneable, Comparable<CompactionStrategyConfig> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("CompactionStrategyConfig");
 
   private static final org.apache.thrift.protocol.TField CLASS_NAME_FIELD_DESC = new org.apache.thrift.protocol.TField("className", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -316,7 +319,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_className = true && (isSetClassName());
+    list.add(present_className);
+    if (present_className)
+      list.add(className);
+
+    boolean present_options = true && (isSetOptions());
+    list.add(present_options);
+    if (present_options)
+      list.add(options);
+
+    return list.hashCode();
   }
 
   @Override
@@ -438,13 +453,13 @@
               {
                 org.apache.thrift.protocol.TMap _map154 = iprot.readMapBegin();
                 struct.options = new HashMap<String,String>(2*_map154.size);
-                for (int _i155 = 0; _i155 < _map154.size; ++_i155)
+                String _key155;
+                String _val156;
+                for (int _i157 = 0; _i157 < _map154.size; ++_i157)
                 {
-                  String _key156;
-                  String _val157;
-                  _key156 = iprot.readString();
-                  _val157 = iprot.readString();
-                  struct.options.put(_key156, _val157);
+                  _key155 = iprot.readString();
+                  _val156 = iprot.readString();
+                  struct.options.put(_key155, _val156);
                 }
                 iprot.readMapEnd();
               }
@@ -538,13 +553,13 @@
         {
           org.apache.thrift.protocol.TMap _map160 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.options = new HashMap<String,String>(2*_map160.size);
-          for (int _i161 = 0; _i161 < _map160.size; ++_i161)
+          String _key161;
+          String _val162;
+          for (int _i163 = 0; _i163 < _map160.size; ++_i163)
           {
-            String _key162;
-            String _val163;
-            _key162 = iprot.readString();
-            _val163 = iprot.readString();
-            struct.options.put(_key162, _val163);
+            _key161 = iprot.readString();
+            _val162 = iprot.readString();
+            struct.options.put(_key161, _val162);
           }
         }
         struct.setOptionsIsSet(true);
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionType.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionType.java
index 1b82951..d561796 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionType.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/CompactionType.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Condition.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Condition.java
index c4b3c07..d946a87 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Condition.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Condition.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class Condition implements org.apache.thrift.TBase<Condition, Condition._Fields>, java.io.Serializable, Cloneable, Comparable<Condition> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class Condition implements org.apache.thrift.TBase<Condition, Condition._Fields>, java.io.Serializable, Cloneable, Comparable<Condition> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("Condition");
 
   private static final org.apache.thrift.protocol.TField COLUMN_FIELD_DESC = new org.apache.thrift.protocol.TField("column", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -137,7 +140,7 @@
   // isset id assignments
   private static final int __TIMESTAMP_ISSET_ID = 0;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.TIMESTAMP,_Fields.VALUE,_Fields.ITERATORS};
+  private static final _Fields optionals[] = {_Fields.TIMESTAMP,_Fields.VALUE,_Fields.ITERATORS};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -175,7 +178,6 @@
     this.timestamp = other.timestamp;
     if (other.isSetValue()) {
       this.value = org.apache.thrift.TBaseHelper.copyBinary(other.value);
-;
     }
     if (other.isSetIterators()) {
       List<IteratorSetting> __this__iterators = new ArrayList<IteratorSetting>(other.iterators.size());
@@ -252,16 +254,16 @@
   }
 
   public ByteBuffer bufferForValue() {
-    return value;
+    return org.apache.thrift.TBaseHelper.copyBinary(value);
   }
 
   public Condition setValue(byte[] value) {
-    setValue(value == null ? (ByteBuffer)null : ByteBuffer.wrap(value));
+    this.value = value == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(value, value.length));
     return this;
   }
 
   public Condition setValue(ByteBuffer value) {
-    this.value = value;
+    this.value = org.apache.thrift.TBaseHelper.copyBinary(value);
     return this;
   }
 
@@ -362,7 +364,7 @@
       return getColumn();
 
     case TIMESTAMP:
-      return Long.valueOf(getTimestamp());
+      return getTimestamp();
 
     case VALUE:
       return getValue();
@@ -447,7 +449,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_column = true && (isSetColumn());
+    list.add(present_column);
+    if (present_column)
+      list.add(column);
+
+    boolean present_timestamp = true && (isSetTimestamp());
+    list.add(present_timestamp);
+    if (present_timestamp)
+      list.add(timestamp);
+
+    boolean present_value = true && (isSetValue());
+    list.add(present_value);
+    if (present_value)
+      list.add(value);
+
+    boolean present_iterators = true && (isSetIterators());
+    list.add(present_iterators);
+    if (present_iterators)
+      list.add(iterators);
+
+    return list.hashCode();
   }
 
   @Override
@@ -629,12 +653,12 @@
               {
                 org.apache.thrift.protocol.TList _list82 = iprot.readListBegin();
                 struct.iterators = new ArrayList<IteratorSetting>(_list82.size);
-                for (int _i83 = 0; _i83 < _list82.size; ++_i83)
+                IteratorSetting _elem83;
+                for (int _i84 = 0; _i84 < _list82.size; ++_i84)
                 {
-                  IteratorSetting _elem84;
-                  _elem84 = new IteratorSetting();
-                  _elem84.read(iprot);
-                  struct.iterators.add(_elem84);
+                  _elem83 = new IteratorSetting();
+                  _elem83.read(iprot);
+                  struct.iterators.add(_elem83);
                 }
                 iprot.readListEnd();
               }
@@ -761,12 +785,12 @@
         {
           org.apache.thrift.protocol.TList _list87 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.iterators = new ArrayList<IteratorSetting>(_list87.size);
-          for (int _i88 = 0; _i88 < _list87.size; ++_i88)
+          IteratorSetting _elem88;
+          for (int _i89 = 0; _i89 < _list87.size; ++_i89)
           {
-            IteratorSetting _elem89;
-            _elem89 = new IteratorSetting();
-            _elem89.read(iprot);
-            struct.iterators.add(_elem89);
+            _elem88 = new IteratorSetting();
+            _elem88.read(iprot);
+            struct.iterators.add(_elem88);
           }
         }
         struct.setIteratorsIsSet(true);
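The `getFieldValue()` hunks (e.g. `Long.valueOf(getTimestamp())` becoming `getTimestamp()`) are a boxing cleanup: since the method's return type is `Object`, the compiler autoboxes the primitive anyway, so the explicit wrapper call is redundant. A tiny sketch with a hypothetical `BoxSketch` class:

```java
// Sketch of the getFieldValue() cleanup in the hunks above: 0.9.3 drops
// explicit boxing like Long.valueOf(getTimestamp()) and relies on
// autoboxing when returning a primitive as Object. BoxSketch is a
// hypothetical stand-in, not the generated API.
public class BoxSketch {
    private long timestamp = 42L;

    long getTimestamp() { return timestamp; }

    Object getFieldValue() {
        return getTimestamp();  // autoboxed to Long; same value as Long.valueOf(...)
    }
}
```

Like the loop-variable renumbering, this changes no runtime behavior; it only simplifies the generated source.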
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalStatus.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalStatus.java
index 515c416..c8626a5 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalStatus.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalStatus.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalUpdates.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalUpdates.java
index 551e996..07f1338 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalUpdates.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalUpdates.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ConditionalUpdates implements org.apache.thrift.TBase<ConditionalUpdates, ConditionalUpdates._Fields>, java.io.Serializable, Cloneable, Comparable<ConditionalUpdates> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ConditionalUpdates implements org.apache.thrift.TBase<ConditionalUpdates, ConditionalUpdates._Fields>, java.io.Serializable, Cloneable, Comparable<ConditionalUpdates> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ConditionalUpdates");
 
   private static final org.apache.thrift.protocol.TField CONDITIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("conditions", org.apache.thrift.protocol.TType.LIST, (short)2);
@@ -342,7 +345,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_conditions = true && (isSetConditions());
+    list.add(present_conditions);
+    if (present_conditions)
+      list.add(conditions);
+
+    boolean present_updates = true && (isSetUpdates());
+    list.add(present_updates);
+    if (present_updates)
+      list.add(updates);
+
+    return list.hashCode();
   }
 
   @Override
@@ -456,12 +471,12 @@
               {
                 org.apache.thrift.protocol.TList _list90 = iprot.readListBegin();
                 struct.conditions = new ArrayList<Condition>(_list90.size);
-                for (int _i91 = 0; _i91 < _list90.size; ++_i91)
+                Condition _elem91;
+                for (int _i92 = 0; _i92 < _list90.size; ++_i92)
                 {
-                  Condition _elem92;
-                  _elem92 = new Condition();
-                  _elem92.read(iprot);
-                  struct.conditions.add(_elem92);
+                  _elem91 = new Condition();
+                  _elem91.read(iprot);
+                  struct.conditions.add(_elem91);
                 }
                 iprot.readListEnd();
               }
@@ -475,12 +490,12 @@
               {
                 org.apache.thrift.protocol.TList _list93 = iprot.readListBegin();
                 struct.updates = new ArrayList<ColumnUpdate>(_list93.size);
-                for (int _i94 = 0; _i94 < _list93.size; ++_i94)
+                ColumnUpdate _elem94;
+                for (int _i95 = 0; _i95 < _list93.size; ++_i95)
                 {
-                  ColumnUpdate _elem95;
-                  _elem95 = new ColumnUpdate();
-                  _elem95.read(iprot);
-                  struct.updates.add(_elem95);
+                  _elem94 = new ColumnUpdate();
+                  _elem94.read(iprot);
+                  struct.updates.add(_elem94);
                 }
                 iprot.readListEnd();
               }
@@ -581,12 +596,12 @@
         {
           org.apache.thrift.protocol.TList _list100 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.conditions = new ArrayList<Condition>(_list100.size);
-          for (int _i101 = 0; _i101 < _list100.size; ++_i101)
+          Condition _elem101;
+          for (int _i102 = 0; _i102 < _list100.size; ++_i102)
           {
-            Condition _elem102;
-            _elem102 = new Condition();
-            _elem102.read(iprot);
-            struct.conditions.add(_elem102);
+            _elem101 = new Condition();
+            _elem101.read(iprot);
+            struct.conditions.add(_elem101);
           }
         }
         struct.setConditionsIsSet(true);
@@ -595,12 +610,12 @@
         {
           org.apache.thrift.protocol.TList _list103 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.updates = new ArrayList<ColumnUpdate>(_list103.size);
-          for (int _i104 = 0; _i104 < _list103.size; ++_i104)
+          ColumnUpdate _elem104;
+          for (int _i105 = 0; _i105 < _list103.size; ++_i105)
           {
-            ColumnUpdate _elem105;
-            _elem105 = new ColumnUpdate();
-            _elem105.read(iprot);
-            struct.updates.add(_elem105);
+            _elem104 = new ColumnUpdate();
+            _elem104.read(iprot);
+            struct.updates.add(_elem104);
           }
         }
         struct.setUpdatesIsSet(true);
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalWriterOptions.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalWriterOptions.java
index 16b21fc..16c78a3 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalWriterOptions.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ConditionalWriterOptions.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ConditionalWriterOptions implements org.apache.thrift.TBase<ConditionalWriterOptions, ConditionalWriterOptions._Fields>, java.io.Serializable, Cloneable, Comparable<ConditionalWriterOptions> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ConditionalWriterOptions implements org.apache.thrift.TBase<ConditionalWriterOptions, ConditionalWriterOptions._Fields>, java.io.Serializable, Cloneable, Comparable<ConditionalWriterOptions> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ConditionalWriterOptions");
 
   private static final org.apache.thrift.protocol.TField MAX_MEMORY_FIELD_DESC = new org.apache.thrift.protocol.TField("maxMemory", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -152,7 +155,7 @@
   private static final int __TIMEOUTMS_ISSET_ID = 1;
   private static final int __THREADS_ISSET_ID = 2;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.MAX_MEMORY,_Fields.TIMEOUT_MS,_Fields.THREADS,_Fields.AUTHORIZATIONS,_Fields.DURABILITY};
+  private static final _Fields optionals[] = {_Fields.MAX_MEMORY,_Fields.TIMEOUT_MS,_Fields.THREADS,_Fields.AUTHORIZATIONS,_Fields.DURABILITY};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -395,13 +398,13 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case MAX_MEMORY:
-      return Long.valueOf(getMaxMemory());
+      return getMaxMemory();
 
     case TIMEOUT_MS:
-      return Long.valueOf(getTimeoutMs());
+      return getTimeoutMs();
 
     case THREADS:
-      return Integer.valueOf(getThreads());
+      return getThreads();
 
     case AUTHORIZATIONS:
       return getAuthorizations();
@@ -497,7 +500,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_maxMemory = true && (isSetMaxMemory());
+    list.add(present_maxMemory);
+    if (present_maxMemory)
+      list.add(maxMemory);
+
+    boolean present_timeoutMs = true && (isSetTimeoutMs());
+    list.add(present_timeoutMs);
+    if (present_timeoutMs)
+      list.add(timeoutMs);
+
+    boolean present_threads = true && (isSetThreads());
+    list.add(present_threads);
+    if (present_threads)
+      list.add(threads);
+
+    boolean present_authorizations = true && (isSetAuthorizations());
+    list.add(present_authorizations);
+    if (present_authorizations)
+      list.add(authorizations);
+
+    boolean present_durability = true && (isSetDurability());
+    list.add(present_durability);
+    if (present_durability)
+      list.add(durability.getValue());
+
+    return list.hashCode();
   }
 
   @Override
@@ -601,7 +631,7 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
     }
@@ -689,11 +719,11 @@
               {
                 org.apache.thrift.protocol.TSet _set106 = iprot.readSetBegin();
                 struct.authorizations = new HashSet<ByteBuffer>(2*_set106.size);
-                for (int _i107 = 0; _i107 < _set106.size; ++_i107)
+                ByteBuffer _elem107;
+                for (int _i108 = 0; _i108 < _set106.size; ++_i108)
                 {
-                  ByteBuffer _elem108;
-                  _elem108 = iprot.readBinary();
-                  struct.authorizations.add(_elem108);
+                  _elem107 = iprot.readBinary();
+                  struct.authorizations.add(_elem107);
                 }
                 iprot.readSetEnd();
               }
@@ -704,7 +734,7 @@
             break;
           case 5: // DURABILITY
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.durability = Durability.findByValue(iprot.readI32());
+              struct.durability = org.apache.accumulo.proxy.thrift.Durability.findByValue(iprot.readI32());
               struct.setDurabilityIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -838,17 +868,17 @@
         {
           org.apache.thrift.protocol.TSet _set111 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.authorizations = new HashSet<ByteBuffer>(2*_set111.size);
-          for (int _i112 = 0; _i112 < _set111.size; ++_i112)
+          ByteBuffer _elem112;
+          for (int _i113 = 0; _i113 < _set111.size; ++_i113)
           {
-            ByteBuffer _elem113;
-            _elem113 = iprot.readBinary();
-            struct.authorizations.add(_elem113);
+            _elem112 = iprot.readBinary();
+            struct.authorizations.add(_elem112);
           }
         }
         struct.setAuthorizationsIsSet(true);
       }
       if (incoming.get(4)) {
-        struct.durability = Durability.findByValue(iprot.readI32());
+        struct.durability = org.apache.accumulo.proxy.thrift.Durability.findByValue(iprot.readI32());
         struct.setDurabilityIsSet(true);
       }
     }
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/DiskUsage.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/DiskUsage.java
index 82a886d..c49910f 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/DiskUsage.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/DiskUsage.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class DiskUsage implements org.apache.thrift.TBase<DiskUsage, DiskUsage._Fields>, java.io.Serializable, Cloneable, Comparable<DiskUsage> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class DiskUsage implements org.apache.thrift.TBase<DiskUsage, DiskUsage._Fields>, java.io.Serializable, Cloneable, Comparable<DiskUsage> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("DiskUsage");
 
   private static final org.apache.thrift.protocol.TField TABLES_FIELD_DESC = new org.apache.thrift.protocol.TField("tables", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -264,7 +267,7 @@
       return getTables();
 
     case USAGE:
-      return Long.valueOf(getUsage());
+      return getUsage();
 
     }
     throw new IllegalStateException();
@@ -321,7 +324,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_tables = true && (isSetTables());
+    list.add(present_tables);
+    if (present_tables)
+      list.add(tables);
+
+    boolean present_usage = true;
+    list.add(present_usage);
+    if (present_usage)
+      list.add(usage);
+
+    return list.hashCode();
   }
 
   @Override
@@ -433,11 +448,11 @@
               {
                 org.apache.thrift.protocol.TList _list0 = iprot.readListBegin();
                 struct.tables = new ArrayList<String>(_list0.size);
-                for (int _i1 = 0; _i1 < _list0.size; ++_i1)
+                String _elem1;
+                for (int _i2 = 0; _i2 < _list0.size; ++_i2)
                 {
-                  String _elem2;
-                  _elem2 = iprot.readString();
-                  struct.tables.add(_elem2);
+                  _elem1 = iprot.readString();
+                  struct.tables.add(_elem1);
                 }
                 iprot.readListEnd();
               }
@@ -531,11 +546,11 @@
         {
           org.apache.thrift.protocol.TList _list5 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.tables = new ArrayList<String>(_list5.size);
-          for (int _i6 = 0; _i6 < _list5.size; ++_i6)
+          String _elem6;
+          for (int _i7 = 0; _i7 < _list5.size; ++_i7)
           {
-            String _elem7;
-            _elem7 = iprot.readString();
-            struct.tables.add(_elem7);
+            _elem6 = iprot.readString();
+            struct.tables.add(_elem6);
           }
         }
         struct.setTablesIsSet(true);
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Durability.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Durability.java
index fb4612a..daa16c8 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Durability.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Durability.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorScope.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorScope.java
index 0fc8de8..65408bd 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorScope.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorScope.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorSetting.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorSetting.java
index eabc686..826c46f 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorSetting.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/IteratorSetting.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class IteratorSetting implements org.apache.thrift.TBase<IteratorSetting, IteratorSetting._Fields>, java.io.Serializable, Cloneable, Comparable<IteratorSetting> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class IteratorSetting implements org.apache.thrift.TBase<IteratorSetting, IteratorSetting._Fields>, java.io.Serializable, Cloneable, Comparable<IteratorSetting> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("IteratorSetting");
 
   private static final org.apache.thrift.protocol.TField PRIORITY_FIELD_DESC = new org.apache.thrift.protocol.TField("priority", org.apache.thrift.protocol.TType.I32, (short)1);
@@ -348,7 +351,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case PRIORITY:
-      return Integer.valueOf(getPriority());
+      return getPriority();
 
     case NAME:
       return getName();
@@ -436,7 +439,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_priority = true;
+    list.add(present_priority);
+    if (present_priority)
+      list.add(priority);
+
+    boolean present_name = true && (isSetName());
+    list.add(present_name);
+    if (present_name)
+      list.add(name);
+
+    boolean present_iteratorClass = true && (isSetIteratorClass());
+    list.add(present_iteratorClass);
+    if (present_iteratorClass)
+      list.add(iteratorClass);
+
+    boolean present_properties = true && (isSetProperties());
+    list.add(present_properties);
+    if (present_properties)
+      list.add(properties);
+
+    return list.hashCode();
   }
 
   @Override
@@ -608,13 +633,13 @@
               {
                 org.apache.thrift.protocol.TMap _map16 = iprot.readMapBegin();
                 struct.properties = new HashMap<String,String>(2*_map16.size);
-                for (int _i17 = 0; _i17 < _map16.size; ++_i17)
+                String _key17;
+                String _val18;
+                for (int _i19 = 0; _i19 < _map16.size; ++_i19)
                 {
-                  String _key18;
-                  String _val19;
-                  _key18 = iprot.readString();
-                  _val19 = iprot.readString();
-                  struct.properties.put(_key18, _val19);
+                  _key17 = iprot.readString();
+                  _val18 = iprot.readString();
+                  struct.properties.put(_key17, _val18);
                 }
                 iprot.readMapEnd();
               }
@@ -736,13 +761,13 @@
         {
           org.apache.thrift.protocol.TMap _map22 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.properties = new HashMap<String,String>(2*_map22.size);
-          for (int _i23 = 0; _i23 < _map22.size; ++_i23)
+          String _key23;
+          String _val24;
+          for (int _i25 = 0; _i25 < _map22.size; ++_i25)
           {
-            String _key24;
-            String _val25;
-            _key24 = iprot.readString();
-            _val25 = iprot.readString();
-            struct.properties.put(_key24, _val25);
+            _key23 = iprot.readString();
+            _val24 = iprot.readString();
+            struct.properties.put(_key23, _val24);
           }
         }
         struct.setPropertiesIsSet(true);
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Key.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Key.java
index 6984cf2..93237c5 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Key.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Key.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class Key implements org.apache.thrift.TBase<Key, Key._Fields>, java.io.Serializable, Cloneable, Comparable<Key> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class Key implements org.apache.thrift.TBase<Key, Key._Fields>, java.io.Serializable, Cloneable, Comparable<Key> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("Key");
 
   private static final org.apache.thrift.protocol.TField ROW_FIELD_DESC = new org.apache.thrift.protocol.TField("row", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -142,7 +145,7 @@
   // isset id assignments
   private static final int __TIMESTAMP_ISSET_ID = 0;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.TIMESTAMP};
+  private static final _Fields optionals[] = {_Fields.TIMESTAMP};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -172,10 +175,10 @@
     ByteBuffer colVisibility)
   {
     this();
-    this.row = row;
-    this.colFamily = colFamily;
-    this.colQualifier = colQualifier;
-    this.colVisibility = colVisibility;
+    this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
+    this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
   }
 
   /**
@@ -185,19 +188,15 @@
     __isset_bitfield = other.__isset_bitfield;
     if (other.isSetRow()) {
       this.row = org.apache.thrift.TBaseHelper.copyBinary(other.row);
-;
     }
     if (other.isSetColFamily()) {
       this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(other.colFamily);
-;
     }
     if (other.isSetColQualifier()) {
       this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(other.colQualifier);
-;
     }
     if (other.isSetColVisibility()) {
       this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(other.colVisibility);
-;
     }
     this.timestamp = other.timestamp;
   }
@@ -222,16 +221,16 @@
   }
 
   public ByteBuffer bufferForRow() {
-    return row;
+    return org.apache.thrift.TBaseHelper.copyBinary(row);
   }
 
   public Key setRow(byte[] row) {
-    setRow(row == null ? (ByteBuffer)null : ByteBuffer.wrap(row));
+    this.row = row == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(row, row.length));
     return this;
   }
 
   public Key setRow(ByteBuffer row) {
-    this.row = row;
+    this.row = org.apache.thrift.TBaseHelper.copyBinary(row);
     return this;
   }
 
@@ -256,16 +255,16 @@
   }
 
   public ByteBuffer bufferForColFamily() {
-    return colFamily;
+    return org.apache.thrift.TBaseHelper.copyBinary(colFamily);
   }
 
   public Key setColFamily(byte[] colFamily) {
-    setColFamily(colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(colFamily));
+    this.colFamily = colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colFamily, colFamily.length));
     return this;
   }
 
   public Key setColFamily(ByteBuffer colFamily) {
-    this.colFamily = colFamily;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
     return this;
   }
 
@@ -290,16 +289,16 @@
   }
 
   public ByteBuffer bufferForColQualifier() {
-    return colQualifier;
+    return org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
   }
 
   public Key setColQualifier(byte[] colQualifier) {
-    setColQualifier(colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(colQualifier));
+    this.colQualifier = colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colQualifier, colQualifier.length));
     return this;
   }
 
   public Key setColQualifier(ByteBuffer colQualifier) {
-    this.colQualifier = colQualifier;
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
     return this;
   }
 
@@ -324,16 +323,16 @@
   }
 
   public ByteBuffer bufferForColVisibility() {
-    return colVisibility;
+    return org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
   }
 
   public Key setColVisibility(byte[] colVisibility) {
-    setColVisibility(colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(colVisibility));
+    this.colVisibility = colVisibility == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colVisibility, colVisibility.length));
     return this;
   }
 
   public Key setColVisibility(ByteBuffer colVisibility) {
-    this.colVisibility = colVisibility;
+    this.colVisibility = org.apache.thrift.TBaseHelper.copyBinary(colVisibility);
     return this;
   }
 
@@ -435,7 +434,7 @@
       return getColVisibility();
 
     case TIMESTAMP:
-      return Long.valueOf(getTimestamp());
+      return getTimestamp();
 
     }
     throw new IllegalStateException();
@@ -525,7 +524,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_row = true && (isSetRow());
+    list.add(present_row);
+    if (present_row)
+      list.add(row);
+
+    boolean present_colFamily = true && (isSetColFamily());
+    list.add(present_colFamily);
+    if (present_colFamily)
+      list.add(colFamily);
+
+    boolean present_colQualifier = true && (isSetColQualifier());
+    list.add(present_colQualifier);
+    if (present_colQualifier)
+      list.add(colQualifier);
+
+    boolean present_colVisibility = true && (isSetColVisibility());
+    list.add(present_colVisibility);
+    if (present_colVisibility)
+      list.add(colVisibility);
+
+    boolean present_timestamp = true && (isSetTimestamp());
+    list.add(present_timestamp);
+    if (present_timestamp)
+      list.add(timestamp);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyExtent.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyExtent.java
index 1136284..09001c6 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyExtent.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyExtent.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class KeyExtent implements org.apache.thrift.TBase<KeyExtent, KeyExtent._Fields>, java.io.Serializable, Cloneable, Comparable<KeyExtent> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class KeyExtent implements org.apache.thrift.TBase<KeyExtent, KeyExtent._Fields>, java.io.Serializable, Cloneable, Comparable<KeyExtent> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("KeyExtent");
 
   private static final org.apache.thrift.protocol.TField TABLE_ID_FIELD_DESC = new org.apache.thrift.protocol.TField("tableId", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -153,8 +156,8 @@
   {
     this();
     this.tableId = tableId;
-    this.endRow = endRow;
-    this.prevEndRow = prevEndRow;
+    this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
+    this.prevEndRow = org.apache.thrift.TBaseHelper.copyBinary(prevEndRow);
   }
 
   /**
@@ -166,11 +169,9 @@
     }
     if (other.isSetEndRow()) {
       this.endRow = org.apache.thrift.TBaseHelper.copyBinary(other.endRow);
-;
     }
     if (other.isSetPrevEndRow()) {
       this.prevEndRow = org.apache.thrift.TBaseHelper.copyBinary(other.prevEndRow);
-;
     }
   }
 
@@ -215,16 +216,16 @@
   }
 
   public ByteBuffer bufferForEndRow() {
-    return endRow;
+    return org.apache.thrift.TBaseHelper.copyBinary(endRow);
   }
 
   public KeyExtent setEndRow(byte[] endRow) {
-    setEndRow(endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(endRow));
+    this.endRow = endRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(endRow, endRow.length));
     return this;
   }
 
   public KeyExtent setEndRow(ByteBuffer endRow) {
-    this.endRow = endRow;
+    this.endRow = org.apache.thrift.TBaseHelper.copyBinary(endRow);
     return this;
   }
 
@@ -249,16 +250,16 @@
   }
 
   public ByteBuffer bufferForPrevEndRow() {
-    return prevEndRow;
+    return org.apache.thrift.TBaseHelper.copyBinary(prevEndRow);
   }
 
   public KeyExtent setPrevEndRow(byte[] prevEndRow) {
-    setPrevEndRow(prevEndRow == null ? (ByteBuffer)null : ByteBuffer.wrap(prevEndRow));
+    this.prevEndRow = prevEndRow == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(prevEndRow, prevEndRow.length));
     return this;
   }
 
   public KeyExtent setPrevEndRow(ByteBuffer prevEndRow) {
-    this.prevEndRow = prevEndRow;
+    this.prevEndRow = org.apache.thrift.TBaseHelper.copyBinary(prevEndRow);
     return this;
   }
 
@@ -383,7 +384,24 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_tableId = true && (isSetTableId());
+    list.add(present_tableId);
+    if (present_tableId)
+      list.add(tableId);
+
+    boolean present_endRow = true && (isSetEndRow());
+    list.add(present_endRow);
+    if (present_endRow)
+      list.add(endRow);
+
+    boolean present_prevEndRow = true && (isSetPrevEndRow());
+    list.add(present_prevEndRow);
+    if (present_prevEndRow)
+      list.add(prevEndRow);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValue.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValue.java
index 76d71b5..480236c 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValue.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValue.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class KeyValue implements org.apache.thrift.TBase<KeyValue, KeyValue._Fields>, java.io.Serializable, Cloneable, Comparable<KeyValue> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class KeyValue implements org.apache.thrift.TBase<KeyValue, KeyValue._Fields>, java.io.Serializable, Cloneable, Comparable<KeyValue> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("KeyValue");
 
   private static final org.apache.thrift.protocol.TField KEY_FIELD_DESC = new org.apache.thrift.protocol.TField("key", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -145,7 +148,7 @@
   {
     this();
     this.key = key;
-    this.value = value;
+    this.value = org.apache.thrift.TBaseHelper.copyBinary(value);
   }
 
   /**
@@ -157,7 +160,6 @@
     }
     if (other.isSetValue()) {
       this.value = org.apache.thrift.TBaseHelper.copyBinary(other.value);
-;
     }
   }
 
@@ -201,16 +203,16 @@
   }
 
   public ByteBuffer bufferForValue() {
-    return value;
+    return org.apache.thrift.TBaseHelper.copyBinary(value);
   }
 
   public KeyValue setValue(byte[] value) {
-    setValue(value == null ? (ByteBuffer)null : ByteBuffer.wrap(value));
+    this.value = value == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(value, value.length));
     return this;
   }
 
   public KeyValue setValue(ByteBuffer value) {
-    this.value = value;
+    this.value = org.apache.thrift.TBaseHelper.copyBinary(value);
     return this;
   }
 
@@ -313,7 +315,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_key = true && (isSetKey());
+    list.add(present_key);
+    if (present_key)
+      list.add(key);
+
+    boolean present_value = true && (isSetValue());
+    list.add(present_value);
+    if (present_value)
+      list.add(value);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValueAndPeek.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValueAndPeek.java
index 88b0c3f..3bb4f13 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValueAndPeek.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/KeyValueAndPeek.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class KeyValueAndPeek implements org.apache.thrift.TBase<KeyValueAndPeek, KeyValueAndPeek._Fields>, java.io.Serializable, Cloneable, Comparable<KeyValueAndPeek> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class KeyValueAndPeek implements org.apache.thrift.TBase<KeyValueAndPeek, KeyValueAndPeek._Fields>, java.io.Serializable, Cloneable, Comparable<KeyValueAndPeek> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("KeyValueAndPeek");
 
   private static final org.apache.thrift.protocol.TField KEY_VALUE_FIELD_DESC = new org.apache.thrift.protocol.TField("keyValue", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -247,7 +250,7 @@
       return getKeyValue();
 
     case HAS_NEXT:
-      return Boolean.valueOf(isHasNext());
+      return isHasNext();
 
     }
     throw new IllegalStateException();
@@ -304,7 +307,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_keyValue = true && (isSetKeyValue());
+    list.add(present_keyValue);
+    if (present_keyValue)
+      list.add(keyValue);
+
+    boolean present_hasNext = true;
+    list.add(present_hasNext);
+    if (present_hasNext)
+      list.add(hasNext);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/MutationsRejectedException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/MutationsRejectedException.java
index e5dfda1..db5b6c4 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/MutationsRejectedException.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/MutationsRejectedException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class MutationsRejectedException extends TException implements org.apache.thrift.TBase<MutationsRejectedException, MutationsRejectedException._Fields>, java.io.Serializable, Cloneable, Comparable<MutationsRejectedException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class MutationsRejectedException extends TException implements org.apache.thrift.TBase<MutationsRejectedException, MutationsRejectedException._Fields>, java.io.Serializable, Cloneable, Comparable<MutationsRejectedException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("MutationsRejectedException");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceExistsException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceExistsException.java
new file mode 100644
index 0000000..db1a380
--- /dev/null
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceExistsException.java
@@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.proxy.thrift;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class NamespaceExistsException extends TException implements org.apache.thrift.TBase<NamespaceExistsException, NamespaceExistsException._Fields>, java.io.Serializable, Cloneable, Comparable<NamespaceExistsException> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("NamespaceExistsException");
+
+  private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+    schemes.put(StandardScheme.class, new NamespaceExistsExceptionStandardSchemeFactory());
+    schemes.put(TupleScheme.class, new NamespaceExistsExceptionTupleSchemeFactory());
+  }
+
+  public String msg; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    MSG((short)1, "msg");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if its not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      switch(fieldId) {
+        case 1: // MSG
+          return MSG;
+        default:
+          return null;
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if its not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+  static {
+    Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+    tmpMap.put(_Fields.MSG, new org.apache.thrift.meta_data.FieldMetaData("msg", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+    metaDataMap = Collections.unmodifiableMap(tmpMap);
+    org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(NamespaceExistsException.class, metaDataMap);
+  }
+
+  public NamespaceExistsException() {
+  }
+
+  public NamespaceExistsException(
+    String msg)
+  {
+    this();
+    this.msg = msg;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public NamespaceExistsException(NamespaceExistsException other) {
+    if (other.isSetMsg()) {
+      this.msg = other.msg;
+    }
+  }
+
+  public NamespaceExistsException deepCopy() {
+    return new NamespaceExistsException(this);
+  }
+
+  @Override
+  public void clear() {
+    this.msg = null;
+  }
+
+  public String getMsg() {
+    return this.msg;
+  }
+
+  public NamespaceExistsException setMsg(String msg) {
+    this.msg = msg;
+    return this;
+  }
+
+  public void unsetMsg() {
+    this.msg = null;
+  }
+
+  /** Returns true if field msg is set (has been assigned a value) and false otherwise */
+  public boolean isSetMsg() {
+    return this.msg != null;
+  }
+
+  public void setMsgIsSet(boolean value) {
+    if (!value) {
+      this.msg = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case MSG:
+      if (value == null) {
+        unsetMsg();
+      } else {
+        setMsg((String)value);
+      }
+      break;
+
+    }
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case MSG:
+      return getMsg();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    if (field == null) {
+      throw new IllegalArgumentException();
+    }
+
+    switch (field) {
+    case MSG:
+      return isSetMsg();
+    }
+    throw new IllegalStateException();
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof NamespaceExistsException)
+      return this.equals((NamespaceExistsException)that);
+    return false;
+  }
+
+  public boolean equals(NamespaceExistsException that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_msg = true && this.isSetMsg();
+    boolean that_present_msg = true && that.isSetMsg();
+    if (this_present_msg || that_present_msg) {
+      if (!(this_present_msg && that_present_msg))
+        return false;
+      if (!this.msg.equals(that.msg))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
+  }
+
+  @Override
+  public int compareTo(NamespaceExistsException other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+
+    lastComparison = Boolean.valueOf(isSetMsg()).compareTo(other.isSetMsg());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetMsg()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.msg, other.msg);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    return 0;
+  }
+
+  public _Fields fieldForId(int fieldId) {
+    return _Fields.findByThriftId(fieldId);
+  }
+
+  public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+    schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+  }
+
+  public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+    schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("NamespaceExistsException(");
+    boolean first = true;
+
+    sb.append("msg:");
+    if (this.msg == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.msg);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws org.apache.thrift.TException {
+    // check for required fields
+    // check for sub-struct validity
+  }
+
+  private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+    try {
+      write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+    try {
+      read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private static class NamespaceExistsExceptionStandardSchemeFactory implements SchemeFactory {
+    public NamespaceExistsExceptionStandardScheme getScheme() {
+      return new NamespaceExistsExceptionStandardScheme();
+    }
+  }
+
+  private static class NamespaceExistsExceptionStandardScheme extends StandardScheme<NamespaceExistsException> {
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot, NamespaceExistsException struct) throws org.apache.thrift.TException {
+      org.apache.thrift.protocol.TField schemeField;
+      iprot.readStructBegin();
+      while (true)
+      {
+        schemeField = iprot.readFieldBegin();
+        if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+          break;
+        }
+        switch (schemeField.id) {
+          case 1: // MSG
+            if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+              struct.msg = iprot.readString();
+              struct.setMsgIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          default:
+            org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+        }
+        iprot.readFieldEnd();
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      struct.validate();
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot, NamespaceExistsException struct) throws org.apache.thrift.TException {
+      struct.validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (struct.msg != null) {
+        oprot.writeFieldBegin(MSG_FIELD_DESC);
+        oprot.writeString(struct.msg);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+  }
+
+  private static class NamespaceExistsExceptionTupleSchemeFactory implements SchemeFactory {
+    public NamespaceExistsExceptionTupleScheme getScheme() {
+      return new NamespaceExistsExceptionTupleScheme();
+    }
+  }
+
+  private static class NamespaceExistsExceptionTupleScheme extends TupleScheme<NamespaceExistsException> {
+
+    @Override
+    public void write(org.apache.thrift.protocol.TProtocol prot, NamespaceExistsException struct) throws org.apache.thrift.TException {
+      TTupleProtocol oprot = (TTupleProtocol) prot;
+      BitSet optionals = new BitSet();
+      if (struct.isSetMsg()) {
+        optionals.set(0);
+      }
+      oprot.writeBitSet(optionals, 1);
+      if (struct.isSetMsg()) {
+        oprot.writeString(struct.msg);
+      }
+    }
+
+    @Override
+    public void read(org.apache.thrift.protocol.TProtocol prot, NamespaceExistsException struct) throws org.apache.thrift.TException {
+      TTupleProtocol iprot = (TTupleProtocol) prot;
+      BitSet incoming = iprot.readBitSet(1);
+      if (incoming.get(0)) {
+        struct.msg = iprot.readString();
+        struct.setMsgIsSet(true);
+      }
+    }
+  }
+
+}
+
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceNotEmptyException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceNotEmptyException.java
new file mode 100644
index 0000000..f22320e
--- /dev/null
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceNotEmptyException.java
@@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.proxy.thrift;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class NamespaceNotEmptyException extends TException implements org.apache.thrift.TBase<NamespaceNotEmptyException, NamespaceNotEmptyException._Fields>, java.io.Serializable, Cloneable, Comparable<NamespaceNotEmptyException> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("NamespaceNotEmptyException");
+
+  private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+    schemes.put(StandardScheme.class, new NamespaceNotEmptyExceptionStandardSchemeFactory());
+    schemes.put(TupleScheme.class, new NamespaceNotEmptyExceptionTupleSchemeFactory());
+  }
+
+  public String msg; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    MSG((short)1, "msg");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if its not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      switch(fieldId) {
+        case 1: // MSG
+          return MSG;
+        default:
+          return null;
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if its not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+  static {
+    Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+    tmpMap.put(_Fields.MSG, new org.apache.thrift.meta_data.FieldMetaData("msg", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+    metaDataMap = Collections.unmodifiableMap(tmpMap);
+    org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(NamespaceNotEmptyException.class, metaDataMap);
+  }
+
+  public NamespaceNotEmptyException() {
+  }
+
+  public NamespaceNotEmptyException(
+    String msg)
+  {
+    this();
+    this.msg = msg;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public NamespaceNotEmptyException(NamespaceNotEmptyException other) {
+    if (other.isSetMsg()) {
+      this.msg = other.msg;
+    }
+  }
+
+  public NamespaceNotEmptyException deepCopy() {
+    return new NamespaceNotEmptyException(this);
+  }
+
+  @Override
+  public void clear() {
+    this.msg = null;
+  }
+
+  public String getMsg() {
+    return this.msg;
+  }
+
+  public NamespaceNotEmptyException setMsg(String msg) {
+    this.msg = msg;
+    return this;
+  }
+
+  public void unsetMsg() {
+    this.msg = null;
+  }
+
+  /** Returns true if field msg is set (has been assigned a value) and false otherwise */
+  public boolean isSetMsg() {
+    return this.msg != null;
+  }
+
+  public void setMsgIsSet(boolean value) {
+    if (!value) {
+      this.msg = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case MSG:
+      if (value == null) {
+        unsetMsg();
+      } else {
+        setMsg((String)value);
+      }
+      break;
+
+    }
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case MSG:
+      return getMsg();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    if (field == null) {
+      throw new IllegalArgumentException();
+    }
+
+    switch (field) {
+    case MSG:
+      return isSetMsg();
+    }
+    throw new IllegalStateException();
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof NamespaceNotEmptyException)
+      return this.equals((NamespaceNotEmptyException)that);
+    return false;
+  }
+
+  public boolean equals(NamespaceNotEmptyException that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_msg = true && this.isSetMsg();
+    boolean that_present_msg = true && that.isSetMsg();
+    if (this_present_msg || that_present_msg) {
+      if (!(this_present_msg && that_present_msg))
+        return false;
+      if (!this.msg.equals(that.msg))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
+  }
+
+  @Override
+  public int compareTo(NamespaceNotEmptyException other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+
+    lastComparison = Boolean.valueOf(isSetMsg()).compareTo(other.isSetMsg());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetMsg()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.msg, other.msg);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    return 0;
+  }
+
+  public _Fields fieldForId(int fieldId) {
+    return _Fields.findByThriftId(fieldId);
+  }
+
+  public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+    schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+  }
+
+  public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+    schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("NamespaceNotEmptyException(");
+    boolean first = true;
+
+    sb.append("msg:");
+    if (this.msg == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.msg);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws org.apache.thrift.TException {
+    // check for required fields
+    // check for sub-struct validity
+  }
+
+  private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+    try {
+      write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+    try {
+      read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private static class NamespaceNotEmptyExceptionStandardSchemeFactory implements SchemeFactory {
+    public NamespaceNotEmptyExceptionStandardScheme getScheme() {
+      return new NamespaceNotEmptyExceptionStandardScheme();
+    }
+  }
+
+  private static class NamespaceNotEmptyExceptionStandardScheme extends StandardScheme<NamespaceNotEmptyException> {
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot, NamespaceNotEmptyException struct) throws org.apache.thrift.TException {
+      org.apache.thrift.protocol.TField schemeField;
+      iprot.readStructBegin();
+      while (true)
+      {
+        schemeField = iprot.readFieldBegin();
+        if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+          break;
+        }
+        switch (schemeField.id) {
+          case 1: // MSG
+            if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+              struct.msg = iprot.readString();
+              struct.setMsgIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          default:
+            org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+        }
+        iprot.readFieldEnd();
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      struct.validate();
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot, NamespaceNotEmptyException struct) throws org.apache.thrift.TException {
+      struct.validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (struct.msg != null) {
+        oprot.writeFieldBegin(MSG_FIELD_DESC);
+        oprot.writeString(struct.msg);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+  }
+
+  private static class NamespaceNotEmptyExceptionTupleSchemeFactory implements SchemeFactory {
+    public NamespaceNotEmptyExceptionTupleScheme getScheme() {
+      return new NamespaceNotEmptyExceptionTupleScheme();
+    }
+  }
+
+  private static class NamespaceNotEmptyExceptionTupleScheme extends TupleScheme<NamespaceNotEmptyException> {
+
+    @Override
+    public void write(org.apache.thrift.protocol.TProtocol prot, NamespaceNotEmptyException struct) throws org.apache.thrift.TException {
+      TTupleProtocol oprot = (TTupleProtocol) prot;
+      BitSet optionals = new BitSet();
+      if (struct.isSetMsg()) {
+        optionals.set(0);
+      }
+      oprot.writeBitSet(optionals, 1);
+      if (struct.isSetMsg()) {
+        oprot.writeString(struct.msg);
+      }
+    }
+
+    @Override
+    public void read(org.apache.thrift.protocol.TProtocol prot, NamespaceNotEmptyException struct) throws org.apache.thrift.TException {
+      TTupleProtocol iprot = (TTupleProtocol) prot;
+      BitSet incoming = iprot.readBitSet(1);
+      if (incoming.get(0)) {
+        struct.msg = iprot.readString();
+        struct.setMsgIsSet(true);
+      }
+    }
+  }
+
+}
+
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceNotFoundException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceNotFoundException.java
new file mode 100644
index 0000000..9e31e48
--- /dev/null
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespaceNotFoundException.java
@@ -0,0 +1,414 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.proxy.thrift;
+
+import org.apache.thrift.scheme.IScheme;
+import org.apache.thrift.scheme.SchemeFactory;
+import org.apache.thrift.scheme.StandardScheme;
+
+import org.apache.thrift.scheme.TupleScheme;
+import org.apache.thrift.protocol.TTupleProtocol;
+import org.apache.thrift.protocol.TProtocolException;
+import org.apache.thrift.EncodingUtils;
+import org.apache.thrift.TException;
+import org.apache.thrift.async.AsyncMethodCallback;
+import org.apache.thrift.server.AbstractNonblockingServer.*;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+import javax.annotation.Generated;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class NamespaceNotFoundException extends TException implements org.apache.thrift.TBase<NamespaceNotFoundException, NamespaceNotFoundException._Fields>, java.io.Serializable, Cloneable, Comparable<NamespaceNotFoundException> {
+  private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("NamespaceNotFoundException");
+
+  private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
+
+  private static final Map<Class<? extends IScheme>, SchemeFactory> schemes = new HashMap<Class<? extends IScheme>, SchemeFactory>();
+  static {
+    schemes.put(StandardScheme.class, new NamespaceNotFoundExceptionStandardSchemeFactory());
+    schemes.put(TupleScheme.class, new NamespaceNotFoundExceptionTupleSchemeFactory());
+  }
+
+  public String msg; // required
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements org.apache.thrift.TFieldIdEnum {
+    MSG((short)1, "msg");
+
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if its not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      switch(fieldId) {
+        case 1: // MSG
+          return MSG;
+        default:
+          return null;
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if its not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
+  static {
+    Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
+    tmpMap.put(_Fields.MSG, new org.apache.thrift.meta_data.FieldMetaData("msg", org.apache.thrift.TFieldRequirementType.DEFAULT, 
+        new org.apache.thrift.meta_data.FieldValueMetaData(org.apache.thrift.protocol.TType.STRING)));
+    metaDataMap = Collections.unmodifiableMap(tmpMap);
+    org.apache.thrift.meta_data.FieldMetaData.addStructMetaDataMap(NamespaceNotFoundException.class, metaDataMap);
+  }
+
+  public NamespaceNotFoundException() {
+  }
+
+  public NamespaceNotFoundException(
+    String msg)
+  {
+    this();
+    this.msg = msg;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public NamespaceNotFoundException(NamespaceNotFoundException other) {
+    if (other.isSetMsg()) {
+      this.msg = other.msg;
+    }
+  }
+
+  public NamespaceNotFoundException deepCopy() {
+    return new NamespaceNotFoundException(this);
+  }
+
+  @Override
+  public void clear() {
+    this.msg = null;
+  }
+
+  public String getMsg() {
+    return this.msg;
+  }
+
+  public NamespaceNotFoundException setMsg(String msg) {
+    this.msg = msg;
+    return this;
+  }
+
+  public void unsetMsg() {
+    this.msg = null;
+  }
+
+  /** Returns true if field msg is set (has been assigned a value) and false otherwise */
+  public boolean isSetMsg() {
+    return this.msg != null;
+  }
+
+  public void setMsgIsSet(boolean value) {
+    if (!value) {
+      this.msg = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case MSG:
+      if (value == null) {
+        unsetMsg();
+      } else {
+        setMsg((String)value);
+      }
+      break;
+
+    }
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case MSG:
+      return getMsg();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    if (field == null) {
+      throw new IllegalArgumentException();
+    }
+
+    switch (field) {
+    case MSG:
+      return isSetMsg();
+    }
+    throw new IllegalStateException();
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof NamespaceNotFoundException)
+      return this.equals((NamespaceNotFoundException)that);
+    return false;
+  }
+
+  public boolean equals(NamespaceNotFoundException that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_msg = true && this.isSetMsg();
+    boolean that_present_msg = true && that.isSetMsg();
+    if (this_present_msg || that_present_msg) {
+      if (!(this_present_msg && that_present_msg))
+        return false;
+      if (!this.msg.equals(that.msg))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
+  }
+
+  @Override
+  public int compareTo(NamespaceNotFoundException other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+
+    lastComparison = Boolean.valueOf(isSetMsg()).compareTo(other.isSetMsg());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (isSetMsg()) {
+      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.msg, other.msg);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    return 0;
+  }
+
+  public _Fields fieldForId(int fieldId) {
+    return _Fields.findByThriftId(fieldId);
+  }
+
+  public void read(org.apache.thrift.protocol.TProtocol iprot) throws org.apache.thrift.TException {
+    schemes.get(iprot.getScheme()).getScheme().read(iprot, this);
+  }
+
+  public void write(org.apache.thrift.protocol.TProtocol oprot) throws org.apache.thrift.TException {
+    schemes.get(oprot.getScheme()).getScheme().write(oprot, this);
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("NamespaceNotFoundException(");
+    boolean first = true;
+
+    sb.append("msg:");
+    if (this.msg == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.msg);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws org.apache.thrift.TException {
+    // check for required fields
+    // check for sub-struct validity
+  }
+
+  private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
+    try {
+      write(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(out)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private void readObject(java.io.ObjectInputStream in) throws java.io.IOException, ClassNotFoundException {
+    try {
+      read(new org.apache.thrift.protocol.TCompactProtocol(new org.apache.thrift.transport.TIOStreamTransport(in)));
+    } catch (org.apache.thrift.TException te) {
+      throw new java.io.IOException(te);
+    }
+  }
+
+  private static class NamespaceNotFoundExceptionStandardSchemeFactory implements SchemeFactory {
+    public NamespaceNotFoundExceptionStandardScheme getScheme() {
+      return new NamespaceNotFoundExceptionStandardScheme();
+    }
+  }
+
+  private static class NamespaceNotFoundExceptionStandardScheme extends StandardScheme<NamespaceNotFoundException> {
+
+    public void read(org.apache.thrift.protocol.TProtocol iprot, NamespaceNotFoundException struct) throws org.apache.thrift.TException {
+      org.apache.thrift.protocol.TField schemeField;
+      iprot.readStructBegin();
+      while (true)
+      {
+        schemeField = iprot.readFieldBegin();
+        if (schemeField.type == org.apache.thrift.protocol.TType.STOP) { 
+          break;
+        }
+        switch (schemeField.id) {
+          case 1: // MSG
+            if (schemeField.type == org.apache.thrift.protocol.TType.STRING) {
+              struct.msg = iprot.readString();
+              struct.setMsgIsSet(true);
+            } else { 
+              org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          default:
+            org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+        }
+        iprot.readFieldEnd();
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      struct.validate();
+    }
+
+    public void write(org.apache.thrift.protocol.TProtocol oprot, NamespaceNotFoundException struct) throws org.apache.thrift.TException {
+      struct.validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (struct.msg != null) {
+        oprot.writeFieldBegin(MSG_FIELD_DESC);
+        oprot.writeString(struct.msg);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+  }
+
+  private static class NamespaceNotFoundExceptionTupleSchemeFactory implements SchemeFactory {
+    public NamespaceNotFoundExceptionTupleScheme getScheme() {
+      return new NamespaceNotFoundExceptionTupleScheme();
+    }
+  }
+
+  private static class NamespaceNotFoundExceptionTupleScheme extends TupleScheme<NamespaceNotFoundException> {
+
+    @Override
+    public void write(org.apache.thrift.protocol.TProtocol prot, NamespaceNotFoundException struct) throws org.apache.thrift.TException {
+      TTupleProtocol oprot = (TTupleProtocol) prot;
+      BitSet optionals = new BitSet();
+      if (struct.isSetMsg()) {
+        optionals.set(0);
+      }
+      oprot.writeBitSet(optionals, 1);
+      if (struct.isSetMsg()) {
+        oprot.writeString(struct.msg);
+      }
+    }
+
+    @Override
+    public void read(org.apache.thrift.protocol.TProtocol prot, NamespaceNotFoundException struct) throws org.apache.thrift.TException {
+      TTupleProtocol iprot = (TTupleProtocol) prot;
+      BitSet incoming = iprot.readBitSet(1);
+      if (incoming.get(0)) {
+        struct.msg = iprot.readString();
+        struct.setMsgIsSet(true);
+      }
+    }
+  }
+
+}
+
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespacePermission.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespacePermission.java
new file mode 100644
index 0000000..6d790f6
--- /dev/null
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NamespacePermission.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift Compiler (0.9.3)
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ *  @generated
+ */
+package org.apache.accumulo.proxy.thrift;
+
+
+import java.util.Map;
+import java.util.HashMap;
+import org.apache.thrift.TEnum;
+
+@SuppressWarnings({"unused"}) public enum NamespacePermission implements org.apache.thrift.TEnum {
+  READ(0),
+  WRITE(1),
+  ALTER_NAMESPACE(2),
+  GRANT(3),
+  ALTER_TABLE(4),
+  CREATE_TABLE(5),
+  DROP_TABLE(6),
+  BULK_IMPORT(7),
+  DROP_NAMESPACE(8);
+
+  private final int value;
+
+  private NamespacePermission(int value) {
+    this.value = value;
+  }
+
+  /**
+   * Get the integer value of this enum value, as defined in the Thrift IDL.
+   */
+  public int getValue() {
+    return value;
+  }
+
+  /**
+   * Find the enum type by its integer value, as defined in the Thrift IDL.
+   * @return null if the value is not found.
+   */
+  public static NamespacePermission findByValue(int value) { 
+    switch (value) {
+      case 0:
+        return READ;
+      case 1:
+        return WRITE;
+      case 2:
+        return ALTER_NAMESPACE;
+      case 3:
+        return GRANT;
+      case 4:
+        return ALTER_TABLE;
+      case 5:
+        return CREATE_TABLE;
+      case 6:
+        return DROP_TABLE;
+      case 7:
+        return BULK_IMPORT;
+      case 8:
+        return DROP_NAMESPACE;
+      default:
+        return null;
+    }
+  }
+}
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NoMoreEntriesException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NoMoreEntriesException.java
index d67bcd2..182277a 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NoMoreEntriesException.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/NoMoreEntriesException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class NoMoreEntriesException extends TException implements org.apache.thrift.TBase<NoMoreEntriesException, NoMoreEntriesException._Fields>, java.io.Serializable, Cloneable, Comparable<NoMoreEntriesException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class NoMoreEntriesException extends TException implements org.apache.thrift.TBase<NoMoreEntriesException, NoMoreEntriesException._Fields>, java.io.Serializable, Cloneable, Comparable<NoMoreEntriesException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("NoMoreEntriesException");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/PartialKey.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/PartialKey.java
index 2a0f269..b03eae9 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/PartialKey.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/PartialKey.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Range.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Range.java
index bc66c6b..db8fe8e 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Range.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/Range.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class Range implements org.apache.thrift.TBase<Range, Range._Fields>, java.io.Serializable, Cloneable, Comparable<Range> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class Range implements org.apache.thrift.TBase<Range, Range._Fields>, java.io.Serializable, Cloneable, Comparable<Range> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("Range");
 
   private static final org.apache.thrift.protocol.TField START_FIELD_DESC = new org.apache.thrift.protocol.TField("start", org.apache.thrift.protocol.TType.STRUCT, (short)1);
@@ -337,13 +340,13 @@
       return getStart();
 
     case START_INCLUSIVE:
-      return Boolean.valueOf(isStartInclusive());
+      return isStartInclusive();
 
     case STOP:
       return getStop();
 
     case STOP_INCLUSIVE:
-      return Boolean.valueOf(isStopInclusive());
+      return isStopInclusive();
 
     }
     throw new IllegalStateException();
@@ -422,7 +425,29 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_start = true && (isSetStart());
+    list.add(present_start);
+    if (present_start)
+      list.add(start);
+
+    boolean present_startInclusive = true;
+    list.add(present_startInclusive);
+    if (present_startInclusive)
+      list.add(startInclusive);
+
+    boolean present_stop = true && (isSetStop());
+    list.add(present_stop);
+    if (present_stop)
+      list.add(stop);
+
+    boolean present_stopInclusive = true;
+    list.add(present_stopInclusive);
+    if (present_stopInclusive)
+      list.add(stopInclusive);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanColumn.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanColumn.java
index 296c885..3cea424 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanColumn.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanColumn.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ScanColumn implements org.apache.thrift.TBase<ScanColumn, ScanColumn._Fields>, java.io.Serializable, Cloneable, Comparable<ScanColumn> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ScanColumn implements org.apache.thrift.TBase<ScanColumn, ScanColumn._Fields>, java.io.Serializable, Cloneable, Comparable<ScanColumn> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ScanColumn");
 
   private static final org.apache.thrift.protocol.TField COL_FAMILY_FIELD_DESC = new org.apache.thrift.protocol.TField("colFamily", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -125,7 +128,7 @@
   }
 
   // isset id assignments
-  private _Fields optionals[] = {_Fields.COL_QUALIFIER};
+  private static final _Fields optionals[] = {_Fields.COL_QUALIFIER};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -144,7 +147,7 @@
     ByteBuffer colFamily)
   {
     this();
-    this.colFamily = colFamily;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
   }
 
   /**
@@ -153,11 +156,9 @@
   public ScanColumn(ScanColumn other) {
     if (other.isSetColFamily()) {
       this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(other.colFamily);
-;
     }
     if (other.isSetColQualifier()) {
       this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(other.colQualifier);
-;
     }
   }
 
@@ -177,16 +178,16 @@
   }
 
   public ByteBuffer bufferForColFamily() {
-    return colFamily;
+    return org.apache.thrift.TBaseHelper.copyBinary(colFamily);
   }
 
   public ScanColumn setColFamily(byte[] colFamily) {
-    setColFamily(colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(colFamily));
+    this.colFamily = colFamily == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colFamily, colFamily.length));
     return this;
   }
 
   public ScanColumn setColFamily(ByteBuffer colFamily) {
-    this.colFamily = colFamily;
+    this.colFamily = org.apache.thrift.TBaseHelper.copyBinary(colFamily);
     return this;
   }
 
@@ -211,16 +212,16 @@
   }
 
   public ByteBuffer bufferForColQualifier() {
-    return colQualifier;
+    return org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
   }
 
   public ScanColumn setColQualifier(byte[] colQualifier) {
-    setColQualifier(colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(colQualifier));
+    this.colQualifier = colQualifier == null ? (ByteBuffer)null : ByteBuffer.wrap(Arrays.copyOf(colQualifier, colQualifier.length));
     return this;
   }
 
   public ScanColumn setColQualifier(ByteBuffer colQualifier) {
-    this.colQualifier = colQualifier;
+    this.colQualifier = org.apache.thrift.TBaseHelper.copyBinary(colQualifier);
     return this;
   }
 
@@ -323,7 +324,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_colFamily = true && (isSetColFamily());
+    list.add(present_colFamily);
+    if (present_colFamily)
+      list.add(colFamily);
+
+    boolean present_colQualifier = true && (isSetColQualifier());
+    list.add(present_colQualifier);
+    if (present_colQualifier)
+      list.add(colQualifier);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanOptions.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanOptions.java
index 047daa0..6675c8e 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanOptions.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanOptions.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ScanOptions implements org.apache.thrift.TBase<ScanOptions, ScanOptions._Fields>, java.io.Serializable, Cloneable, Comparable<ScanOptions> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ScanOptions implements org.apache.thrift.TBase<ScanOptions, ScanOptions._Fields>, java.io.Serializable, Cloneable, Comparable<ScanOptions> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ScanOptions");
 
   private static final org.apache.thrift.protocol.TField AUTHORIZATIONS_FIELD_DESC = new org.apache.thrift.protocol.TField("authorizations", org.apache.thrift.protocol.TType.SET, (short)1);
@@ -142,7 +145,7 @@
   // isset id assignments
   private static final int __BUFFERSIZE_ISSET_ID = 0;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.AUTHORIZATIONS,_Fields.RANGE,_Fields.COLUMNS,_Fields.ITERATORS,_Fields.BUFFER_SIZE};
+  private static final _Fields optionals[] = {_Fields.AUTHORIZATIONS,_Fields.RANGE,_Fields.COLUMNS,_Fields.ITERATORS,_Fields.BUFFER_SIZE};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -433,7 +436,7 @@
       return getIterators();
 
     case BUFFER_SIZE:
-      return Integer.valueOf(getBufferSize());
+      return getBufferSize();
 
     }
     throw new IllegalStateException();
@@ -523,7 +526,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_authorizations = true && (isSetAuthorizations());
+    list.add(present_authorizations);
+    if (present_authorizations)
+      list.add(authorizations);
+
+    boolean present_range = true && (isSetRange());
+    list.add(present_range);
+    if (present_range)
+      list.add(range);
+
+    boolean present_columns = true && (isSetColumns());
+    list.add(present_columns);
+    if (present_columns)
+      list.add(columns);
+
+    boolean present_iterators = true && (isSetIterators());
+    list.add(present_iterators);
+    if (present_iterators)
+      list.add(iterators);
+
+    boolean present_bufferSize = true && (isSetBufferSize());
+    list.add(present_bufferSize);
+    if (present_bufferSize)
+      list.add(bufferSize);
+
+    return list.hashCode();
   }
 
   @Override
@@ -609,7 +639,7 @@
       if (this.authorizations == null) {
         sb.append("null");
       } else {
-        sb.append(this.authorizations);
+        org.apache.thrift.TBaseHelper.toString(this.authorizations, sb);
       }
       first = false;
     }
@@ -702,11 +732,11 @@
               {
                 org.apache.thrift.protocol.TSet _set26 = iprot.readSetBegin();
                 struct.authorizations = new HashSet<ByteBuffer>(2*_set26.size);
-                for (int _i27 = 0; _i27 < _set26.size; ++_i27)
+                ByteBuffer _elem27;
+                for (int _i28 = 0; _i28 < _set26.size; ++_i28)
                 {
-                  ByteBuffer _elem28;
-                  _elem28 = iprot.readBinary();
-                  struct.authorizations.add(_elem28);
+                  _elem27 = iprot.readBinary();
+                  struct.authorizations.add(_elem27);
                 }
                 iprot.readSetEnd();
               }
@@ -729,12 +759,12 @@
               {
                 org.apache.thrift.protocol.TList _list29 = iprot.readListBegin();
                 struct.columns = new ArrayList<ScanColumn>(_list29.size);
-                for (int _i30 = 0; _i30 < _list29.size; ++_i30)
+                ScanColumn _elem30;
+                for (int _i31 = 0; _i31 < _list29.size; ++_i31)
                 {
-                  ScanColumn _elem31;
-                  _elem31 = new ScanColumn();
-                  _elem31.read(iprot);
-                  struct.columns.add(_elem31);
+                  _elem30 = new ScanColumn();
+                  _elem30.read(iprot);
+                  struct.columns.add(_elem30);
                 }
                 iprot.readListEnd();
               }
@@ -748,12 +778,12 @@
               {
                 org.apache.thrift.protocol.TList _list32 = iprot.readListBegin();
                 struct.iterators = new ArrayList<IteratorSetting>(_list32.size);
-                for (int _i33 = 0; _i33 < _list32.size; ++_i33)
+                IteratorSetting _elem33;
+                for (int _i34 = 0; _i34 < _list32.size; ++_i34)
                 {
-                  IteratorSetting _elem34;
-                  _elem34 = new IteratorSetting();
-                  _elem34.read(iprot);
-                  struct.iterators.add(_elem34);
+                  _elem33 = new IteratorSetting();
+                  _elem33.read(iprot);
+                  struct.iterators.add(_elem33);
                 }
                 iprot.readListEnd();
               }
@@ -916,11 +946,11 @@
         {
           org.apache.thrift.protocol.TSet _set41 = new org.apache.thrift.protocol.TSet(org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.authorizations = new HashSet<ByteBuffer>(2*_set41.size);
-          for (int _i42 = 0; _i42 < _set41.size; ++_i42)
+          ByteBuffer _elem42;
+          for (int _i43 = 0; _i43 < _set41.size; ++_i43)
           {
-            ByteBuffer _elem43;
-            _elem43 = iprot.readBinary();
-            struct.authorizations.add(_elem43);
+            _elem42 = iprot.readBinary();
+            struct.authorizations.add(_elem42);
           }
         }
         struct.setAuthorizationsIsSet(true);
@@ -934,12 +964,12 @@
         {
           org.apache.thrift.protocol.TList _list44 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.columns = new ArrayList<ScanColumn>(_list44.size);
-          for (int _i45 = 0; _i45 < _list44.size; ++_i45)
+          ScanColumn _elem45;
+          for (int _i46 = 0; _i46 < _list44.size; ++_i46)
           {
-            ScanColumn _elem46;
-            _elem46 = new ScanColumn();
-            _elem46.read(iprot);
-            struct.columns.add(_elem46);
+            _elem45 = new ScanColumn();
+            _elem45.read(iprot);
+            struct.columns.add(_elem45);
           }
         }
         struct.setColumnsIsSet(true);
@@ -948,12 +978,12 @@
         {
           org.apache.thrift.protocol.TList _list47 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.iterators = new ArrayList<IteratorSetting>(_list47.size);
-          for (int _i48 = 0; _i48 < _list47.size; ++_i48)
+          IteratorSetting _elem48;
+          for (int _i49 = 0; _i49 < _list47.size; ++_i49)
           {
-            IteratorSetting _elem49;
-            _elem49 = new IteratorSetting();
-            _elem49.read(iprot);
-            struct.iterators.add(_elem49);
+            _elem48 = new IteratorSetting();
+            _elem48.read(iprot);
+            struct.iterators.add(_elem48);
           }
         }
         struct.setIteratorsIsSet(true);
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanResult.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanResult.java
index 3775e7d..861b0de 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanResult.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanResult.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class ScanResult implements org.apache.thrift.TBase<ScanResult, ScanResult._Fields>, java.io.Serializable, Cloneable, Comparable<ScanResult> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class ScanResult implements org.apache.thrift.TBase<ScanResult, ScanResult._Fields>, java.io.Serializable, Cloneable, Comparable<ScanResult> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("ScanResult");
 
   private static final org.apache.thrift.protocol.TField RESULTS_FIELD_DESC = new org.apache.thrift.protocol.TField("results", org.apache.thrift.protocol.TType.LIST, (short)1);
@@ -267,7 +270,7 @@
       return getResults();
 
     case MORE:
-      return Boolean.valueOf(isMore());
+      return isMore();
 
     }
     throw new IllegalStateException();
@@ -324,7 +327,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_results = true && (isSetResults());
+    list.add(present_results);
+    if (present_results)
+      list.add(results);
+
+    boolean present_more = true;
+    list.add(present_more);
+    if (present_more)
+      list.add(more);
+
+    return list.hashCode();
   }
 
   @Override
@@ -436,12 +451,12 @@
               {
                 org.apache.thrift.protocol.TList _list8 = iprot.readListBegin();
                 struct.results = new ArrayList<KeyValue>(_list8.size);
-                for (int _i9 = 0; _i9 < _list8.size; ++_i9)
+                KeyValue _elem9;
+                for (int _i10 = 0; _i10 < _list8.size; ++_i10)
                 {
-                  KeyValue _elem10;
-                  _elem10 = new KeyValue();
-                  _elem10.read(iprot);
-                  struct.results.add(_elem10);
+                  _elem9 = new KeyValue();
+                  _elem9.read(iprot);
+                  struct.results.add(_elem9);
                 }
                 iprot.readListEnd();
               }
@@ -535,12 +550,12 @@
         {
           org.apache.thrift.protocol.TList _list13 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.results = new ArrayList<KeyValue>(_list13.size);
-          for (int _i14 = 0; _i14 < _list13.size; ++_i14)
+          KeyValue _elem14;
+          for (int _i15 = 0; _i15 < _list13.size; ++_i15)
           {
-            KeyValue _elem15;
-            _elem15 = new KeyValue();
-            _elem15.read(iprot);
-            struct.results.add(_elem15);
+            _elem14 = new KeyValue();
+            _elem14.read(iprot);
+            struct.results.add(_elem14);
           }
         }
         struct.setResultsIsSet(true);
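The `hashCode()` hunks above all follow the same Thrift 0.9.3 pattern: instead of the old stub `return 0;`, the generated code builds a `List<Object>` of (presence-flag, value) pairs and returns the list's hash. A minimal standalone sketch of that pattern (the struct and field names here are hypothetical, not taken from the generated sources):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Thrift 0.9.3 list-based hashCode pattern.
// An optional field contributes its value only when it is set, so structs
// that are equal under the generated equals() also hash equally.
public class HashCodeSketch {
    static int thriftStyleHashCode(boolean msgSet, String msg) {
        List<Object> list = new ArrayList<Object>();
        boolean present_msg = msgSet;   // presence flag always participates
        list.add(present_msg);
        if (present_msg)
            list.add(msg);              // value participates only when set
        return list.hashCode();         // List.hashCode() combines elements order-sensitively
    }

    public static void main(String[] args) {
        // Same set fields, same values -> same hash.
        System.out.println(thriftStyleHashCode(true, "table exists")
                == thriftStyleHashCode(true, "table exists")); // prints "true"
        // An unset field hashes differently from a set field, even a null one.
        System.out.println(thriftStyleHashCode(false, null)
                == thriftStyleHashCode(true, null)); // prints "false"
    }
}
```

Because the presence flag is always added, an unset optional field still perturbs the hash, which keeps the generated `hashCode()` consistent with the generated `equals()`.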
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanState.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanState.java
index 127d147..8e79212 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanState.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanState.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanType.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanType.java
index f417110..14ac9ce 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanType.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/ScanType.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/SystemPermission.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/SystemPermission.java
index 929b83a..6f4b549 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/SystemPermission.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/SystemPermission.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableExistsException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableExistsException.java
index 9e3cf9c..509f022 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableExistsException.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableExistsException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TableExistsException extends TException implements org.apache.thrift.TBase<TableExistsException, TableExistsException._Fields>, java.io.Serializable, Cloneable, Comparable<TableExistsException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TableExistsException extends TException implements org.apache.thrift.TBase<TableExistsException, TableExistsException._Fields>, java.io.Serializable, Cloneable, Comparable<TableExistsException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TableExistsException");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableNotFoundException.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableNotFoundException.java
index f12059b..d889faf 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableNotFoundException.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TableNotFoundException.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TableNotFoundException extends TException implements org.apache.thrift.TBase<TableNotFoundException, TableNotFoundException._Fields>, java.io.Serializable, Cloneable, Comparable<TableNotFoundException> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TableNotFoundException extends TException implements org.apache.thrift.TBase<TableNotFoundException, TableNotFoundException._Fields>, java.io.Serializable, Cloneable, Comparable<TableNotFoundException> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("TableNotFoundException");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TablePermission.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TablePermission.java
index 04882fa..1beac63 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TablePermission.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TablePermission.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TimeType.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TimeType.java
index 32564e0..26565a2 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TimeType.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/TimeType.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownScanner.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownScanner.java
index f6a4b1e..3630f94 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownScanner.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownScanner.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class UnknownScanner extends TException implements org.apache.thrift.TBase<UnknownScanner, UnknownScanner._Fields>, java.io.Serializable, Cloneable, Comparable<UnknownScanner> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class UnknownScanner extends TException implements org.apache.thrift.TBase<UnknownScanner, UnknownScanner._Fields>, java.io.Serializable, Cloneable, Comparable<UnknownScanner> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("UnknownScanner");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownWriter.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownWriter.java
index 661aa1b..cd82742 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownWriter.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/UnknownWriter.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class UnknownWriter extends TException implements org.apache.thrift.TBase<UnknownWriter, UnknownWriter._Fields>, java.io.Serializable, Cloneable, Comparable<UnknownWriter> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class UnknownWriter extends TException implements org.apache.thrift.TBase<UnknownWriter, UnknownWriter._Fields>, java.io.Serializable, Cloneable, Comparable<UnknownWriter> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("UnknownWriter");
 
   private static final org.apache.thrift.protocol.TField MSG_FIELD_DESC = new org.apache.thrift.protocol.TField("msg", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -243,7 +246,14 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/WriterOptions.java b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/WriterOptions.java
index 7ecde35..02d4548 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/thrift/WriterOptions.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/thrift/WriterOptions.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class WriterOptions implements org.apache.thrift.TBase<WriterOptions, WriterOptions._Fields>, java.io.Serializable, Cloneable, Comparable<WriterOptions> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class WriterOptions implements org.apache.thrift.TBase<WriterOptions, WriterOptions._Fields>, java.io.Serializable, Cloneable, Comparable<WriterOptions> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("WriterOptions");
 
   private static final org.apache.thrift.protocol.TField MAX_MEMORY_FIELD_DESC = new org.apache.thrift.protocol.TField("maxMemory", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -153,7 +156,7 @@
   private static final int __TIMEOUTMS_ISSET_ID = 2;
   private static final int __THREADS_ISSET_ID = 3;
   private byte __isset_bitfield = 0;
-  private _Fields optionals[] = {_Fields.DURABILITY};
+  private static final _Fields optionals[] = {_Fields.DURABILITY};
   public static final Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     Map<_Fields, org.apache.thrift.meta_data.FieldMetaData> tmpMap = new EnumMap<_Fields, org.apache.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -394,16 +397,16 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case MAX_MEMORY:
-      return Long.valueOf(getMaxMemory());
+      return getMaxMemory();
 
     case LATENCY_MS:
-      return Long.valueOf(getLatencyMs());
+      return getLatencyMs();
 
     case TIMEOUT_MS:
-      return Long.valueOf(getTimeoutMs());
+      return getTimeoutMs();
 
     case THREADS:
-      return Integer.valueOf(getThreads());
+      return getThreads();
 
     case DURABILITY:
       return getDurability();
@@ -496,7 +499,34 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_maxMemory = true;
+    list.add(present_maxMemory);
+    if (present_maxMemory)
+      list.add(maxMemory);
+
+    boolean present_latencyMs = true;
+    list.add(present_latencyMs);
+    if (present_latencyMs)
+      list.add(latencyMs);
+
+    boolean present_timeoutMs = true;
+    list.add(present_timeoutMs);
+    if (present_timeoutMs)
+      list.add(timeoutMs);
+
+    boolean present_threads = true;
+    list.add(present_threads);
+    if (present_threads)
+      list.add(threads);
+
+    boolean present_durability = true && (isSetDurability());
+    list.add(present_durability);
+    if (present_durability)
+      list.add(durability.getValue());
+
+    return list.hashCode();
   }
 
   @Override
@@ -681,7 +711,7 @@
             break;
           case 5: // DURABILITY
             if (schemeField.type == org.apache.thrift.protocol.TType.I32) {
-              struct.durability = Durability.findByValue(iprot.readI32());
+              struct.durability = org.apache.accumulo.proxy.thrift.Durability.findByValue(iprot.readI32());
               struct.setDurabilityIsSet(true);
             } else { 
               org.apache.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
@@ -793,7 +823,7 @@
         struct.setThreadsIsSet(true);
       }
       if (incoming.get(4)) {
-        struct.durability = Durability.findByValue(iprot.readI32());
+        struct.durability = org.apache.accumulo.proxy.thrift.Durability.findByValue(iprot.readI32());
         struct.setDurabilityIsSet(true);
       }
     }
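The `getFieldValue` hunks above drop the explicit `Boolean.valueOf(...)` / `Long.valueOf(...)` / `Integer.valueOf(...)` wrappers. This is behavior-preserving: returning a primitive from a method declared to return `Object` triggers autoboxing, and the compiler-inserted boxing goes through the same cached `valueOf` path. A small sketch (method names are illustrative, not from the generated code):

```java
// Sketch: explicit boxing vs. compiler autoboxing when returning Object.
public class AutoboxSketch {
    static Object explicitBox(boolean b) { return Boolean.valueOf(b); }
    static Object autoBox(boolean b)     { return b; } // compiler inserts Boolean.valueOf(b)

    public static void main(String[] args) {
        // Boolean.valueOf returns the interned Boolean.TRUE/FALSE constants,
        // so both paths yield the identical instance.
        System.out.println(explicitBox(true) == autoBox(true)); // prints "true"
    }
}
```

The same holds for small `Integer` and `Long` values via the `valueOf` cache, though reference identity is only guaranteed for `Boolean` and the cached integer range; equality (`equals`) is preserved in all cases.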
diff --git a/proxy/src/main/python/AccumuloProxy-remote b/proxy/src/main/python/AccumuloProxy-remote
index a8d7542..bc08e9b 100644
--- a/proxy/src/main/python/AccumuloProxy-remote
+++ b/proxy/src/main/python/AccumuloProxy-remote
@@ -14,7 +14,7 @@
 # limitations under the License.
 #!/usr/bin/env python
 #
-# Autogenerated by Thrift Compiler (0.9.1)
+# Autogenerated by Thrift Compiler (0.9.3)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -26,6 +26,7 @@
 from urlparse import urlparse
 from thrift.transport import TTransport
 from thrift.transport import TSocket
+from thrift.transport import TSSLSocket
 from thrift.transport import THttpClient
 from thrift.protocol import TBinaryProtocol
 
@@ -33,88 +34,111 @@
 from accumulo.ttypes import *
 
 if len(sys.argv) <= 1 or sys.argv[1] == '--help':
-  print ''
-  print 'Usage: ' + sys.argv[0] + ' [-h host[:port]] [-u url] [-f[ramed]] function [arg1 [arg2...]]'
-  print ''
-  print 'Functions:'
-  print '  string login(string principal,  loginProperties)'
-  print '  i32 addConstraint(string login, string tableName, string constraintClassName)'
-  print '  void addSplits(string login, string tableName,  splits)'
-  print '  void attachIterator(string login, string tableName, IteratorSetting setting,  scopes)'
-  print '  void checkIteratorConflicts(string login, string tableName, IteratorSetting setting,  scopes)'
-  print '  void clearLocatorCache(string login, string tableName)'
-  print '  void cloneTable(string login, string tableName, string newTableName, bool flush,  propertiesToSet,  propertiesToExclude)'
-  print '  void compactTable(string login, string tableName, string startRow, string endRow,  iterators, bool flush, bool wait, CompactionStrategyConfig compactionStrategy)'
-  print '  void cancelCompaction(string login, string tableName)'
-  print '  void createTable(string login, string tableName, bool versioningIter, TimeType type)'
-  print '  void deleteTable(string login, string tableName)'
-  print '  void deleteRows(string login, string tableName, string startRow, string endRow)'
-  print '  void exportTable(string login, string tableName, string exportDir)'
-  print '  void flushTable(string login, string tableName, string startRow, string endRow, bool wait)'
-  print '   getDiskUsage(string login,  tables)'
-  print '   getLocalityGroups(string login, string tableName)'
-  print '  IteratorSetting getIteratorSetting(string login, string tableName, string iteratorName, IteratorScope scope)'
-  print '  string getMaxRow(string login, string tableName,  auths, string startRow, bool startInclusive, string endRow, bool endInclusive)'
-  print '   getTableProperties(string login, string tableName)'
-  print '  void importDirectory(string login, string tableName, string importDir, string failureDir, bool setTime)'
-  print '  void importTable(string login, string tableName, string importDir)'
-  print '   listSplits(string login, string tableName, i32 maxSplits)'
-  print '   listTables(string login)'
-  print '   listIterators(string login, string tableName)'
-  print '   listConstraints(string login, string tableName)'
-  print '  void mergeTablets(string login, string tableName, string startRow, string endRow)'
-  print '  void offlineTable(string login, string tableName, bool wait)'
-  print '  void onlineTable(string login, string tableName, bool wait)'
-  print '  void removeConstraint(string login, string tableName, i32 constraint)'
-  print '  void removeIterator(string login, string tableName, string iterName,  scopes)'
-  print '  void removeTableProperty(string login, string tableName, string property)'
-  print '  void renameTable(string login, string oldTableName, string newTableName)'
-  print '  void setLocalityGroups(string login, string tableName,  groups)'
-  print '  void setTableProperty(string login, string tableName, string property, string value)'
-  print '   splitRangeByTablets(string login, string tableName, Range range, i32 maxSplits)'
-  print '  bool tableExists(string login, string tableName)'
-  print '   tableIdMap(string login)'
-  print '  bool testTableClassLoad(string login, string tableName, string className, string asTypeName)'
-  print '  void pingTabletServer(string login, string tserver)'
-  print '   getActiveScans(string login, string tserver)'
-  print '   getActiveCompactions(string login, string tserver)'
-  print '   getSiteConfiguration(string login)'
-  print '   getSystemConfiguration(string login)'
-  print '   getTabletServers(string login)'
-  print '  void removeProperty(string login, string property)'
-  print '  void setProperty(string login, string property, string value)'
-  print '  bool testClassLoad(string login, string className, string asTypeName)'
-  print '  bool authenticateUser(string login, string user,  properties)'
-  print '  void changeUserAuthorizations(string login, string user,  authorizations)'
-  print '  void changeLocalUserPassword(string login, string user, string password)'
-  print '  void createLocalUser(string login, string user, string password)'
-  print '  void dropLocalUser(string login, string user)'
-  print '   getUserAuthorizations(string login, string user)'
-  print '  void grantSystemPermission(string login, string user, SystemPermission perm)'
-  print '  void grantTablePermission(string login, string user, string table, TablePermission perm)'
-  print '  bool hasSystemPermission(string login, string user, SystemPermission perm)'
-  print '  bool hasTablePermission(string login, string user, string table, TablePermission perm)'
-  print '   listLocalUsers(string login)'
-  print '  void revokeSystemPermission(string login, string user, SystemPermission perm)'
-  print '  void revokeTablePermission(string login, string user, string table, TablePermission perm)'
-  print '  string createBatchScanner(string login, string tableName, BatchScanOptions options)'
-  print '  string createScanner(string login, string tableName, ScanOptions options)'
-  print '  bool hasNext(string scanner)'
-  print '  KeyValueAndPeek nextEntry(string scanner)'
-  print '  ScanResult nextK(string scanner, i32 k)'
-  print '  void closeScanner(string scanner)'
-  print '  void updateAndFlush(string login, string tableName,  cells)'
-  print '  string createWriter(string login, string tableName, WriterOptions opts)'
-  print '  void update(string writer,  cells)'
-  print '  void flush(string writer)'
-  print '  void closeWriter(string writer)'
-  print '  ConditionalStatus updateRowConditionally(string login, string tableName, string row, ConditionalUpdates updates)'
-  print '  string createConditionalWriter(string login, string tableName, ConditionalWriterOptions options)'
-  print '   updateRowsConditionally(string conditionalWriter,  updates)'
-  print '  void closeConditionalWriter(string conditionalWriter)'
-  print '  Range getRowRange(string row)'
-  print '  Key getFollowing(Key key, PartialKey part)'
-  print ''
+  print('')
+  print('Usage: ' + sys.argv[0] + ' [-h host[:port]] [-u url] [-f[ramed]] [-s[sl]] function [arg1 [arg2...]]')
+  print('')
+  print('Functions:')
+  print('  string login(string principal,  loginProperties)')
+  print('  i32 addConstraint(string login, string tableName, string constraintClassName)')
+  print('  void addSplits(string login, string tableName,  splits)')
+  print('  void attachIterator(string login, string tableName, IteratorSetting setting,  scopes)')
+  print('  void checkIteratorConflicts(string login, string tableName, IteratorSetting setting,  scopes)')
+  print('  void clearLocatorCache(string login, string tableName)')
+  print('  void cloneTable(string login, string tableName, string newTableName, bool flush,  propertiesToSet,  propertiesToExclude)')
+  print('  void compactTable(string login, string tableName, string startRow, string endRow,  iterators, bool flush, bool wait, CompactionStrategyConfig compactionStrategy)')
+  print('  void cancelCompaction(string login, string tableName)')
+  print('  void createTable(string login, string tableName, bool versioningIter, TimeType type)')
+  print('  void deleteTable(string login, string tableName)')
+  print('  void deleteRows(string login, string tableName, string startRow, string endRow)')
+  print('  void exportTable(string login, string tableName, string exportDir)')
+  print('  void flushTable(string login, string tableName, string startRow, string endRow, bool wait)')
+  print('   getDiskUsage(string login,  tables)')
+  print('   getLocalityGroups(string login, string tableName)')
+  print('  IteratorSetting getIteratorSetting(string login, string tableName, string iteratorName, IteratorScope scope)')
+  print('  string getMaxRow(string login, string tableName,  auths, string startRow, bool startInclusive, string endRow, bool endInclusive)')
+  print('   getTableProperties(string login, string tableName)')
+  print('  void importDirectory(string login, string tableName, string importDir, string failureDir, bool setTime)')
+  print('  void importTable(string login, string tableName, string importDir)')
+  print('   listSplits(string login, string tableName, i32 maxSplits)')
+  print('   listTables(string login)')
+  print('   listIterators(string login, string tableName)')
+  print('   listConstraints(string login, string tableName)')
+  print('  void mergeTablets(string login, string tableName, string startRow, string endRow)')
+  print('  void offlineTable(string login, string tableName, bool wait)')
+  print('  void onlineTable(string login, string tableName, bool wait)')
+  print('  void removeConstraint(string login, string tableName, i32 constraint)')
+  print('  void removeIterator(string login, string tableName, string iterName,  scopes)')
+  print('  void removeTableProperty(string login, string tableName, string property)')
+  print('  void renameTable(string login, string oldTableName, string newTableName)')
+  print('  void setLocalityGroups(string login, string tableName,  groups)')
+  print('  void setTableProperty(string login, string tableName, string property, string value)')
+  print('   splitRangeByTablets(string login, string tableName, Range range, i32 maxSplits)')
+  print('  bool tableExists(string login, string tableName)')
+  print('   tableIdMap(string login)')
+  print('  bool testTableClassLoad(string login, string tableName, string className, string asTypeName)')
+  print('  void pingTabletServer(string login, string tserver)')
+  print('   getActiveScans(string login, string tserver)')
+  print('   getActiveCompactions(string login, string tserver)')
+  print('   getSiteConfiguration(string login)')
+  print('   getSystemConfiguration(string login)')
+  print('   getTabletServers(string login)')
+  print('  void removeProperty(string login, string property)')
+  print('  void setProperty(string login, string property, string value)')
+  print('  bool testClassLoad(string login, string className, string asTypeName)')
+  print('  bool authenticateUser(string login, string user,  properties)')
+  print('  void changeUserAuthorizations(string login, string user,  authorizations)')
+  print('  void changeLocalUserPassword(string login, string user, string password)')
+  print('  void createLocalUser(string login, string user, string password)')
+  print('  void dropLocalUser(string login, string user)')
+  print('   getUserAuthorizations(string login, string user)')
+  print('  void grantSystemPermission(string login, string user, SystemPermission perm)')
+  print('  void grantTablePermission(string login, string user, string table, TablePermission perm)')
+  print('  bool hasSystemPermission(string login, string user, SystemPermission perm)')
+  print('  bool hasTablePermission(string login, string user, string table, TablePermission perm)')
+  print('   listLocalUsers(string login)')
+  print('  void revokeSystemPermission(string login, string user, SystemPermission perm)')
+  print('  void revokeTablePermission(string login, string user, string table, TablePermission perm)')
+  print('  void grantNamespacePermission(string login, string user, string namespaceName, NamespacePermission perm)')
+  print('  bool hasNamespacePermission(string login, string user, string namespaceName, NamespacePermission perm)')
+  print('  void revokeNamespacePermission(string login, string user, string namespaceName, NamespacePermission perm)')
+  print('  string createBatchScanner(string login, string tableName, BatchScanOptions options)')
+  print('  string createScanner(string login, string tableName, ScanOptions options)')
+  print('  bool hasNext(string scanner)')
+  print('  KeyValueAndPeek nextEntry(string scanner)')
+  print('  ScanResult nextK(string scanner, i32 k)')
+  print('  void closeScanner(string scanner)')
+  print('  void updateAndFlush(string login, string tableName,  cells)')
+  print('  string createWriter(string login, string tableName, WriterOptions opts)')
+  print('  void update(string writer,  cells)')
+  print('  void flush(string writer)')
+  print('  void closeWriter(string writer)')
+  print('  ConditionalStatus updateRowConditionally(string login, string tableName, string row, ConditionalUpdates updates)')
+  print('  string createConditionalWriter(string login, string tableName, ConditionalWriterOptions options)')
+  print('   updateRowsConditionally(string conditionalWriter,  updates)')
+  print('  void closeConditionalWriter(string conditionalWriter)')
+  print('  Range getRowRange(string row)')
+  print('  Key getFollowing(Key key, PartialKey part)')
+  print('  string systemNamespace()')
+  print('  string defaultNamespace()')
+  print('   listNamespaces(string login)')
+  print('  bool namespaceExists(string login, string namespaceName)')
+  print('  void createNamespace(string login, string namespaceName)')
+  print('  void deleteNamespace(string login, string namespaceName)')
+  print('  void renameNamespace(string login, string oldNamespaceName, string newNamespaceName)')
+  print('  void setNamespaceProperty(string login, string namespaceName, string property, string value)')
+  print('  void removeNamespaceProperty(string login, string namespaceName, string property)')
+  print('   getNamespaceProperties(string login, string namespaceName)')
+  print('   namespaceIdMap(string login)')
+  print('  void attachNamespaceIterator(string login, string namespaceName, IteratorSetting setting,  scopes)')
+  print('  void removeNamespaceIterator(string login, string namespaceName, string name,  scopes)')
+  print('  IteratorSetting getNamespaceIteratorSetting(string login, string namespaceName, string name, IteratorScope scope)')
+  print('   listNamespaceIterators(string login, string namespaceName)')
+  print('  void checkNamespaceIteratorConflicts(string login, string namespaceName, IteratorSetting setting,  scopes)')
+  print('  i32 addNamespaceConstraint(string login, string namespaceName, string constraintClassName)')
+  print('  void removeNamespaceConstraint(string login, string namespaceName, i32 id)')
+  print('   listNamespaceConstraints(string login, string namespaceName)')
+  print('  bool testNamespaceClassLoad(string login, string namespaceName, string className, string asTypeName)')
+  print('')
   sys.exit(0)
 
 pp = pprint.PrettyPrinter(indent = 2)
@@ -122,6 +146,7 @@
 port = 9090
 uri = ''
 framed = False
+ssl = False
 http = False
 argi = 1
 
@@ -150,13 +175,17 @@
   framed = True
   argi += 1
 
+if sys.argv[argi] == '-s' or sys.argv[argi] == '-ssl':
+  ssl = True
+  argi += 1
+
 cmd = sys.argv[argi]
 args = sys.argv[argi+1:]
 
 if http:
   transport = THttpClient.THttpClient(host, port, uri)
 else:
-  socket = TSocket.TSocket(host, port)
+  socket = TSSLSocket.TSSLSocket(host, port, validate=False) if ssl else TSocket.TSocket(host, port)
   if framed:
     transport = TTransport.TFramedTransport(socket)
   else:
@@ -167,468 +196,606 @@
 
 if cmd == 'login':
   if len(args) != 2:
-    print 'login requires 2 args'
+    print('login requires 2 args')
     sys.exit(1)
   pp.pprint(client.login(args[0],eval(args[1]),))
 
 elif cmd == 'addConstraint':
   if len(args) != 3:
-    print 'addConstraint requires 3 args'
+    print('addConstraint requires 3 args')
     sys.exit(1)
   pp.pprint(client.addConstraint(args[0],args[1],args[2],))
 
 elif cmd == 'addSplits':
   if len(args) != 3:
-    print 'addSplits requires 3 args'
+    print('addSplits requires 3 args')
     sys.exit(1)
   pp.pprint(client.addSplits(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'attachIterator':
   if len(args) != 4:
-    print 'attachIterator requires 4 args'
+    print('attachIterator requires 4 args')
     sys.exit(1)
   pp.pprint(client.attachIterator(args[0],args[1],eval(args[2]),eval(args[3]),))
 
 elif cmd == 'checkIteratorConflicts':
   if len(args) != 4:
-    print 'checkIteratorConflicts requires 4 args'
+    print('checkIteratorConflicts requires 4 args')
     sys.exit(1)
   pp.pprint(client.checkIteratorConflicts(args[0],args[1],eval(args[2]),eval(args[3]),))
 
 elif cmd == 'clearLocatorCache':
   if len(args) != 2:
-    print 'clearLocatorCache requires 2 args'
+    print('clearLocatorCache requires 2 args')
     sys.exit(1)
   pp.pprint(client.clearLocatorCache(args[0],args[1],))
 
 elif cmd == 'cloneTable':
   if len(args) != 6:
-    print 'cloneTable requires 6 args'
+    print('cloneTable requires 6 args')
     sys.exit(1)
   pp.pprint(client.cloneTable(args[0],args[1],args[2],eval(args[3]),eval(args[4]),eval(args[5]),))
 
 elif cmd == 'compactTable':
   if len(args) != 8:
-    print 'compactTable requires 8 args'
+    print('compactTable requires 8 args')
     sys.exit(1)
   pp.pprint(client.compactTable(args[0],args[1],args[2],args[3],eval(args[4]),eval(args[5]),eval(args[6]),eval(args[7]),))
 
 elif cmd == 'cancelCompaction':
   if len(args) != 2:
-    print 'cancelCompaction requires 2 args'
+    print('cancelCompaction requires 2 args')
     sys.exit(1)
   pp.pprint(client.cancelCompaction(args[0],args[1],))
 
 elif cmd == 'createTable':
   if len(args) != 4:
-    print 'createTable requires 4 args'
+    print('createTable requires 4 args')
     sys.exit(1)
   pp.pprint(client.createTable(args[0],args[1],eval(args[2]),eval(args[3]),))
 
 elif cmd == 'deleteTable':
   if len(args) != 2:
-    print 'deleteTable requires 2 args'
+    print('deleteTable requires 2 args')
     sys.exit(1)
   pp.pprint(client.deleteTable(args[0],args[1],))
 
 elif cmd == 'deleteRows':
   if len(args) != 4:
-    print 'deleteRows requires 4 args'
+    print('deleteRows requires 4 args')
     sys.exit(1)
   pp.pprint(client.deleteRows(args[0],args[1],args[2],args[3],))
 
 elif cmd == 'exportTable':
   if len(args) != 3:
-    print 'exportTable requires 3 args'
+    print('exportTable requires 3 args')
     sys.exit(1)
   pp.pprint(client.exportTable(args[0],args[1],args[2],))
 
 elif cmd == 'flushTable':
   if len(args) != 5:
-    print 'flushTable requires 5 args'
+    print('flushTable requires 5 args')
     sys.exit(1)
   pp.pprint(client.flushTable(args[0],args[1],args[2],args[3],eval(args[4]),))
 
 elif cmd == 'getDiskUsage':
   if len(args) != 2:
-    print 'getDiskUsage requires 2 args'
+    print('getDiskUsage requires 2 args')
     sys.exit(1)
   pp.pprint(client.getDiskUsage(args[0],eval(args[1]),))
 
 elif cmd == 'getLocalityGroups':
   if len(args) != 2:
-    print 'getLocalityGroups requires 2 args'
+    print('getLocalityGroups requires 2 args')
     sys.exit(1)
   pp.pprint(client.getLocalityGroups(args[0],args[1],))
 
 elif cmd == 'getIteratorSetting':
   if len(args) != 4:
-    print 'getIteratorSetting requires 4 args'
+    print('getIteratorSetting requires 4 args')
     sys.exit(1)
   pp.pprint(client.getIteratorSetting(args[0],args[1],args[2],eval(args[3]),))
 
 elif cmd == 'getMaxRow':
   if len(args) != 7:
-    print 'getMaxRow requires 7 args'
+    print('getMaxRow requires 7 args')
     sys.exit(1)
   pp.pprint(client.getMaxRow(args[0],args[1],eval(args[2]),args[3],eval(args[4]),args[5],eval(args[6]),))
 
 elif cmd == 'getTableProperties':
   if len(args) != 2:
-    print 'getTableProperties requires 2 args'
+    print('getTableProperties requires 2 args')
     sys.exit(1)
   pp.pprint(client.getTableProperties(args[0],args[1],))
 
 elif cmd == 'importDirectory':
   if len(args) != 5:
-    print 'importDirectory requires 5 args'
+    print('importDirectory requires 5 args')
     sys.exit(1)
   pp.pprint(client.importDirectory(args[0],args[1],args[2],args[3],eval(args[4]),))
 
 elif cmd == 'importTable':
   if len(args) != 3:
-    print 'importTable requires 3 args'
+    print('importTable requires 3 args')
     sys.exit(1)
   pp.pprint(client.importTable(args[0],args[1],args[2],))
 
 elif cmd == 'listSplits':
   if len(args) != 3:
-    print 'listSplits requires 3 args'
+    print('listSplits requires 3 args')
     sys.exit(1)
   pp.pprint(client.listSplits(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'listTables':
   if len(args) != 1:
-    print 'listTables requires 1 args'
+    print('listTables requires 1 args')
     sys.exit(1)
   pp.pprint(client.listTables(args[0],))
 
 elif cmd == 'listIterators':
   if len(args) != 2:
-    print 'listIterators requires 2 args'
+    print('listIterators requires 2 args')
     sys.exit(1)
   pp.pprint(client.listIterators(args[0],args[1],))
 
 elif cmd == 'listConstraints':
   if len(args) != 2:
-    print 'listConstraints requires 2 args'
+    print('listConstraints requires 2 args')
     sys.exit(1)
   pp.pprint(client.listConstraints(args[0],args[1],))
 
 elif cmd == 'mergeTablets':
   if len(args) != 4:
-    print 'mergeTablets requires 4 args'
+    print('mergeTablets requires 4 args')
     sys.exit(1)
   pp.pprint(client.mergeTablets(args[0],args[1],args[2],args[3],))
 
 elif cmd == 'offlineTable':
   if len(args) != 3:
-    print 'offlineTable requires 3 args'
+    print('offlineTable requires 3 args')
     sys.exit(1)
   pp.pprint(client.offlineTable(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'onlineTable':
   if len(args) != 3:
-    print 'onlineTable requires 3 args'
+    print('onlineTable requires 3 args')
     sys.exit(1)
   pp.pprint(client.onlineTable(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'removeConstraint':
   if len(args) != 3:
-    print 'removeConstraint requires 3 args'
+    print('removeConstraint requires 3 args')
     sys.exit(1)
   pp.pprint(client.removeConstraint(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'removeIterator':
   if len(args) != 4:
-    print 'removeIterator requires 4 args'
+    print('removeIterator requires 4 args')
     sys.exit(1)
   pp.pprint(client.removeIterator(args[0],args[1],args[2],eval(args[3]),))
 
 elif cmd == 'removeTableProperty':
   if len(args) != 3:
-    print 'removeTableProperty requires 3 args'
+    print('removeTableProperty requires 3 args')
     sys.exit(1)
   pp.pprint(client.removeTableProperty(args[0],args[1],args[2],))
 
 elif cmd == 'renameTable':
   if len(args) != 3:
-    print 'renameTable requires 3 args'
+    print('renameTable requires 3 args')
     sys.exit(1)
   pp.pprint(client.renameTable(args[0],args[1],args[2],))
 
 elif cmd == 'setLocalityGroups':
   if len(args) != 3:
-    print 'setLocalityGroups requires 3 args'
+    print('setLocalityGroups requires 3 args')
     sys.exit(1)
   pp.pprint(client.setLocalityGroups(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'setTableProperty':
   if len(args) != 4:
-    print 'setTableProperty requires 4 args'
+    print('setTableProperty requires 4 args')
     sys.exit(1)
   pp.pprint(client.setTableProperty(args[0],args[1],args[2],args[3],))
 
 elif cmd == 'splitRangeByTablets':
   if len(args) != 4:
-    print 'splitRangeByTablets requires 4 args'
+    print('splitRangeByTablets requires 4 args')
     sys.exit(1)
   pp.pprint(client.splitRangeByTablets(args[0],args[1],eval(args[2]),eval(args[3]),))
 
 elif cmd == 'tableExists':
   if len(args) != 2:
-    print 'tableExists requires 2 args'
+    print('tableExists requires 2 args')
     sys.exit(1)
   pp.pprint(client.tableExists(args[0],args[1],))
 
 elif cmd == 'tableIdMap':
   if len(args) != 1:
-    print 'tableIdMap requires 1 args'
+    print('tableIdMap requires 1 args')
     sys.exit(1)
   pp.pprint(client.tableIdMap(args[0],))
 
 elif cmd == 'testTableClassLoad':
   if len(args) != 4:
-    print 'testTableClassLoad requires 4 args'
+    print('testTableClassLoad requires 4 args')
     sys.exit(1)
   pp.pprint(client.testTableClassLoad(args[0],args[1],args[2],args[3],))
 
 elif cmd == 'pingTabletServer':
   if len(args) != 2:
-    print 'pingTabletServer requires 2 args'
+    print('pingTabletServer requires 2 args')
     sys.exit(1)
   pp.pprint(client.pingTabletServer(args[0],args[1],))
 
 elif cmd == 'getActiveScans':
   if len(args) != 2:
-    print 'getActiveScans requires 2 args'
+    print('getActiveScans requires 2 args')
     sys.exit(1)
   pp.pprint(client.getActiveScans(args[0],args[1],))
 
 elif cmd == 'getActiveCompactions':
   if len(args) != 2:
-    print 'getActiveCompactions requires 2 args'
+    print('getActiveCompactions requires 2 args')
     sys.exit(1)
   pp.pprint(client.getActiveCompactions(args[0],args[1],))
 
 elif cmd == 'getSiteConfiguration':
   if len(args) != 1:
-    print 'getSiteConfiguration requires 1 args'
+    print('getSiteConfiguration requires 1 args')
     sys.exit(1)
   pp.pprint(client.getSiteConfiguration(args[0],))
 
 elif cmd == 'getSystemConfiguration':
   if len(args) != 1:
-    print 'getSystemConfiguration requires 1 args'
+    print('getSystemConfiguration requires 1 args')
     sys.exit(1)
   pp.pprint(client.getSystemConfiguration(args[0],))
 
 elif cmd == 'getTabletServers':
   if len(args) != 1:
-    print 'getTabletServers requires 1 args'
+    print('getTabletServers requires 1 args')
     sys.exit(1)
   pp.pprint(client.getTabletServers(args[0],))
 
 elif cmd == 'removeProperty':
   if len(args) != 2:
-    print 'removeProperty requires 2 args'
+    print('removeProperty requires 2 args')
     sys.exit(1)
   pp.pprint(client.removeProperty(args[0],args[1],))
 
 elif cmd == 'setProperty':
   if len(args) != 3:
-    print 'setProperty requires 3 args'
+    print('setProperty requires 3 args')
     sys.exit(1)
   pp.pprint(client.setProperty(args[0],args[1],args[2],))
 
 elif cmd == 'testClassLoad':
   if len(args) != 3:
-    print 'testClassLoad requires 3 args'
+    print('testClassLoad requires 3 args')
     sys.exit(1)
   pp.pprint(client.testClassLoad(args[0],args[1],args[2],))
 
 elif cmd == 'authenticateUser':
   if len(args) != 3:
-    print 'authenticateUser requires 3 args'
+    print('authenticateUser requires 3 args')
     sys.exit(1)
   pp.pprint(client.authenticateUser(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'changeUserAuthorizations':
   if len(args) != 3:
-    print 'changeUserAuthorizations requires 3 args'
+    print('changeUserAuthorizations requires 3 args')
     sys.exit(1)
   pp.pprint(client.changeUserAuthorizations(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'changeLocalUserPassword':
   if len(args) != 3:
-    print 'changeLocalUserPassword requires 3 args'
+    print('changeLocalUserPassword requires 3 args')
     sys.exit(1)
   pp.pprint(client.changeLocalUserPassword(args[0],args[1],args[2],))
 
 elif cmd == 'createLocalUser':
   if len(args) != 3:
-    print 'createLocalUser requires 3 args'
+    print('createLocalUser requires 3 args')
     sys.exit(1)
   pp.pprint(client.createLocalUser(args[0],args[1],args[2],))
 
 elif cmd == 'dropLocalUser':
   if len(args) != 2:
-    print 'dropLocalUser requires 2 args'
+    print('dropLocalUser requires 2 args')
     sys.exit(1)
   pp.pprint(client.dropLocalUser(args[0],args[1],))
 
 elif cmd == 'getUserAuthorizations':
   if len(args) != 2:
-    print 'getUserAuthorizations requires 2 args'
+    print('getUserAuthorizations requires 2 args')
     sys.exit(1)
   pp.pprint(client.getUserAuthorizations(args[0],args[1],))
 
 elif cmd == 'grantSystemPermission':
   if len(args) != 3:
-    print 'grantSystemPermission requires 3 args'
+    print('grantSystemPermission requires 3 args')
     sys.exit(1)
   pp.pprint(client.grantSystemPermission(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'grantTablePermission':
   if len(args) != 4:
-    print 'grantTablePermission requires 4 args'
+    print('grantTablePermission requires 4 args')
     sys.exit(1)
   pp.pprint(client.grantTablePermission(args[0],args[1],args[2],eval(args[3]),))
 
 elif cmd == 'hasSystemPermission':
   if len(args) != 3:
-    print 'hasSystemPermission requires 3 args'
+    print('hasSystemPermission requires 3 args')
     sys.exit(1)
   pp.pprint(client.hasSystemPermission(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'hasTablePermission':
   if len(args) != 4:
-    print 'hasTablePermission requires 4 args'
+    print('hasTablePermission requires 4 args')
     sys.exit(1)
   pp.pprint(client.hasTablePermission(args[0],args[1],args[2],eval(args[3]),))
 
 elif cmd == 'listLocalUsers':
   if len(args) != 1:
-    print 'listLocalUsers requires 1 args'
+    print('listLocalUsers requires 1 args')
     sys.exit(1)
   pp.pprint(client.listLocalUsers(args[0],))
 
 elif cmd == 'revokeSystemPermission':
   if len(args) != 3:
-    print 'revokeSystemPermission requires 3 args'
+    print('revokeSystemPermission requires 3 args')
     sys.exit(1)
   pp.pprint(client.revokeSystemPermission(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'revokeTablePermission':
   if len(args) != 4:
-    print 'revokeTablePermission requires 4 args'
+    print('revokeTablePermission requires 4 args')
     sys.exit(1)
   pp.pprint(client.revokeTablePermission(args[0],args[1],args[2],eval(args[3]),))
 
+elif cmd == 'grantNamespacePermission':
+  if len(args) != 4:
+    print('grantNamespacePermission requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.grantNamespacePermission(args[0],args[1],args[2],eval(args[3]),))
+
+elif cmd == 'hasNamespacePermission':
+  if len(args) != 4:
+    print('hasNamespacePermission requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.hasNamespacePermission(args[0],args[1],args[2],eval(args[3]),))
+
+elif cmd == 'revokeNamespacePermission':
+  if len(args) != 4:
+    print('revokeNamespacePermission requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.revokeNamespacePermission(args[0],args[1],args[2],eval(args[3]),))
+
 elif cmd == 'createBatchScanner':
   if len(args) != 3:
-    print 'createBatchScanner requires 3 args'
+    print('createBatchScanner requires 3 args')
     sys.exit(1)
   pp.pprint(client.createBatchScanner(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'createScanner':
   if len(args) != 3:
-    print 'createScanner requires 3 args'
+    print('createScanner requires 3 args')
     sys.exit(1)
   pp.pprint(client.createScanner(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'hasNext':
   if len(args) != 1:
-    print 'hasNext requires 1 args'
+    print('hasNext requires 1 args')
     sys.exit(1)
   pp.pprint(client.hasNext(args[0],))
 
 elif cmd == 'nextEntry':
   if len(args) != 1:
-    print 'nextEntry requires 1 args'
+    print('nextEntry requires 1 args')
     sys.exit(1)
   pp.pprint(client.nextEntry(args[0],))
 
 elif cmd == 'nextK':
   if len(args) != 2:
-    print 'nextK requires 2 args'
+    print('nextK requires 2 args')
     sys.exit(1)
   pp.pprint(client.nextK(args[0],eval(args[1]),))
 
 elif cmd == 'closeScanner':
   if len(args) != 1:
-    print 'closeScanner requires 1 args'
+    print('closeScanner requires 1 args')
     sys.exit(1)
   pp.pprint(client.closeScanner(args[0],))
 
 elif cmd == 'updateAndFlush':
   if len(args) != 3:
-    print 'updateAndFlush requires 3 args'
+    print('updateAndFlush requires 3 args')
     sys.exit(1)
   pp.pprint(client.updateAndFlush(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'createWriter':
   if len(args) != 3:
-    print 'createWriter requires 3 args'
+    print('createWriter requires 3 args')
     sys.exit(1)
   pp.pprint(client.createWriter(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'update':
   if len(args) != 2:
-    print 'update requires 2 args'
+    print('update requires 2 args')
     sys.exit(1)
   pp.pprint(client.update(args[0],eval(args[1]),))
 
 elif cmd == 'flush':
   if len(args) != 1:
-    print 'flush requires 1 args'
+    print('flush requires 1 args')
     sys.exit(1)
   pp.pprint(client.flush(args[0],))
 
 elif cmd == 'closeWriter':
   if len(args) != 1:
-    print 'closeWriter requires 1 args'
+    print('closeWriter requires 1 args')
     sys.exit(1)
   pp.pprint(client.closeWriter(args[0],))
 
 elif cmd == 'updateRowConditionally':
   if len(args) != 4:
-    print 'updateRowConditionally requires 4 args'
+    print('updateRowConditionally requires 4 args')
     sys.exit(1)
   pp.pprint(client.updateRowConditionally(args[0],args[1],args[2],eval(args[3]),))
 
 elif cmd == 'createConditionalWriter':
   if len(args) != 3:
-    print 'createConditionalWriter requires 3 args'
+    print('createConditionalWriter requires 3 args')
     sys.exit(1)
   pp.pprint(client.createConditionalWriter(args[0],args[1],eval(args[2]),))
 
 elif cmd == 'updateRowsConditionally':
   if len(args) != 2:
-    print 'updateRowsConditionally requires 2 args'
+    print('updateRowsConditionally requires 2 args')
     sys.exit(1)
   pp.pprint(client.updateRowsConditionally(args[0],eval(args[1]),))
 
 elif cmd == 'closeConditionalWriter':
   if len(args) != 1:
-    print 'closeConditionalWriter requires 1 args'
+    print('closeConditionalWriter requires 1 args')
     sys.exit(1)
   pp.pprint(client.closeConditionalWriter(args[0],))
 
 elif cmd == 'getRowRange':
   if len(args) != 1:
-    print 'getRowRange requires 1 args'
+    print('getRowRange requires 1 args')
     sys.exit(1)
   pp.pprint(client.getRowRange(args[0],))
 
 elif cmd == 'getFollowing':
   if len(args) != 2:
-    print 'getFollowing requires 2 args'
+    print('getFollowing requires 2 args')
     sys.exit(1)
   pp.pprint(client.getFollowing(eval(args[0]),eval(args[1]),))
 
+elif cmd == 'systemNamespace':
+  if len(args) != 0:
+    print('systemNamespace requires 0 args')
+    sys.exit(1)
+  pp.pprint(client.systemNamespace())
+
+elif cmd == 'defaultNamespace':
+  if len(args) != 0:
+    print('defaultNamespace requires 0 args')
+    sys.exit(1)
+  pp.pprint(client.defaultNamespace())
+
+elif cmd == 'listNamespaces':
+  if len(args) != 1:
+    print('listNamespaces requires 1 args')
+    sys.exit(1)
+  pp.pprint(client.listNamespaces(args[0],))
+
+elif cmd == 'namespaceExists':
+  if len(args) != 2:
+    print('namespaceExists requires 2 args')
+    sys.exit(1)
+  pp.pprint(client.namespaceExists(args[0],args[1],))
+
+elif cmd == 'createNamespace':
+  if len(args) != 2:
+    print('createNamespace requires 2 args')
+    sys.exit(1)
+  pp.pprint(client.createNamespace(args[0],args[1],))
+
+elif cmd == 'deleteNamespace':
+  if len(args) != 2:
+    print('deleteNamespace requires 2 args')
+    sys.exit(1)
+  pp.pprint(client.deleteNamespace(args[0],args[1],))
+
+elif cmd == 'renameNamespace':
+  if len(args) != 3:
+    print('renameNamespace requires 3 args')
+    sys.exit(1)
+  pp.pprint(client.renameNamespace(args[0],args[1],args[2],))
+
+elif cmd == 'setNamespaceProperty':
+  if len(args) != 4:
+    print('setNamespaceProperty requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.setNamespaceProperty(args[0],args[1],args[2],args[3],))
+
+elif cmd == 'removeNamespaceProperty':
+  if len(args) != 3:
+    print('removeNamespaceProperty requires 3 args')
+    sys.exit(1)
+  pp.pprint(client.removeNamespaceProperty(args[0],args[1],args[2],))
+
+elif cmd == 'getNamespaceProperties':
+  if len(args) != 2:
+    print('getNamespaceProperties requires 2 args')
+    sys.exit(1)
+  pp.pprint(client.getNamespaceProperties(args[0],args[1],))
+
+elif cmd == 'namespaceIdMap':
+  if len(args) != 1:
+    print('namespaceIdMap requires 1 args')
+    sys.exit(1)
+  pp.pprint(client.namespaceIdMap(args[0],))
+
+elif cmd == 'attachNamespaceIterator':
+  if len(args) != 4:
+    print('attachNamespaceIterator requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.attachNamespaceIterator(args[0],args[1],eval(args[2]),eval(args[3]),))
+
+elif cmd == 'removeNamespaceIterator':
+  if len(args) != 4:
+    print('removeNamespaceIterator requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.removeNamespaceIterator(args[0],args[1],args[2],eval(args[3]),))
+
+elif cmd == 'getNamespaceIteratorSetting':
+  if len(args) != 4:
+    print('getNamespaceIteratorSetting requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.getNamespaceIteratorSetting(args[0],args[1],args[2],eval(args[3]),))
+
+elif cmd == 'listNamespaceIterators':
+  if len(args) != 2:
+    print('listNamespaceIterators requires 2 args')
+    sys.exit(1)
+  pp.pprint(client.listNamespaceIterators(args[0],args[1],))
+
+elif cmd == 'checkNamespaceIteratorConflicts':
+  if len(args) != 4:
+    print('checkNamespaceIteratorConflicts requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.checkNamespaceIteratorConflicts(args[0],args[1],eval(args[2]),eval(args[3]),))
+
+elif cmd == 'addNamespaceConstraint':
+  if len(args) != 3:
+    print('addNamespaceConstraint requires 3 args')
+    sys.exit(1)
+  pp.pprint(client.addNamespaceConstraint(args[0],args[1],args[2],))
+
+elif cmd == 'removeNamespaceConstraint':
+  if len(args) != 3:
+    print('removeNamespaceConstraint requires 3 args')
+    sys.exit(1)
+  pp.pprint(client.removeNamespaceConstraint(args[0],args[1],eval(args[2]),))
+
+elif cmd == 'listNamespaceConstraints':
+  if len(args) != 2:
+    print('listNamespaceConstraints requires 2 args')
+    sys.exit(1)
+  pp.pprint(client.listNamespaceConstraints(args[0],args[1],))
+
+elif cmd == 'testNamespaceClassLoad':
+  if len(args) != 4:
+    print('testNamespaceClassLoad requires 4 args')
+    sys.exit(1)
+  pp.pprint(client.testNamespaceClassLoad(args[0],args[1],args[2],args[3],))
+
 else:
-  print 'Unrecognized method %s' % cmd
+  print('Unrecognized method %s' % cmd)
   sys.exit(1)
 
 transport.close()
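Aside from the Python 3 `print()` conversion and the new namespace commands, the long `if`/`elif` chain above is the generated CLI's whole dispatch logic: check the argument count for the named method, `eval` any complex arguments, and call the client. As an illustrative sketch (not part of the generated file, and using hypothetical names), the same shape can be expressed as a dispatch table:

```python
# Illustrative sketch of the generated script's command dispatch:
# each command maps to (required argument count, handler).
def make_dispatcher(commands):
    """commands: dict mapping command name -> (arg_count, callable)."""
    def dispatch(cmd, args):
        if cmd not in commands:
            print('Unrecognized method %s' % cmd)
            return 1
        nargs, fn = commands[cmd]
        if len(args) != nargs:
            # Mirrors the generated "<cmd> requires N args" check.
            print('%s requires %d args' % (cmd, nargs))
            return 1
        fn(*args)
        return 0
    return dispatch

# Usage with stub handlers standing in for client.* calls.
calls = []
dispatch = make_dispatcher({
    'listNamespaces': (1, lambda login: calls.append(('listNamespaces', login))),
    'createNamespace': (2, lambda login, ns: calls.append(('createNamespace', login, ns))),
})
dispatch('createNamespace', ['root', 'ns1'])
```

The generated file deliberately spells each branch out instead, since it is emitted mechanically from the Thrift IDL and is not meant to be hand-edited.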
diff --git a/proxy/src/main/python/AccumuloProxy.py b/proxy/src/main/python/AccumuloProxy.py
index 2805fff..19bd257 100644
--- a/proxy/src/main/python/AccumuloProxy.py
+++ b/proxy/src/main/python/AccumuloProxy.py
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Autogenerated by Thrift Compiler (0.9.1)
+# Autogenerated by Thrift Compiler (0.9.3)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -21,6 +21,7 @@
 #
 
 from thrift.Thrift import TType, TMessageType, TException, TApplicationException
+import logging
 from ttypes import *
 from thrift.Thrift import TProcessor
 from thrift.transport import TTransport
@@ -573,6 +574,36 @@
     """
     pass
 
+  def grantNamespacePermission(self, login, user, namespaceName, perm):
+    """
+    Parameters:
+     - login
+     - user
+     - namespaceName
+     - perm
+    """
+    pass
+
+  def hasNamespacePermission(self, login, user, namespaceName, perm):
+    """
+    Parameters:
+     - login
+     - user
+     - namespaceName
+     - perm
+    """
+    pass
+
+  def revokeNamespacePermission(self, login, user, namespaceName, perm):
+    """
+    Parameters:
+     - login
+     - user
+     - namespaceName
+     - perm
+    """
+    pass
+
   def createBatchScanner(self, login, tableName, options):
     """
     Parameters:
@@ -709,6 +740,170 @@
     """
     pass
 
+  def systemNamespace(self):
+    pass
+
+  def defaultNamespace(self):
+    pass
+
+  def listNamespaces(self, login):
+    """
+    Parameters:
+     - login
+    """
+    pass
+
+  def namespaceExists(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    pass
+
+  def createNamespace(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    pass
+
+  def deleteNamespace(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    pass
+
+  def renameNamespace(self, login, oldNamespaceName, newNamespaceName):
+    """
+    Parameters:
+     - login
+     - oldNamespaceName
+     - newNamespaceName
+    """
+    pass
+
+  def setNamespaceProperty(self, login, namespaceName, property, value):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - property
+     - value
+    """
+    pass
+
+  def removeNamespaceProperty(self, login, namespaceName, property):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - property
+    """
+    pass
+
+  def getNamespaceProperties(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    pass
+
+  def namespaceIdMap(self, login):
+    """
+    Parameters:
+     - login
+    """
+    pass
+
+  def attachNamespaceIterator(self, login, namespaceName, setting, scopes):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - setting
+     - scopes
+    """
+    pass
+
+  def removeNamespaceIterator(self, login, namespaceName, name, scopes):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - name
+     - scopes
+    """
+    pass
+
+  def getNamespaceIteratorSetting(self, login, namespaceName, name, scope):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - name
+     - scope
+    """
+    pass
+
+  def listNamespaceIterators(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    pass
+
+  def checkNamespaceIteratorConflicts(self, login, namespaceName, setting, scopes):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - setting
+     - scopes
+    """
+    pass
+
+  def addNamespaceConstraint(self, login, namespaceName, constraintClassName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - constraintClassName
+    """
+    pass
+
+  def removeNamespaceConstraint(self, login, namespaceName, id):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - id
+    """
+    pass
+
+  def listNamespaceConstraints(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    pass
+
+  def testNamespaceClassLoad(self, login, namespaceName, className, asTypeName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - className
+     - asTypeName
+    """
+    pass
+
 
 class Client(Iface):
   def __init__(self, iprot, oprot=None):
@@ -736,20 +931,21 @@
     self._oprot.trans.flush()
 
   def recv_login(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = login_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "login failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "login failed: unknown result")
 
   def addConstraint(self, login, tableName, constraintClassName):
     """
@@ -772,15 +968,16 @@
     self._oprot.trans.flush()
 
   def recv_addConstraint(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = addConstraint_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -789,7 +986,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "addConstraint failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "addConstraint failed: unknown result")
 
   def addSplits(self, login, tableName, splits):
     """
@@ -812,15 +1009,16 @@
     self._oprot.trans.flush()
 
   def recv_addSplits(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = addSplits_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -852,15 +1050,16 @@
     self._oprot.trans.flush()
 
   def recv_attachIterator(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = attachIterator_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -892,15 +1091,16 @@
     self._oprot.trans.flush()
 
   def recv_checkIteratorConflicts(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = checkIteratorConflicts_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -928,15 +1128,16 @@
     self._oprot.trans.flush()
 
   def recv_clearLocatorCache(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = clearLocatorCache_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     return
@@ -968,15 +1169,16 @@
     self._oprot.trans.flush()
 
   def recv_cloneTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = cloneTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1018,15 +1220,16 @@
     self._oprot.trans.flush()
 
   def recv_compactTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = compactTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1054,15 +1257,16 @@
     self._oprot.trans.flush()
 
   def recv_cancelCompaction(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = cancelCompaction_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1094,15 +1298,16 @@
     self._oprot.trans.flush()
 
   def recv_createTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = createTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1130,15 +1335,16 @@
     self._oprot.trans.flush()
 
   def recv_deleteTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = deleteTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1170,15 +1376,16 @@
     self._oprot.trans.flush()
 
   def recv_deleteRows(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = deleteRows_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1208,15 +1415,16 @@
     self._oprot.trans.flush()
 
   def recv_exportTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = exportTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1250,15 +1458,16 @@
     self._oprot.trans.flush()
 
   def recv_flushTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = flushTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1286,15 +1495,16 @@
     self._oprot.trans.flush()
 
   def recv_getDiskUsage(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getDiskUsage_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1303,7 +1513,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getDiskUsage failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getDiskUsage failed: unknown result")
 
   def getLocalityGroups(self, login, tableName):
     """
@@ -1324,15 +1534,16 @@
     self._oprot.trans.flush()
 
   def recv_getLocalityGroups(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getLocalityGroups_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1341,7 +1552,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getLocalityGroups failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getLocalityGroups failed: unknown result")
 
   def getIteratorSetting(self, login, tableName, iteratorName, scope):
     """
@@ -1366,15 +1577,16 @@
     self._oprot.trans.flush()
 
   def recv_getIteratorSetting(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getIteratorSetting_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1383,7 +1595,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getIteratorSetting failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getIteratorSetting failed: unknown result")
 
   def getMaxRow(self, login, tableName, auths, startRow, startInclusive, endRow, endInclusive):
     """
@@ -1414,15 +1626,16 @@
     self._oprot.trans.flush()
 
   def recv_getMaxRow(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getMaxRow_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1431,7 +1644,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getMaxRow failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getMaxRow failed: unknown result")
 
   def getTableProperties(self, login, tableName):
     """
@@ -1452,15 +1665,16 @@
     self._oprot.trans.flush()
 
   def recv_getTableProperties(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getTableProperties_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1469,7 +1683,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getTableProperties failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getTableProperties failed: unknown result")
 
   def importDirectory(self, login, tableName, importDir, failureDir, setTime):
     """
@@ -1496,15 +1710,16 @@
     self._oprot.trans.flush()
 
   def recv_importDirectory(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = importDirectory_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch3 is not None:
@@ -1534,15 +1749,16 @@
     self._oprot.trans.flush()
 
   def recv_importTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = importTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1572,15 +1788,16 @@
     self._oprot.trans.flush()
 
   def recv_listSplits(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = listSplits_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1589,7 +1806,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "listSplits failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listSplits failed: unknown result")
 
   def listTables(self, login):
     """
@@ -1608,18 +1825,19 @@
     self._oprot.trans.flush()
 
   def recv_listTables(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = listTables_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "listTables failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listTables failed: unknown result")
 
   def listIterators(self, login, tableName):
     """
@@ -1640,15 +1858,16 @@
     self._oprot.trans.flush()
 
   def recv_listIterators(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = listIterators_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1657,7 +1876,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "listIterators failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listIterators failed: unknown result")
 
   def listConstraints(self, login, tableName):
     """
@@ -1678,15 +1897,16 @@
     self._oprot.trans.flush()
 
   def recv_listConstraints(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = listConstraints_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -1695,7 +1915,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "listConstraints failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listConstraints failed: unknown result")
 
   def mergeTablets(self, login, tableName, startRow, endRow):
     """
@@ -1720,15 +1940,16 @@
     self._oprot.trans.flush()
 
   def recv_mergeTablets(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = mergeTablets_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1758,15 +1979,16 @@
     self._oprot.trans.flush()
 
   def recv_offlineTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = offlineTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1796,15 +2018,16 @@
     self._oprot.trans.flush()
 
   def recv_onlineTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = onlineTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1834,15 +2057,16 @@
     self._oprot.trans.flush()
 
   def recv_removeConstraint(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = removeConstraint_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1874,15 +2098,16 @@
     self._oprot.trans.flush()
 
   def recv_removeIterator(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = removeIterator_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1912,15 +2137,16 @@
     self._oprot.trans.flush()
 
   def recv_removeTableProperty(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = removeTableProperty_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1950,15 +2176,16 @@
     self._oprot.trans.flush()
 
   def recv_renameTable(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = renameTable_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -1990,15 +2217,16 @@
     self._oprot.trans.flush()
 
   def recv_setLocalityGroups(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = setLocalityGroups_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2030,15 +2258,16 @@
     self._oprot.trans.flush()
 
   def recv_setTableProperty(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = setTableProperty_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2070,15 +2299,16 @@
     self._oprot.trans.flush()
 
   def recv_splitRangeByTablets(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = splitRangeByTablets_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -2087,7 +2317,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "splitRangeByTablets failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "splitRangeByTablets failed: unknown result")
 
   def tableExists(self, login, tableName):
     """
@@ -2108,18 +2338,19 @@
     self._oprot.trans.flush()
 
   def recv_tableExists(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = tableExists_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "tableExists failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "tableExists failed: unknown result")
 
   def tableIdMap(self, login):
     """
@@ -2138,18 +2369,19 @@
     self._oprot.trans.flush()
 
   def recv_tableIdMap(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = tableIdMap_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "tableIdMap failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "tableIdMap failed: unknown result")
 
   def testTableClassLoad(self, login, tableName, className, asTypeName):
     """
@@ -2174,15 +2406,16 @@
     self._oprot.trans.flush()
 
   def recv_testTableClassLoad(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = testTableClassLoad_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -2191,7 +2424,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "testTableClassLoad failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "testTableClassLoad failed: unknown result")
 
   def pingTabletServer(self, login, tserver):
     """
@@ -2212,15 +2445,16 @@
     self._oprot.trans.flush()
 
   def recv_pingTabletServer(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = pingTabletServer_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2246,22 +2480,23 @@
     self._oprot.trans.flush()
 
   def recv_getActiveScans(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getActiveScans_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getActiveScans failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getActiveScans failed: unknown result")
 
   def getActiveCompactions(self, login, tserver):
     """
@@ -2282,22 +2517,23 @@
     self._oprot.trans.flush()
 
   def recv_getActiveCompactions(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getActiveCompactions_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getActiveCompactions failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getActiveCompactions failed: unknown result")
 
   def getSiteConfiguration(self, login):
     """
@@ -2316,22 +2552,23 @@
     self._oprot.trans.flush()
 
   def recv_getSiteConfiguration(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getSiteConfiguration_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getSiteConfiguration failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getSiteConfiguration failed: unknown result")
 
   def getSystemConfiguration(self, login):
     """
@@ -2350,22 +2587,23 @@
     self._oprot.trans.flush()
 
   def recv_getSystemConfiguration(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getSystemConfiguration_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getSystemConfiguration failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getSystemConfiguration failed: unknown result")
 
   def getTabletServers(self, login):
     """
@@ -2384,18 +2622,19 @@
     self._oprot.trans.flush()
 
   def recv_getTabletServers(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getTabletServers_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getTabletServers failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getTabletServers failed: unknown result")
 
   def removeProperty(self, login, property):
     """
@@ -2416,15 +2655,16 @@
     self._oprot.trans.flush()
 
   def recv_removeProperty(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = removeProperty_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2452,15 +2692,16 @@
     self._oprot.trans.flush()
 
   def recv_setProperty(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = setProperty_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2488,22 +2729,23 @@
     self._oprot.trans.flush()
 
   def recv_testClassLoad(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = testClassLoad_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "testClassLoad failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "testClassLoad failed: unknown result")
 
   def authenticateUser(self, login, user, properties):
     """
@@ -2526,22 +2768,23 @@
     self._oprot.trans.flush()
 
   def recv_authenticateUser(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = authenticateUser_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "authenticateUser failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "authenticateUser failed: unknown result")
 
   def changeUserAuthorizations(self, login, user, authorizations):
     """
@@ -2564,15 +2807,16 @@
     self._oprot.trans.flush()
 
   def recv_changeUserAuthorizations(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = changeUserAuthorizations_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2600,15 +2844,16 @@
     self._oprot.trans.flush()
 
   def recv_changeLocalUserPassword(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = changeLocalUserPassword_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2636,15 +2881,16 @@
     self._oprot.trans.flush()
 
   def recv_createLocalUser(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = createLocalUser_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2670,15 +2916,16 @@
     self._oprot.trans.flush()
 
   def recv_dropLocalUser(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = dropLocalUser_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2704,22 +2951,23 @@
     self._oprot.trans.flush()
 
   def recv_getUserAuthorizations(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getUserAuthorizations_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getUserAuthorizations failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getUserAuthorizations failed: unknown result")
 
   def grantSystemPermission(self, login, user, perm):
     """
@@ -2742,15 +2990,16 @@
     self._oprot.trans.flush()
 
   def recv_grantSystemPermission(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = grantSystemPermission_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2780,15 +3029,16 @@
     self._oprot.trans.flush()
 
   def recv_grantTablePermission(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = grantTablePermission_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2818,22 +3068,23 @@
     self._oprot.trans.flush()
 
   def recv_hasSystemPermission(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = hasSystemPermission_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
       raise result.ouch2
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "hasSystemPermission failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "hasSystemPermission failed: unknown result")
 
   def hasTablePermission(self, login, user, table, perm):
     """
@@ -2858,15 +3109,16 @@
     self._oprot.trans.flush()
 
   def recv_hasTablePermission(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = hasTablePermission_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -2875,7 +3127,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "hasTablePermission failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "hasTablePermission failed: unknown result")
 
   def listLocalUsers(self, login):
     """
@@ -2894,15 +3146,16 @@
     self._oprot.trans.flush()
 
   def recv_listLocalUsers(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = listLocalUsers_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -2911,7 +3164,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "listLocalUsers failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listLocalUsers failed: unknown result")
 
   def revokeSystemPermission(self, login, user, perm):
     """
@@ -2934,15 +3187,16 @@
     self._oprot.trans.flush()
 
   def recv_revokeSystemPermission(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = revokeSystemPermission_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2972,15 +3226,16 @@
     self._oprot.trans.flush()
 
   def recv_revokeTablePermission(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = revokeTablePermission_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -2989,6 +3244,125 @@
       raise result.ouch3
     return
 
+  def grantNamespacePermission(self, login, user, namespaceName, perm):
+    """
+    Parameters:
+     - login
+     - user
+     - namespaceName
+     - perm
+    """
+    self.send_grantNamespacePermission(login, user, namespaceName, perm)
+    self.recv_grantNamespacePermission()
+
+  def send_grantNamespacePermission(self, login, user, namespaceName, perm):
+    self._oprot.writeMessageBegin('grantNamespacePermission', TMessageType.CALL, self._seqid)
+    args = grantNamespacePermission_args()
+    args.login = login
+    args.user = user
+    args.namespaceName = namespaceName
+    args.perm = perm
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_grantNamespacePermission(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = grantNamespacePermission_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    return
+
+  def hasNamespacePermission(self, login, user, namespaceName, perm):
+    """
+    Parameters:
+     - login
+     - user
+     - namespaceName
+     - perm
+    """
+    self.send_hasNamespacePermission(login, user, namespaceName, perm)
+    return self.recv_hasNamespacePermission()
+
+  def send_hasNamespacePermission(self, login, user, namespaceName, perm):
+    self._oprot.writeMessageBegin('hasNamespacePermission', TMessageType.CALL, self._seqid)
+    args = hasNamespacePermission_args()
+    args.login = login
+    args.user = user
+    args.namespaceName = namespaceName
+    args.perm = perm
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_hasNamespacePermission(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = hasNamespacePermission_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "hasNamespacePermission failed: unknown result")
+
+  def revokeNamespacePermission(self, login, user, namespaceName, perm):
+    """
+    Parameters:
+     - login
+     - user
+     - namespaceName
+     - perm
+    """
+    self.send_revokeNamespacePermission(login, user, namespaceName, perm)
+    self.recv_revokeNamespacePermission()
+
+  def send_revokeNamespacePermission(self, login, user, namespaceName, perm):
+    self._oprot.writeMessageBegin('revokeNamespacePermission', TMessageType.CALL, self._seqid)
+    args = revokeNamespacePermission_args()
+    args.login = login
+    args.user = user
+    args.namespaceName = namespaceName
+    args.perm = perm
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_revokeNamespacePermission(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = revokeNamespacePermission_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    return
+
   def createBatchScanner(self, login, tableName, options):
     """
     Parameters:
@@ -3010,15 +3384,16 @@
     self._oprot.trans.flush()
 
   def recv_createBatchScanner(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = createBatchScanner_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -3027,7 +3402,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "createBatchScanner failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "createBatchScanner failed: unknown result")
 
   def createScanner(self, login, tableName, options):
     """
@@ -3050,15 +3425,16 @@
     self._oprot.trans.flush()
 
   def recv_createScanner(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = createScanner_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -3067,7 +3443,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "createScanner failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "createScanner failed: unknown result")
 
   def hasNext(self, scanner):
     """
@@ -3086,20 +3462,21 @@
     self._oprot.trans.flush()
 
   def recv_hasNext(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = hasNext_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
       raise result.ouch1
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "hasNext failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "hasNext failed: unknown result")
 
   def nextEntry(self, scanner):
     """
@@ -3118,15 +3495,16 @@
     self._oprot.trans.flush()
 
   def recv_nextEntry(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = nextEntry_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -3135,7 +3513,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "nextEntry failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "nextEntry failed: unknown result")
 
   def nextK(self, scanner, k):
     """
@@ -3156,15 +3534,16 @@
     self._oprot.trans.flush()
 
   def recv_nextK(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = nextK_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -3173,7 +3552,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "nextK failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "nextK failed: unknown result")
 
   def closeScanner(self, scanner):
     """
@@ -3192,15 +3571,16 @@
     self._oprot.trans.flush()
 
   def recv_closeScanner(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = closeScanner_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     return
@@ -3226,15 +3606,16 @@
     self._oprot.trans.flush()
 
   def recv_updateAndFlush(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = updateAndFlush_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.outch1 is not None:
       raise result.outch1
     if result.ouch2 is not None:
@@ -3266,15 +3647,16 @@
     self._oprot.trans.flush()
 
   def recv_createWriter(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = createWriter_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.outch1 is not None:
@@ -3283,7 +3665,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "createWriter failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "createWriter failed: unknown result")
 
   def update(self, writer, cells):
     """
@@ -3294,7 +3676,7 @@
     self.send_update(writer, cells)
 
   def send_update(self, writer, cells):
-    self._oprot.writeMessageBegin('update', TMessageType.CALL, self._seqid)
+    self._oprot.writeMessageBegin('update', TMessageType.ONEWAY, self._seqid)
     args = update_args()
     args.writer = writer
     args.cells = cells
@@ -3318,15 +3700,16 @@
     self._oprot.trans.flush()
 
   def recv_flush(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = flush_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -3350,15 +3733,16 @@
     self._oprot.trans.flush()
 
   def recv_closeWriter(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = closeWriter_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.ouch1 is not None:
       raise result.ouch1
     if result.ouch2 is not None:
@@ -3388,15 +3772,16 @@
     self._oprot.trans.flush()
 
   def recv_updateRowConditionally(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = updateRowConditionally_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -3405,7 +3790,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "updateRowConditionally failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "updateRowConditionally failed: unknown result")
 
   def createConditionalWriter(self, login, tableName, options):
     """
@@ -3428,15 +3813,16 @@
     self._oprot.trans.flush()
 
   def recv_createConditionalWriter(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = createConditionalWriter_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -3445,7 +3831,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "createConditionalWriter failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "createConditionalWriter failed: unknown result")
 
   def updateRowsConditionally(self, conditionalWriter, updates):
     """
@@ -3466,15 +3852,16 @@
     self._oprot.trans.flush()
 
   def recv_updateRowsConditionally(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = updateRowsConditionally_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
     if result.ouch1 is not None:
@@ -3483,7 +3870,7 @@
       raise result.ouch2
     if result.ouch3 is not None:
       raise result.ouch3
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "updateRowsConditionally failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "updateRowsConditionally failed: unknown result")
 
   def closeConditionalWriter(self, conditionalWriter):
     """
@@ -3502,15 +3889,16 @@
     self._oprot.trans.flush()
 
   def recv_closeConditionalWriter(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = closeConditionalWriter_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     return
 
   def getRowRange(self, row):
@@ -3530,18 +3918,19 @@
     self._oprot.trans.flush()
 
   def recv_getRowRange(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getRowRange_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getRowRange failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getRowRange failed: unknown result")
 
   def getFollowing(self, key, part):
     """
@@ -3562,18 +3951,781 @@
     self._oprot.trans.flush()
 
   def recv_getFollowing(self):
-    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
     if mtype == TMessageType.EXCEPTION:
       x = TApplicationException()
-      x.read(self._iprot)
-      self._iprot.readMessageEnd()
+      x.read(iprot)
+      iprot.readMessageEnd()
       raise x
     result = getFollowing_result()
-    result.read(self._iprot)
-    self._iprot.readMessageEnd()
+    result.read(iprot)
+    iprot.readMessageEnd()
     if result.success is not None:
       return result.success
-    raise TApplicationException(TApplicationException.MISSING_RESULT, "getFollowing failed: unknown result");
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getFollowing failed: unknown result")
+
+  def systemNamespace(self):
+    self.send_systemNamespace()
+    return self.recv_systemNamespace()
+
+  def send_systemNamespace(self):
+    self._oprot.writeMessageBegin('systemNamespace', TMessageType.CALL, self._seqid)
+    args = systemNamespace_args()
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_systemNamespace(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = systemNamespace_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "systemNamespace failed: unknown result")
+
+  def defaultNamespace(self):
+    self.send_defaultNamespace()
+    return self.recv_defaultNamespace()
+
+  def send_defaultNamespace(self):
+    self._oprot.writeMessageBegin('defaultNamespace', TMessageType.CALL, self._seqid)
+    args = defaultNamespace_args()
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_defaultNamespace(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = defaultNamespace_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "defaultNamespace failed: unknown result")
+
+  def listNamespaces(self, login):
+    """
+    Parameters:
+     - login
+    """
+    self.send_listNamespaces(login)
+    return self.recv_listNamespaces()
+
+  def send_listNamespaces(self, login):
+    self._oprot.writeMessageBegin('listNamespaces', TMessageType.CALL, self._seqid)
+    args = listNamespaces_args()
+    args.login = login
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_listNamespaces(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = listNamespaces_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listNamespaces failed: unknown result")
+
+  def namespaceExists(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    self.send_namespaceExists(login, namespaceName)
+    return self.recv_namespaceExists()
+
+  def send_namespaceExists(self, login, namespaceName):
+    self._oprot.writeMessageBegin('namespaceExists', TMessageType.CALL, self._seqid)
+    args = namespaceExists_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_namespaceExists(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = namespaceExists_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "namespaceExists failed: unknown result")
+
+  def createNamespace(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    self.send_createNamespace(login, namespaceName)
+    self.recv_createNamespace()
+
+  def send_createNamespace(self, login, namespaceName):
+    self._oprot.writeMessageBegin('createNamespace', TMessageType.CALL, self._seqid)
+    args = createNamespace_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_createNamespace(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = createNamespace_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    return
+
+  def deleteNamespace(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    self.send_deleteNamespace(login, namespaceName)
+    self.recv_deleteNamespace()
+
+  def send_deleteNamespace(self, login, namespaceName):
+    self._oprot.writeMessageBegin('deleteNamespace', TMessageType.CALL, self._seqid)
+    args = deleteNamespace_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_deleteNamespace(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = deleteNamespace_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    if result.ouch4 is not None:
+      raise result.ouch4
+    return
+
+  def renameNamespace(self, login, oldNamespaceName, newNamespaceName):
+    """
+    Parameters:
+     - login
+     - oldNamespaceName
+     - newNamespaceName
+    """
+    self.send_renameNamespace(login, oldNamespaceName, newNamespaceName)
+    self.recv_renameNamespace()
+
+  def send_renameNamespace(self, login, oldNamespaceName, newNamespaceName):
+    self._oprot.writeMessageBegin('renameNamespace', TMessageType.CALL, self._seqid)
+    args = renameNamespace_args()
+    args.login = login
+    args.oldNamespaceName = oldNamespaceName
+    args.newNamespaceName = newNamespaceName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_renameNamespace(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = renameNamespace_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    if result.ouch4 is not None:
+      raise result.ouch4
+    return
+
+  def setNamespaceProperty(self, login, namespaceName, property, value):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - property
+     - value
+    """
+    self.send_setNamespaceProperty(login, namespaceName, property, value)
+    self.recv_setNamespaceProperty()
+
+  def send_setNamespaceProperty(self, login, namespaceName, property, value):
+    self._oprot.writeMessageBegin('setNamespaceProperty', TMessageType.CALL, self._seqid)
+    args = setNamespaceProperty_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.property = property
+    args.value = value
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_setNamespaceProperty(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = setNamespaceProperty_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    return
+
+  def removeNamespaceProperty(self, login, namespaceName, property):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - property
+    """
+    self.send_removeNamespaceProperty(login, namespaceName, property)
+    self.recv_removeNamespaceProperty()
+
+  def send_removeNamespaceProperty(self, login, namespaceName, property):
+    self._oprot.writeMessageBegin('removeNamespaceProperty', TMessageType.CALL, self._seqid)
+    args = removeNamespaceProperty_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.property = property
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_removeNamespaceProperty(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = removeNamespaceProperty_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    return
+
+  def getNamespaceProperties(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    self.send_getNamespaceProperties(login, namespaceName)
+    return self.recv_getNamespaceProperties()
+
+  def send_getNamespaceProperties(self, login, namespaceName):
+    self._oprot.writeMessageBegin('getNamespaceProperties', TMessageType.CALL, self._seqid)
+    args = getNamespaceProperties_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_getNamespaceProperties(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = getNamespaceProperties_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getNamespaceProperties failed: unknown result")
+
+  def namespaceIdMap(self, login):
+    """
+    Parameters:
+     - login
+    """
+    self.send_namespaceIdMap(login)
+    return self.recv_namespaceIdMap()
+
+  def send_namespaceIdMap(self, login):
+    self._oprot.writeMessageBegin('namespaceIdMap', TMessageType.CALL, self._seqid)
+    args = namespaceIdMap_args()
+    args.login = login
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_namespaceIdMap(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = namespaceIdMap_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "namespaceIdMap failed: unknown result")
+
+  def attachNamespaceIterator(self, login, namespaceName, setting, scopes):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - setting
+     - scopes
+    """
+    self.send_attachNamespaceIterator(login, namespaceName, setting, scopes)
+    self.recv_attachNamespaceIterator()
+
+  def send_attachNamespaceIterator(self, login, namespaceName, setting, scopes):
+    self._oprot.writeMessageBegin('attachNamespaceIterator', TMessageType.CALL, self._seqid)
+    args = attachNamespaceIterator_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.setting = setting
+    args.scopes = scopes
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_attachNamespaceIterator(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = attachNamespaceIterator_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    return
+
+  def removeNamespaceIterator(self, login, namespaceName, name, scopes):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - name
+     - scopes
+    """
+    self.send_removeNamespaceIterator(login, namespaceName, name, scopes)
+    self.recv_removeNamespaceIterator()
+
+  def send_removeNamespaceIterator(self, login, namespaceName, name, scopes):
+    self._oprot.writeMessageBegin('removeNamespaceIterator', TMessageType.CALL, self._seqid)
+    args = removeNamespaceIterator_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.name = name
+    args.scopes = scopes
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_removeNamespaceIterator(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = removeNamespaceIterator_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    return
+
+  def getNamespaceIteratorSetting(self, login, namespaceName, name, scope):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - name
+     - scope
+    """
+    self.send_getNamespaceIteratorSetting(login, namespaceName, name, scope)
+    return self.recv_getNamespaceIteratorSetting()
+
+  def send_getNamespaceIteratorSetting(self, login, namespaceName, name, scope):
+    self._oprot.writeMessageBegin('getNamespaceIteratorSetting', TMessageType.CALL, self._seqid)
+    args = getNamespaceIteratorSetting_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.name = name
+    args.scope = scope
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_getNamespaceIteratorSetting(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = getNamespaceIteratorSetting_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "getNamespaceIteratorSetting failed: unknown result")
+
+  def listNamespaceIterators(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    self.send_listNamespaceIterators(login, namespaceName)
+    return self.recv_listNamespaceIterators()
+
+  def send_listNamespaceIterators(self, login, namespaceName):
+    self._oprot.writeMessageBegin('listNamespaceIterators', TMessageType.CALL, self._seqid)
+    args = listNamespaceIterators_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_listNamespaceIterators(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = listNamespaceIterators_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listNamespaceIterators failed: unknown result")
+
+  def checkNamespaceIteratorConflicts(self, login, namespaceName, setting, scopes):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - setting
+     - scopes
+    """
+    self.send_checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes)
+    self.recv_checkNamespaceIteratorConflicts()
+
+  def send_checkNamespaceIteratorConflicts(self, login, namespaceName, setting, scopes):
+    self._oprot.writeMessageBegin('checkNamespaceIteratorConflicts', TMessageType.CALL, self._seqid)
+    args = checkNamespaceIteratorConflicts_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.setting = setting
+    args.scopes = scopes
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_checkNamespaceIteratorConflicts(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = checkNamespaceIteratorConflicts_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    return
+
+  def addNamespaceConstraint(self, login, namespaceName, constraintClassName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - constraintClassName
+    """
+    self.send_addNamespaceConstraint(login, namespaceName, constraintClassName)
+    return self.recv_addNamespaceConstraint()
+
+  def send_addNamespaceConstraint(self, login, namespaceName, constraintClassName):
+    self._oprot.writeMessageBegin('addNamespaceConstraint', TMessageType.CALL, self._seqid)
+    args = addNamespaceConstraint_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.constraintClassName = constraintClassName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_addNamespaceConstraint(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = addNamespaceConstraint_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "addNamespaceConstraint failed: unknown result")
+
+  def removeNamespaceConstraint(self, login, namespaceName, id):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - id
+    """
+    self.send_removeNamespaceConstraint(login, namespaceName, id)
+    self.recv_removeNamespaceConstraint()
+
+  def send_removeNamespaceConstraint(self, login, namespaceName, id):
+    self._oprot.writeMessageBegin('removeNamespaceConstraint', TMessageType.CALL, self._seqid)
+    args = removeNamespaceConstraint_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.id = id
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_removeNamespaceConstraint(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = removeNamespaceConstraint_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    return
+
+  def listNamespaceConstraints(self, login, namespaceName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+    """
+    self.send_listNamespaceConstraints(login, namespaceName)
+    return self.recv_listNamespaceConstraints()
+
+  def send_listNamespaceConstraints(self, login, namespaceName):
+    self._oprot.writeMessageBegin('listNamespaceConstraints', TMessageType.CALL, self._seqid)
+    args = listNamespaceConstraints_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_listNamespaceConstraints(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = listNamespaceConstraints_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "listNamespaceConstraints failed: unknown result")
+
+  def testNamespaceClassLoad(self, login, namespaceName, className, asTypeName):
+    """
+    Parameters:
+     - login
+     - namespaceName
+     - className
+     - asTypeName
+    """
+    self.send_testNamespaceClassLoad(login, namespaceName, className, asTypeName)
+    return self.recv_testNamespaceClassLoad()
+
+  def send_testNamespaceClassLoad(self, login, namespaceName, className, asTypeName):
+    self._oprot.writeMessageBegin('testNamespaceClassLoad', TMessageType.CALL, self._seqid)
+    args = testNamespaceClassLoad_args()
+    args.login = login
+    args.namespaceName = namespaceName
+    args.className = className
+    args.asTypeName = asTypeName
+    args.write(self._oprot)
+    self._oprot.writeMessageEnd()
+    self._oprot.trans.flush()
+
+  def recv_testNamespaceClassLoad(self):
+    iprot = self._iprot
+    (fname, mtype, rseqid) = iprot.readMessageBegin()
+    if mtype == TMessageType.EXCEPTION:
+      x = TApplicationException()
+      x.read(iprot)
+      iprot.readMessageEnd()
+      raise x
+    result = testNamespaceClassLoad_result()
+    result.read(iprot)
+    iprot.readMessageEnd()
+    if result.success is not None:
+      return result.success
+    if result.ouch1 is not None:
+      raise result.ouch1
+    if result.ouch2 is not None:
+      raise result.ouch2
+    if result.ouch3 is not None:
+      raise result.ouch3
+    raise TApplicationException(TApplicationException.MISSING_RESULT, "testNamespaceClassLoad failed: unknown result")
 
 
 class Processor(Iface, TProcessor):
@@ -3640,6 +4792,9 @@
     self._processMap["listLocalUsers"] = Processor.process_listLocalUsers
     self._processMap["revokeSystemPermission"] = Processor.process_revokeSystemPermission
     self._processMap["revokeTablePermission"] = Processor.process_revokeTablePermission
+    self._processMap["grantNamespacePermission"] = Processor.process_grantNamespacePermission
+    self._processMap["hasNamespacePermission"] = Processor.process_hasNamespacePermission
+    self._processMap["revokeNamespacePermission"] = Processor.process_revokeNamespacePermission
     self._processMap["createBatchScanner"] = Processor.process_createBatchScanner
     self._processMap["createScanner"] = Processor.process_createScanner
     self._processMap["hasNext"] = Processor.process_hasNext
@@ -3657,6 +4812,26 @@
     self._processMap["closeConditionalWriter"] = Processor.process_closeConditionalWriter
     self._processMap["getRowRange"] = Processor.process_getRowRange
     self._processMap["getFollowing"] = Processor.process_getFollowing
+    self._processMap["systemNamespace"] = Processor.process_systemNamespace
+    self._processMap["defaultNamespace"] = Processor.process_defaultNamespace
+    self._processMap["listNamespaces"] = Processor.process_listNamespaces
+    self._processMap["namespaceExists"] = Processor.process_namespaceExists
+    self._processMap["createNamespace"] = Processor.process_createNamespace
+    self._processMap["deleteNamespace"] = Processor.process_deleteNamespace
+    self._processMap["renameNamespace"] = Processor.process_renameNamespace
+    self._processMap["setNamespaceProperty"] = Processor.process_setNamespaceProperty
+    self._processMap["removeNamespaceProperty"] = Processor.process_removeNamespaceProperty
+    self._processMap["getNamespaceProperties"] = Processor.process_getNamespaceProperties
+    self._processMap["namespaceIdMap"] = Processor.process_namespaceIdMap
+    self._processMap["attachNamespaceIterator"] = Processor.process_attachNamespaceIterator
+    self._processMap["removeNamespaceIterator"] = Processor.process_removeNamespaceIterator
+    self._processMap["getNamespaceIteratorSetting"] = Processor.process_getNamespaceIteratorSetting
+    self._processMap["listNamespaceIterators"] = Processor.process_listNamespaceIterators
+    self._processMap["checkNamespaceIteratorConflicts"] = Processor.process_checkNamespaceIteratorConflicts
+    self._processMap["addNamespaceConstraint"] = Processor.process_addNamespaceConstraint
+    self._processMap["removeNamespaceConstraint"] = Processor.process_removeNamespaceConstraint
+    self._processMap["listNamespaceConstraints"] = Processor.process_listNamespaceConstraints
+    self._processMap["testNamespaceClassLoad"] = Processor.process_testNamespaceClassLoad
 
   def process(self, iprot, oprot):
     (name, type, seqid) = iprot.readMessageBegin()
@@ -3680,9 +4855,17 @@
     result = login_result()
     try:
       result.success = self._handler.login(args.principal, args.loginProperties)
-    except AccumuloSecurityException, ouch2:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("login", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("login", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3694,13 +4877,23 @@
     result = addConstraint_result()
     try:
       result.success = self._handler.addConstraint(args.login, args.tableName, args.constraintClassName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("addConstraint", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("addConstraint", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3712,13 +4905,23 @@
     result = addSplits_result()
     try:
       self._handler.addSplits(args.login, args.tableName, args.splits)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("addSplits", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("addSplits", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3730,13 +4933,23 @@
     result = attachIterator_result()
     try:
       self._handler.attachIterator(args.login, args.tableName, args.setting, args.scopes)
-    except AccumuloSecurityException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloSecurityException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloException, ouch2:
+    except AccumuloException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("attachIterator", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("attachIterator", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3748,13 +4961,23 @@
     result = checkIteratorConflicts_result()
     try:
       self._handler.checkIteratorConflicts(args.login, args.tableName, args.setting, args.scopes)
-    except AccumuloSecurityException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloSecurityException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloException, ouch2:
+    except AccumuloException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("checkIteratorConflicts", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("checkIteratorConflicts", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3766,9 +4989,17 @@
     result = clearLocatorCache_result()
     try:
       self._handler.clearLocatorCache(args.login, args.tableName)
-    except TableNotFoundException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except TableNotFoundException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    oprot.writeMessageBegin("clearLocatorCache", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("clearLocatorCache", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3780,15 +5011,26 @@
     result = cloneTable_result()
     try:
       self._handler.cloneTable(args.login, args.tableName, args.newTableName, args.flush, args.propertiesToSet, args.propertiesToExclude)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    except TableExistsException, ouch4:
+    except TableExistsException as ouch4:
+      msg_type = TMessageType.REPLY
       result.ouch4 = ouch4
-    oprot.writeMessageBegin("cloneTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("cloneTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3800,13 +5042,23 @@
     result = compactTable_result()
     try:
       self._handler.compactTable(args.login, args.tableName, args.startRow, args.endRow, args.iterators, args.flush, args.wait, args.compactionStrategy)
-    except AccumuloSecurityException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloSecurityException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except TableNotFoundException, ouch2:
+    except TableNotFoundException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except AccumuloException, ouch3:
+    except AccumuloException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("compactTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("compactTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3818,13 +5070,23 @@
     result = cancelCompaction_result()
     try:
       self._handler.cancelCompaction(args.login, args.tableName)
-    except AccumuloSecurityException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloSecurityException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except TableNotFoundException, ouch2:
+    except TableNotFoundException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except AccumuloException, ouch3:
+    except AccumuloException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("cancelCompaction", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("cancelCompaction", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3836,13 +5098,23 @@
     result = createTable_result()
     try:
       self._handler.createTable(args.login, args.tableName, args.versioningIter, args.type)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableExistsException, ouch3:
+    except TableExistsException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("createTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("createTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3854,13 +5126,23 @@
     result = deleteTable_result()
     try:
       self._handler.deleteTable(args.login, args.tableName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("deleteTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("deleteTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3872,13 +5154,23 @@
     result = deleteRows_result()
     try:
       self._handler.deleteRows(args.login, args.tableName, args.startRow, args.endRow)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("deleteRows", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("deleteRows", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3890,13 +5182,23 @@
     result = exportTable_result()
     try:
       self._handler.exportTable(args.login, args.tableName, args.exportDir)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("exportTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("exportTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3908,13 +5210,23 @@
     result = flushTable_result()
     try:
       self._handler.flushTable(args.login, args.tableName, args.startRow, args.endRow, args.wait)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("flushTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("flushTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3926,13 +5238,23 @@
     result = getDiskUsage_result()
     try:
       result.success = self._handler.getDiskUsage(args.login, args.tables)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("getDiskUsage", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getDiskUsage", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3944,13 +5266,23 @@
     result = getLocalityGroups_result()
     try:
       result.success = self._handler.getLocalityGroups(args.login, args.tableName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("getLocalityGroups", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getLocalityGroups", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3962,13 +5294,23 @@
     result = getIteratorSetting_result()
     try:
       result.success = self._handler.getIteratorSetting(args.login, args.tableName, args.iteratorName, args.scope)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("getIteratorSetting", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getIteratorSetting", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3980,13 +5322,23 @@
     result = getMaxRow_result()
     try:
       result.success = self._handler.getMaxRow(args.login, args.tableName, args.auths, args.startRow, args.startInclusive, args.endRow, args.endInclusive)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("getMaxRow", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getMaxRow", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -3998,13 +5350,23 @@
     result = getTableProperties_result()
     try:
       result.success = self._handler.getTableProperties(args.login, args.tableName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("getTableProperties", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getTableProperties", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4016,13 +5378,23 @@
     result = importDirectory_result()
     try:
       self._handler.importDirectory(args.login, args.tableName, args.importDir, args.failureDir, args.setTime)
-    except TableNotFoundException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except TableNotFoundException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloException, ouch3:
+    except AccumuloException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    except AccumuloSecurityException, ouch4:
+    except AccumuloSecurityException as ouch4:
+      msg_type = TMessageType.REPLY
       result.ouch4 = ouch4
-    oprot.writeMessageBegin("importDirectory", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("importDirectory", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4034,13 +5406,23 @@
     result = importTable_result()
     try:
       self._handler.importTable(args.login, args.tableName, args.importDir)
-    except TableExistsException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except TableExistsException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloException, ouch2:
+    except AccumuloException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except AccumuloSecurityException, ouch3:
+    except AccumuloSecurityException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("importTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("importTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4052,13 +5434,23 @@
     result = listSplits_result()
     try:
       result.success = self._handler.listSplits(args.login, args.tableName, args.maxSplits)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("listSplits", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listSplits", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4068,8 +5460,16 @@
     args.read(iprot)
     iprot.readMessageEnd()
     result = listTables_result()
-    result.success = self._handler.listTables(args.login)
-    oprot.writeMessageBegin("listTables", TMessageType.REPLY, seqid)
+    try:
+      result.success = self._handler.listTables(args.login)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listTables", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4081,13 +5481,23 @@
     result = listIterators_result()
     try:
       result.success = self._handler.listIterators(args.login, args.tableName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("listIterators", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listIterators", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4099,13 +5509,23 @@
     result = listConstraints_result()
     try:
       result.success = self._handler.listConstraints(args.login, args.tableName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("listConstraints", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listConstraints", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4117,13 +5537,23 @@
     result = mergeTablets_result()
     try:
       self._handler.mergeTablets(args.login, args.tableName, args.startRow, args.endRow)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("mergeTablets", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("mergeTablets", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4135,13 +5565,23 @@
     result = offlineTable_result()
     try:
       self._handler.offlineTable(args.login, args.tableName, args.wait)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("offlineTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("offlineTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4153,13 +5593,23 @@
     result = onlineTable_result()
     try:
       self._handler.onlineTable(args.login, args.tableName, args.wait)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("onlineTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("onlineTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4171,13 +5621,23 @@
     result = removeConstraint_result()
     try:
       self._handler.removeConstraint(args.login, args.tableName, args.constraint)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("removeConstraint", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("removeConstraint", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4189,13 +5649,23 @@
     result = removeIterator_result()
     try:
       self._handler.removeIterator(args.login, args.tableName, args.iterName, args.scopes)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("removeIterator", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("removeIterator", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4207,13 +5677,23 @@
     result = removeTableProperty_result()
     try:
       self._handler.removeTableProperty(args.login, args.tableName, args.property)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("removeTableProperty", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("removeTableProperty", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4225,15 +5705,26 @@
     result = renameTable_result()
     try:
       self._handler.renameTable(args.login, args.oldTableName, args.newTableName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    except TableExistsException, ouch4:
+    except TableExistsException as ouch4:
+      msg_type = TMessageType.REPLY
       result.ouch4 = ouch4
-    oprot.writeMessageBegin("renameTable", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("renameTable", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4245,13 +5736,23 @@
     result = setLocalityGroups_result()
     try:
       self._handler.setLocalityGroups(args.login, args.tableName, args.groups)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("setLocalityGroups", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("setLocalityGroups", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4263,13 +5764,23 @@
     result = setTableProperty_result()
     try:
       self._handler.setTableProperty(args.login, args.tableName, args.property, args.value)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("setTableProperty", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("setTableProperty", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4281,13 +5792,23 @@
     result = splitRangeByTablets_result()
     try:
       result.success = self._handler.splitRangeByTablets(args.login, args.tableName, args.range, args.maxSplits)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("splitRangeByTablets", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("splitRangeByTablets", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4297,8 +5818,16 @@
     args.read(iprot)
     iprot.readMessageEnd()
     result = tableExists_result()
-    result.success = self._handler.tableExists(args.login, args.tableName)
-    oprot.writeMessageBegin("tableExists", TMessageType.REPLY, seqid)
+    try:
+      result.success = self._handler.tableExists(args.login, args.tableName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("tableExists", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4308,8 +5837,16 @@
     args.read(iprot)
     iprot.readMessageEnd()
     result = tableIdMap_result()
-    result.success = self._handler.tableIdMap(args.login)
-    oprot.writeMessageBegin("tableIdMap", TMessageType.REPLY, seqid)
+    try:
+      result.success = self._handler.tableIdMap(args.login)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("tableIdMap", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4321,13 +5858,23 @@
     result = testTableClassLoad_result()
     try:
       result.success = self._handler.testTableClassLoad(args.login, args.tableName, args.className, args.asTypeName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("testTableClassLoad", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("testTableClassLoad", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4339,11 +5886,20 @@
     result = pingTabletServer_result()
     try:
       self._handler.pingTabletServer(args.login, args.tserver)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("pingTabletServer", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("pingTabletServer", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4355,11 +5911,20 @@
     result = getActiveScans_result()
     try:
       result.success = self._handler.getActiveScans(args.login, args.tserver)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("getActiveScans", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getActiveScans", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4371,11 +5936,20 @@
     result = getActiveCompactions_result()
     try:
       result.success = self._handler.getActiveCompactions(args.login, args.tserver)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("getActiveCompactions", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getActiveCompactions", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4387,11 +5961,20 @@
     result = getSiteConfiguration_result()
     try:
       result.success = self._handler.getSiteConfiguration(args.login)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("getSiteConfiguration", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getSiteConfiguration", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4403,11 +5986,20 @@
     result = getSystemConfiguration_result()
     try:
       result.success = self._handler.getSystemConfiguration(args.login)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("getSystemConfiguration", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getSystemConfiguration", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4417,8 +6009,16 @@
     args.read(iprot)
     iprot.readMessageEnd()
     result = getTabletServers_result()
-    result.success = self._handler.getTabletServers(args.login)
-    oprot.writeMessageBegin("getTabletServers", TMessageType.REPLY, seqid)
+    try:
+      result.success = self._handler.getTabletServers(args.login)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getTabletServers", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4430,11 +6030,20 @@
     result = removeProperty_result()
     try:
       self._handler.removeProperty(args.login, args.property)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("removeProperty", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("removeProperty", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4446,11 +6055,20 @@
     result = setProperty_result()
     try:
       self._handler.setProperty(args.login, args.property, args.value)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("setProperty", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("setProperty", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4462,11 +6080,20 @@
     result = testClassLoad_result()
     try:
       result.success = self._handler.testClassLoad(args.login, args.className, args.asTypeName)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("testClassLoad", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("testClassLoad", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4478,11 +6105,20 @@
     result = authenticateUser_result()
     try:
       result.success = self._handler.authenticateUser(args.login, args.user, args.properties)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("authenticateUser", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("authenticateUser", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4494,11 +6130,20 @@
     result = changeUserAuthorizations_result()
     try:
       self._handler.changeUserAuthorizations(args.login, args.user, args.authorizations)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("changeUserAuthorizations", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("changeUserAuthorizations", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4510,11 +6155,20 @@
     result = changeLocalUserPassword_result()
     try:
       self._handler.changeLocalUserPassword(args.login, args.user, args.password)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("changeLocalUserPassword", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("changeLocalUserPassword", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4526,11 +6180,20 @@
     result = createLocalUser_result()
     try:
       self._handler.createLocalUser(args.login, args.user, args.password)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("createLocalUser", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("createLocalUser", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4542,11 +6205,20 @@
     result = dropLocalUser_result()
     try:
       self._handler.dropLocalUser(args.login, args.user)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("dropLocalUser", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("dropLocalUser", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4558,11 +6230,20 @@
     result = getUserAuthorizations_result()
     try:
       result.success = self._handler.getUserAuthorizations(args.login, args.user)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("getUserAuthorizations", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getUserAuthorizations", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4574,11 +6255,20 @@
     result = grantSystemPermission_result()
     try:
       self._handler.grantSystemPermission(args.login, args.user, args.perm)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("grantSystemPermission", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("grantSystemPermission", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4590,13 +6280,23 @@
     result = grantTablePermission_result()
     try:
       self._handler.grantTablePermission(args.login, args.user, args.table, args.perm)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("grantTablePermission", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("grantTablePermission", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4608,11 +6308,20 @@
     result = hasSystemPermission_result()
     try:
       result.success = self._handler.hasSystemPermission(args.login, args.user, args.perm)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("hasSystemPermission", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("hasSystemPermission", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4624,13 +6333,23 @@
     result = hasTablePermission_result()
     try:
       result.success = self._handler.hasTablePermission(args.login, args.user, args.table, args.perm)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("hasTablePermission", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("hasTablePermission", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4642,13 +6361,23 @@
     result = listLocalUsers_result()
     try:
       result.success = self._handler.listLocalUsers(args.login)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("listLocalUsers", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listLocalUsers", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4660,11 +6389,20 @@
     result = revokeSystemPermission_result()
     try:
       self._handler.revokeSystemPermission(args.login, args.user, args.perm)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("revokeSystemPermission", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("revokeSystemPermission", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4676,13 +6414,98 @@
     result = revokeTablePermission_result()
     try:
       self._handler.revokeTablePermission(args.login, args.user, args.table, args.perm)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("revokeTablePermission", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("revokeTablePermission", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_grantNamespacePermission(self, seqid, iprot, oprot):
+    args = grantNamespacePermission_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = grantNamespacePermission_result()
+    try:
+      self._handler.grantNamespacePermission(args.login, args.user, args.namespaceName, args.perm)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("grantNamespacePermission", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_hasNamespacePermission(self, seqid, iprot, oprot):
+    args = hasNamespacePermission_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = hasNamespacePermission_result()
+    try:
+      result.success = self._handler.hasNamespacePermission(args.login, args.user, args.namespaceName, args.perm)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("hasNamespacePermission", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_revokeNamespacePermission(self, seqid, iprot, oprot):
+    args = revokeNamespacePermission_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = revokeNamespacePermission_result()
+    try:
+      self._handler.revokeNamespacePermission(args.login, args.user, args.namespaceName, args.perm)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("revokeNamespacePermission", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4694,13 +6517,23 @@
     result = createBatchScanner_result()
     try:
       result.success = self._handler.createBatchScanner(args.login, args.tableName, args.options)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("createBatchScanner", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("createBatchScanner", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4712,13 +6545,23 @@
     result = createScanner_result()
     try:
       result.success = self._handler.createScanner(args.login, args.tableName, args.options)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("createScanner", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("createScanner", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4730,9 +6573,17 @@
     result = hasNext_result()
     try:
       result.success = self._handler.hasNext(args.scanner)
-    except UnknownScanner, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except UnknownScanner as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    oprot.writeMessageBegin("hasNext", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("hasNext", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4744,13 +6595,23 @@
     result = nextEntry_result()
     try:
       result.success = self._handler.nextEntry(args.scanner)
-    except NoMoreEntriesException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except NoMoreEntriesException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except UnknownScanner, ouch2:
+    except UnknownScanner as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except AccumuloSecurityException, ouch3:
+    except AccumuloSecurityException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("nextEntry", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("nextEntry", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4762,13 +6623,23 @@
     result = nextK_result()
     try:
       result.success = self._handler.nextK(args.scanner, args.k)
-    except NoMoreEntriesException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except NoMoreEntriesException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except UnknownScanner, ouch2:
+    except UnknownScanner as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except AccumuloSecurityException, ouch3:
+    except AccumuloSecurityException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("nextK", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("nextK", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4780,9 +6651,17 @@
     result = closeScanner_result()
     try:
       self._handler.closeScanner(args.scanner)
-    except UnknownScanner, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except UnknownScanner as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    oprot.writeMessageBegin("closeScanner", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("closeScanner", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4794,15 +6673,26 @@
     result = updateAndFlush_result()
     try:
       self._handler.updateAndFlush(args.login, args.tableName, args.cells)
-    except AccumuloException, outch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as outch1:
+      msg_type = TMessageType.REPLY
       result.outch1 = outch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    except MutationsRejectedException, ouch4:
+    except MutationsRejectedException as ouch4:
+      msg_type = TMessageType.REPLY
       result.ouch4 = ouch4
-    oprot.writeMessageBegin("updateAndFlush", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("updateAndFlush", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4814,13 +6704,23 @@
     result = createWriter_result()
     try:
       result.success = self._handler.createWriter(args.login, args.tableName, args.opts)
-    except AccumuloException, outch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as outch1:
+      msg_type = TMessageType.REPLY
       result.outch1 = outch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("createWriter", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("createWriter", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4829,8 +6729,13 @@
     args = update_args()
     args.read(iprot)
     iprot.readMessageEnd()
-    self._handler.update(args.writer, args.cells)
-    return
+    try:
+      self._handler.update(args.writer, args.cells)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      logging.exception(ex)
 
   def process_flush(self, seqid, iprot, oprot):
     args = flush_args()
@@ -4839,11 +6744,20 @@
     result = flush_result()
     try:
       self._handler.flush(args.writer)
-    except UnknownWriter, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except UnknownWriter as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except MutationsRejectedException, ouch2:
+    except MutationsRejectedException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("flush", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("flush", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4855,11 +6769,20 @@
     result = closeWriter_result()
     try:
       self._handler.closeWriter(args.writer)
-    except UnknownWriter, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except UnknownWriter as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except MutationsRejectedException, ouch2:
+    except MutationsRejectedException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    oprot.writeMessageBegin("closeWriter", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("closeWriter", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4871,13 +6794,23 @@
     result = updateRowConditionally_result()
     try:
       result.success = self._handler.updateRowConditionally(args.login, args.tableName, args.row, args.updates)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("updateRowConditionally", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("updateRowConditionally", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4889,13 +6822,23 @@
     result = createConditionalWriter_result()
     try:
       result.success = self._handler.createConditionalWriter(args.login, args.tableName, args.options)
-    except AccumuloException, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloSecurityException, ouch2:
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except TableNotFoundException, ouch3:
+    except TableNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("createConditionalWriter", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("createConditionalWriter", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4907,13 +6850,23 @@
     result = updateRowsConditionally_result()
     try:
       result.success = self._handler.updateRowsConditionally(args.conditionalWriter, args.updates)
-    except UnknownWriter, ouch1:
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except UnknownWriter as ouch1:
+      msg_type = TMessageType.REPLY
       result.ouch1 = ouch1
-    except AccumuloException, ouch2:
+    except AccumuloException as ouch2:
+      msg_type = TMessageType.REPLY
       result.ouch2 = ouch2
-    except AccumuloSecurityException, ouch3:
+    except AccumuloSecurityException as ouch3:
+      msg_type = TMessageType.REPLY
       result.ouch3 = ouch3
-    oprot.writeMessageBegin("updateRowsConditionally", TMessageType.REPLY, seqid)
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("updateRowsConditionally", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4923,8 +6876,16 @@
     args.read(iprot)
     iprot.readMessageEnd()
     result = closeConditionalWriter_result()
-    self._handler.closeConditionalWriter(args.conditionalWriter)
-    oprot.writeMessageBegin("closeConditionalWriter", TMessageType.REPLY, seqid)
+    try:
+      self._handler.closeConditionalWriter(args.conditionalWriter)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("closeConditionalWriter", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4934,8 +6895,16 @@
     args.read(iprot)
     iprot.readMessageEnd()
     result = getRowRange_result()
-    result.success = self._handler.getRowRange(args.row)
-    oprot.writeMessageBegin("getRowRange", TMessageType.REPLY, seqid)
+    try:
+      result.success = self._handler.getRowRange(args.row)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getRowRange", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4945,8 +6914,555 @@
     args.read(iprot)
     iprot.readMessageEnd()
     result = getFollowing_result()
-    result.success = self._handler.getFollowing(args.key, args.part)
-    oprot.writeMessageBegin("getFollowing", TMessageType.REPLY, seqid)
+    try:
+      result.success = self._handler.getFollowing(args.key, args.part)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getFollowing", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_systemNamespace(self, seqid, iprot, oprot):
+    args = systemNamespace_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = systemNamespace_result()
+    try:
+      result.success = self._handler.systemNamespace()
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("systemNamespace", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_defaultNamespace(self, seqid, iprot, oprot):
+    args = defaultNamespace_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = defaultNamespace_result()
+    try:
+      result.success = self._handler.defaultNamespace()
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("defaultNamespace", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_listNamespaces(self, seqid, iprot, oprot):
+    args = listNamespaces_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = listNamespaces_result()
+    try:
+      result.success = self._handler.listNamespaces(args.login)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listNamespaces", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_namespaceExists(self, seqid, iprot, oprot):
+    args = namespaceExists_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = namespaceExists_result()
+    try:
+      result.success = self._handler.namespaceExists(args.login, args.namespaceName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("namespaceExists", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_createNamespace(self, seqid, iprot, oprot):
+    args = createNamespace_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = createNamespace_result()
+    try:
+      self._handler.createNamespace(args.login, args.namespaceName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceExistsException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("createNamespace", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_deleteNamespace(self, seqid, iprot, oprot):
+    args = deleteNamespace_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = deleteNamespace_result()
+    try:
+      self._handler.deleteNamespace(args.login, args.namespaceName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except NamespaceNotEmptyException as ouch4:
+      msg_type = TMessageType.REPLY
+      result.ouch4 = ouch4
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("deleteNamespace", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_renameNamespace(self, seqid, iprot, oprot):
+    args = renameNamespace_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = renameNamespace_result()
+    try:
+      self._handler.renameNamespace(args.login, args.oldNamespaceName, args.newNamespaceName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except NamespaceExistsException as ouch4:
+      msg_type = TMessageType.REPLY
+      result.ouch4 = ouch4
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("renameNamespace", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_setNamespaceProperty(self, seqid, iprot, oprot):
+    args = setNamespaceProperty_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = setNamespaceProperty_result()
+    try:
+      self._handler.setNamespaceProperty(args.login, args.namespaceName, args.property, args.value)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("setNamespaceProperty", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_removeNamespaceProperty(self, seqid, iprot, oprot):
+    args = removeNamespaceProperty_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = removeNamespaceProperty_result()
+    try:
+      self._handler.removeNamespaceProperty(args.login, args.namespaceName, args.property)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("removeNamespaceProperty", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_getNamespaceProperties(self, seqid, iprot, oprot):
+    args = getNamespaceProperties_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = getNamespaceProperties_result()
+    try:
+      result.success = self._handler.getNamespaceProperties(args.login, args.namespaceName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getNamespaceProperties", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_namespaceIdMap(self, seqid, iprot, oprot):
+    args = namespaceIdMap_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = namespaceIdMap_result()
+    try:
+      result.success = self._handler.namespaceIdMap(args.login)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("namespaceIdMap", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_attachNamespaceIterator(self, seqid, iprot, oprot):
+    args = attachNamespaceIterator_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = attachNamespaceIterator_result()
+    try:
+      self._handler.attachNamespaceIterator(args.login, args.namespaceName, args.setting, args.scopes)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("attachNamespaceIterator", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_removeNamespaceIterator(self, seqid, iprot, oprot):
+    args = removeNamespaceIterator_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = removeNamespaceIterator_result()
+    try:
+      self._handler.removeNamespaceIterator(args.login, args.namespaceName, args.name, args.scopes)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("removeNamespaceIterator", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_getNamespaceIteratorSetting(self, seqid, iprot, oprot):
+    args = getNamespaceIteratorSetting_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = getNamespaceIteratorSetting_result()
+    try:
+      result.success = self._handler.getNamespaceIteratorSetting(args.login, args.namespaceName, args.name, args.scope)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("getNamespaceIteratorSetting", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_listNamespaceIterators(self, seqid, iprot, oprot):
+    args = listNamespaceIterators_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = listNamespaceIterators_result()
+    try:
+      result.success = self._handler.listNamespaceIterators(args.login, args.namespaceName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listNamespaceIterators", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_checkNamespaceIteratorConflicts(self, seqid, iprot, oprot):
+    args = checkNamespaceIteratorConflicts_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = checkNamespaceIteratorConflicts_result()
+    try:
+      self._handler.checkNamespaceIteratorConflicts(args.login, args.namespaceName, args.setting, args.scopes)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("checkNamespaceIteratorConflicts", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_addNamespaceConstraint(self, seqid, iprot, oprot):
+    args = addNamespaceConstraint_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = addNamespaceConstraint_result()
+    try:
+      result.success = self._handler.addNamespaceConstraint(args.login, args.namespaceName, args.constraintClassName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("addNamespaceConstraint", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_removeNamespaceConstraint(self, seqid, iprot, oprot):
+    args = removeNamespaceConstraint_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = removeNamespaceConstraint_result()
+    try:
+      self._handler.removeNamespaceConstraint(args.login, args.namespaceName, args.id)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("removeNamespaceConstraint", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_listNamespaceConstraints(self, seqid, iprot, oprot):
+    args = listNamespaceConstraints_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = listNamespaceConstraints_result()
+    try:
+      result.success = self._handler.listNamespaceConstraints(args.login, args.namespaceName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("listNamespaceConstraints", msg_type, seqid)
+    result.write(oprot)
+    oprot.writeMessageEnd()
+    oprot.trans.flush()
+
+  def process_testNamespaceClassLoad(self, seqid, iprot, oprot):
+    args = testNamespaceClassLoad_args()
+    args.read(iprot)
+    iprot.readMessageEnd()
+    result = testNamespaceClassLoad_result()
+    try:
+      result.success = self._handler.testNamespaceClassLoad(args.login, args.namespaceName, args.className, args.asTypeName)
+      msg_type = TMessageType.REPLY
+    except (TTransport.TTransportException, KeyboardInterrupt, SystemExit):
+      raise
+    except AccumuloException as ouch1:
+      msg_type = TMessageType.REPLY
+      result.ouch1 = ouch1
+    except AccumuloSecurityException as ouch2:
+      msg_type = TMessageType.REPLY
+      result.ouch2 = ouch2
+    except NamespaceNotFoundException as ouch3:
+      msg_type = TMessageType.REPLY
+      result.ouch3 = ouch3
+    except Exception as ex:
+      msg_type = TMessageType.EXCEPTION
+      logging.exception(ex)
+      result = TApplicationException(TApplicationException.INTERNAL_ERROR, 'Internal error')
+    oprot.writeMessageBegin("testNamespaceClassLoad", msg_type, seqid)
     result.write(oprot)
     oprot.writeMessageEnd()
     oprot.trans.flush()
@@ -4982,7 +7498,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.principal = iprot.readString();
+          self.principal = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
@@ -4990,8 +7506,8 @@
           self.loginProperties = {}
           (_ktype145, _vtype146, _size144 ) = iprot.readMapBegin()
           for _i148 in xrange(_size144):
-            _key149 = iprot.readString();
-            _val150 = iprot.readString();
+            _key149 = iprot.readString()
+            _val150 = iprot.readString()
             self.loginProperties[_key149] = _val150
           iprot.readMapEnd()
         else:
@@ -5025,6 +7541,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.principal)
+    value = (value * 31) ^ hash(self.loginProperties)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5063,7 +7585,7 @@
         break
       if fid == 0:
         if ftype == TType.STRING:
-          self.success = iprot.readString();
+          self.success = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -5097,6 +7619,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5139,17 +7667,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.constraintClassName = iprot.readString();
+          self.constraintClassName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -5181,6 +7709,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.constraintClassName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5225,7 +7760,7 @@
         break
       if fid == 0:
         if ftype == TType.I32:
-          self.success = iprot.readI32();
+          self.success = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -5279,6 +7814,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5321,12 +7864,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -5334,7 +7877,7 @@
           self.splits = set()
           (_etype156, _size153) = iprot.readSetBegin()
           for _i157 in xrange(_size153):
-            _elem158 = iprot.readString();
+            _elem158 = iprot.readString()
             self.splits.add(_elem158)
           iprot.readSetEnd()
         else:
@@ -5371,6 +7914,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.splits)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5458,6 +8008,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5503,12 +8060,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -5522,7 +8079,7 @@
           self.scopes = set()
           (_etype163, _size160) = iprot.readSetBegin()
           for _i164 in xrange(_size160):
-            _elem165 = iprot.readI32();
+            _elem165 = iprot.readI32()
             self.scopes.add(_elem165)
           iprot.readSetEnd()
         else:
@@ -5563,6 +8120,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.setting)
+    value = (value * 31) ^ hash(self.scopes)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5650,6 +8215,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5695,12 +8267,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -5714,7 +8286,7 @@
           self.scopes = set()
           (_etype170, _size167) = iprot.readSetBegin()
           for _i171 in xrange(_size167):
-            _elem172 = iprot.readI32();
+            _elem172 = iprot.readI32()
             self.scopes.add(_elem172)
           iprot.readSetEnd()
         else:
@@ -5755,6 +8327,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.setting)
+    value = (value * 31) ^ hash(self.scopes)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5842,6 +8422,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5881,12 +8468,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -5914,6 +8501,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -5975,6 +8568,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6026,22 +8624,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.newTableName = iprot.readString();
+          self.newTableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.BOOL:
-          self.flush = iprot.readBool();
+          self.flush = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 5:
@@ -6049,8 +8647,8 @@
           self.propertiesToSet = {}
           (_ktype175, _vtype176, _size174 ) = iprot.readMapBegin()
           for _i178 in xrange(_size174):
-            _key179 = iprot.readString();
-            _val180 = iprot.readString();
+            _key179 = iprot.readString()
+            _val180 = iprot.readString()
             self.propertiesToSet[_key179] = _val180
           iprot.readMapEnd()
         else:
@@ -6060,7 +8658,7 @@
           self.propertiesToExclude = set()
           (_etype184, _size181) = iprot.readSetBegin()
           for _i185 in xrange(_size181):
-            _elem186 = iprot.readString();
+            _elem186 = iprot.readString()
             self.propertiesToExclude.add(_elem186)
           iprot.readSetEnd()
         else:
@@ -6113,6 +8711,16 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.newTableName)
+    value = (value * 31) ^ hash(self.flush)
+    value = (value * 31) ^ hash(self.propertiesToSet)
+    value = (value * 31) ^ hash(self.propertiesToExclude)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6213,6 +8821,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    value = (value * 31) ^ hash(self.ouch4)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6270,22 +8886,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.startRow = iprot.readString();
+          self.startRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.endRow = iprot.readString();
+          self.endRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 5:
@@ -6301,12 +8917,12 @@
           iprot.skip(ftype)
       elif fid == 6:
         if ftype == TType.BOOL:
-          self.flush = iprot.readBool();
+          self.flush = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 7:
         if ftype == TType.BOOL:
-          self.wait = iprot.readBool();
+          self.wait = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 8:
@@ -6367,6 +8983,18 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.startRow)
+    value = (value * 31) ^ hash(self.endRow)
+    value = (value * 31) ^ hash(self.iterators)
+    value = (value * 31) ^ hash(self.flush)
+    value = (value * 31) ^ hash(self.wait)
+    value = (value * 31) ^ hash(self.compactionStrategy)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6454,6 +9082,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6493,12 +9128,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -6526,6 +9161,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6613,6 +9254,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6658,22 +9306,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.BOOL:
-          self.versioningIter = iprot.readBool();
+          self.versioningIter = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I32:
-          self.type = iprot.readI32();
+          self.type = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -6709,6 +9357,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.versioningIter)
+    value = (value * 31) ^ hash(self.type)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6796,6 +9452,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6835,12 +9498,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -6868,6 +9531,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -6955,6 +9624,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7000,22 +9676,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.startRow = iprot.readString();
+          self.startRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.endRow = iprot.readString();
+          self.endRow = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -7051,6 +9727,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.startRow)
+    value = (value * 31) ^ hash(self.endRow)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7138,6 +9822,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7180,17 +9871,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.exportDir = iprot.readString();
+          self.exportDir = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -7222,6 +9913,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.exportDir)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7309,6 +10007,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7357,27 +10062,27 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.startRow = iprot.readString();
+          self.startRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.endRow = iprot.readString();
+          self.endRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.BOOL:
-          self.wait = iprot.readBool();
+          self.wait = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -7417,6 +10122,15 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.startRow)
+    value = (value * 31) ^ hash(self.endRow)
+    value = (value * 31) ^ hash(self.wait)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7504,6 +10218,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7543,7 +10264,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
@@ -7551,7 +10272,7 @@
           self.tables = set()
           (_etype200, _size197) = iprot.readSetBegin()
           for _i201 in xrange(_size197):
-            _elem202 = iprot.readString();
+            _elem202 = iprot.readString()
             self.tables.add(_elem202)
           iprot.readSetEnd()
         else:
@@ -7584,6 +10305,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tables)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7691,6 +10418,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7730,12 +10465,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -7763,6 +10498,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7810,11 +10551,11 @@
           self.success = {}
           (_ktype212, _vtype213, _size211 ) = iprot.readMapBegin()
           for _i215 in xrange(_size211):
-            _key216 = iprot.readString();
+            _key216 = iprot.readString()
             _val217 = set()
             (_etype221, _size218) = iprot.readSetBegin()
             for _i222 in xrange(_size218):
-              _elem223 = iprot.readString();
+              _elem223 = iprot.readString()
               _val217.add(_elem223)
             iprot.readSetEnd()
             self.success[_key216] = _val217
@@ -7879,6 +10620,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -7924,22 +10673,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.iteratorName = iprot.readString();
+          self.iteratorName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I32:
-          self.scope = iprot.readI32();
+          self.scope = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -7975,6 +10724,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.iteratorName)
+    value = (value * 31) ^ hash(self.scope)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8074,6 +10831,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8128,12 +10893,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -8141,29 +10906,29 @@
           self.auths = set()
           (_etype230, _size227) = iprot.readSetBegin()
           for _i231 in xrange(_size227):
-            _elem232 = iprot.readString();
+            _elem232 = iprot.readString()
             self.auths.add(_elem232)
           iprot.readSetEnd()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.startRow = iprot.readString();
+          self.startRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.BOOL:
-          self.startInclusive = iprot.readBool();
+          self.startInclusive = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 6:
         if ftype == TType.STRING:
-          self.endRow = iprot.readString();
+          self.endRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 7:
         if ftype == TType.BOOL:
-          self.endInclusive = iprot.readBool();
+          self.endInclusive = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -8214,6 +10979,17 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.auths)
+    value = (value * 31) ^ hash(self.startRow)
+    value = (value * 31) ^ hash(self.startInclusive)
+    value = (value * 31) ^ hash(self.endRow)
+    value = (value * 31) ^ hash(self.endInclusive)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8258,7 +11034,7 @@
         break
       if fid == 0:
         if ftype == TType.STRING:
-          self.success = iprot.readString();
+          self.success = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -8312,6 +11088,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8351,12 +11135,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -8384,6 +11168,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8431,8 +11221,8 @@
           self.success = {}
           (_ktype235, _vtype236, _size234 ) = iprot.readMapBegin()
           for _i238 in xrange(_size234):
-            _key239 = iprot.readString();
-            _val240 = iprot.readString();
+            _key239 = iprot.readString()
+            _val240 = iprot.readString()
             self.success[_key239] = _val240
           iprot.readMapEnd()
         else:
@@ -8492,6 +11282,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8540,27 +11338,27 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.importDir = iprot.readString();
+          self.importDir = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.failureDir = iprot.readString();
+          self.failureDir = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.BOOL:
-          self.setTime = iprot.readBool();
+          self.setTime = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -8600,6 +11398,15 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.importDir)
+    value = (value * 31) ^ hash(self.failureDir)
+    value = (value * 31) ^ hash(self.setTime)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8687,6 +11494,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch3)
+    value = (value * 31) ^ hash(self.ouch4)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8729,17 +11543,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.importDir = iprot.readString();
+          self.importDir = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -8771,6 +11585,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.importDir)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8858,6 +11679,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8900,17 +11728,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.I32:
-          self.maxSplits = iprot.readI32();
+          self.maxSplits = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -8942,6 +11770,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.maxSplits)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -8989,7 +11824,7 @@
           self.success = []
           (_etype246, _size243) = iprot.readListBegin()
           for _i247 in xrange(_size243):
-            _elem248 = iprot.readString();
+            _elem248 = iprot.readString()
             self.success.append(_elem248)
           iprot.readListEnd()
         else:
@@ -9048,6 +11883,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9084,7 +11927,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -9108,6 +11951,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9146,7 +11994,7 @@
           self.success = set()
           (_etype253, _size250) = iprot.readSetBegin()
           for _i254 in xrange(_size250):
-            _elem255 = iprot.readString();
+            _elem255 = iprot.readString()
             self.success.add(_elem255)
           iprot.readSetEnd()
         else:
@@ -9175,6 +12023,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9214,12 +12067,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -9247,6 +12100,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9294,11 +12153,11 @@
           self.success = {}
           (_ktype258, _vtype259, _size257 ) = iprot.readMapBegin()
           for _i261 in xrange(_size257):
-            _key262 = iprot.readString();
+            _key262 = iprot.readString()
             _val263 = set()
             (_etype267, _size264) = iprot.readSetBegin()
             for _i268 in xrange(_size264):
-              _elem269 = iprot.readI32();
+              _elem269 = iprot.readI32()
               _val263.add(_elem269)
             iprot.readSetEnd()
             self.success[_key262] = _val263
@@ -9363,6 +12222,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9402,12 +12269,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -9435,6 +12302,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9482,8 +12355,8 @@
           self.success = {}
           (_ktype274, _vtype275, _size273 ) = iprot.readMapBegin()
           for _i277 in xrange(_size273):
-            _key278 = iprot.readString();
-            _val279 = iprot.readI32();
+            _key278 = iprot.readString()
+            _val279 = iprot.readI32()
             self.success[_key278] = _val279
           iprot.readMapEnd()
         else:
@@ -9543,6 +12416,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9588,22 +12469,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.startRow = iprot.readString();
+          self.startRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.endRow = iprot.readString();
+          self.endRow = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -9639,6 +12520,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.startRow)
+    value = (value * 31) ^ hash(self.endRow)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9726,6 +12615,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9768,17 +12664,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.BOOL:
-          self.wait = iprot.readBool();
+          self.wait = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -9810,6 +12706,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.wait)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9897,6 +12800,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -9939,17 +12849,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.BOOL:
-          self.wait = iprot.readBool();
+          self.wait = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -9981,6 +12891,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.wait)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10068,6 +12985,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10110,17 +13034,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.I32:
-          self.constraint = iprot.readI32();
+          self.constraint = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -10152,6 +13076,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.constraint)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10239,6 +13170,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10284,17 +13222,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.iterName = iprot.readString();
+          self.iterName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
@@ -10302,7 +13240,7 @@
           self.scopes = set()
           (_etype285, _size282) = iprot.readSetBegin()
           for _i286 in xrange(_size282):
-            _elem287 = iprot.readI32();
+            _elem287 = iprot.readI32()
             self.scopes.add(_elem287)
           iprot.readSetEnd()
         else:
@@ -10343,6 +13281,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.iterName)
+    value = (value * 31) ^ hash(self.scopes)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10430,6 +13376,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10472,17 +13425,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.property = iprot.readString();
+          self.property = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -10514,6 +13467,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.property)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10601,6 +13561,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10643,17 +13610,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.oldTableName = iprot.readString();
+          self.oldTableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.newTableName = iprot.readString();
+          self.newTableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -10685,6 +13652,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.oldTableName)
+    value = (value * 31) ^ hash(self.newTableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10785,6 +13759,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    value = (value * 31) ^ hash(self.ouch4)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10827,12 +13809,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -10840,11 +13822,11 @@
           self.groups = {}
           (_ktype290, _vtype291, _size289 ) = iprot.readMapBegin()
           for _i293 in xrange(_size289):
-            _key294 = iprot.readString();
+            _key294 = iprot.readString()
             _val295 = set()
             (_etype299, _size296) = iprot.readSetBegin()
             for _i300 in xrange(_size296):
-              _elem301 = iprot.readString();
+              _elem301 = iprot.readString()
               _val295.add(_elem301)
             iprot.readSetEnd()
             self.groups[_key294] = _val295
@@ -10887,6 +13869,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.groups)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -10974,6 +13963,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11019,22 +14015,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.property = iprot.readString();
+          self.property = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.value = iprot.readString();
+          self.value = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -11070,6 +14066,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.property)
+    value = (value * 31) ^ hash(self.value)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11157,6 +14161,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11202,12 +14213,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -11218,7 +14229,7 @@
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I32:
-          self.maxSplits = iprot.readI32();
+          self.maxSplits = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -11254,6 +14265,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.range)
+    value = (value * 31) ^ hash(self.maxSplits)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11361,6 +14380,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11400,12 +14427,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -11433,6 +14460,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11468,7 +14501,7 @@
         break
       if fid == 0:
         if ftype == TType.BOOL:
-          self.success = iprot.readBool();
+          self.success = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -11492,6 +14525,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11528,7 +14566,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -11552,6 +14590,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11590,8 +14633,8 @@
           self.success = {}
           (_ktype313, _vtype314, _size312 ) = iprot.readMapBegin()
           for _i316 in xrange(_size312):
-            _key317 = iprot.readString();
-            _val318 = iprot.readString();
+            _key317 = iprot.readString()
+            _val318 = iprot.readString()
             self.success[_key317] = _val318
           iprot.readMapEnd()
         else:
@@ -11621,6 +14664,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11666,22 +14714,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.className = iprot.readString();
+          self.className = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.asTypeName = iprot.readString();
+          self.asTypeName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -11717,6 +14765,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.className)
+    value = (value * 31) ^ hash(self.asTypeName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11761,7 +14817,7 @@
         break
       if fid == 0:
         if ftype == TType.BOOL:
-          self.success = iprot.readBool();
+          self.success = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -11815,6 +14871,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11854,12 +14918,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tserver = iprot.readString();
+          self.tserver = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -11887,6 +14951,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tserver)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -11961,6 +15031,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12000,12 +15076,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tserver = iprot.readString();
+          self.tserver = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -12033,6 +15109,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tserver)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12127,6 +15209,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12166,12 +15255,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tserver = iprot.readString();
+          self.tserver = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -12199,6 +15288,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tserver)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12293,6 +15388,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12329,7 +15431,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -12353,6 +15455,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12397,8 +15504,8 @@
           self.success = {}
           (_ktype336, _vtype337, _size335 ) = iprot.readMapBegin()
           for _i339 in xrange(_size335):
-            _key340 = iprot.readString();
-            _val341 = iprot.readString();
+            _key340 = iprot.readString()
+            _val341 = iprot.readString()
             self.success[_key340] = _val341
           iprot.readMapEnd()
         else:
@@ -12448,6 +15555,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12484,7 +15598,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -12508,6 +15622,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12552,8 +15671,8 @@
           self.success = {}
           (_ktype345, _vtype346, _size344 ) = iprot.readMapBegin()
           for _i348 in xrange(_size344):
-            _key349 = iprot.readString();
-            _val350 = iprot.readString();
+            _key349 = iprot.readString()
+            _val350 = iprot.readString()
             self.success[_key349] = _val350
           iprot.readMapEnd()
         else:
@@ -12603,6 +15722,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12639,7 +15765,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -12663,6 +15789,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12701,7 +15832,7 @@
           self.success = []
           (_etype356, _size353) = iprot.readListBegin()
           for _i357 in xrange(_size353):
-            _elem358 = iprot.readString();
+            _elem358 = iprot.readString()
             self.success.append(_elem358)
           iprot.readListEnd()
         else:
@@ -12730,6 +15861,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12769,12 +15905,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.property = iprot.readString();
+          self.property = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -12802,6 +15938,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.property)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12876,6 +16018,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -12918,17 +16066,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.property = iprot.readString();
+          self.property = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.value = iprot.readString();
+          self.value = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -12960,6 +16108,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.property)
+    value = (value * 31) ^ hash(self.value)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13034,6 +16189,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13076,17 +16237,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.className = iprot.readString();
+          self.className = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.asTypeName = iprot.readString();
+          self.asTypeName = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -13118,6 +16279,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.className)
+    value = (value * 31) ^ hash(self.asTypeName)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13159,7 +16327,7 @@
         break
       if fid == 0:
         if ftype == TType.BOOL:
-          self.success = iprot.readBool();
+          self.success = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -13203,6 +16371,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13245,12 +16420,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -13258,8 +16433,8 @@
           self.properties = {}
           (_ktype361, _vtype362, _size360 ) = iprot.readMapBegin()
           for _i364 in xrange(_size360):
-            _key365 = iprot.readString();
-            _val366 = iprot.readString();
+            _key365 = iprot.readString()
+            _val366 = iprot.readString()
             self.properties[_key365] = _val366
           iprot.readMapEnd()
         else:
@@ -13297,6 +16472,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.properties)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13338,7 +16520,7 @@
         break
       if fid == 0:
         if ftype == TType.BOOL:
-          self.success = iprot.readBool();
+          self.success = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -13382,6 +16564,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13424,12 +16613,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -13437,7 +16626,7 @@
           self.authorizations = set()
           (_etype372, _size369) = iprot.readSetBegin()
           for _i373 in xrange(_size369):
-            _elem374 = iprot.readString();
+            _elem374 = iprot.readString()
             self.authorizations.add(_elem374)
           iprot.readSetEnd()
         else:
@@ -13474,6 +16663,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.authorizations)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13548,6 +16744,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13590,17 +16792,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.password = iprot.readString();
+          self.password = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -13632,6 +16834,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.password)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13706,6 +16915,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13748,17 +16963,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.password = iprot.readString();
+          self.password = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -13790,6 +17005,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.password)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13864,6 +17086,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -13903,12 +17131,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -13936,6 +17164,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14010,6 +17244,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14049,12 +17289,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -14082,6 +17322,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14126,7 +17372,7 @@
           self.success = []
           (_etype379, _size376) = iprot.readListBegin()
           for _i380 in xrange(_size376):
-            _elem381 = iprot.readString();
+            _elem381 = iprot.readString()
             self.success.append(_elem381)
           iprot.readListEnd()
         else:
@@ -14175,6 +17421,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14217,17 +17470,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.I32:
-          self.perm = iprot.readI32();
+          self.perm = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -14259,6 +17512,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14333,6 +17593,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14378,22 +17644,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.table = iprot.readString();
+          self.table = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I32:
-          self.perm = iprot.readI32();
+          self.perm = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -14429,6 +17695,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.table)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14516,6 +17790,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14558,17 +17839,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.I32:
-          self.perm = iprot.readI32();
+          self.perm = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -14600,6 +17881,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14641,7 +17929,7 @@
         break
       if fid == 0:
         if ftype == TType.BOOL:
-          self.success = iprot.readBool();
+          self.success = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -14685,6 +17973,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14730,22 +18025,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.table = iprot.readString();
+          self.table = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I32:
-          self.perm = iprot.readI32();
+          self.perm = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -14781,6 +18076,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.table)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14825,7 +18128,7 @@
         break
       if fid == 0:
         if ftype == TType.BOOL:
-          self.success = iprot.readBool();
+          self.success = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -14879,6 +18182,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14915,7 +18226,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -14939,6 +18250,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -14986,7 +18302,7 @@
           self.success = set()
           (_etype386, _size383) = iprot.readSetBegin()
           for _i387 in xrange(_size383):
-            _elem388 = iprot.readString();
+            _elem388 = iprot.readString()
             self.success.add(_elem388)
           iprot.readSetEnd()
         else:
@@ -15045,6 +18361,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15087,17 +18411,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.I32:
-          self.perm = iprot.readI32();
+          self.perm = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -15129,6 +18453,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15203,6 +18534,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15248,22 +18585,22 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.table = iprot.readString();
+          self.table = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I32:
-          self.perm = iprot.readI32();
+          self.perm = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -15299,6 +18636,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.table)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15386,6 +18731,577 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class grantNamespacePermission_args:
+  """
+  Attributes:
+   - login
+   - user
+   - namespaceName
+   - perm
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'user', None, None, ), # 2
+    (3, TType.STRING, 'namespaceName', None, None, ), # 3
+    (4, TType.I32, 'perm', None, None, ), # 4
+  )
+
+  def __init__(self, login=None, user=None, namespaceName=None, perm=None,):
+    self.login = login
+    self.user = user
+    self.namespaceName = namespaceName
+    self.perm = perm
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.user = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.I32:
+          self.perm = iprot.readI32()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('grantNamespacePermission_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.user is not None:
+      oprot.writeFieldBegin('user', TType.STRING, 2)
+      oprot.writeString(self.user)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 3)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.perm is not None:
+      oprot.writeFieldBegin('perm', TType.I32, 4)
+      oprot.writeI32(self.perm)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class grantNamespacePermission_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+  )
+
+  def __init__(self, ouch1=None, ouch2=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('grantNamespacePermission_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class hasNamespacePermission_args:
+  """
+  Attributes:
+   - login
+   - user
+   - namespaceName
+   - perm
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'user', None, None, ), # 2
+    (3, TType.STRING, 'namespaceName', None, None, ), # 3
+    (4, TType.I32, 'perm', None, None, ), # 4
+  )
+
+  def __init__(self, login=None, user=None, namespaceName=None, perm=None,):
+    self.login = login
+    self.user = user
+    self.namespaceName = namespaceName
+    self.perm = perm
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.user = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.I32:
+          self.perm = iprot.readI32()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('hasNamespacePermission_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.user is not None:
+      oprot.writeFieldBegin('user', TType.STRING, 2)
+      oprot.writeString(self.user)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 3)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.perm is not None:
+      oprot.writeFieldBegin('perm', TType.I32, 4)
+      oprot.writeI32(self.perm)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class hasNamespacePermission_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+  """
+
+  thrift_spec = (
+    (0, TType.BOOL, 'success', None, None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.BOOL:
+          self.success = iprot.readBool()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('hasNamespacePermission_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.BOOL, 0)
+      oprot.writeBool(self.success)
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class revokeNamespacePermission_args:
+  """
+  Attributes:
+   - login
+   - user
+   - namespaceName
+   - perm
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'user', None, None, ), # 2
+    (3, TType.STRING, 'namespaceName', None, None, ), # 3
+    (4, TType.I32, 'perm', None, None, ), # 4
+  )
+
+  def __init__(self, login=None, user=None, namespaceName=None, perm=None,):
+    self.login = login
+    self.user = user
+    self.namespaceName = namespaceName
+    self.perm = perm
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.user = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.I32:
+          self.perm = iprot.readI32()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('revokeNamespacePermission_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.user is not None:
+      oprot.writeFieldBegin('user', TType.STRING, 2)
+      oprot.writeString(self.user)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 3)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.perm is not None:
+      oprot.writeFieldBegin('perm', TType.I32, 4)
+      oprot.writeI32(self.perm)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.perm)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class revokeNamespacePermission_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+  )
+
+  def __init__(self, ouch1=None, ouch2=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('revokeNamespacePermission_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15428,12 +19344,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -15471,6 +19387,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.options)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15515,7 +19438,7 @@
         break
       if fid == 0:
         if ftype == TType.STRING:
-          self.success = iprot.readString();
+          self.success = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -15569,6 +19492,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15611,12 +19542,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -15654,6 +19585,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.options)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15698,7 +19636,7 @@
         break
       if fid == 0:
         if ftype == TType.STRING:
-          self.success = iprot.readString();
+          self.success = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -15752,6 +19690,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15788,7 +19734,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.scanner = iprot.readString();
+          self.scanner = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -15812,6 +19758,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.scanner)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15850,7 +19801,7 @@
         break
       if fid == 0:
         if ftype == TType.BOOL:
-          self.success = iprot.readBool();
+          self.success = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -15884,6 +19835,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -15920,7 +19877,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.scanner = iprot.readString();
+          self.scanner = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -15944,6 +19901,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.scanner)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16043,6 +20005,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16082,12 +20052,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.scanner = iprot.readString();
+          self.scanner = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.I32:
-          self.k = iprot.readI32();
+          self.k = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -16115,6 +20085,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.scanner)
+    value = (value * 31) ^ hash(self.k)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16214,6 +20190,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16250,7 +20234,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.scanner = iprot.readString();
+          self.scanner = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -16274,6 +20258,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.scanner)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16335,6 +20324,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16377,12 +20371,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -16390,7 +20384,7 @@
           self.cells = {}
           (_ktype391, _vtype392, _size390 ) = iprot.readMapBegin()
           for _i394 in xrange(_size390):
-            _key395 = iprot.readString();
+            _key395 = iprot.readString()
             _val396 = []
             (_etype400, _size397) = iprot.readListBegin()
             for _i401 in xrange(_size397):
@@ -16438,6 +20432,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.cells)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16538,6 +20539,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    value = (value * 31) ^ hash(self.ouch4)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16580,12 +20589,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -16623,6 +20632,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.opts)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16667,7 +20683,7 @@
         break
       if fid == 0:
         if ftype == TType.STRING:
-          self.success = iprot.readString();
+          self.success = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -16721,6 +20737,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16760,7 +20784,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.writer = iprot.readString();
+          self.writer = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
@@ -16768,7 +20792,7 @@
           self.cells = {}
           (_ktype407, _vtype408, _size406 ) = iprot.readMapBegin()
           for _i410 in xrange(_size406):
-            _key411 = iprot.readString();
+            _key411 = iprot.readString()
             _val412 = []
             (_etype416, _size413) = iprot.readListBegin()
             for _i417 in xrange(_size413):
@@ -16812,6 +20836,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.writer)
+    value = (value * 31) ^ hash(self.cells)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16848,7 +20878,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.writer = iprot.readString();
+          self.writer = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -16872,6 +20902,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.writer)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16946,6 +20981,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -16982,7 +21023,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.writer = iprot.readString();
+          self.writer = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -17006,6 +21047,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.writer)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17080,6 +21126,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17125,17 +21177,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.row = iprot.readString();
+          self.row = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
@@ -17177,6 +21229,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.row)
+    value = (value * 31) ^ hash(self.updates)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17221,7 +21281,7 @@
         break
       if fid == 0:
         if ftype == TType.I32:
-          self.success = iprot.readI32();
+          self.success = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -17275,6 +21335,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17317,12 +21385,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.login = iprot.readString();
+          self.login = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.tableName = iprot.readString();
+          self.tableName = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -17360,6 +21428,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.tableName)
+    value = (value * 31) ^ hash(self.options)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17404,7 +21479,7 @@
         break
       if fid == 0:
         if ftype == TType.STRING:
-          self.success = iprot.readString();
+          self.success = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 1:
@@ -17458,6 +21533,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17497,7 +21580,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.conditionalWriter = iprot.readString();
+          self.conditionalWriter = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
@@ -17505,7 +21588,7 @@
           self.updates = {}
           (_ktype423, _vtype424, _size422 ) = iprot.readMapBegin()
           for _i426 in xrange(_size422):
-            _key427 = iprot.readString();
+            _key427 = iprot.readString()
             _val428 = ConditionalUpdates()
             _val428.read(iprot)
             self.updates[_key427] = _val428
@@ -17541,6 +21624,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.conditionalWriter)
+    value = (value * 31) ^ hash(self.updates)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17588,8 +21677,8 @@
           self.success = {}
           (_ktype432, _vtype433, _size431 ) = iprot.readMapBegin()
           for _i435 in xrange(_size431):
-            _key436 = iprot.readString();
-            _val437 = iprot.readI32();
+            _key436 = iprot.readString()
+            _val437 = iprot.readI32()
             self.success[_key436] = _val437
           iprot.readMapEnd()
         else:
@@ -17649,6 +21738,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17685,7 +21782,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.conditionalWriter = iprot.readString();
+          self.conditionalWriter = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -17709,6 +21806,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.conditionalWriter)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17751,6 +21853,10 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17787,7 +21893,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.row = iprot.readString();
+          self.row = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -17811,6 +21917,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.row)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17871,6 +21982,11 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -17916,7 +22032,7 @@
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.I32:
-          self.part = iprot.readI32();
+          self.part = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -17944,6 +22060,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.key)
+    value = (value * 31) ^ hash(self.part)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -18004,6 +22126,3686 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class systemNamespace_args:
+
+  thrift_spec = (
+  )
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('systemNamespace_args')
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class systemNamespace_result:
+  """
+  Attributes:
+   - success
+  """
+
+  thrift_spec = (
+    (0, TType.STRING, 'success', None, None, ), # 0
+  )
+
+  def __init__(self, success=None,):
+    self.success = success
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.STRING:
+          self.success = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('systemNamespace_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.STRING, 0)
+      oprot.writeString(self.success)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class defaultNamespace_args:
+
+  thrift_spec = (
+  )
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('defaultNamespace_args')
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class defaultNamespace_result:
+  """
+  Attributes:
+   - success
+  """
+
+  thrift_spec = (
+    (0, TType.STRING, 'success', None, None, ), # 0
+  )
+
+  def __init__(self, success=None,):
+    self.success = success
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.STRING:
+          self.success = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('defaultNamespace_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.STRING, 0)
+      oprot.writeString(self.success)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class listNamespaces_args:
+  """
+  Attributes:
+   - login
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+  )
+
+  def __init__(self, login=None,):
+    self.login = login
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('listNamespaces_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class listNamespaces_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+  """
+
+  thrift_spec = (
+    (0, TType.LIST, 'success', (TType.STRING,None), None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.LIST:
+          self.success = []
+          (_etype443, _size440) = iprot.readListBegin()
+          for _i444 in xrange(_size440):
+            _elem445 = iprot.readString()
+            self.success.append(_elem445)
+          iprot.readListEnd()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('listNamespaces_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.LIST, 0)
+      oprot.writeListBegin(TType.STRING, len(self.success))
+      for iter446 in self.success:
+        oprot.writeString(iter446)
+      oprot.writeListEnd()
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class namespaceExists_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+  )
+
+  def __init__(self, login=None, namespaceName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('namespaceExists_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class namespaceExists_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+  """
+
+  thrift_spec = (
+    (0, TType.BOOL, 'success', None, None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.BOOL:
+          self.success = iprot.readBool()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('namespaceExists_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.BOOL, 0)
+      oprot.writeBool(self.success)
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class createNamespace_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+  )
+
+  def __init__(self, login=None, namespaceName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('createNamespace_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class createNamespace_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceExistsException, NamespaceExistsException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceExistsException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('createNamespace_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class deleteNamespace_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+  )
+
+  def __init__(self, login=None, namespaceName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('deleteNamespace_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class deleteNamespace_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+   - ouch4
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+    (4, TType.STRUCT, 'ouch4', (NamespaceNotEmptyException, NamespaceNotEmptyException.thrift_spec), None, ), # 4
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None, ouch4=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+    self.ouch4 = ouch4
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.STRUCT:
+          self.ouch4 = NamespaceNotEmptyException()
+          self.ouch4.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('deleteNamespace_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch4 is not None:
+      oprot.writeFieldBegin('ouch4', TType.STRUCT, 4)
+      self.ouch4.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    value = (value * 31) ^ hash(self.ouch4)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class renameNamespace_args:
+  """
+  Attributes:
+   - login
+   - oldNamespaceName
+   - newNamespaceName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'oldNamespaceName', None, None, ), # 2
+    (3, TType.STRING, 'newNamespaceName', None, None, ), # 3
+  )
+
+  def __init__(self, login=None, oldNamespaceName=None, newNamespaceName=None,):
+    self.login = login
+    self.oldNamespaceName = oldNamespaceName
+    self.newNamespaceName = newNamespaceName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.oldNamespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.newNamespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('renameNamespace_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.oldNamespaceName is not None:
+      oprot.writeFieldBegin('oldNamespaceName', TType.STRING, 2)
+      oprot.writeString(self.oldNamespaceName)
+      oprot.writeFieldEnd()
+    if self.newNamespaceName is not None:
+      oprot.writeFieldBegin('newNamespaceName', TType.STRING, 3)
+      oprot.writeString(self.newNamespaceName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.oldNamespaceName)
+    value = (value * 31) ^ hash(self.newNamespaceName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class renameNamespace_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+   - ouch4
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+    (4, TType.STRUCT, 'ouch4', (NamespaceExistsException, NamespaceExistsException.thrift_spec), None, ), # 4
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None, ouch4=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+    self.ouch4 = ouch4
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.STRUCT:
+          self.ouch4 = NamespaceExistsException()
+          self.ouch4.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('renameNamespace_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch4 is not None:
+      oprot.writeFieldBegin('ouch4', TType.STRUCT, 4)
+      self.ouch4.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    value = (value * 31) ^ hash(self.ouch4)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class setNamespaceProperty_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - property
+   - value
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRING, 'property', None, None, ), # 3
+    (4, TType.STRING, 'value', None, None, ), # 4
+  )
+
+  def __init__(self, login=None, namespaceName=None, property=None, value=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.property = property
+    self.value = value
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.property = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.STRING:
+          self.value = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('setNamespaceProperty_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.property is not None:
+      oprot.writeFieldBegin('property', TType.STRING, 3)
+      oprot.writeString(self.property)
+      oprot.writeFieldEnd()
+    if self.value is not None:
+      oprot.writeFieldBegin('value', TType.STRING, 4)
+      oprot.writeString(self.value)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.property)
+    value = (value * 31) ^ hash(self.value)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class setNamespaceProperty_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('setNamespaceProperty_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class removeNamespaceProperty_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - property
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRING, 'property', None, None, ), # 3
+  )
+
+  def __init__(self, login=None, namespaceName=None, property=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.property = property
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.property = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('removeNamespaceProperty_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.property is not None:
+      oprot.writeFieldBegin('property', TType.STRING, 3)
+      oprot.writeString(self.property)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.property)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class removeNamespaceProperty_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('removeNamespaceProperty_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class getNamespaceProperties_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+  )
+
+  def __init__(self, login=None, namespaceName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('getNamespaceProperties_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class getNamespaceProperties_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    (0, TType.MAP, 'success', (TType.STRING,None,TType.STRING,None), None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None, ouch3=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.MAP:
+          self.success = {}
+          (_ktype448, _vtype449, _size447 ) = iprot.readMapBegin()
+          for _i451 in xrange(_size447):
+            _key452 = iprot.readString()
+            _val453 = iprot.readString()
+            self.success[_key452] = _val453
+          iprot.readMapEnd()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('getNamespaceProperties_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.MAP, 0)
+      oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.success))
+      for kiter454,viter455 in self.success.items():
+        oprot.writeString(kiter454)
+        oprot.writeString(viter455)
+      oprot.writeMapEnd()
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class namespaceIdMap_args:
+  """
+  Attributes:
+   - login
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+  )
+
+  def __init__(self, login=None,):
+    self.login = login
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('namespaceIdMap_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class namespaceIdMap_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+  """
+
+  thrift_spec = (
+    (0, TType.MAP, 'success', (TType.STRING,None,TType.STRING,None), None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.MAP:
+          self.success = {}
+          (_ktype457, _vtype458, _size456 ) = iprot.readMapBegin()
+          for _i460 in xrange(_size456):
+            _key461 = iprot.readString()
+            _val462 = iprot.readString()
+            self.success[_key461] = _val462
+          iprot.readMapEnd()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('namespaceIdMap_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.MAP, 0)
+      oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.success))
+      for kiter463,viter464 in self.success.items():
+        oprot.writeString(kiter463)
+        oprot.writeString(viter464)
+      oprot.writeMapEnd()
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class attachNamespaceIterator_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - setting
+   - scopes
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRUCT, 'setting', (IteratorSetting, IteratorSetting.thrift_spec), None, ), # 3
+    (4, TType.SET, 'scopes', (TType.I32,None), None, ), # 4
+  )
+
+  def __init__(self, login=None, namespaceName=None, setting=None, scopes=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.setting = setting
+    self.scopes = scopes
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.setting = IteratorSetting()
+          self.setting.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.SET:
+          self.scopes = set()
+          (_etype468, _size465) = iprot.readSetBegin()
+          for _i469 in xrange(_size465):
+            _elem470 = iprot.readI32()
+            self.scopes.add(_elem470)
+          iprot.readSetEnd()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('attachNamespaceIterator_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.setting is not None:
+      oprot.writeFieldBegin('setting', TType.STRUCT, 3)
+      self.setting.write(oprot)
+      oprot.writeFieldEnd()
+    if self.scopes is not None:
+      oprot.writeFieldBegin('scopes', TType.SET, 4)
+      oprot.writeSetBegin(TType.I32, len(self.scopes))
+      for iter471 in self.scopes:
+        oprot.writeI32(iter471)
+      oprot.writeSetEnd()
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.setting)
+    value = (value * 31) ^ hash(self.scopes)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class attachNamespaceIterator_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('attachNamespaceIterator_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class removeNamespaceIterator_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - name
+   - scopes
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRING, 'name', None, None, ), # 3
+    (4, TType.SET, 'scopes', (TType.I32,None), None, ), # 4
+  )
+
+  def __init__(self, login=None, namespaceName=None, name=None, scopes=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.name = name
+    self.scopes = scopes
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.name = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.SET:
+          self.scopes = set()
+          (_etype475, _size472) = iprot.readSetBegin()
+          for _i476 in xrange(_size472):
+            _elem477 = iprot.readI32()
+            self.scopes.add(_elem477)
+          iprot.readSetEnd()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('removeNamespaceIterator_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.name is not None:
+      oprot.writeFieldBegin('name', TType.STRING, 3)
+      oprot.writeString(self.name)
+      oprot.writeFieldEnd()
+    if self.scopes is not None:
+      oprot.writeFieldBegin('scopes', TType.SET, 4)
+      oprot.writeSetBegin(TType.I32, len(self.scopes))
+      for iter478 in self.scopes:
+        oprot.writeI32(iter478)
+      oprot.writeSetEnd()
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.name)
+    value = (value * 31) ^ hash(self.scopes)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class removeNamespaceIterator_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('removeNamespaceIterator_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class getNamespaceIteratorSetting_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - name
+   - scope
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRING, 'name', None, None, ), # 3
+    (4, TType.I32, 'scope', None, None, ), # 4
+  )
+
+  def __init__(self, login=None, namespaceName=None, name=None, scope=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.name = name
+    self.scope = scope
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.name = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.I32:
+          self.scope = iprot.readI32()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('getNamespaceIteratorSetting_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.name is not None:
+      oprot.writeFieldBegin('name', TType.STRING, 3)
+      oprot.writeString(self.name)
+      oprot.writeFieldEnd()
+    if self.scope is not None:
+      oprot.writeFieldBegin('scope', TType.I32, 4)
+      oprot.writeI32(self.scope)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.name)
+    value = (value * 31) ^ hash(self.scope)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class getNamespaceIteratorSetting_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    (0, TType.STRUCT, 'success', (IteratorSetting, IteratorSetting.thrift_spec), None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None, ouch3=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.STRUCT:
+          self.success = IteratorSetting()
+          self.success.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('getNamespaceIteratorSetting_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.STRUCT, 0)
+      self.success.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class listNamespaceIterators_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+  )
+
+  def __init__(self, login=None, namespaceName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('listNamespaceIterators_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class listNamespaceIterators_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    (0, TType.MAP, 'success', (TType.STRING,None,TType.SET,(TType.I32,None)), None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None, ouch3=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.MAP:
+          self.success = {}
+          (_ktype480, _vtype481, _size479 ) = iprot.readMapBegin()
+          for _i483 in xrange(_size479):
+            _key484 = iprot.readString()
+            _val485 = set()
+            (_etype489, _size486) = iprot.readSetBegin()
+            for _i490 in xrange(_size486):
+              _elem491 = iprot.readI32()
+              _val485.add(_elem491)
+            iprot.readSetEnd()
+            self.success[_key484] = _val485
+          iprot.readMapEnd()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('listNamespaceIterators_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.MAP, 0)
+      oprot.writeMapBegin(TType.STRING, TType.SET, len(self.success))
+      for kiter492,viter493 in self.success.items():
+        oprot.writeString(kiter492)
+        oprot.writeSetBegin(TType.I32, len(viter493))
+        for iter494 in viter493:
+          oprot.writeI32(iter494)
+        oprot.writeSetEnd()
+      oprot.writeMapEnd()
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class checkNamespaceIteratorConflicts_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - setting
+   - scopes
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRUCT, 'setting', (IteratorSetting, IteratorSetting.thrift_spec), None, ), # 3
+    (4, TType.SET, 'scopes', (TType.I32,None), None, ), # 4
+  )
+
+  def __init__(self, login=None, namespaceName=None, setting=None, scopes=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.setting = setting
+    self.scopes = scopes
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.setting = IteratorSetting()
+          self.setting.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.SET:
+          self.scopes = set()
+          (_etype498, _size495) = iprot.readSetBegin()
+          for _i499 in xrange(_size495):
+            _elem500 = iprot.readI32()
+            self.scopes.add(_elem500)
+          iprot.readSetEnd()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('checkNamespaceIteratorConflicts_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.setting is not None:
+      oprot.writeFieldBegin('setting', TType.STRUCT, 3)
+      self.setting.write(oprot)
+      oprot.writeFieldEnd()
+    if self.scopes is not None:
+      oprot.writeFieldBegin('scopes', TType.SET, 4)
+      oprot.writeSetBegin(TType.I32, len(self.scopes))
+      for iter501 in self.scopes:
+        oprot.writeI32(iter501)
+      oprot.writeSetEnd()
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.setting)
+    value = (value * 31) ^ hash(self.scopes)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class checkNamespaceIteratorConflicts_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('checkNamespaceIteratorConflicts_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class addNamespaceConstraint_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - constraintClassName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRING, 'constraintClassName', None, None, ), # 3
+  )
+
+  def __init__(self, login=None, namespaceName=None, constraintClassName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.constraintClassName = constraintClassName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.constraintClassName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('addNamespaceConstraint_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.constraintClassName is not None:
+      oprot.writeFieldBegin('constraintClassName', TType.STRING, 3)
+      oprot.writeString(self.constraintClassName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.constraintClassName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class addNamespaceConstraint_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    (0, TType.I32, 'success', None, None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None, ouch3=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.I32:
+          self.success = iprot.readI32()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('addNamespaceConstraint_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.I32, 0)
+      oprot.writeI32(self.success)
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class removeNamespaceConstraint_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - id
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.I32, 'id', None, None, ), # 3
+  )
+
+  def __init__(self, login=None, namespaceName=None, id=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.id = id
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.I32:
+          self.id = iprot.readI32()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('removeNamespaceConstraint_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.id is not None:
+      oprot.writeFieldBegin('id', TType.I32, 3)
+      oprot.writeI32(self.id)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.id)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class removeNamespaceConstraint_result:
+  """
+  Attributes:
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, ouch1=None, ouch2=None, ouch3=None,):
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('removeNamespaceConstraint_result')
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class listNamespaceConstraints_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+  )
+
+  def __init__(self, login=None, namespaceName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('listNamespaceConstraints_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class listNamespaceConstraints_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    (0, TType.MAP, 'success', (TType.STRING,None,TType.I32,None), None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None, ouch3=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.MAP:
+          self.success = {}
+          (_ktype503, _vtype504, _size502 ) = iprot.readMapBegin()
+          for _i506 in xrange(_size502):
+            _key507 = iprot.readString()
+            _val508 = iprot.readI32()
+            self.success[_key507] = _val508
+          iprot.readMapEnd()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('listNamespaceConstraints_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.MAP, 0)
+      oprot.writeMapBegin(TType.STRING, TType.I32, len(self.success))
+      for kiter509,viter510 in self.success.items():
+        oprot.writeString(kiter509)
+        oprot.writeI32(viter510)
+      oprot.writeMapEnd()
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class testNamespaceClassLoad_args:
+  """
+  Attributes:
+   - login
+   - namespaceName
+   - className
+   - asTypeName
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'login', None, None, ), # 1
+    (2, TType.STRING, 'namespaceName', None, None, ), # 2
+    (3, TType.STRING, 'className', None, None, ), # 3
+    (4, TType.STRING, 'asTypeName', None, None, ), # 4
+  )
+
+  def __init__(self, login=None, namespaceName=None, className=None, asTypeName=None,):
+    self.login = login
+    self.namespaceName = namespaceName
+    self.className = className
+    self.asTypeName = asTypeName
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.login = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRING:
+          self.namespaceName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRING:
+          self.className = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      elif fid == 4:
+        if ftype == TType.STRING:
+          self.asTypeName = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('testNamespaceClassLoad_args')
+    if self.login is not None:
+      oprot.writeFieldBegin('login', TType.STRING, 1)
+      oprot.writeString(self.login)
+      oprot.writeFieldEnd()
+    if self.namespaceName is not None:
+      oprot.writeFieldBegin('namespaceName', TType.STRING, 2)
+      oprot.writeString(self.namespaceName)
+      oprot.writeFieldEnd()
+    if self.className is not None:
+      oprot.writeFieldBegin('className', TType.STRING, 3)
+      oprot.writeString(self.className)
+      oprot.writeFieldEnd()
+    if self.asTypeName is not None:
+      oprot.writeFieldBegin('asTypeName', TType.STRING, 4)
+      oprot.writeString(self.asTypeName)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.login)
+    value = (value * 31) ^ hash(self.namespaceName)
+    value = (value * 31) ^ hash(self.className)
+    value = (value * 31) ^ hash(self.asTypeName)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class testNamespaceClassLoad_result:
+  """
+  Attributes:
+   - success
+   - ouch1
+   - ouch2
+   - ouch3
+  """
+
+  thrift_spec = (
+    (0, TType.BOOL, 'success', None, None, ), # 0
+    (1, TType.STRUCT, 'ouch1', (AccumuloException, AccumuloException.thrift_spec), None, ), # 1
+    (2, TType.STRUCT, 'ouch2', (AccumuloSecurityException, AccumuloSecurityException.thrift_spec), None, ), # 2
+    (3, TType.STRUCT, 'ouch3', (NamespaceNotFoundException, NamespaceNotFoundException.thrift_spec), None, ), # 3
+  )
+
+  def __init__(self, success=None, ouch1=None, ouch2=None, ouch3=None,):
+    self.success = success
+    self.ouch1 = ouch1
+    self.ouch2 = ouch2
+    self.ouch3 = ouch3
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 0:
+        if ftype == TType.BOOL:
+          self.success = iprot.readBool()
+        else:
+          iprot.skip(ftype)
+      elif fid == 1:
+        if ftype == TType.STRUCT:
+          self.ouch1 = AccumuloException()
+          self.ouch1.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 2:
+        if ftype == TType.STRUCT:
+          self.ouch2 = AccumuloSecurityException()
+          self.ouch2.read(iprot)
+        else:
+          iprot.skip(ftype)
+      elif fid == 3:
+        if ftype == TType.STRUCT:
+          self.ouch3 = NamespaceNotFoundException()
+          self.ouch3.read(iprot)
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('testNamespaceClassLoad_result')
+    if self.success is not None:
+      oprot.writeFieldBegin('success', TType.BOOL, 0)
+      oprot.writeBool(self.success)
+      oprot.writeFieldEnd()
+    if self.ouch1 is not None:
+      oprot.writeFieldBegin('ouch1', TType.STRUCT, 1)
+      self.ouch1.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch2 is not None:
+      oprot.writeFieldBegin('ouch2', TType.STRUCT, 2)
+      self.ouch2.write(oprot)
+      oprot.writeFieldEnd()
+    if self.ouch3 is not None:
+      oprot.writeFieldBegin('ouch3', TType.STRUCT, 3)
+      self.ouch3.write(oprot)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.success)
+    value = (value * 31) ^ hash(self.ouch1)
+    value = (value * 31) ^ hash(self.ouch2)
+    value = (value * 31) ^ hash(self.ouch3)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
diff --git a/proxy/src/main/python/constants.py b/proxy/src/main/python/constants.py
index aea4826..8139236 100644
--- a/proxy/src/main/python/constants.py
+++ b/proxy/src/main/python/constants.py
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Autogenerated by Thrift Compiler (0.9.1)
+# Autogenerated by Thrift Compiler (0.9.3)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
diff --git a/proxy/src/main/python/ttypes.py b/proxy/src/main/python/ttypes.py
index 9444f71..87a977d 100644
--- a/proxy/src/main/python/ttypes.py
+++ b/proxy/src/main/python/ttypes.py
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Autogenerated by Thrift Compiler (0.9.1)
+# Autogenerated by Thrift Compiler (0.9.3)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -114,6 +114,41 @@
     "SYSTEM": 7,
   }
 
+class NamespacePermission:
+  READ = 0
+  WRITE = 1
+  ALTER_NAMESPACE = 2
+  GRANT = 3
+  ALTER_TABLE = 4
+  CREATE_TABLE = 5
+  DROP_TABLE = 6
+  BULK_IMPORT = 7
+  DROP_NAMESPACE = 8
+
+  _VALUES_TO_NAMES = {
+    0: "READ",
+    1: "WRITE",
+    2: "ALTER_NAMESPACE",
+    3: "GRANT",
+    4: "ALTER_TABLE",
+    5: "CREATE_TABLE",
+    6: "DROP_TABLE",
+    7: "BULK_IMPORT",
+    8: "DROP_NAMESPACE",
+  }
+
+  _NAMES_TO_VALUES = {
+    "READ": 0,
+    "WRITE": 1,
+    "ALTER_NAMESPACE": 2,
+    "GRANT": 3,
+    "ALTER_TABLE": 4,
+    "CREATE_TABLE": 5,
+    "DROP_TABLE": 6,
+    "BULK_IMPORT": 7,
+    "DROP_NAMESPACE": 8,
+  }
+
 class ScanType:
   SINGLE = 0
   BATCH = 1
@@ -303,27 +338,27 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.row = iprot.readString();
+          self.row = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.colFamily = iprot.readString();
+          self.colFamily = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.colQualifier = iprot.readString();
+          self.colQualifier = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.colVisibility = iprot.readString();
+          self.colVisibility = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.I64:
-          self.timestamp = iprot.readI64();
+          self.timestamp = iprot.readI64()
         else:
           iprot.skip(ftype)
       else:
@@ -363,6 +398,15 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.row)
+    value = (value * 31) ^ hash(self.colFamily)
+    value = (value * 31) ^ hash(self.colQualifier)
+    value = (value * 31) ^ hash(self.colVisibility)
+    value = (value * 31) ^ hash(self.timestamp)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -414,32 +458,32 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.colFamily = iprot.readString();
+          self.colFamily = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.colQualifier = iprot.readString();
+          self.colQualifier = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.colVisibility = iprot.readString();
+          self.colVisibility = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I64:
-          self.timestamp = iprot.readI64();
+          self.timestamp = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.STRING:
-          self.value = iprot.readString();
+          self.value = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 6:
         if ftype == TType.BOOL:
-          self.deleteCell = iprot.readBool();
+          self.deleteCell = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -483,6 +527,16 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.colFamily)
+    value = (value * 31) ^ hash(self.colQualifier)
+    value = (value * 31) ^ hash(self.colVisibility)
+    value = (value * 31) ^ hash(self.timestamp)
+    value = (value * 31) ^ hash(self.value)
+    value = (value * 31) ^ hash(self.deleteCell)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -525,14 +579,14 @@
           self.tables = []
           (_etype3, _size0) = iprot.readListBegin()
           for _i4 in xrange(_size0):
-            _elem5 = iprot.readString();
+            _elem5 = iprot.readString()
             self.tables.append(_elem5)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.I64:
-          self.usage = iprot.readI64();
+          self.usage = iprot.readI64()
         else:
           iprot.skip(ftype)
       else:
@@ -563,6 +617,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.tables)
+    value = (value * 31) ^ hash(self.usage)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -608,7 +668,7 @@
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.value = iprot.readString();
+          self.value = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -636,6 +696,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.key)
+    value = (value * 31) ^ hash(self.value)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -686,7 +752,7 @@
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.BOOL:
-          self.more = iprot.readBool();
+          self.more = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -717,6 +783,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.results)
+    value = (value * 31) ^ hash(self.more)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -768,7 +840,7 @@
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.BOOL:
-          self.startInclusive = iprot.readBool();
+          self.startInclusive = iprot.readBool()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -779,7 +851,7 @@
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.BOOL:
-          self.stopInclusive = iprot.readBool();
+          self.stopInclusive = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -815,6 +887,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.start)
+    value = (value * 31) ^ hash(self.startInclusive)
+    value = (value * 31) ^ hash(self.stop)
+    value = (value * 31) ^ hash(self.stopInclusive)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -854,12 +934,12 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.colFamily = iprot.readString();
+          self.colFamily = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.colQualifier = iprot.readString();
+          self.colQualifier = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -887,6 +967,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.colFamily)
+    value = (value * 31) ^ hash(self.colQualifier)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -932,17 +1018,17 @@
         break
       if fid == 1:
         if ftype == TType.I32:
-          self.priority = iprot.readI32();
+          self.priority = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.name = iprot.readString();
+          self.name = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.iteratorClass = iprot.readString();
+          self.iteratorClass = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
@@ -950,8 +1036,8 @@
           self.properties = {}
           (_ktype15, _vtype16, _size14 ) = iprot.readMapBegin()
           for _i18 in xrange(_size14):
-            _key19 = iprot.readString();
-            _val20 = iprot.readString();
+            _key19 = iprot.readString()
+            _val20 = iprot.readString()
             self.properties[_key19] = _val20
           iprot.readMapEnd()
         else:
@@ -993,6 +1079,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.priority)
+    value = (value * 31) ^ hash(self.name)
+    value = (value * 31) ^ hash(self.iteratorClass)
+    value = (value * 31) ^ hash(self.properties)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1044,7 +1138,7 @@
           self.authorizations = set()
           (_etype26, _size23) = iprot.readSetBegin()
           for _i27 in xrange(_size23):
-            _elem28 = iprot.readString();
+            _elem28 = iprot.readString()
             self.authorizations.add(_elem28)
           iprot.readSetEnd()
         else:
@@ -1079,7 +1173,7 @@
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.I32:
-          self.bufferSize = iprot.readI32();
+          self.bufferSize = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -1128,6 +1222,15 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.authorizations)
+    value = (value * 31) ^ hash(self.range)
+    value = (value * 31) ^ hash(self.columns)
+    value = (value * 31) ^ hash(self.iterators)
+    value = (value * 31) ^ hash(self.bufferSize)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1179,7 +1282,7 @@
           self.authorizations = set()
           (_etype47, _size44) = iprot.readSetBegin()
           for _i48 in xrange(_size44):
-            _elem49 = iprot.readString();
+            _elem49 = iprot.readString()
             self.authorizations.add(_elem49)
           iprot.readSetEnd()
         else:
@@ -1219,7 +1322,7 @@
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.I32:
-          self.threads = iprot.readI32();
+          self.threads = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -1271,6 +1374,15 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.authorizations)
+    value = (value * 31) ^ hash(self.ranges)
+    value = (value * 31) ^ hash(self.columns)
+    value = (value * 31) ^ hash(self.iterators)
+    value = (value * 31) ^ hash(self.threads)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1316,7 +1428,7 @@
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.BOOL:
-          self.hasNext = iprot.readBool();
+          self.hasNext = iprot.readBool()
         else:
           iprot.skip(ftype)
       else:
@@ -1344,6 +1456,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.keyValue)
+    value = (value * 31) ^ hash(self.hasNext)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1386,17 +1504,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.tableId = iprot.readString();
+          self.tableId = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.endRow = iprot.readString();
+          self.endRow = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.prevEndRow = iprot.readString();
+          self.prevEndRow = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -1428,6 +1546,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.tableId)
+    value = (value * 31) ^ hash(self.endRow)
+    value = (value * 31) ^ hash(self.prevEndRow)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1470,17 +1595,17 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.colFamily = iprot.readString();
+          self.colFamily = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.colQualifier = iprot.readString();
+          self.colQualifier = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.colVisibility = iprot.readString();
+          self.colVisibility = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -1512,6 +1637,13 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.colFamily)
+    value = (value * 31) ^ hash(self.colQualifier)
+    value = (value * 31) ^ hash(self.colVisibility)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1563,12 +1695,12 @@
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.I64:
-          self.timestamp = iprot.readI64();
+          self.timestamp = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.value = iprot.readString();
+          self.value = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
@@ -1618,6 +1750,14 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.column)
+    value = (value * 31) ^ hash(self.timestamp)
+    value = (value * 31) ^ hash(self.value)
+    value = (value * 31) ^ hash(self.iterators)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1709,6 +1849,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.conditions)
+    value = (value * 31) ^ hash(self.updates)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1757,17 +1903,17 @@
         break
       if fid == 1:
         if ftype == TType.I64:
-          self.maxMemory = iprot.readI64();
+          self.maxMemory = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.I64:
-          self.timeoutMs = iprot.readI64();
+          self.timeoutMs = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.I32:
-          self.threads = iprot.readI32();
+          self.threads = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 4:
@@ -1775,14 +1921,14 @@
           self.authorizations = set()
           (_etype96, _size93) = iprot.readSetBegin()
           for _i97 in xrange(_size93):
-            _elem98 = iprot.readString();
+            _elem98 = iprot.readString()
             self.authorizations.add(_elem98)
           iprot.readSetEnd()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.I32:
-          self.durability = iprot.readI32();
+          self.durability = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -1825,6 +1971,15 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.maxMemory)
+    value = (value * 31) ^ hash(self.timeoutMs)
+    value = (value * 31) ^ hash(self.threads)
+    value = (value * 31) ^ hash(self.authorizations)
+    value = (value * 31) ^ hash(self.durability)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -1891,37 +2046,37 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.client = iprot.readString();
+          self.client = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.STRING:
-          self.user = iprot.readString();
+          self.user = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.STRING:
-          self.table = iprot.readString();
+          self.table = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I64:
-          self.age = iprot.readI64();
+          self.age = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.I64:
-          self.idleTime = iprot.readI64();
+          self.idleTime = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 6:
         if ftype == TType.I32:
-          self.type = iprot.readI32();
+          self.type = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 7:
         if ftype == TType.I32:
-          self.state = iprot.readI32();
+          self.state = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 8:
@@ -1957,7 +2112,7 @@
           self.authorizations = []
           (_etype115, _size112) = iprot.readListBegin()
           for _i116 in xrange(_size112):
-            _elem117 = iprot.readString();
+            _elem117 = iprot.readString()
             self.authorizations.append(_elem117)
           iprot.readListEnd()
         else:
@@ -2032,6 +2187,21 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.client)
+    value = (value * 31) ^ hash(self.user)
+    value = (value * 31) ^ hash(self.table)
+    value = (value * 31) ^ hash(self.age)
+    value = (value * 31) ^ hash(self.idleTime)
+    value = (value * 31) ^ hash(self.type)
+    value = (value * 31) ^ hash(self.state)
+    value = (value * 31) ^ hash(self.extent)
+    value = (value * 31) ^ hash(self.columns)
+    value = (value * 31) ^ hash(self.iterators)
+    value = (value * 31) ^ hash(self.authorizations)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2101,7 +2271,7 @@
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.I64:
-          self.age = iprot.readI64();
+          self.age = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 3:
@@ -2109,39 +2279,39 @@
           self.inputFiles = []
           (_etype124, _size121) = iprot.readListBegin()
           for _i125 in xrange(_size121):
-            _elem126 = iprot.readString();
+            _elem126 = iprot.readString()
             self.inputFiles.append(_elem126)
           iprot.readListEnd()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.STRING:
-          self.outputFile = iprot.readString();
+          self.outputFile = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.I32:
-          self.type = iprot.readI32();
+          self.type = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 6:
         if ftype == TType.I32:
-          self.reason = iprot.readI32();
+          self.reason = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 7:
         if ftype == TType.STRING:
-          self.localityGroup = iprot.readString();
+          self.localityGroup = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 8:
         if ftype == TType.I64:
-          self.entriesRead = iprot.readI64();
+          self.entriesRead = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 9:
         if ftype == TType.I64:
-          self.entriesWritten = iprot.readI64();
+          self.entriesWritten = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 10:
@@ -2218,6 +2388,20 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.extent)
+    value = (value * 31) ^ hash(self.age)
+    value = (value * 31) ^ hash(self.inputFiles)
+    value = (value * 31) ^ hash(self.outputFile)
+    value = (value * 31) ^ hash(self.type)
+    value = (value * 31) ^ hash(self.reason)
+    value = (value * 31) ^ hash(self.localityGroup)
+    value = (value * 31) ^ hash(self.entriesRead)
+    value = (value * 31) ^ hash(self.entriesWritten)
+    value = (value * 31) ^ hash(self.iterators)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2266,27 +2450,27 @@
         break
       if fid == 1:
         if ftype == TType.I64:
-          self.maxMemory = iprot.readI64();
+          self.maxMemory = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 2:
         if ftype == TType.I64:
-          self.latencyMs = iprot.readI64();
+          self.latencyMs = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 3:
         if ftype == TType.I64:
-          self.timeoutMs = iprot.readI64();
+          self.timeoutMs = iprot.readI64()
         else:
           iprot.skip(ftype)
       elif fid == 4:
         if ftype == TType.I32:
-          self.threads = iprot.readI32();
+          self.threads = iprot.readI32()
         else:
           iprot.skip(ftype)
       elif fid == 5:
         if ftype == TType.I32:
-          self.durability = iprot.readI32();
+          self.durability = iprot.readI32()
         else:
           iprot.skip(ftype)
       else:
@@ -2326,6 +2510,15 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.maxMemory)
+    value = (value * 31) ^ hash(self.latencyMs)
+    value = (value * 31) ^ hash(self.timeoutMs)
+    value = (value * 31) ^ hash(self.threads)
+    value = (value * 31) ^ hash(self.durability)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2365,7 +2558,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.className = iprot.readString();
+          self.className = iprot.readString()
         else:
           iprot.skip(ftype)
       elif fid == 2:
@@ -2373,8 +2566,8 @@
           self.options = {}
           (_ktype136, _vtype137, _size135 ) = iprot.readMapBegin()
           for _i139 in xrange(_size135):
-            _key140 = iprot.readString();
-            _val141 = iprot.readString();
+            _key140 = iprot.readString()
+            _val141 = iprot.readString()
             self.options[_key140] = _val141
           iprot.readMapEnd()
         else:
@@ -2408,6 +2601,12 @@
     return
 
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.className)
+    value = (value * 31) ^ hash(self.options)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2444,7 +2643,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2471,6 +2670,11 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2507,7 +2711,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2534,6 +2738,11 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2570,7 +2779,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2597,6 +2806,11 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2633,7 +2847,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2660,6 +2874,11 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2696,7 +2915,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2723,6 +2942,11 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2759,7 +2983,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2786,6 +3010,11 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2822,7 +3051,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2849,6 +3078,11 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
@@ -2885,7 +3119,7 @@
         break
       if fid == 1:
         if ftype == TType.STRING:
-          self.msg = iprot.readString();
+          self.msg = iprot.readString()
         else:
           iprot.skip(ftype)
       else:
@@ -2912,6 +3146,215 @@
   def __str__(self):
     return repr(self)
 
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class NamespaceExistsException(TException):
+  """
+  Attributes:
+   - msg
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'msg', None, None, ), # 1
+  )
+
+  def __init__(self, msg=None,):
+    self.msg = msg
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.msg = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('NamespaceExistsException')
+    if self.msg is not None:
+      oprot.writeFieldBegin('msg', TType.STRING, 1)
+      oprot.writeString(self.msg)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __str__(self):
+    return repr(self)
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
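Each of the new Namespace exception classes above carries a single optional `msg` field and closes with the same boilerplate: structural `__eq__` over `__dict__`, plus an explicit `__ne__` (required on Python 2, which these generated files target via `iteritems`/`xrange`). A small Python 3 sketch of that exception shape, with an illustrative class name:

```python
class NamespaceExistsExceptionSketch(Exception):
    """Illustrative stand-in for the generated TException subclass:
    one optional `msg` field, structural equality over __dict__."""
    def __init__(self, msg=None):
        super().__init__(msg)
        self.msg = msg

    def __eq__(self, other):
        return isinstance(other, self.__class__) and self.__dict__ == other.__dict__

    def __ne__(self, other):
        # Redundant on Python 3, but Python 2 does not derive != from ==.
        return not (self == other)
```

The `isinstance(other, self.__class__)` guard means two different exception types never compare equal even when they carry the same `msg`.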
+class NamespaceNotFoundException(TException):
+  """
+  Attributes:
+   - msg
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'msg', None, None, ), # 1
+  )
+
+  def __init__(self, msg=None,):
+    self.msg = msg
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.msg = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('NamespaceNotFoundException')
+    if self.msg is not None:
+      oprot.writeFieldBegin('msg', TType.STRING, 1)
+      oprot.writeString(self.msg)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __str__(self):
+    return repr(self)
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
+  def __repr__(self):
+    L = ['%s=%r' % (key, value)
+      for key, value in self.__dict__.iteritems()]
+    return '%s(%s)' % (self.__class__.__name__, ', '.join(L))
+
+  def __eq__(self, other):
+    return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+  def __ne__(self, other):
+    return not (self == other)
+
+class NamespaceNotEmptyException(TException):
+  """
+  Attributes:
+   - msg
+  """
+
+  thrift_spec = (
+    None, # 0
+    (1, TType.STRING, 'msg', None, None, ), # 1
+  )
+
+  def __init__(self, msg=None,):
+    self.msg = msg
+
+  def read(self, iprot):
+    if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+      fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+      return
+    iprot.readStructBegin()
+    while True:
+      (fname, ftype, fid) = iprot.readFieldBegin()
+      if ftype == TType.STOP:
+        break
+      if fid == 1:
+        if ftype == TType.STRING:
+          self.msg = iprot.readString()
+        else:
+          iprot.skip(ftype)
+      else:
+        iprot.skip(ftype)
+      iprot.readFieldEnd()
+    iprot.readStructEnd()
+
+  def write(self, oprot):
+    if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+      oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+      return
+    oprot.writeStructBegin('NamespaceNotEmptyException')
+    if self.msg is not None:
+      oprot.writeFieldBegin('msg', TType.STRING, 1)
+      oprot.writeString(self.msg)
+      oprot.writeFieldEnd()
+    oprot.writeFieldStop()
+    oprot.writeStructEnd()
+
+  def validate(self):
+    return
+
+
+  def __str__(self):
+    return repr(self)
+
+  def __hash__(self):
+    value = 17
+    value = (value * 31) ^ hash(self.msg)
+    return value
+
   def __repr__(self):
     L = ['%s=%r' % (key, value)
       for key, value in self.__dict__.iteritems()]
diff --git a/proxy/src/main/ruby/accumulo_proxy.rb b/proxy/src/main/ruby/accumulo_proxy.rb
index f8d892e..e02ba16 100644
--- a/proxy/src/main/ruby/accumulo_proxy.rb
+++ b/proxy/src/main/ruby/accumulo_proxy.rb
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Autogenerated by Thrift Compiler (0.9.1)
+# Autogenerated by Thrift Compiler (0.9.3)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -1041,6 +1041,55 @@
         return
       end
 
+      def grantNamespacePermission(login, user, namespaceName, perm)
+        send_grantNamespacePermission(login, user, namespaceName, perm)
+        recv_grantNamespacePermission()
+      end
+
+      def send_grantNamespacePermission(login, user, namespaceName, perm)
+        send_message('grantNamespacePermission', GrantNamespacePermission_args, :login => login, :user => user, :namespaceName => namespaceName, :perm => perm)
+      end
+
+      def recv_grantNamespacePermission()
+        result = receive_message(GrantNamespacePermission_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        return
+      end
+
+      def hasNamespacePermission(login, user, namespaceName, perm)
+        send_hasNamespacePermission(login, user, namespaceName, perm)
+        return recv_hasNamespacePermission()
+      end
+
+      def send_hasNamespacePermission(login, user, namespaceName, perm)
+        send_message('hasNamespacePermission', HasNamespacePermission_args, :login => login, :user => user, :namespaceName => namespaceName, :perm => perm)
+      end
+
+      def recv_hasNamespacePermission()
+        result = receive_message(HasNamespacePermission_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'hasNamespacePermission failed: unknown result')
+      end
+
+      def revokeNamespacePermission(login, user, namespaceName, perm)
+        send_revokeNamespacePermission(login, user, namespaceName, perm)
+        recv_revokeNamespacePermission()
+      end
+
+      def send_revokeNamespacePermission(login, user, namespaceName, perm)
+        send_message('revokeNamespacePermission', RevokeNamespacePermission_args, :login => login, :user => user, :namespaceName => namespaceName, :perm => perm)
+      end
+
+      def recv_revokeNamespacePermission()
+        result = receive_message(RevokeNamespacePermission_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        return
+      end
+
       def createBatchScanner(login, tableName, options)
         send_createBatchScanner(login, tableName, options)
         return recv_createBatchScanner()
@@ -1185,7 +1234,7 @@
       end
 
       def send_update(writer, cells)
-        send_message('update', Update_args, :writer => writer, :cells => cells)
+        send_oneway_message('update', Update_args, :writer => writer, :cells => cells)
       end
       def flush(writer)
         send_flush(writer)
@@ -1317,6 +1366,350 @@
         raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'getFollowing failed: unknown result')
       end
 
+      def systemNamespace()
+        send_systemNamespace()
+        return recv_systemNamespace()
+      end
+
+      def send_systemNamespace()
+        send_message('systemNamespace', SystemNamespace_args)
+      end
+
+      def recv_systemNamespace()
+        result = receive_message(SystemNamespace_result)
+        return result.success unless result.success.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'systemNamespace failed: unknown result')
+      end
+
+      def defaultNamespace()
+        send_defaultNamespace()
+        return recv_defaultNamespace()
+      end
+
+      def send_defaultNamespace()
+        send_message('defaultNamespace', DefaultNamespace_args)
+      end
+
+      def recv_defaultNamespace()
+        result = receive_message(DefaultNamespace_result)
+        return result.success unless result.success.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'defaultNamespace failed: unknown result')
+      end
+
+      def listNamespaces(login)
+        send_listNamespaces(login)
+        return recv_listNamespaces()
+      end
+
+      def send_listNamespaces(login)
+        send_message('listNamespaces', ListNamespaces_args, :login => login)
+      end
+
+      def recv_listNamespaces()
+        result = receive_message(ListNamespaces_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'listNamespaces failed: unknown result')
+      end
+
+      def namespaceExists(login, namespaceName)
+        send_namespaceExists(login, namespaceName)
+        return recv_namespaceExists()
+      end
+
+      def send_namespaceExists(login, namespaceName)
+        send_message('namespaceExists', NamespaceExists_args, :login => login, :namespaceName => namespaceName)
+      end
+
+      def recv_namespaceExists()
+        result = receive_message(NamespaceExists_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'namespaceExists failed: unknown result')
+      end
+
+      def createNamespace(login, namespaceName)
+        send_createNamespace(login, namespaceName)
+        recv_createNamespace()
+      end
+
+      def send_createNamespace(login, namespaceName)
+        send_message('createNamespace', CreateNamespace_args, :login => login, :namespaceName => namespaceName)
+      end
+
+      def recv_createNamespace()
+        result = receive_message(CreateNamespace_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        return
+      end
+
+      def deleteNamespace(login, namespaceName)
+        send_deleteNamespace(login, namespaceName)
+        recv_deleteNamespace()
+      end
+
+      def send_deleteNamespace(login, namespaceName)
+        send_message('deleteNamespace', DeleteNamespace_args, :login => login, :namespaceName => namespaceName)
+      end
+
+      def recv_deleteNamespace()
+        result = receive_message(DeleteNamespace_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise result.ouch4 unless result.ouch4.nil?
+        return
+      end
+
+      def renameNamespace(login, oldNamespaceName, newNamespaceName)
+        send_renameNamespace(login, oldNamespaceName, newNamespaceName)
+        recv_renameNamespace()
+      end
+
+      def send_renameNamespace(login, oldNamespaceName, newNamespaceName)
+        send_message('renameNamespace', RenameNamespace_args, :login => login, :oldNamespaceName => oldNamespaceName, :newNamespaceName => newNamespaceName)
+      end
+
+      def recv_renameNamespace()
+        result = receive_message(RenameNamespace_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise result.ouch4 unless result.ouch4.nil?
+        return
+      end
+
+      def setNamespaceProperty(login, namespaceName, property, value)
+        send_setNamespaceProperty(login, namespaceName, property, value)
+        recv_setNamespaceProperty()
+      end
+
+      def send_setNamespaceProperty(login, namespaceName, property, value)
+        send_message('setNamespaceProperty', SetNamespaceProperty_args, :login => login, :namespaceName => namespaceName, :property => property, :value => value)
+      end
+
+      def recv_setNamespaceProperty()
+        result = receive_message(SetNamespaceProperty_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        return
+      end
+
+      def removeNamespaceProperty(login, namespaceName, property)
+        send_removeNamespaceProperty(login, namespaceName, property)
+        recv_removeNamespaceProperty()
+      end
+
+      def send_removeNamespaceProperty(login, namespaceName, property)
+        send_message('removeNamespaceProperty', RemoveNamespaceProperty_args, :login => login, :namespaceName => namespaceName, :property => property)
+      end
+
+      def recv_removeNamespaceProperty()
+        result = receive_message(RemoveNamespaceProperty_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        return
+      end
+
+      def getNamespaceProperties(login, namespaceName)
+        send_getNamespaceProperties(login, namespaceName)
+        return recv_getNamespaceProperties()
+      end
+
+      def send_getNamespaceProperties(login, namespaceName)
+        send_message('getNamespaceProperties', GetNamespaceProperties_args, :login => login, :namespaceName => namespaceName)
+      end
+
+      def recv_getNamespaceProperties()
+        result = receive_message(GetNamespaceProperties_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'getNamespaceProperties failed: unknown result')
+      end
+
+      def namespaceIdMap(login)
+        send_namespaceIdMap(login)
+        return recv_namespaceIdMap()
+      end
+
+      def send_namespaceIdMap(login)
+        send_message('namespaceIdMap', NamespaceIdMap_args, :login => login)
+      end
+
+      def recv_namespaceIdMap()
+        result = receive_message(NamespaceIdMap_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'namespaceIdMap failed: unknown result')
+      end
+
+      def attachNamespaceIterator(login, namespaceName, setting, scopes)
+        send_attachNamespaceIterator(login, namespaceName, setting, scopes)
+        recv_attachNamespaceIterator()
+      end
+
+      def send_attachNamespaceIterator(login, namespaceName, setting, scopes)
+        send_message('attachNamespaceIterator', AttachNamespaceIterator_args, :login => login, :namespaceName => namespaceName, :setting => setting, :scopes => scopes)
+      end
+
+      def recv_attachNamespaceIterator()
+        result = receive_message(AttachNamespaceIterator_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        return
+      end
+
+      def removeNamespaceIterator(login, namespaceName, name, scopes)
+        send_removeNamespaceIterator(login, namespaceName, name, scopes)
+        recv_removeNamespaceIterator()
+      end
+
+      def send_removeNamespaceIterator(login, namespaceName, name, scopes)
+        send_message('removeNamespaceIterator', RemoveNamespaceIterator_args, :login => login, :namespaceName => namespaceName, :name => name, :scopes => scopes)
+      end
+
+      def recv_removeNamespaceIterator()
+        result = receive_message(RemoveNamespaceIterator_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        return
+      end
+
+      def getNamespaceIteratorSetting(login, namespaceName, name, scope)
+        send_getNamespaceIteratorSetting(login, namespaceName, name, scope)
+        return recv_getNamespaceIteratorSetting()
+      end
+
+      def send_getNamespaceIteratorSetting(login, namespaceName, name, scope)
+        send_message('getNamespaceIteratorSetting', GetNamespaceIteratorSetting_args, :login => login, :namespaceName => namespaceName, :name => name, :scope => scope)
+      end
+
+      def recv_getNamespaceIteratorSetting()
+        result = receive_message(GetNamespaceIteratorSetting_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'getNamespaceIteratorSetting failed: unknown result')
+      end
+
+      def listNamespaceIterators(login, namespaceName)
+        send_listNamespaceIterators(login, namespaceName)
+        return recv_listNamespaceIterators()
+      end
+
+      def send_listNamespaceIterators(login, namespaceName)
+        send_message('listNamespaceIterators', ListNamespaceIterators_args, :login => login, :namespaceName => namespaceName)
+      end
+
+      def recv_listNamespaceIterators()
+        result = receive_message(ListNamespaceIterators_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'listNamespaceIterators failed: unknown result')
+      end
+
+      def checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes)
+        send_checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes)
+        recv_checkNamespaceIteratorConflicts()
+      end
+
+      def send_checkNamespaceIteratorConflicts(login, namespaceName, setting, scopes)
+        send_message('checkNamespaceIteratorConflicts', CheckNamespaceIteratorConflicts_args, :login => login, :namespaceName => namespaceName, :setting => setting, :scopes => scopes)
+      end
+
+      def recv_checkNamespaceIteratorConflicts()
+        result = receive_message(CheckNamespaceIteratorConflicts_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        return
+      end
+
+      def addNamespaceConstraint(login, namespaceName, constraintClassName)
+        send_addNamespaceConstraint(login, namespaceName, constraintClassName)
+        return recv_addNamespaceConstraint()
+      end
+
+      def send_addNamespaceConstraint(login, namespaceName, constraintClassName)
+        send_message('addNamespaceConstraint', AddNamespaceConstraint_args, :login => login, :namespaceName => namespaceName, :constraintClassName => constraintClassName)
+      end
+
+      def recv_addNamespaceConstraint()
+        result = receive_message(AddNamespaceConstraint_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'addNamespaceConstraint failed: unknown result')
+      end
+
+      def removeNamespaceConstraint(login, namespaceName, id)
+        send_removeNamespaceConstraint(login, namespaceName, id)
+        recv_removeNamespaceConstraint()
+      end
+
+      def send_removeNamespaceConstraint(login, namespaceName, id)
+        send_message('removeNamespaceConstraint', RemoveNamespaceConstraint_args, :login => login, :namespaceName => namespaceName, :id => id)
+      end
+
+      def recv_removeNamespaceConstraint()
+        result = receive_message(RemoveNamespaceConstraint_result)
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        return
+      end
+
+      def listNamespaceConstraints(login, namespaceName)
+        send_listNamespaceConstraints(login, namespaceName)
+        return recv_listNamespaceConstraints()
+      end
+
+      def send_listNamespaceConstraints(login, namespaceName)
+        send_message('listNamespaceConstraints', ListNamespaceConstraints_args, :login => login, :namespaceName => namespaceName)
+      end
+
+      def recv_listNamespaceConstraints()
+        result = receive_message(ListNamespaceConstraints_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'listNamespaceConstraints failed: unknown result')
+      end
+
+      def testNamespaceClassLoad(login, namespaceName, className, asTypeName)
+        send_testNamespaceClassLoad(login, namespaceName, className, asTypeName)
+        return recv_testNamespaceClassLoad()
+      end
+
+      def send_testNamespaceClassLoad(login, namespaceName, className, asTypeName)
+        send_message('testNamespaceClassLoad', TestNamespaceClassLoad_args, :login => login, :namespaceName => namespaceName, :className => className, :asTypeName => asTypeName)
+      end
+
+      def recv_testNamespaceClassLoad()
+        result = receive_message(TestNamespaceClassLoad_result)
+        return result.success unless result.success.nil?
+        raise result.ouch1 unless result.ouch1.nil?
+        raise result.ouch2 unless result.ouch2.nil?
+        raise result.ouch3 unless result.ouch3.nil?
+        raise ::Thrift::ApplicationException.new(::Thrift::ApplicationException::MISSING_RESULT, 'testNamespaceClassLoad failed: unknown result')
+      end
+
     end
 
     class Processor
@@ -2152,6 +2545,45 @@
         write_result(result, oprot, 'revokeTablePermission', seqid)
       end
 
+      def process_grantNamespacePermission(seqid, iprot, oprot)
+        args = read_args(iprot, GrantNamespacePermission_args)
+        result = GrantNamespacePermission_result.new()
+        begin
+          @handler.grantNamespacePermission(args.login, args.user, args.namespaceName, args.perm)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        end
+        write_result(result, oprot, 'grantNamespacePermission', seqid)
+      end
+
+      def process_hasNamespacePermission(seqid, iprot, oprot)
+        args = read_args(iprot, HasNamespacePermission_args)
+        result = HasNamespacePermission_result.new()
+        begin
+          result.success = @handler.hasNamespacePermission(args.login, args.user, args.namespaceName, args.perm)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        end
+        write_result(result, oprot, 'hasNamespacePermission', seqid)
+      end
+
+      def process_revokeNamespacePermission(seqid, iprot, oprot)
+        args = read_args(iprot, RevokeNamespacePermission_args)
+        result = RevokeNamespacePermission_result.new()
+        begin
+          @handler.revokeNamespacePermission(args.login, args.user, args.namespaceName, args.perm)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        end
+        write_result(result, oprot, 'revokeNamespacePermission', seqid)
+      end
+
       def process_createBatchScanner(seqid, iprot, oprot)
         args = read_args(iprot, CreateBatchScanner_args)
         result = CreateBatchScanner_result.new()
@@ -2364,6 +2796,288 @@
         write_result(result, oprot, 'getFollowing', seqid)
       end
 
+      def process_systemNamespace(seqid, iprot, oprot)
+        args = read_args(iprot, SystemNamespace_args)
+        result = SystemNamespace_result.new()
+        result.success = @handler.systemNamespace()
+        write_result(result, oprot, 'systemNamespace', seqid)
+      end
+
+      def process_defaultNamespace(seqid, iprot, oprot)
+        args = read_args(iprot, DefaultNamespace_args)
+        result = DefaultNamespace_result.new()
+        result.success = @handler.defaultNamespace()
+        write_result(result, oprot, 'defaultNamespace', seqid)
+      end
+
+      def process_listNamespaces(seqid, iprot, oprot)
+        args = read_args(iprot, ListNamespaces_args)
+        result = ListNamespaces_result.new()
+        begin
+          result.success = @handler.listNamespaces(args.login)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        end
+        write_result(result, oprot, 'listNamespaces', seqid)
+      end
+
+      def process_namespaceExists(seqid, iprot, oprot)
+        args = read_args(iprot, NamespaceExists_args)
+        result = NamespaceExists_result.new()
+        begin
+          result.success = @handler.namespaceExists(args.login, args.namespaceName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        end
+        write_result(result, oprot, 'namespaceExists', seqid)
+      end
+
+      def process_createNamespace(seqid, iprot, oprot)
+        args = read_args(iprot, CreateNamespace_args)
+        result = CreateNamespace_result.new()
+        begin
+          @handler.createNamespace(args.login, args.namespaceName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceExistsException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'createNamespace', seqid)
+      end
+
+      def process_deleteNamespace(seqid, iprot, oprot)
+        args = read_args(iprot, DeleteNamespace_args)
+        result = DeleteNamespace_result.new()
+        begin
+          @handler.deleteNamespace(args.login, args.namespaceName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        rescue ::Accumulo::NamespaceNotEmptyException => ouch4
+          result.ouch4 = ouch4
+        end
+        write_result(result, oprot, 'deleteNamespace', seqid)
+      end
+
+      def process_renameNamespace(seqid, iprot, oprot)
+        args = read_args(iprot, RenameNamespace_args)
+        result = RenameNamespace_result.new()
+        begin
+          @handler.renameNamespace(args.login, args.oldNamespaceName, args.newNamespaceName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        rescue ::Accumulo::NamespaceExistsException => ouch4
+          result.ouch4 = ouch4
+        end
+        write_result(result, oprot, 'renameNamespace', seqid)
+      end
+
+      def process_setNamespaceProperty(seqid, iprot, oprot)
+        args = read_args(iprot, SetNamespaceProperty_args)
+        result = SetNamespaceProperty_result.new()
+        begin
+          @handler.setNamespaceProperty(args.login, args.namespaceName, args.property, args.value)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'setNamespaceProperty', seqid)
+      end
+
+      def process_removeNamespaceProperty(seqid, iprot, oprot)
+        args = read_args(iprot, RemoveNamespaceProperty_args)
+        result = RemoveNamespaceProperty_result.new()
+        begin
+          @handler.removeNamespaceProperty(args.login, args.namespaceName, args.property)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'removeNamespaceProperty', seqid)
+      end
+
+      def process_getNamespaceProperties(seqid, iprot, oprot)
+        args = read_args(iprot, GetNamespaceProperties_args)
+        result = GetNamespaceProperties_result.new()
+        begin
+          result.success = @handler.getNamespaceProperties(args.login, args.namespaceName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'getNamespaceProperties', seqid)
+      end
+
+      def process_namespaceIdMap(seqid, iprot, oprot)
+        args = read_args(iprot, NamespaceIdMap_args)
+        result = NamespaceIdMap_result.new()
+        begin
+          result.success = @handler.namespaceIdMap(args.login)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        end
+        write_result(result, oprot, 'namespaceIdMap', seqid)
+      end
+
+      def process_attachNamespaceIterator(seqid, iprot, oprot)
+        args = read_args(iprot, AttachNamespaceIterator_args)
+        result = AttachNamespaceIterator_result.new()
+        begin
+          @handler.attachNamespaceIterator(args.login, args.namespaceName, args.setting, args.scopes)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'attachNamespaceIterator', seqid)
+      end
+
+      def process_removeNamespaceIterator(seqid, iprot, oprot)
+        args = read_args(iprot, RemoveNamespaceIterator_args)
+        result = RemoveNamespaceIterator_result.new()
+        begin
+          @handler.removeNamespaceIterator(args.login, args.namespaceName, args.name, args.scopes)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'removeNamespaceIterator', seqid)
+      end
+
+      def process_getNamespaceIteratorSetting(seqid, iprot, oprot)
+        args = read_args(iprot, GetNamespaceIteratorSetting_args)
+        result = GetNamespaceIteratorSetting_result.new()
+        begin
+          result.success = @handler.getNamespaceIteratorSetting(args.login, args.namespaceName, args.name, args.scope)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'getNamespaceIteratorSetting', seqid)
+      end
+
+      def process_listNamespaceIterators(seqid, iprot, oprot)
+        args = read_args(iprot, ListNamespaceIterators_args)
+        result = ListNamespaceIterators_result.new()
+        begin
+          result.success = @handler.listNamespaceIterators(args.login, args.namespaceName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'listNamespaceIterators', seqid)
+      end
+
+      def process_checkNamespaceIteratorConflicts(seqid, iprot, oprot)
+        args = read_args(iprot, CheckNamespaceIteratorConflicts_args)
+        result = CheckNamespaceIteratorConflicts_result.new()
+        begin
+          @handler.checkNamespaceIteratorConflicts(args.login, args.namespaceName, args.setting, args.scopes)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'checkNamespaceIteratorConflicts', seqid)
+      end
+
+      def process_addNamespaceConstraint(seqid, iprot, oprot)
+        args = read_args(iprot, AddNamespaceConstraint_args)
+        result = AddNamespaceConstraint_result.new()
+        begin
+          result.success = @handler.addNamespaceConstraint(args.login, args.namespaceName, args.constraintClassName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'addNamespaceConstraint', seqid)
+      end
+
+      def process_removeNamespaceConstraint(seqid, iprot, oprot)
+        args = read_args(iprot, RemoveNamespaceConstraint_args)
+        result = RemoveNamespaceConstraint_result.new()
+        begin
+          @handler.removeNamespaceConstraint(args.login, args.namespaceName, args.id)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'removeNamespaceConstraint', seqid)
+      end
+
+      def process_listNamespaceConstraints(seqid, iprot, oprot)
+        args = read_args(iprot, ListNamespaceConstraints_args)
+        result = ListNamespaceConstraints_result.new()
+        begin
+          result.success = @handler.listNamespaceConstraints(args.login, args.namespaceName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'listNamespaceConstraints', seqid)
+      end
+
+      def process_testNamespaceClassLoad(seqid, iprot, oprot)
+        args = read_args(iprot, TestNamespaceClassLoad_args)
+        result = TestNamespaceClassLoad_result.new()
+        begin
+          result.success = @handler.testNamespaceClassLoad(args.login, args.namespaceName, args.className, args.asTypeName)
+        rescue ::Accumulo::AccumuloException => ouch1
+          result.ouch1 = ouch1
+        rescue ::Accumulo::AccumuloSecurityException => ouch2
+          result.ouch2 = ouch2
+        rescue ::Accumulo::NamespaceNotFoundException => ouch3
+          result.ouch3 = ouch3
+        end
+        write_result(result, oprot, 'testNamespaceClassLoad', seqid)
+      end
+
     end
 
     # HELPER FUNCTIONS AND STRUCTURES
@@ -4784,6 +5498,137 @@
       ::Thrift::Struct.generate_accessors self
     end
 
+    class GrantNamespacePermission_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      USER = 2
+      NAMESPACENAME = 3
+      PERM = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        USER => {:type => ::Thrift::Types::STRING, :name => 'user'},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        PERM => {:type => ::Thrift::Types::I32, :name => 'perm', :enum_class => ::Accumulo::NamespacePermission}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+        unless @perm.nil? || ::Accumulo::NamespacePermission::VALID_VALUES.include?(@perm)
+          raise ::Thrift::ProtocolException.new(::Thrift::ProtocolException::UNKNOWN, 'Invalid value of field perm!')
+        end
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class GrantNamespacePermission_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class HasNamespacePermission_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      USER = 2
+      NAMESPACENAME = 3
+      PERM = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        USER => {:type => ::Thrift::Types::STRING, :name => 'user'},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        PERM => {:type => ::Thrift::Types::I32, :name => 'perm', :enum_class => ::Accumulo::NamespacePermission}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+        unless @perm.nil? || ::Accumulo::NamespacePermission::VALID_VALUES.include?(@perm)
+          raise ::Thrift::ProtocolException.new(::Thrift::ProtocolException::UNKNOWN, 'Invalid value of field perm!')
+        end
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class HasNamespacePermission_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::BOOL, :name => 'success'},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RevokeNamespacePermission_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      USER = 2
+      NAMESPACENAME = 3
+      PERM = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        USER => {:type => ::Thrift::Types::STRING, :name => 'user'},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        PERM => {:type => ::Thrift::Types::I32, :name => 'perm', :enum_class => ::Accumulo::NamespacePermission}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+        unless @perm.nil? || ::Accumulo::NamespacePermission::VALID_VALUES.include?(@perm)
+          raise ::Thrift::ProtocolException.new(::Thrift::ProtocolException::UNKNOWN, 'Invalid value of field perm!')
+        end
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RevokeNamespacePermission_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
     class CreateBatchScanner_args
       include ::Thrift::Struct, ::Thrift::Struct_Union
       LOGIN = 1
@@ -5426,6 +6271,799 @@
       ::Thrift::Struct.generate_accessors self
     end
 
+    class SystemNamespace_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+
+      FIELDS = {
+
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class SystemNamespace_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::STRING, :name => 'success'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class DefaultNamespace_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+
+      FIELDS = {
+
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class DefaultNamespace_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::STRING, :name => 'success'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class ListNamespaces_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class ListNamespaces_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::LIST, :name => 'success', :element => {:type => ::Thrift::Types::STRING}},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class NamespaceExists_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class NamespaceExists_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::BOOL, :name => 'success'},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class CreateNamespace_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class CreateNamespace_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceExistsException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class DeleteNamespace_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class DeleteNamespace_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+      OUCH4 = 4
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException},
+        OUCH4 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch4', :class => ::Accumulo::NamespaceNotEmptyException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RenameNamespace_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      OLDNAMESPACENAME = 2
+      NEWNAMESPACENAME = 3
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        OLDNAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'oldNamespaceName'},
+        NEWNAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'newNamespaceName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RenameNamespace_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+      OUCH4 = 4
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException},
+        OUCH4 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch4', :class => ::Accumulo::NamespaceExistsException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class SetNamespaceProperty_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      PROPERTY = 3
+      VALUE = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        PROPERTY => {:type => ::Thrift::Types::STRING, :name => 'property'},
+        VALUE => {:type => ::Thrift::Types::STRING, :name => 'value'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class SetNamespaceProperty_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RemoveNamespaceProperty_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      PROPERTY = 3
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        PROPERTY => {:type => ::Thrift::Types::STRING, :name => 'property'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RemoveNamespaceProperty_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class GetNamespaceProperties_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class GetNamespaceProperties_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::MAP, :name => 'success', :key => {:type => ::Thrift::Types::STRING}, :value => {:type => ::Thrift::Types::STRING}},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class NamespaceIdMap_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class NamespaceIdMap_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::MAP, :name => 'success', :key => {:type => ::Thrift::Types::STRING}, :value => {:type => ::Thrift::Types::STRING}},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class AttachNamespaceIterator_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      SETTING = 3
+      SCOPES = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        SETTING => {:type => ::Thrift::Types::STRUCT, :name => 'setting', :class => ::Accumulo::IteratorSetting},
+        SCOPES => {:type => ::Thrift::Types::SET, :name => 'scopes', :element => {:type => ::Thrift::Types::I32, :enum_class => ::Accumulo::IteratorScope}}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class AttachNamespaceIterator_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RemoveNamespaceIterator_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      NAME = 3
+      SCOPES = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        NAME => {:type => ::Thrift::Types::STRING, :name => 'name'},
+        SCOPES => {:type => ::Thrift::Types::SET, :name => 'scopes', :element => {:type => ::Thrift::Types::I32, :enum_class => ::Accumulo::IteratorScope}}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RemoveNamespaceIterator_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class GetNamespaceIteratorSetting_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      NAME = 3
+      SCOPE = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        NAME => {:type => ::Thrift::Types::STRING, :name => 'name'},
+        SCOPE => {:type => ::Thrift::Types::I32, :name => 'scope', :enum_class => ::Accumulo::IteratorScope}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+        unless @scope.nil? || ::Accumulo::IteratorScope::VALID_VALUES.include?(@scope)
+          raise ::Thrift::ProtocolException.new(::Thrift::ProtocolException::UNKNOWN, 'Invalid value of field scope!')
+        end
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class GetNamespaceIteratorSetting_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::STRUCT, :name => 'success', :class => ::Accumulo::IteratorSetting},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class ListNamespaceIterators_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class ListNamespaceIterators_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::MAP, :name => 'success', :key => {:type => ::Thrift::Types::STRING}, :value => {:type => ::Thrift::Types::SET, :element => {:type => ::Thrift::Types::I32, :enum_class => ::Accumulo::IteratorScope}}},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class CheckNamespaceIteratorConflicts_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      SETTING = 3
+      SCOPES = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        SETTING => {:type => ::Thrift::Types::STRUCT, :name => 'setting', :class => ::Accumulo::IteratorSetting},
+        SCOPES => {:type => ::Thrift::Types::SET, :name => 'scopes', :element => {:type => ::Thrift::Types::I32, :enum_class => ::Accumulo::IteratorScope}}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class CheckNamespaceIteratorConflicts_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class AddNamespaceConstraint_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      CONSTRAINTCLASSNAME = 3
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        CONSTRAINTCLASSNAME => {:type => ::Thrift::Types::STRING, :name => 'constraintClassName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class AddNamespaceConstraint_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::I32, :name => 'success'},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RemoveNamespaceConstraint_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      ID = 3
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        ID => {:type => ::Thrift::Types::I32, :name => 'id'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class RemoveNamespaceConstraint_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class ListNamespaceConstraints_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class ListNamespaceConstraints_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::MAP, :name => 'success', :key => {:type => ::Thrift::Types::STRING}, :value => {:type => ::Thrift::Types::I32}},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class TestNamespaceClassLoad_args
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      LOGIN = 1
+      NAMESPACENAME = 2
+      CLASSNAME = 3
+      ASTYPENAME = 4
+
+      FIELDS = {
+        LOGIN => {:type => ::Thrift::Types::STRING, :name => 'login', :binary => true},
+        NAMESPACENAME => {:type => ::Thrift::Types::STRING, :name => 'namespaceName'},
+        CLASSNAME => {:type => ::Thrift::Types::STRING, :name => 'className'},
+        ASTYPENAME => {:type => ::Thrift::Types::STRING, :name => 'asTypeName'}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
+    class TestNamespaceClassLoad_result
+      include ::Thrift::Struct, ::Thrift::Struct_Union
+      SUCCESS = 0
+      OUCH1 = 1
+      OUCH2 = 2
+      OUCH3 = 3
+
+      FIELDS = {
+        SUCCESS => {:type => ::Thrift::Types::BOOL, :name => 'success'},
+        OUCH1 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch1', :class => ::Accumulo::AccumuloException},
+        OUCH2 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch2', :class => ::Accumulo::AccumuloSecurityException},
+        OUCH3 => {:type => ::Thrift::Types::STRUCT, :name => 'ouch3', :class => ::Accumulo::NamespaceNotFoundException}
+      }
+
+      def struct_fields; FIELDS; end
+
+      def validate
+      end
+
+      ::Thrift::Struct.generate_accessors self
+    end
+
   end
 
 end
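The generated `process_*` methods above all share one shape: invoke the handler, and route each declared Thrift exception into its numbered `result` field so it can be serialized back to the client. A minimal plain-Ruby sketch of that wiring (the classes below are simplified, hypothetical stand-ins, not the generated ones from the proxy gem):

```ruby
# Stand-in mirroring the generated NamespaceNotFoundException (illustrative only).
class NamespaceNotFoundException < StandardError
  attr_accessor :msg
  def initialize(message = nil)
    super()
    self.msg = message
  end
end

# Simplified stand-in for ListNamespaceConstraints_result: one slot per
# declared exception, plus a success slot.
ListNamespaceConstraints_result = Struct.new(:success, :ouch1, :ouch2, :ouch3)

# Toy handler that raises, as the real handler would for a missing namespace.
handler = Object.new
def handler.listNamespaceConstraints(login, namespace_name)
  raise NamespaceNotFoundException.new("#{namespace_name} not found")
end

# The generated processor pattern: call the handler, catch each declared
# exception, and stash it in the matching result field instead of propagating.
result = ListNamespaceConstraints_result.new
begin
  result.success = handler.listNamespaceConstraints('login-token', 'missing_ns')
rescue NamespaceNotFoundException => ouch3
  result.ouch3 = ouch3
end

puts result.ouch3.msg  # => missing_ns not found
```

In the real generated code, `write_result` then serializes whichever field is populated; the client-side stub re-raises the exception on its end.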
diff --git a/proxy/src/main/ruby/proxy_constants.rb b/proxy/src/main/ruby/proxy_constants.rb
index 98a589e..baaf9af 100644
--- a/proxy/src/main/ruby/proxy_constants.rb
+++ b/proxy/src/main/ruby/proxy_constants.rb
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Autogenerated by Thrift Compiler (0.9.1)
+# Autogenerated by Thrift Compiler (0.9.3)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
diff --git a/proxy/src/main/ruby/proxy_types.rb b/proxy/src/main/ruby/proxy_types.rb
index 57306d1..e542df6 100644
--- a/proxy/src/main/ruby/proxy_types.rb
+++ b/proxy/src/main/ruby/proxy_types.rb
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-# Autogenerated by Thrift Compiler (0.9.1)
+# Autogenerated by Thrift Compiler (0.9.3)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -56,6 +56,20 @@
     VALID_VALUES = Set.new([GRANT, CREATE_TABLE, DROP_TABLE, ALTER_TABLE, CREATE_USER, DROP_USER, ALTER_USER, SYSTEM]).freeze
   end
 
+  module NamespacePermission
+    READ = 0
+    WRITE = 1
+    ALTER_NAMESPACE = 2
+    GRANT = 3
+    ALTER_TABLE = 4
+    CREATE_TABLE = 5
+    DROP_TABLE = 6
+    BULK_IMPORT = 7
+    DROP_NAMESPACE = 8
+    VALUE_MAP = {0 => "READ", 1 => "WRITE", 2 => "ALTER_NAMESPACE", 3 => "GRANT", 4 => "ALTER_TABLE", 5 => "CREATE_TABLE", 6 => "DROP_TABLE", 7 => "BULK_IMPORT", 8 => "DROP_NAMESPACE"}
+    VALID_VALUES = Set.new([READ, WRITE, ALTER_NAMESPACE, GRANT, ALTER_TABLE, CREATE_TABLE, DROP_TABLE, BULK_IMPORT, DROP_NAMESPACE]).freeze
+  end
+
   module ScanType
     SINGLE = 0
     BATCH = 1
@@ -775,4 +789,73 @@
     ::Thrift::Struct.generate_accessors self
   end
 
+  class NamespaceExistsException < ::Thrift::Exception
+    include ::Thrift::Struct, ::Thrift::Struct_Union
+    def initialize(message=nil)
+      super()
+      self.msg = message
+    end
+
+    def message; msg end
+
+    MSG = 1
+
+    FIELDS = {
+      MSG => {:type => ::Thrift::Types::STRING, :name => 'msg'}
+    }
+
+    def struct_fields; FIELDS; end
+
+    def validate
+    end
+
+    ::Thrift::Struct.generate_accessors self
+  end
+
+  class NamespaceNotFoundException < ::Thrift::Exception
+    include ::Thrift::Struct, ::Thrift::Struct_Union
+    def initialize(message=nil)
+      super()
+      self.msg = message
+    end
+
+    def message; msg end
+
+    MSG = 1
+
+    FIELDS = {
+      MSG => {:type => ::Thrift::Types::STRING, :name => 'msg'}
+    }
+
+    def struct_fields; FIELDS; end
+
+    def validate
+    end
+
+    ::Thrift::Struct.generate_accessors self
+  end
+
+  class NamespaceNotEmptyException < ::Thrift::Exception
+    include ::Thrift::Struct, ::Thrift::Struct_Union
+    def initialize(message=nil)
+      super()
+      self.msg = message
+    end
+
+    def message; msg end
+
+    MSG = 1
+
+    FIELDS = {
+      MSG => {:type => ::Thrift::Types::STRING, :name => 'msg'}
+    }
+
+    def struct_fields; FIELDS; end
+
+    def validate
+    end
+
+    ::Thrift::Struct.generate_accessors self
+  end
+
 end
diff --git a/proxy/src/main/thrift/proxy.thrift b/proxy/src/main/thrift/proxy.thrift
index 25510d1..3814c44 100644
--- a/proxy/src/main/thrift/proxy.thrift
+++ b/proxy/src/main/thrift/proxy.thrift
@@ -133,6 +133,18 @@
   SYSTEM = 7,
 }
 
+enum NamespacePermission {
+  READ = 0,
+  WRITE = 1,
+  ALTER_NAMESPACE = 2,
+  GRANT = 3,
+  ALTER_TABLE = 4,
+  CREATE_TABLE = 5,
+  DROP_TABLE = 6,
+  BULK_IMPORT = 7,
+  DROP_NAMESPACE = 8
+}
+
 enum ScanType {
     SINGLE,
     BATCH
@@ -297,6 +309,18 @@
   1:string msg
 }
 
+exception NamespaceExistsException {
+  1:string msg
+}
+
+exception NamespaceNotFoundException {
+  1:string msg
+}
+
+exception NamespaceNotEmptyException {
+  1:string msg
+}
+
 service AccumuloProxy
 {
   // get an authentication token
@@ -390,6 +414,12 @@
   set<string> listLocalUsers (1:binary login)                                                        throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:TableNotFoundException ouch3);
   void revokeSystemPermission (1:binary login, 2:string user, 3:SystemPermission perm)               throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2);
   void revokeTablePermission (1:binary login, 2:string user, 3:string table, 4:TablePermission perm) throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:TableNotFoundException ouch3);
+  void grantNamespacePermission (1:binary login, 2:string user, 3:string namespaceName,
+                                 4:NamespacePermission perm)                                         throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2);
+  bool hasNamespacePermission (1:binary login, 2:string user, 3:string namespaceName,
+                               4:NamespacePermission perm)                                           throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2);
+  void revokeNamespacePermission (1:binary login, 2:string user, 3:string namespaceName,
+                                  4:NamespacePermission perm)                                        throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2);
 
   // scanning
   string createBatchScanner(1:binary login, 2:string tableName, 3:BatchScanOptions options)          throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:TableNotFoundException ouch3);
@@ -427,4 +457,33 @@
   // utilities
   Range getRowRange(1:binary row);
   Key getFollowing(1:Key key, 2:PartialKey part);
-}
\ No newline at end of file
+
+  // namespace operations, since 1.8.0
+  string systemNamespace();
+  string defaultNamespace();
+  list<string> listNamespaces(1:binary login)                                                      throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2);
+  bool namespaceExists(1:binary login, 2:string namespaceName)                                     throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2);
+  void createNamespace(1:binary login, 2:string namespaceName)                                     throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceExistsException ouch3);
+  void deleteNamespace(1:binary login, 2:string namespaceName)                                     throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3, 4:NamespaceNotEmptyException ouch4);
+  void renameNamespace(1:binary login, 2:string oldNamespaceName, 3:string newNamespaceName)       throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3, 4:NamespaceExistsException ouch4);
+  void setNamespaceProperty(1:binary login, 2:string namespaceName, 3:string property,
+                            4:string value)                                                        throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  void removeNamespaceProperty(1:binary login, 2:string namespaceName, 3:string property)          throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  map<string,string> getNamespaceProperties(1:binary login, 2:string namespaceName)                throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  map<string,string> namespaceIdMap(1:binary login)                                                throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2);
+  void attachNamespaceIterator(1:binary login, 2:string namespaceName, 3:IteratorSetting setting,
+                               4:set<IteratorScope> scopes)                                        throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  void removeNamespaceIterator(1:binary login, 2:string namespaceName, 3:string name,
+                               4:set<IteratorScope> scopes)                                        throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  IteratorSetting getNamespaceIteratorSetting(1:binary login, 2:string namespaceName,
+                                              3:string name, 4:IteratorScope scope)                throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  map<string,set<IteratorScope>> listNamespaceIterators(1:binary login, 2:string namespaceName)    throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  void checkNamespaceIteratorConflicts(1:binary login, 2:string namespaceName,
+                                       3:IteratorSetting setting, 4:set<IteratorScope> scopes)     throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  i32 addNamespaceConstraint(1:binary login, 2:string namespaceName,
+                             3:string constraintClassName)                                         throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  void removeNamespaceConstraint(1:binary login, 2:string namespaceName, 3:i32 id)                 throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  map<string,i32> listNamespaceConstraints(1:binary login, 2:string namespaceName)                 throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+  bool testNamespaceClassLoad(1:binary login, 2:string namespaceName, 3:string className,
+                              4:string asTypeName)                                                 throws (1:AccumuloException ouch1, 2:AccumuloSecurityException ouch2, 3:NamespaceNotFoundException ouch3);
+}
diff --git a/proxy/src/test/java/org/apache/accumulo/proxy/ProxyServerTest.java b/proxy/src/test/java/org/apache/accumulo/proxy/ProxyServerTest.java
index 07fdc45..2f64445 100644
--- a/proxy/src/test/java/org/apache/accumulo/proxy/ProxyServerTest.java
+++ b/proxy/src/test/java/org/apache/accumulo/proxy/ProxyServerTest.java
@@ -48,7 +48,7 @@
 
     final ByteBuffer login = ByteBuffer.wrap("my_login".getBytes(UTF_8));
     final String tableName = "table1";
-    final Map<ByteBuffer,List<ColumnUpdate>> cells = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    final Map<ByteBuffer,List<ColumnUpdate>> cells = new HashMap<>();
 
     EasyMock.expect(server.getWriter(login, tableName, null)).andReturn(bwpe);
     server.addCellsToWriter(cells, bwpe);
@@ -83,7 +83,7 @@
 
     final ByteBuffer login = ByteBuffer.wrap("my_login".getBytes(UTF_8));
     final String tableName = "table1";
-    final Map<ByteBuffer,List<ColumnUpdate>> cells = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    final Map<ByteBuffer,List<ColumnUpdate>> cells = new HashMap<>();
 
     EasyMock.expect(server.getWriter(login, tableName, null)).andReturn(bwpe);
     server.addCellsToWriter(cells, bwpe);
diff --git a/server/base/pom.xml b/server/base/pom.xml
index 288b359..00e0d14 100644
--- a/server/base/pom.xml
+++ b/server/base/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-server-base</artifactId>
diff --git a/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java b/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java
index c888be5..3cfd759 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java
@@ -27,6 +27,7 @@
 import java.util.Arrays;
 import java.util.Map.Entry;
 import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -34,7 +35,6 @@
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.util.AddressUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.util.Version;
 import org.apache.accumulo.core.volume.Volume;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
@@ -55,6 +55,8 @@
 import org.apache.log4j.helpers.LogLog;
 import org.apache.zookeeper.KeeperException;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class Accumulo {
 
   private static final Logger log = Logger.getLogger(Accumulo.class);
@@ -166,8 +168,8 @@
     logConfigWatcher.start();
 
     // Makes sure the log-forwarding to the monitor is configured
-    int logPort = conf.getPort(Property.MONITOR_LOG4J_PORT);
-    System.setProperty("org.apache.accumulo.core.host.log.port", Integer.toString(logPort));
+    int[] logPort = conf.getPort(Property.MONITOR_LOG4J_PORT);
+    System.setProperty("org.apache.accumulo.core.host.log.port", Integer.toString(logPort[0]));
 
     log.info(application + " starting");
     log.info("Instance " + serverConfig.getInstance().getInstanceID());
@@ -180,7 +182,7 @@
       throw new RuntimeException("This version of accumulo (" + codeVersion + ") is not compatible with files stored using data version " + dataVersion);
     }
 
-    TreeMap<String,String> sortedProps = new TreeMap<String,String>();
+    TreeMap<String,String> sortedProps = new TreeMap<>();
     for (Entry<String,String> entry : conf)
       sortedProps.put(entry.getKey(), entry.getValue());
 
@@ -261,7 +263,7 @@
         // ignored
       } catch (KeeperException ex) {
         log.info("Waiting for accumulo to be initialized");
-        UtilWaitThread.sleep(1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
     }
     log.info("ZooKeeper connected and initialized, attempting to talk to HDFS");
@@ -291,7 +293,7 @@
         }
       }
       log.info("Backing off due to failure; current sleep period is " + sleep / 1000. + " seconds");
-      UtilWaitThread.sleep(sleep);
+      sleepUninterruptibly(sleep, TimeUnit.MILLISECONDS);
       /* Back off to give transient failures more time to clear. */
       sleep = Math.min(60 * 1000, sleep * 2);
     }
@@ -311,8 +313,8 @@
    */
   public static void abortIfFateTransactions() {
     try {
-      final ReadOnlyTStore<Accumulo> fate = new ReadOnlyStore<Accumulo>(new ZooStore<Accumulo>(
-          ZooUtil.getRoot(HdfsZooInstance.getInstance()) + Constants.ZFATE, ZooReaderWriter.getInstance()));
+      final ReadOnlyTStore<Accumulo> fate = new ReadOnlyStore<>(new ZooStore<Accumulo>(ZooUtil.getRoot(HdfsZooInstance.getInstance()) + Constants.ZFATE,
+          ZooReaderWriter.getInstance()));
       if (!(fate.list().isEmpty())) {
         throw new AccumuloException("Aborting upgrade because there are outstanding FATE transactions from a previous Accumulo version. "
             + "Please see the README document for instructions on what to do under your previous version.");
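The retry loop in the `Accumulo.java` hunk above doubles its sleep after each transient failure and caps it at 60 seconds. A minimal sketch of that backoff arithmetic (class and method names are illustrative):

```java
// Sketch of the backoff in the hunk above: sleep doubles per failure,
// capped at 60 seconds. Values are in milliseconds.
public class BackoffSketch {
    static long nextSleep(long sleep) {
        return Math.min(60 * 1000, sleep * 2);
    }

    public static void main(String[] args) {
        long sleep = 1000;
        for (int i = 0; i < 7; i++) {
            System.out.println(sleep);
            sleep = nextSleep(sleep);
        }
    }
}
```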
diff --git a/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java b/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java
index 8039207..ce7bfad 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/AccumuloServerContext.java
@@ -28,11 +28,11 @@
 import org.apache.accumulo.core.client.impl.ClientContext;
 import org.apache.accumulo.core.client.impl.ConnectorImpl;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.rpc.SslConnectionParams;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.rpc.SaslServerConnectionParams;
@@ -94,7 +94,7 @@
    * Get the credentials to use for this instance so it can be passed to the superclass during construction.
    */
   private static Credentials getCredentials(Instance instance) {
-    if (instance instanceof MockInstance) {
+    if (DeprecationUtil.isMockInstance(instance)) {
       return new Credentials("mockSystemUser", new PasswordToken("mockSystemPassword"));
     }
     return SystemCredentials.get(instance);
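The `AccumuloServerContext` hunk above swaps a direct `instanceof MockInstance` check for `DeprecationUtil.isMockInstance(instance)`, removing a compile-time dependency on the deprecated class. One way such a check can be implemented, a speculative sketch only (the real `DeprecationUtil` internals are not shown in this diff):

```java
// Hypothetical sketch of a class-name-based check like the one
// DeprecationUtil.isMockInstance performs above: comparing by name keeps
// this code compiling even if the deprecated class is later removed.
public class DeprecationSketch {
    static boolean isMockInstance(Object instance) {
        return instance != null && instance.getClass().getName()
            .equals("org.apache.accumulo.core.client.mock.MockInstance");
    }

    public static void main(String[] args) {
        System.out.println(isMockInstance("not a mock")); // false
        System.out.println(isMockInstance(null));         // false
    }
}
```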
diff --git a/server/base/src/main/java/org/apache/accumulo/server/GarbageCollectionLogger.java b/server/base/src/main/java/org/apache/accumulo/server/GarbageCollectionLogger.java
index 389a544..a275df7 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/GarbageCollectionLogger.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/GarbageCollectionLogger.java
@@ -30,7 +30,7 @@
 public class GarbageCollectionLogger {
   private static final Logger log = LoggerFactory.getLogger(GarbageCollectionLogger.class);
 
-  private final HashMap<String,Long> prevGcTime = new HashMap<String,Long>();
+  private final HashMap<String,Long> prevGcTime = new HashMap<>();
   private long lastMemorySize = 0;
   private long gcTimeIncreasedCount = 0;
   private static long lastMemoryCheckTime = 0;
@@ -79,7 +79,7 @@
       }
     }
 
-    if (mem > lastMemorySize) {
+    if (mem != lastMemorySize) {
       sawChange = true;
     }
 
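The `GarbageCollectionLogger` hunk above widens the change detector from growth-only (`mem > lastMemorySize`) to any change (`mem != lastMemorySize`), so memory decreases now also set `sawChange`. A minimal sketch of the behavioral difference (names are illustrative):

```java
// Sketch of the predicate change above: the old growth-only check misses
// memory decreases; the new inequality check catches both directions.
public class MemChangeSketch {
    static boolean sawChangeOld(long mem, long last) {
        return mem > last; // pre-1.8: only increases counted
    }

    static boolean sawChangeNew(long mem, long last) {
        return mem != last; // 1.8: any change counts
    }

    public static void main(String[] args) {
        System.out.println(sawChangeOld(80, 100)); // false: decrease missed
        System.out.println(sawChangeNew(80, 100)); // true: decrease detected
    }
}
```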
diff --git a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
index c268e83..a14b8fc 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
@@ -47,13 +47,17 @@
   public static final Integer WIRE_VERSION = 3;
 
   /**
+   * version (8) reflects changes to RFile index (ACCUMULO-1124) in version 1.8.0
+   */
+  public static final int SHORTEN_RFILE_KEYS = 8;
+  /**
    * version (7) also reflects the addition of a replication table
    */
   public static final int MOVE_TO_REPLICATION_TABLE = 7;
   /**
    * this is the current data version
    */
-  public static final int DATA_VERSION = MOVE_TO_REPLICATION_TABLE;
+  public static final int DATA_VERSION = SHORTEN_RFILE_KEYS;
   /**
    * version (6) reflects the addition of a separate root table (ACCUMULO-1481) in version 1.6.0
    */
@@ -68,7 +72,7 @@
   public static final int LOGGING_TO_HDFS = 4;
   public static final BitSet CAN_UPGRADE = new BitSet();
   static {
-    for (int i : new int[] {DATA_VERSION, MOVE_TO_ROOT_TABLE, MOVE_DELETE_MARKERS, LOGGING_TO_HDFS}) {
+    for (int i : new int[] {DATA_VERSION, MOVE_TO_REPLICATION_TABLE, MOVE_TO_ROOT_TABLE}) {
       CAN_UPGRADE.set(i);
     }
   }
@@ -96,7 +100,7 @@
     String firstDir = null;
     String firstIid = null;
     Integer firstVersion = null;
-    ArrayList<String> baseDirsList = new ArrayList<String>();
+    ArrayList<String> baseDirsList = new ArrayList<>();
     for (String baseDir : configuredBaseDirs) {
       Path path = new Path(baseDir, INSTANCE_ID_DIR);
       String currentIid;
@@ -177,7 +181,7 @@
         return Collections.emptyList();
 
       String[] pairs = replacements.split(",");
-      List<Pair<Path,Path>> ret = new ArrayList<Pair<Path,Path>>();
+      List<Pair<Path,Path>> ret = new ArrayList<>();
 
       for (String pair : pairs) {
 
@@ -203,10 +207,10 @@
           throw new IllegalArgumentException(Property.INSTANCE_VOLUMES_REPLACEMENTS.getKey() + " contains " + uris[1] + " which has a syntax error", e);
         }
 
-        ret.add(new Pair<Path,Path>(p1, p2));
+        ret.add(new Pair<>(p1, p2));
       }
 
-      HashSet<Path> baseDirs = new HashSet<Path>();
+      HashSet<Path> baseDirs = new HashSet<>();
       for (String baseDir : getBaseUris()) {
         // normalize using path
         baseDirs.add(new Path(baseDir));
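The `ServerConstants` hunk above bumps `DATA_VERSION` to 8 (`SHORTEN_RFILE_KEYS`) and rebuilds the `CAN_UPGRADE` bit set so that only data versions 6, 7, and 8 are upgradeable. A self-contained sketch of that eligibility check (class name is illustrative, constants mirror the diff):

```java
import java.util.BitSet;

// Sketch of the CAN_UPGRADE eligibility set built in the hunk above;
// the version constants mirror ServerConstants in this diff.
public class UpgradeCheckSketch {
    static final int SHORTEN_RFILE_KEYS = 8;        // DATA_VERSION in 1.8
    static final int MOVE_TO_REPLICATION_TABLE = 7; // 1.7 data version
    static final int MOVE_TO_ROOT_TABLE = 6;        // 1.6 data version

    static BitSet canUpgrade() {
        BitSet set = new BitSet();
        for (int i : new int[] {SHORTEN_RFILE_KEYS, MOVE_TO_REPLICATION_TABLE, MOVE_TO_ROOT_TABLE})
            set.set(i);
        return set;
    }

    public static void main(String[] args) {
        System.out.println(canUpgrade().get(7)); // true: 1.7 data can upgrade
        System.out.println(canUpgrade().get(5)); // false: pre-1.6 data cannot
    }
}
```

Note that versions 4 and 5 (`MOVE_DELETE_MARKERS`, `LOGGING_TO_HDFS`) drop out of the set in this hunk: direct upgrades from those data versions are no longer supported.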
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/server/base/src/main/java/org/apache/accumulo/server/TabletLevel.java
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to server/base/src/main/java/org/apache/accumulo/server/TabletLevel.java
index 01f5fa8..e97b99b 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/TabletLevel.java
@@ -14,19 +14,19 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+package org.apache.accumulo.server;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+import org.apache.accumulo.core.data.impl.KeyExtent;
 
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
+public enum TabletLevel {
+  ROOT, META, NORMAL;
 
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
+  public static TabletLevel getLevel(KeyExtent extent) {
+    if (!extent.isMeta())
+      return NORMAL;
+    if (extent.isRootTablet())
+      return ROOT;
+    return META;
   }
+
 }
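The new `TabletLevel` enum above classifies a tablet as ROOT, META, or NORMAL from its `KeyExtent`. A standalone sketch of the same dispatch, with the `KeyExtent` queries replaced by two booleans for illustration:

```java
// Standalone sketch of TabletLevel.getLevel above; the real method takes a
// KeyExtent, modeled here by two hypothetical booleans.
public class TabletLevelSketch {
    enum TabletLevel { ROOT, META, NORMAL }

    static TabletLevel getLevel(boolean isMeta, boolean isRootTablet) {
        if (!isMeta)
            return TabletLevel.NORMAL;     // ordinary user tablet
        if (isRootTablet)
            return TabletLevel.ROOT;       // the single root tablet
        return TabletLevel.META;           // a metadata tablet
    }

    public static void main(String[] args) {
        System.out.println(getLevel(false, false)); // NORMAL
        System.out.println(getLevel(true, true));   // ROOT
        System.out.println(getLevel(true, false));  // META
    }
}
```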
diff --git a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java
index 10cab49..a058660 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnDefaultTable.java
@@ -18,7 +18,7 @@
 
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 
 public class ClientOnDefaultTable extends org.apache.accumulo.core.cli.ClientOnDefaultTable {
@@ -32,7 +32,7 @@
       return cachedInstance;
 
     if (mock)
-      return cachedInstance = new MockInstance(instance);
+      return cachedInstance = DeprecationUtil.makeMockInstance(instance);
     if (instance == null) {
       return cachedInstance = HdfsZooInstance.getInstance();
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java
index e134235..e02dd93 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOnRequiredTable.java
@@ -18,7 +18,7 @@
 
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 
 public class ClientOnRequiredTable extends org.apache.accumulo.core.cli.ClientOnRequiredTable {
@@ -32,7 +32,7 @@
       return cachedInstance;
 
     if (mock)
-      return cachedInstance = new MockInstance(instance);
+      return cachedInstance = DeprecationUtil.makeMockInstance(instance);
     if (instance == null) {
       return cachedInstance = HdfsZooInstance.getInstance();
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java
index c50d95d..c91471e 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/cli/ClientOpts.java
@@ -18,7 +18,7 @@
 
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.mock.MockInstance;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 
 public class ClientOpts extends org.apache.accumulo.core.cli.ClientOpts {
@@ -30,7 +30,7 @@
   @Override
   public Instance getInstance() {
     if (mock)
-      return new MockInstance(instance);
+      return DeprecationUtil.makeMockInstance(instance);
     if (instance == null) {
       return HdfsZooInstance.getInstance();
     }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java b/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java
index 37bb041..c9af520 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.server.client;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
@@ -58,7 +60,6 @@
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.NamingThreadFactory;
 import org.apache.accumulo.core.util.StopWatch;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.util.LoggingRunnable;
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
@@ -81,7 +82,7 @@
   public static List<String> bulkLoad(ClientContext context, long tid, String tableId, List<String> files, String errorDir, boolean setTime)
       throws IOException, AccumuloException, AccumuloSecurityException, ThriftTableOperationException {
     AssignmentStats stats = new BulkImporter(context, tid, tableId, setTime).importFiles(files, new Path(errorDir));
-    List<String> result = new ArrayList<String>();
+    List<String> result = new ArrayList<>();
     for (Path p : stats.completeFailures.keySet()) {
       result.add(p.toString());
     }
@@ -112,14 +113,14 @@
     int numThreads = context.getConfiguration().getCount(Property.TSERV_BULK_PROCESS_THREADS);
     int numAssignThreads = context.getConfiguration().getCount(Property.TSERV_BULK_ASSIGNMENT_THREADS);
 
-    timer = new StopWatch<Timers>(Timers.class);
+    timer = new StopWatch<>(Timers.class);
     timer.start(Timers.TOTAL);
 
     Configuration conf = CachedConfiguration.getInstance();
     VolumeManagerImpl.get(context.getConfiguration());
     final VolumeManager fs = VolumeManagerImpl.get(context.getConfiguration());
 
-    Set<Path> paths = new HashSet<Path>();
+    Set<Path> paths = new HashSet<>();
     for (String file : files) {
       paths.add(new Path(file));
     }
@@ -128,7 +129,7 @@
     final Map<Path,List<KeyExtent>> completeFailures = Collections.synchronizedSortedMap(new TreeMap<Path,List<KeyExtent>>());
 
     ClientService.Client client = null;
-    final TabletLocator locator = TabletLocator.getLocator(context, new Text(tableId));
+    final TabletLocator locator = TabletLocator.getLocator(context, tableId);
 
     try {
       final Map<Path,List<TabletLocation>> assignments = Collections.synchronizedSortedMap(new TreeMap<Path,List<TabletLocation>>());
@@ -171,7 +172,7 @@
       Map<Path,List<KeyExtent>> assignmentFailures = assignMapFiles(context, conf, fs, tableId, assignments, paths, numAssignThreads, numThreads);
       assignmentStats.assignmentsFailed(assignmentFailures);
 
-      Map<Path,Integer> failureCount = new TreeMap<Path,Integer>();
+      Map<Path,Integer> failureCount = new TreeMap<>();
 
       for (Entry<Path,List<KeyExtent>> entry : assignmentFailures.entrySet())
         failureCount.put(entry.getKey(), 1);
@@ -187,7 +188,7 @@
         // same key range and are contiguous (no holes, no overlap)
 
         timer.start(Timers.SLEEP);
-        UtilWaitThread.sleep(sleepTime);
+        sleepUninterruptibly(sleepTime, TimeUnit.MILLISECONDS);
         timer.stop(Timers.SLEEP);
 
         log.debug("Trying to assign " + assignmentFailures.size() + " map files that previously failed on some key extents");
@@ -198,7 +199,7 @@
         for (Entry<Path,List<KeyExtent>> entry : assignmentFailures.entrySet()) {
           Iterator<KeyExtent> keListIter = entry.getValue().iterator();
 
-          List<TabletLocation> tabletsToAssignMapFileTo = new ArrayList<TabletLocation>();
+          List<TabletLocation> tabletsToAssignMapFileTo = new ArrayList<>();
 
           while (keListIter.hasNext()) {
             KeyExtent ke = keListIter.next();
@@ -272,7 +273,7 @@
 
       totalTime += timer.get(t);
     }
-    List<String> files = new ArrayList<String>();
+    List<String> files = new ArrayList<>();
     for (Path path : paths) {
       files.add(path.getName());
     }
@@ -325,7 +326,7 @@
   }
 
   private static List<KeyExtent> extentsOf(List<TabletLocation> locations) {
-    List<KeyExtent> result = new ArrayList<KeyExtent>(locations.size());
+    List<KeyExtent> result = new ArrayList<>(locations.size());
     for (TabletLocation tl : locations)
       result.add(tl.tablet_extent);
     return result;
@@ -335,7 +336,7 @@
       Map<Path,List<TabletLocation>> assignments, Collection<Path> paths, int numThreads) {
 
     long t1 = System.currentTimeMillis();
-    final Map<Path,Long> mapFileSizes = new TreeMap<Path,Long>();
+    final Map<Path,Long> mapFileSizes = new TreeMap<>();
 
     try {
       for (Path path : paths) {
@@ -375,13 +376,13 @@
 
           if (estimatedSizes == null) {
             // estimation failed, do a simple estimation
-            estimatedSizes = new TreeMap<KeyExtent,Long>();
+            estimatedSizes = new TreeMap<>();
             long estSize = (long) (mapFileSizes.get(entry.getKey()) / (double) entry.getValue().size());
             for (TabletLocation tl : entry.getValue())
               estimatedSizes.put(tl.tablet_extent, estSize);
           }
 
-          List<AssignmentInfo> assignmentInfoList = new ArrayList<AssignmentInfo>(estimatedSizes.size());
+          List<AssignmentInfo> assignmentInfoList = new ArrayList<>(estimatedSizes.size());
 
           for (Entry<KeyExtent,Long> entry2 : estimatedSizes.entrySet())
             assignmentInfoList.add(new AssignmentInfo(entry2.getKey(), entry2.getValue()));
@@ -412,7 +413,7 @@
   }
 
   private static Map<KeyExtent,String> locationsOf(Map<Path,List<TabletLocation>> assignments) {
-    Map<KeyExtent,String> result = new HashMap<KeyExtent,String>();
+    Map<KeyExtent,String> result = new HashMap<>();
     for (List<TabletLocation> entry : assignments.values()) {
       for (TabletLocation tl : entry) {
         result.put(tl.tablet_extent, tl.tablet_location);
@@ -454,7 +455,7 @@
           for (PathSize pathSize : mapFiles) {
             List<KeyExtent> existingFailures = assignmentFailures.get(pathSize.path);
             if (existingFailures == null) {
-              existingFailures = new ArrayList<KeyExtent>();
+              existingFailures = new ArrayList<>();
               assignmentFailures.put(pathSize.path, existingFailures);
             }
 
@@ -468,7 +469,7 @@
 
     @Override
     public void run() {
-      HashSet<Path> uniqMapFiles = new HashSet<Path>();
+      HashSet<Path> uniqMapFiles = new HashSet<>();
       for (List<PathSize> mapFiles : assignmentsPerTablet.values())
         for (PathSize ps : mapFiles)
           uniqMapFiles.add(ps.path);
@@ -505,7 +506,7 @@
   private Map<Path,List<KeyExtent>> assignMapFiles(String tableName, Map<Path,List<AssignmentInfo>> assignments, Map<KeyExtent,String> locations, int numThreads) {
 
     // group assignments by tablet
-    Map<KeyExtent,List<PathSize>> assignmentsPerTablet = new TreeMap<KeyExtent,List<PathSize>>();
+    Map<KeyExtent,List<PathSize>> assignmentsPerTablet = new TreeMap<>();
     for (Entry<Path,List<AssignmentInfo>> entry : assignments.entrySet()) {
       Path mapFile = entry.getKey();
       List<AssignmentInfo> tabletsToAssignMapFileTo = entry.getValue();
@@ -513,7 +514,7 @@
       for (AssignmentInfo ai : tabletsToAssignMapFileTo) {
         List<PathSize> mapFiles = assignmentsPerTablet.get(ai.ke);
         if (mapFiles == null) {
-          mapFiles = new ArrayList<PathSize>();
+          mapFiles = new ArrayList<>();
           assignmentsPerTablet.put(ai.ke, mapFiles);
         }
 
@@ -525,7 +526,7 @@
 
     Map<Path,List<KeyExtent>> assignmentFailures = Collections.synchronizedMap(new TreeMap<Path,List<KeyExtent>>());
 
-    TreeMap<String,Map<KeyExtent,List<PathSize>>> assignmentsPerTabletServer = new TreeMap<String,Map<KeyExtent,List<PathSize>>>();
+    TreeMap<String,Map<KeyExtent,List<PathSize>>> assignmentsPerTabletServer = new TreeMap<>();
 
     for (Entry<KeyExtent,List<PathSize>> entry : assignmentsPerTablet.entrySet()) {
       KeyExtent ke = entry.getKey();
@@ -536,7 +537,7 @@
           synchronized (assignmentFailures) {
             List<KeyExtent> failures = assignmentFailures.get(pathSize.path);
             if (failures == null) {
-              failures = new ArrayList<KeyExtent>();
+              failures = new ArrayList<>();
               assignmentFailures.put(pathSize.path, failures);
             }
 
@@ -551,7 +552,7 @@
 
       Map<KeyExtent,List<PathSize>> apt = assignmentsPerTabletServer.get(location);
       if (apt == null) {
-        apt = new TreeMap<KeyExtent,List<PathSize>>();
+        apt = new TreeMap<>();
         assignmentsPerTabletServer.put(location, apt);
       }
 
@@ -585,9 +586,9 @@
       long timeInMillis = context.getConfiguration().getTimeInMillis(Property.TSERV_BULK_TIMEOUT);
       TabletClientService.Iface client = ThriftUtil.getTServerClient(location, context, timeInMillis);
       try {
-        HashMap<KeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>> files = new HashMap<KeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>>();
+        HashMap<KeyExtent,Map<String,org.apache.accumulo.core.data.thrift.MapFileInfo>> files = new HashMap<>();
         for (Entry<KeyExtent,List<PathSize>> entry : assignmentsPerTablet.entrySet()) {
-          HashMap<String,org.apache.accumulo.core.data.thrift.MapFileInfo> tabletFiles = new HashMap<String,org.apache.accumulo.core.data.thrift.MapFileInfo>();
+          HashMap<String,org.apache.accumulo.core.data.thrift.MapFileInfo> tabletFiles = new HashMap<>();
           files.put(entry.getKey(), tabletFiles);
 
           for (PathSize pathSize : entry.getValue()) {
@@ -636,12 +637,13 @@
 
   public static List<TabletLocation> findOverlappingTablets(ClientContext context, VolumeManager vm, TabletLocator locator, Path file, Text startRow,
       Text endRow) throws Exception {
-    List<TabletLocation> result = new ArrayList<TabletLocation>();
+    List<TabletLocation> result = new ArrayList<>();
     Collection<ByteSequence> columnFamilies = Collections.emptyList();
     String filename = file.toString();
     // log.debug(filename + " finding overlapping tablets " + startRow + " -> " + endRow);
     FileSystem fs = vm.getVolumeByPath(file).getFileSystem();
-    FileSKVIterator reader = FileOperations.getInstance().openReader(filename, true, fs, fs.getConf(), context.getConfiguration());
+    FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder().forFile(filename, fs, fs.getConf())
+        .withTableConfiguration(context.getConfiguration()).seekToBeginning().build();
     try {
       Text row = startRow;
       if (row == null)
@@ -678,7 +680,7 @@
     private Set<Path> failedFailures = null;
 
     AssignmentStats(int fileCount) {
-      counts = new HashMap<KeyExtent,Integer>();
+      counts = new HashMap<>();
       numUniqueMapFiles = fileCount;
     }
 
@@ -749,7 +751,7 @@
       stddev = stddev / counts.size();
       stddev = Math.sqrt(stddev);
 
-      Set<KeyExtent> failedTablets = new HashSet<KeyExtent>();
+      Set<KeyExtent> failedTablets = new HashSet<>();
       for (List<KeyExtent> ft : completeFailures.values())
         failedTablets.addAll(ft);
 
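Many hunks in this file replace explicit generic type arguments with the Java 7 diamond operator but keep the pre-Java-8 get-or-create idiom (get, null-check, put) for maps like `assignmentsPerTablet` and `assignmentFailures`. On Java 8+ that idiom can be collapsed with `Map.computeIfAbsent`; a minimal sketch (map and key names are illustrative, not Accumulo's):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GetOrCreateDemo {
  public static void main(String[] args) {
    Map<String, List<Integer>> sizesPerTablet = new HashMap<>();

    // Pre-Java-8 idiom, as in the diff: get, null-check, put.
    List<Integer> sizes = sizesPerTablet.get("tablet-1");
    if (sizes == null) {
      sizes = new ArrayList<>();
      sizesPerTablet.put("tablet-1", sizes);
    }
    sizes.add(42);

    // Java 8 equivalent: one call, no null-check, same resulting map state.
    sizesPerTablet.computeIfAbsent("tablet-2", k -> new ArrayList<>()).add(7);

    System.out.println(sizesPerTablet.get("tablet-1").size()); // 1
    System.out.println(sizesPerTablet.get("tablet-2").size()); // 1
  }
}
```

Note `computeIfAbsent` on a `Collections.synchronizedMap` is not atomic with respect to other map operations the way it is on `ConcurrentHashMap`, which may be why the explicit synchronized block around `assignmentFailures` is kept.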
diff --git a/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java b/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
index 588e3e0..3beb1e0 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/client/ClientServiceHandler.java
@@ -50,6 +50,8 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
+import org.apache.accumulo.core.master.thrift.BulkImportStatus;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.NamespacePermission;
 import org.apache.accumulo.core.security.SystemPermission;
@@ -61,6 +63,7 @@
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.apache.accumulo.server.security.AuditedSecurityOperation;
 import org.apache.accumulo.server.security.SecurityOperation;
+import org.apache.accumulo.server.util.ServerBulkImportStatus;
 import org.apache.accumulo.server.util.TableDiskUsage;
 import org.apache.accumulo.server.zookeeper.TransactionWatcher;
 import org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader;
@@ -75,6 +78,7 @@
   private final Instance instance;
   private final VolumeManager fs;
   private final SecurityOperation security;
+  private final ServerBulkImportStatus bulkImportStatus = new ServerBulkImportStatus();
 
   public ClientServiceHandler(AccumuloServerContext context, TransactionWatcher transactionWatcher, VolumeManager fs) {
     this.context = context;
@@ -258,7 +262,7 @@
     security.authenticateUser(credentials, credentials);
     conf.invalidateCache();
 
-    Map<String,String> result = new HashMap<String,String>();
+    Map<String,String> result = new HashMap<>();
     for (Entry<String,String> entry : conf) {
       String key = entry.getKey();
       if (!Property.isSensitive(key))
@@ -294,11 +298,17 @@
     try {
       if (!security.canPerformSystemActions(credentials))
         throw new AccumuloSecurityException(credentials.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
+      bulkImportStatus.updateBulkImportStatus(files, BulkImportState.INITIAL);
       log.debug("Got request to bulk import files to table(" + tableId + "): " + files);
       return transactionWatcher.run(Constants.BULK_ARBITRATOR_TYPE, tid, new Callable<List<String>>() {
         @Override
         public List<String> call() throws Exception {
-          return BulkImporter.bulkLoad(context, tid, tableId, files, errorDir, setTime);
+          bulkImportStatus.updateBulkImportStatus(files, BulkImportState.PROCESSING);
+          try {
+            return BulkImporter.bulkLoad(context, tid, tableId, files, errorDir, setTime);
+          } finally {
+            bulkImportStatus.removeBulkImportStatus(files);
+          }
         }
       });
     } catch (AccumuloSecurityException e) {
@@ -411,7 +421,7 @@
   @Override
   public List<TDiskUsage> getDiskUsage(Set<String> tables, TCredentials credentials) throws ThriftTableOperationException, ThriftSecurityException, TException {
     try {
-      HashSet<String> tableIds = new HashSet<String>();
+      HashSet<String> tableIds = new HashSet<>();
 
       for (String table : tables) {
        // ensure that the table exists
@@ -425,9 +435,9 @@
       // use the same set of tableIds that were validated above to avoid race conditions
       Map<TreeSet<String>,Long> diskUsage = TableDiskUsage.getDiskUsage(context.getServerConfigurationFactory().getConfiguration(), tableIds, fs,
           context.getConnector());
-      List<TDiskUsage> retUsages = new ArrayList<TDiskUsage>();
+      List<TDiskUsage> retUsages = new ArrayList<>();
       for (Map.Entry<TreeSet<String>,Long> usageItem : diskUsage.entrySet()) {
-        retUsages.add(new TDiskUsage(new ArrayList<String>(usageItem.getKey()), usageItem.getValue()));
+        retUsages.add(new TDiskUsage(new ArrayList<>(usageItem.getKey()), usageItem.getValue()));
       }
       return retUsages;
 
@@ -452,4 +462,8 @@
     AccumuloConfiguration config = context.getServerConfigurationFactory().getNamespaceConfiguration(namespaceId);
     return conf(credentials, config);
   }
+
+  public List<BulkImportStatus> getBulkLoadStatus() {
+    return bulkImportStatus.getBulkLoadStatus();
+  }
 }
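The `ClientServiceHandler` change wraps the bulk load in try/finally so the new `ServerBulkImportStatus` entry is always removed, even when `BulkImporter.bulkLoad` throws. The lifecycle can be sketched with a hypothetical tracker (class and method names here are illustrative, not Accumulo's API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for ServerBulkImportStatus: tracks which files
// are mid-import so a monitor page could display them.
class ImportTracker {
  private final ConcurrentHashMap<String, String> state = new ConcurrentHashMap<>();

  void update(List<String> files, String phase) {
    for (String f : files) state.put(f, phase);
  }

  void remove(List<String> files) {
    for (String f : files) state.remove(f);
  }

  int inFlight() { return state.size(); }
}

public class BulkStatusDemo {
  public static void main(String[] args) {
    ImportTracker tracker = new ImportTracker();
    List<String> files = Arrays.asList("/bulk/f1.rf", "/bulk/f2.rf");

    tracker.update(files, "PROCESSING");
    try {
      // ... the bulk load would run here, and may throw ...
      throw new RuntimeException("simulated failure");
    } catch (RuntimeException e) {
      // swallowed for the demo
    } finally {
      // finally guarantees cleanup on both success and failure,
      // so a failed import never leaves a stale status entry behind
      tracker.remove(files);
    }

    System.out.println(tracker.inFlight()); // 0
  }
}
```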
diff --git a/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java b/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
index 0d7aaf1..e4e73d2 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/client/HdfsZooInstance.java
@@ -23,6 +23,7 @@
 import java.util.Collections;
 import java.util.List;
 import java.util.UUID;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -52,10 +53,9 @@
 import org.apache.accumulo.server.zookeeper.ZooLock;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
-
 import com.google.common.base.Joiner;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * An implementation of Instance that looks in HDFS and ZooKeeper to find the master and root tablet location.
@@ -78,17 +78,26 @@
 
   private static ZooCache zooCache;
   private static String instanceId = null;
-  private static final Logger log = Logger.getLogger(HdfsZooInstance.class);
+  private static final Logger log = LoggerFactory.getLogger(HdfsZooInstance.class);
 
   @Override
   public String getRootTabletLocation() {
     String zRootLocPath = ZooUtil.getRoot(this) + RootTable.ZROOT_TABLET_LOCATION;
 
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Looking up root tablet location in zoocache.");
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Looking up root tablet location in zoocache.", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
 
     byte[] loc = zooCache.get(zRootLocPath);
 
-    opTimer.stop("Found root tablet at " + (loc == null ? null : new String(loc, UTF_8)) + " in %DURATION%");
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Found root tablet at {} in {}", Thread.currentThread().getId(), (loc == null ? "null" : new String(loc, UTF_8)),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
 
     if (loc == null) {
       return null;
@@ -102,11 +111,20 @@
 
     String masterLocPath = ZooUtil.getRoot(this) + Constants.ZMASTER_LOCK;
 
-    OpTimer opTimer = new OpTimer(log, Level.TRACE).start("Looking up master location in zoocache.");
+    OpTimer timer = null;
+
+    if (log.isTraceEnabled()) {
+      log.trace("tid={} Looking up master location in zoocache.", Thread.currentThread().getId());
+      timer = new OpTimer().start();
+    }
 
     byte[] loc = ZooLock.getLockData(zooCache, masterLocPath, null);
 
-    opTimer.stop("Found master at " + (loc == null ? null : new String(loc, UTF_8)) + " in %DURATION%");
+    if (timer != null) {
+      timer.stop();
+      log.trace("tid={} Found master at {} in {}", Thread.currentThread().getId(), (loc == null ? "null" : new String(loc, UTF_8)),
+          String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+    }
 
     if (loc == null) {
       return Collections.emptyList();
@@ -133,7 +151,7 @@
         throw new RuntimeException(e);
       }
       Path instanceIdPath = Accumulo.getAccumuloInstanceIdPath(fs);
-      log.trace("Looking for instanceId from " + instanceIdPath);
+      log.trace("Looking for instanceId from {}", instanceIdPath);
       String instanceIdFromFile = ZooUtil.getInstanceIDFromHdfs(instanceIdPath, acuConf);
       instanceId = instanceIdFromFile;
     }
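The `HdfsZooInstance` hunks migrate from log4j's `OpTimer`-with-`Level` to slf4j parameterized logging, creating the timer only when trace is enabled so that disabled logging costs nothing. The guard pattern can be sketched with `System.nanoTime` standing in for Accumulo's `OpTimer` (the lookup itself is simulated):

```java
import java.util.concurrent.TimeUnit;

public class GuardedTimingDemo {
  // Stand-in for log.isTraceEnabled(); in real code this comes from slf4j.
  static boolean traceEnabled = true;

  static String lookup() {
    Long start = null;
    if (traceEnabled) {
      // Timing and message formatting are paid for only when trace is on.
      start = System.nanoTime();
      System.out.printf("tid=%d Looking up location%n", Thread.currentThread().getId());
    }

    String loc = "host:9997"; // simulated zoocache hit

    if (start != null) {
      double secs = (System.nanoTime() - start) / (double) TimeUnit.SECONDS.toNanos(1);
      System.out.printf("tid=%d Found %s in %.3f secs%n", Thread.currentThread().getId(), loc, secs);
    }
    return loc;
  }

  public static void main(String[] args) {
    System.out.println(lookup());
  }
}
```

The slf4j `{}` placeholders in the diff serve the same goal for the message itself: no string concatenation happens unless the level is enabled.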
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java
index 658d249..942cabf 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java
@@ -17,11 +17,25 @@
 package org.apache.accumulo.server.conf;
 
 import org.apache.accumulo.server.client.HdfsZooInstance;
+import org.apache.accumulo.start.spi.KeywordExecutable;
 
-public class ConfigSanityCheck {
+import com.google.auto.service.AutoService;
+
+@AutoService(KeywordExecutable.class)
+public class ConfigSanityCheck implements KeywordExecutable {
 
   public static void main(String[] args) {
     new ServerConfigurationFactory(HdfsZooInstance.getInstance()).getConfiguration();
   }
 
+  @Override
+  public String keyword() {
+    return "check-server-config";
+  }
+
+  @Override
+  public void execute(String[] args) throws Exception {
+    main(args);
+  }
+
 }
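`ConfigSanityCheck` now implements `KeywordExecutable` and is registered via `@AutoService`, which generates the `META-INF/services` entry that `java.util.ServiceLoader` reads, letting the launcher dispatch `accumulo check-server-config` to this class. A sketch of the dispatch shape, using a hypothetical interface since the Accumulo one isn't shown in this diff:

```java
// Hypothetical mirror of Accumulo's KeywordExecutable SPI: each tool
// declares a keyword and the launcher dispatches to it by name.
interface Keyword {
  String keyword();
  void execute(String[] args) throws Exception;
}

// With @AutoService, an annotation processor would emit
// META-INF/services/Keyword so ServiceLoader.load(Keyword.class) finds this.
class CheckConfig implements Keyword {
  public String keyword() { return "check-server-config"; }
  public void execute(String[] args) { System.out.println("config OK"); }
}

public class KeywordDemo {
  public static void main(String[] args) throws Exception {
    // In the real launcher the candidates come from ServiceLoader, not hardcoding.
    Keyword k = new CheckConfig();
    if (k.keyword().equals("check-server-config")) {
      k.execute(args);
    }
  }
}
```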
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
index 3d19723..1ca083e 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/NamespaceConfiguration.java
@@ -38,7 +38,7 @@
 public class NamespaceConfiguration extends ObservableConfiguration {
   private static final Logger log = LoggerFactory.getLogger(NamespaceConfiguration.class);
 
-  private static final Map<PropCacheKey,ZooCache> propCaches = new java.util.HashMap<PropCacheKey,ZooCache>();
+  private static final Map<PropCacheKey,ZooCache> propCaches = new java.util.HashMap<>();
 
   private final AccumuloConfiguration parent;
   private ZooCachePropertyAccessor propCacheAccessor = null;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java
index 6976bab..5cea8b5 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ServerConfigurationFactory.java
@@ -34,9 +34,9 @@
  */
 public class ServerConfigurationFactory extends ServerConfiguration {
 
-  private static final Map<String,Map<String,TableConfiguration>> tableConfigs = new HashMap<String,Map<String,TableConfiguration>>(1);
-  private static final Map<String,Map<String,NamespaceConfiguration>> namespaceConfigs = new HashMap<String,Map<String,NamespaceConfiguration>>(1);
-  private static final Map<String,Map<String,NamespaceConfiguration>> tableParentConfigs = new HashMap<String,Map<String,NamespaceConfiguration>>(1);
+  private static final Map<String,Map<String,TableConfiguration>> tableConfigs = new HashMap<>(1);
+  private static final Map<String,Map<String,NamespaceConfiguration>> namespaceConfigs = new HashMap<>(1);
+  private static final Map<String,Map<String,NamespaceConfiguration>> tableParentConfigs = new HashMap<>(1);
 
   private static void addInstanceToCaches(String iid) {
     synchronized (tableConfigs) {
@@ -177,7 +177,7 @@
 
   @Override
   public TableConfiguration getTableConfiguration(KeyExtent extent) {
-    return getTableConfiguration(extent.getTableId().toString());
+    return getTableConfiguration(extent.getTableId());
   }
 
   public NamespaceConfiguration getNamespaceConfigurationForTable(String tableId) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
index 915122b..6c53c5b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/TableConfiguration.java
@@ -35,18 +35,18 @@
 public class TableConfiguration extends ObservableConfiguration {
   private static final Logger log = LoggerFactory.getLogger(TableConfiguration.class);
 
-  private static final Map<PropCacheKey,ZooCache> propCaches = new java.util.HashMap<PropCacheKey,ZooCache>();
+  private static final Map<PropCacheKey,ZooCache> propCaches = new java.util.HashMap<>();
 
   private ZooCachePropertyAccessor propCacheAccessor = null;
   private final Instance instance;
   private final NamespaceConfiguration parent;
   private ZooCacheFactory zcf = new ZooCacheFactory();
 
-  private final String table;
+  private final String tableId;
 
-  public TableConfiguration(Instance instance, String table, NamespaceConfiguration parent) {
+  public TableConfiguration(Instance instance, String tableId, NamespaceConfiguration parent) {
     this.instance = instance;
-    this.table = table;
+    this.tableId = tableId;
     this.parent = parent;
   }
 
@@ -57,7 +57,7 @@
   private synchronized ZooCachePropertyAccessor getPropCacheAccessor() {
     if (propCacheAccessor == null) {
       synchronized (propCaches) {
-        PropCacheKey key = new PropCacheKey(instance.getInstanceID(), table);
+        PropCacheKey key = new PropCacheKey(instance.getInstanceID(), tableId);
         ZooCache propCache = propCaches.get(key);
         if (propCache == null) {
           propCache = zcf.getZooCache(instance.getZooKeepers(), instance.getZooKeepersSessionTimeOut(), new TableConfWatcher(instance));
@@ -71,7 +71,7 @@
 
   @Override
   public void addObserver(ConfigurationObserver co) {
-    if (table == null) {
+    if (tableId == null) {
       String err = "Attempt to add observer for non-table configuration";
       log.error(err);
       throw new RuntimeException(err);
@@ -82,7 +82,7 @@
 
   @Override
   public void removeObserver(ConfigurationObserver co) {
-    if (table == null) {
+    if (tableId == null) {
       String err = "Attempt to remove observer for non-table configuration";
       log.error(err);
       throw new RuntimeException(err);
@@ -91,7 +91,7 @@
   }
 
   private String getPath() {
-    return ZooUtil.getRoot(instance.getInstanceID()) + Constants.ZTABLES + "/" + table + Constants.ZTABLE_CONF;
+    return ZooUtil.getRoot(instance.getInstanceID()) + Constants.ZTABLES + "/" + tableId + Constants.ZTABLE_CONF;
   }
 
   @Override
@@ -105,7 +105,7 @@
   }
 
   public String getTableId() {
-    return table;
+    return tableId;
   }
 
   /**
diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfigurationFactory.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfigurationFactory.java
index 6c8ceca..0cfbff8 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfigurationFactory.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ZooConfigurationFactory.java
@@ -35,7 +35,7 @@
  * A factory for {@link ZooConfiguration} objects.
  */
 class ZooConfigurationFactory {
-  private static final Map<String,ZooConfiguration> instances = new HashMap<String,ZooConfiguration>();
+  private static final Map<String,ZooConfiguration> instances = new HashMap<>();
 
   /**
    * Gets a configuration object for the given instance with the given parent. Repeated calls will return the same object.
diff --git a/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java b/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java
index fd5af14..98f8c3f 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/constraints/MetadataConstraints.java
@@ -64,12 +64,12 @@
     }
   }
 
-  private static final HashSet<ColumnFQ> validColumnQuals = new HashSet<ColumnFQ>(Arrays.asList(TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN,
-      TabletsSection.TabletColumnFamily.OLD_PREV_ROW_COLUMN, TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN,
-      TabletsSection.TabletColumnFamily.SPLIT_RATIO_COLUMN, TabletsSection.ServerColumnFamily.TIME_COLUMN, TabletsSection.ServerColumnFamily.LOCK_COLUMN,
-      TabletsSection.ServerColumnFamily.FLUSH_COLUMN, TabletsSection.ServerColumnFamily.COMPACT_COLUMN));
+  private static final HashSet<ColumnFQ> validColumnQuals = new HashSet<>(Arrays.asList(TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN,
+      TabletsSection.TabletColumnFamily.OLD_PREV_ROW_COLUMN, TabletsSection.SuspendLocationColumn.SUSPEND_COLUMN,
+      TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN, TabletsSection.TabletColumnFamily.SPLIT_RATIO_COLUMN, TabletsSection.ServerColumnFamily.TIME_COLUMN,
+      TabletsSection.ServerColumnFamily.LOCK_COLUMN, TabletsSection.ServerColumnFamily.FLUSH_COLUMN, TabletsSection.ServerColumnFamily.COMPACT_COLUMN));
 
-  private static final HashSet<Text> validColumnFams = new HashSet<Text>(Arrays.asList(TabletsSection.BulkFileColumnFamily.NAME, LogColumnFamily.NAME,
+  private static final HashSet<Text> validColumnFams = new HashSet<>(Arrays.asList(TabletsSection.BulkFileColumnFamily.NAME, LogColumnFamily.NAME,
       ScanFileColumnFamily.NAME, DataFileColumnFamily.NAME, TabletsSection.CurrentLocationColumnFamily.NAME, TabletsSection.LastLocationColumnFamily.NAME,
       TabletsSection.FutureLocationColumnFamily.NAME, ChoppedColumnFamily.NAME, ClonedColumnFamily.NAME));
 
@@ -86,7 +86,7 @@
 
   static private ArrayList<Short> addViolation(ArrayList<Short> lst, int violation) {
     if (lst == null)
-      lst = new ArrayList<Short>();
+      lst = new ArrayList<>();
     lst.add((short) violation);
     return lst;
   }
@@ -194,8 +194,8 @@
           // See ACCUMULO-1230.
           boolean isLocationMutation = false;
 
-          HashSet<Text> dataFiles = new HashSet<Text>();
-          HashSet<Text> loadedFiles = new HashSet<Text>();
+          HashSet<Text> dataFiles = new HashSet<>();
+          HashSet<Text> loadedFiles = new HashSet<>();
 
           String tidString = new String(columnUpdate.getValue(), UTF_8);
           int otherTidCount = 0;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/PerTableVolumeChooser.java b/server/base/src/main/java/org/apache/accumulo/server/fs/PerTableVolumeChooser.java
index e51df03..594a0a2 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/PerTableVolumeChooser.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/PerTableVolumeChooser.java
@@ -32,7 +32,7 @@
   private final VolumeChooser fallbackVolumeChooser = new RandomVolumeChooser();
   // TODO Add hint of expected size to construction, see ACCUMULO-3410
   /* Track VolumeChooser instances so they can keep state. */
-  private final ConcurrentHashMap<String,VolumeChooser> tableSpecificChooser = new ConcurrentHashMap<String,VolumeChooser>();
+  private final ConcurrentHashMap<String,VolumeChooser> tableSpecificChooser = new ConcurrentHashMap<>();
   // TODO has to be lazily initialized currently because of the reliance on HdfsZooInstance. see ACCUMULO-3411
   private volatile ServerConfigurationFactory serverConfs;
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java b/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
index 6bc225f..ec7c360 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/PreferredVolumeChooser.java
@@ -43,7 +43,7 @@
  * to selecting from all of the options presented. Can be customized via the table property {@value #PREFERRED_VOLUMES_CUSTOM_KEY}, which should contain a comma
  * separated list of {@link Volume} URIs. Note that both the property name and the format of its value are specific to this particular implementation.
  */
-public class PreferredVolumeChooser extends RandomVolumeChooser implements VolumeChooser {
+public class PreferredVolumeChooser extends RandomVolumeChooser {
   private static final Logger log = LoggerFactory.getLogger(PreferredVolumeChooser.class);
 
   /**
@@ -77,7 +77,7 @@
       serverConfs = localConf;
     }
     TableConfiguration tableConf = localConf.getTableConfiguration(env.getTableId());
-    final Map<String,String> props = new HashMap<String,String>();
+    final Map<String,String> props = new HashMap<>();
     tableConf.getProperties(props, PREFERRED_VOLUMES_FILTER);
     if (props.isEmpty()) {
       log.warn("No preferred volumes specified. Defaulting to randomly choosing from instance volumes");
@@ -93,12 +93,12 @@
     // If the preferred volumes property was specified, split the returned string by the comma and add use it to filter the given options.
     Set<String> preferred = parsedPreferredVolumes.get(volumes);
     if (preferred == null) {
-      preferred = new HashSet<String>(Arrays.asList(StringUtils.split(volumes, ',')));
+      preferred = new HashSet<>(Arrays.asList(StringUtils.split(volumes, ',')));
       parsedPreferredVolumes.put(volumes, preferred);
     }
 
     // Only keep the options that are in the preferred set
-    final ArrayList<String> filteredOptions = new ArrayList<String>(Arrays.asList(options));
+    final ArrayList<String> filteredOptions = new ArrayList<>(Arrays.asList(options));
     filteredOptions.retainAll(preferred);
 
     // If there are no preferred volumes left, then warn the user and choose randomly from the instance volumes
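`PreferredVolumeChooser`'s core move is set intersection: split the configured volume list once, cache the parsed set, then `retainAll` against the offered options, falling back to all options when nothing survives. A minimal sketch of that filtering (the volume URIs are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PreferredFilterDemo {
  static List<String> filter(String[] options, String preferredCsv) {
    // In the chooser, this parsed set is cached per property value.
    Set<String> preferred = new HashSet<>(Arrays.asList(preferredCsv.split(",")));

    List<String> filtered = new ArrayList<>(Arrays.asList(options));
    filtered.retainAll(preferred); // keep only the preferred volumes

    if (filtered.isEmpty()) {
      // Mirror the chooser's fallback: warn, then use all instance volumes.
      System.out.println("No preferred volumes available; falling back to all options");
      return Arrays.asList(options);
    }
    return filtered;
  }

  public static void main(String[] args) {
    String[] options = {"hdfs://nn1/accumulo", "hdfs://nn2/accumulo"};
    System.out.println(filter(options, "hdfs://nn2/accumulo")); // [hdfs://nn2/accumulo]
  }
}
```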
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
index 6c6c77c..d9df424 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeManagerImpl.java
@@ -65,13 +65,12 @@
 
   private static final Logger log = LoggerFactory.getLogger(VolumeManagerImpl.class);
 
-  private static final HashSet<String> WARNED_ABOUT_SYNCONCLOSE = new HashSet<String>();
+  private static final HashSet<String> WARNED_ABOUT_SYNCONCLOSE = new HashSet<>();
 
-  Map<String,Volume> volumesByName;
-  Multimap<URI,Volume> volumesByFileSystemUri;
-  Volume defaultVolume;
-  AccumuloConfiguration conf;
-  VolumeChooser chooser;
+  private final Map<String,Volume> volumesByName;
+  private final Multimap<URI,Volume> volumesByFileSystemUri;
+  private final Volume defaultVolume;
+  private final VolumeChooser chooser;
 
   protected VolumeManagerImpl(Map<String,Volume> volumes, Volume defaultVolume, AccumuloConfiguration conf) {
     this.volumesByName = volumes;
@@ -79,7 +78,6 @@
     // We may have multiple directories used in a single FileSystem (e.g. testing)
     this.volumesByFileSystemUri = HashMultimap.create();
     invertVolumesByFileSystem(volumesByName, volumesByFileSystemUri);
-    this.conf = conf;
     ensureSyncIsEnabled();
     // Keep in sync with default type in the property definition.
     chooser = Property.createInstanceFromPropertyName(conf, Property.GENERAL_VOLUME_CHOOSER, VolumeChooser.class, new PerTableVolumeChooser());
@@ -328,7 +326,7 @@
   }
 
   public static VolumeManager get(AccumuloConfiguration conf, final Configuration hadoopConf) throws IOException {
-    final Map<String,Volume> volumes = new HashMap<String,Volume>();
+    final Map<String,Volume> volumes = new HashMap<>();
 
     // The "default" Volume for Accumulo (in case no volumes are specified)
     for (String volumeUriOrDir : VolumeConfiguration.getVolumeUris(conf, hadoopConf)) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
index c3595cd..192ae77 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/fs/VolumeUtil.java
@@ -46,7 +46,6 @@
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -127,16 +126,13 @@
     else
       switchedPath = le.filename;
 
-    ArrayList<String> switchedLogs = new ArrayList<String>();
-    for (String log : le.logSet) {
-      String switchedLog = switchVolume(le.filename, FileType.WAL, replacements);
-      if (switchedLog != null) {
-        switchedLogs.add(switchedLog);
-        numSwitched++;
-      } else {
-        switchedLogs.add(log);
-      }
-
+    ArrayList<String> switchedLogs = new ArrayList<>();
+    String switchedLog = switchVolume(le.filename, FileType.WAL, replacements);
+    if (switchedLog != null) {
+      switchedLogs.add(switchedLog);
+      numSwitched++;
+    } else {
+      switchedLogs.add(le.filename);
     }
 
     if (numSwitched == 0) {
@@ -144,9 +140,7 @@
       return null;
     }
 
-    LogEntry newLogEntry = new LogEntry(le);
-    newLogEntry.filename = switchedPath;
-    newLogEntry.logSet = switchedLogs;
+    LogEntry newLogEntry = new LogEntry(le.extent, le.timestamp, le.server, switchedPath);
 
     log.trace("Switched " + le + " to " + newLogEntry);
 
@@ -159,8 +153,8 @@
     public SortedMap<FileRef,DataFileValue> datafiles;
 
     public TabletFiles() {
-      logEntries = new ArrayList<LogEntry>();
-      datafiles = new TreeMap<FileRef,DataFileValue>();
+      logEntries = new ArrayList<>();
+      datafiles = new TreeMap<>();
     }
 
     public TabletFiles(String dir, List<LogEntry> logEntries, SortedMap<FileRef,DataFileValue> datafiles) {
@@ -170,14 +164,12 @@
     }
   }
 
-  public static Text switchRootTabletVolume(KeyExtent extent, Text location) throws IOException {
-    if (extent.isRootTablet()) {
-      String newLocation = switchVolume(location.toString(), FileType.TABLE, ServerConstants.getVolumeReplacements());
-      if (newLocation != null) {
-        MetadataTableUtil.setRootTabletDir(newLocation);
-        log.info("Volume replaced " + extent + " : " + location + " -> " + newLocation);
-        return new Text(new Path(newLocation).toString());
-      }
+  public static String switchRootTableVolume(String location) throws IOException {
+    String newLocation = switchVolume(location, FileType.TABLE, ServerConstants.getVolumeReplacements());
+    if (newLocation != null) {
+      MetadataTableUtil.setRootTabletDir(newLocation);
+      log.info("Volume replaced: " + location + " -> " + newLocation);
+      return new Path(newLocation).toString();
     }
     return location;
   }
@@ -191,11 +183,11 @@
     List<Pair<Path,Path>> replacements = ServerConstants.getVolumeReplacements();
     log.trace("Using volume replacements: " + replacements);
 
-    List<LogEntry> logsToRemove = new ArrayList<LogEntry>();
-    List<LogEntry> logsToAdd = new ArrayList<LogEntry>();
+    List<LogEntry> logsToRemove = new ArrayList<>();
+    List<LogEntry> logsToAdd = new ArrayList<>();
 
-    List<FileRef> filesToRemove = new ArrayList<FileRef>();
-    SortedMap<FileRef,DataFileValue> filesToAdd = new TreeMap<FileRef,DataFileValue>();
+    List<FileRef> filesToRemove = new ArrayList<>();
+    SortedMap<FileRef,DataFileValue> filesToAdd = new TreeMap<>();
 
     TabletFiles ret = new TabletFiles();
 
@@ -244,16 +236,22 @@
         log.debug("Tablet directory switched, need to record old log files " + logsToRemove + " " + ProtobufUtil.toString(status));
         // Before deleting these logs, we need to mark them for replication
         for (LogEntry logEntry : logsToRemove) {
-          ReplicationTableUtil.updateFiles(context, extent, logEntry.logSet, status);
+          ReplicationTableUtil.updateFiles(context, extent, logEntry.filename, status);
         }
       }
     }
 
     ret.dir = decommisionedTabletDir(context, zooLock, vm, extent, tabletDir);
+    if (extent.isRootTablet()) {
+      SortedMap<FileRef,DataFileValue> copy = ret.datafiles;
+      ret.datafiles = new TreeMap<>();
+      for (Entry<FileRef,DataFileValue> entry : copy.entrySet()) {
+        ret.datafiles.put(new FileRef(new Path(ret.dir, entry.getKey().path().getName()).toString()), entry.getValue());
+      }
+    }
 
    // this method should return the exact strings that are in the metadata table
     return ret;
-
   }
 
   private static String decommisionedTabletDir(AccumuloServerContext context, ZooLock zooLock, VolumeManager vm, KeyExtent extent, String metaDir)
@@ -266,7 +264,7 @@
       throw new IllegalArgumentException("Unexpected table dir " + dir);
     }
 
-    Path newDir = new Path(vm.choose(Optional.of(extent.getTableId().toString()), ServerConstants.getBaseUris()) + Path.SEPARATOR + ServerConstants.TABLE_DIR
+    Path newDir = new Path(vm.choose(Optional.of(extent.getTableId()), ServerConstants.getBaseUris()) + Path.SEPARATOR + ServerConstants.TABLE_DIR
         + Path.SEPARATOR + dir.getParent().getName() + Path.SEPARATOR + dir.getName());
 
     log.info("Updating directory for " + extent + " from " + dir + " to " + newDir);
@@ -347,7 +345,7 @@
   }
 
   private static HashSet<String> getFileNames(FileStatus[] filesStatuses) {
-    HashSet<String> names = new HashSet<String>();
+    HashSet<String> names = new HashSet<>();
     for (FileStatus fileStatus : filesStatuses)
       if (fileStatus.isDirectory())
         throw new IllegalArgumentException("expected " + fileStatus.getPath() + " to be a file");
diff --git a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
index 4e5864e..0ccf51f 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java
@@ -87,6 +87,7 @@
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
 import org.apache.accumulo.server.iterators.MetadataBulkLoadFilter;
+import org.apache.accumulo.server.log.WalStateManager;
 import org.apache.accumulo.server.replication.ReplicationUtil;
 import org.apache.accumulo.server.replication.StatusCombiner;
 import org.apache.accumulo.server.security.AuditedSecurityOperation;
@@ -154,9 +155,9 @@
     return zoo;
   }
 
-  private static HashMap<String,String> initialMetadataConf = new HashMap<String,String>();
-  private static HashMap<String,String> initialMetadataCombinerConf = new HashMap<String,String>();
-  private static HashMap<String,String> initialReplicationTableConf = new HashMap<String,String>();
+  private static HashMap<String,String> initialMetadataConf = new HashMap<>();
+  private static HashMap<String,String> initialMetadataCombinerConf = new HashMap<>();
+  private static HashMap<String,String> initialReplicationTableConf = new HashMap<>();
 
   static {
     initialMetadataConf.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "32K");
@@ -464,7 +465,8 @@
       createEntriesForTablet(sorted, tablet);
     }
     FileSystem fs = volmanager.getVolumeByPath(new Path(fileName)).getFileSystem();
-    FileSKVWriter tabletWriter = FileOperations.getInstance().openWriter(fileName, fs, fs.getConf(), AccumuloConfiguration.getDefaultConfiguration());
+    FileSKVWriter tabletWriter = FileOperations.getInstance().newWriterBuilder().forFile(fileName, fs, fs.getConf())
+        .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
     tabletWriter.startDefaultLocalityGroup();
 
     for (Entry<Key,Value> entry : sorted.entrySet()) {
@@ -476,7 +478,7 @@
 
   private static void createEntriesForTablet(TreeMap<Key,Value> map, Tablet tablet) {
     Value EMPTY_SIZE = new DataFileValue(0, 0).encodeAsValue();
-    Text extent = new Text(KeyExtent.getMetadataEntry(new Text(tablet.tableId), tablet.endRow));
+    Text extent = new Text(KeyExtent.getMetadataEntry(tablet.tableId, tablet.endRow));
     addEntry(map, extent, DIRECTORY_COLUMN, new Value(tablet.dir.getBytes(UTF_8)));
     addEntry(map, extent, TIME_COLUMN, new Value((TabletTime.LOGICAL_TIME_ID + "0").getBytes(UTF_8)));
     addEntry(map, extent, PREV_ROW_COLUMN, KeyExtent.encodePrevEndRow(tablet.prevEndRow));
@@ -535,6 +537,7 @@
     zoo.putPersistentData(zkInstanceRoot + Constants.ZPROBLEMS, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET_WALOGS, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
+    zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET_CURRENT_LOGS, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + RootTable.ZROOT_TABLET_PATH, rootTabletDir.getBytes(UTF_8), NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + Constants.ZMASTERS, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + Constants.ZMASTER_LOCK, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
@@ -550,6 +553,7 @@
     zoo.putPersistentData(zkInstanceRoot + Constants.ZMONITOR_LOCK, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + ReplicationConstants.ZOO_BASE, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
     zoo.putPersistentData(zkInstanceRoot + ReplicationConstants.ZOO_TSERVERS, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
+    zoo.putPersistentData(zkInstanceRoot + WalStateManager.ZWALS, EMPTY_BYTE_ARRAY, NodeExistsPolicy.FAIL);
   }
 
   private String getInstanceNamePath(Opts opts) throws IOException, KeeperException, InterruptedException {
@@ -571,7 +575,8 @@
       if (opts.clearInstanceName) {
         exists = false;
         break;
-      } else if (exists = zoo.exists(instanceNamePath)) {
+      } else if (zoo.exists(instanceNamePath)) {
+        exists = true;
         String decision = getConsoleReader().readLine("Instance name \"" + instanceName + "\" exists. Delete existing entry from zookeeper? [Y/N] : ");
         if (decision == null)
           System.exit(0);
@@ -692,10 +697,10 @@
 
     String[] volumeURIs = VolumeConfiguration.getVolumeUris(SiteConfiguration.getInstance());
 
-    HashSet<String> initializedDirs = new HashSet<String>();
+    HashSet<String> initializedDirs = new HashSet<>();
     initializedDirs.addAll(Arrays.asList(ServerConstants.checkBaseUris(volumeURIs, true)));
 
-    HashSet<String> uinitializedDirs = new HashSet<String>();
+    HashSet<String> uinitializedDirs = new HashSet<>();
     uinitializedDirs.addAll(Arrays.asList(volumeURIs));
     uinitializedDirs.removeAll(initializedDirs);
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java b/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java
index 772be32..3a76442 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilter.java
@@ -82,7 +82,7 @@
       throw new IOException("This iterator not intended for use at scan time");
     }
 
-    bulkTxStatusCache = new HashMap<Long,MetadataBulkLoadFilter.Status>();
+    bulkTxStatusCache = new HashMap<>();
     arbitrator = getArbitrator();
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/log/WalStateManager.java b/server/base/src/main/java/org/apache/accumulo/server/log/WalStateManager.java
new file mode 100644
index 0000000..f08bcc4
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/log/WalStateManager.java
@@ -0,0 +1,241 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.server.log;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.UUID;
+
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.core.zookeeper.ZooUtil;
+import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
+import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
+import org.apache.accumulo.server.master.state.TServerInstance;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
+import org.apache.hadoop.fs.Path;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/*
+ * This class governs the space in Zookeeper that advertises the status of Write-Ahead Logs
+ * in use by tablet servers and the replication machinery.
+ *
+ * The Master needs to know the state of the WALs to mark tablets during recovery.
+ * The GC needs to know when a log is no longer needed so it can be removed.
+ * The replication mechanism needs to know when a log is closed and can be forwarded to the destination table.
+ *
+ * The state of the WALs is kept in Zookeeper under /accumulo/<instanceid>/wals.
+ * For each server, there is a znode formatted like the TServerInstance.toString(): "host:port[sessionid]".
+ * Under each server znode there is a node for each log, named by the log's UUID.
+ * Each WAL znode holds the current state of the log and the full path to the log.
+ *
+ * The state [OPEN, CLOSED, UNREFERENCED] is what the tablet server believes to be the state of the file.
+ *
+ * In the event of a recovery, the log is identified as belonging to a dead server.  The master will update
+ * the tablets assigned to that server with log references. Once all tablets have been reassigned and the log
+ * references are removed, the log will be eligible for deletion.
+ *
+ * Even when a log is UNREFERENCED by the tablet server, the replication mechanism may still need the log.
+ * The GC will defer log removal until replication is finished with it.
+ *
+ */
+public class WalStateManager {
+
+  public class WalMarkerException extends Exception {
+    private static final long serialVersionUID = 1L;
+
+    public WalMarkerException(Exception ex) {
+      super(ex);
+    }
+  }
+
+  private static final Logger log = LoggerFactory.getLogger(WalStateManager.class);
+
+  public final static String ZWALS = "/wals";
+
+  public enum WalState {
+    /* log is open, and may be written to */
+    OPEN,
+    /* log is closed, and will not be written to again */
+    CLOSED,
+    /* unreferenced: no tablet needs the log for recovery */
+    UNREFERENCED
+  }
+
+  private final Instance instance;
+  private final ZooReaderWriter zoo;
+
+  private volatile boolean checkedExistence = false;
+
+  public WalStateManager(Instance instance, ZooReaderWriter zoo) {
+    this.instance = instance;
+    this.zoo = zoo;
+  }
+
+  private String root() throws WalMarkerException {
+    String root = ZooUtil.getRoot(instance) + ZWALS;
+
+    try {
+      if (!checkedExistence && !zoo.exists(root)) {
+        zoo.putPersistentData(root, new byte[0], NodeExistsPolicy.SKIP);
+      }
+
+      checkedExistence = true;
+    } catch (KeeperException | InterruptedException e) {
+      throw new WalMarkerException(e);
+    }
+
+    return root;
+  }
+
+  // Tablet server announces its existence by creating its marker node
+  public void initWalMarker(TServerInstance tsi) throws WalMarkerException {
+    byte[] data = new byte[0];
+
+    try {
+      zoo.putPersistentData(root() + "/" + tsi.toString(), data, NodeExistsPolicy.FAIL);
+    } catch (KeeperException | InterruptedException e) {
+      throw new WalMarkerException(e);
+    }
+  }
+
+  // Tablet server opens a new WAL
+  public void addNewWalMarker(TServerInstance tsi, Path path) throws WalMarkerException {
+    updateState(tsi, path, WalState.OPEN);
+  }
+
+  private void updateState(TServerInstance tsi, Path path, WalState state) throws WalMarkerException {
+    byte[] data = (state.toString() + "," + path.toString()).getBytes(UTF_8);
+    try {
+      NodeExistsPolicy policy = NodeExistsPolicy.OVERWRITE;
+      if (state == WalState.OPEN) {
+        policy = NodeExistsPolicy.FAIL;
+      }
+      log.debug("Setting {} to {}", path.getName(), state);
+      zoo.putPersistentData(root() + "/" + tsi.toString() + "/" + path.getName(), data, policy);
+    } catch (KeeperException | InterruptedException e) {
+      throw new WalMarkerException(e);
+    }
+  }
+
+  // Tablet server has no references to the WAL
+  public void walUnreferenced(TServerInstance tsi, Path path) throws WalMarkerException {
+    updateState(tsi, path, WalState.UNREFERENCED);
+  }
+
+  private static Pair<WalState,Path> parse(byte[] data) {
+    String[] parts = new String(data, UTF_8).split(",");
+    return new Pair<>(WalState.valueOf(parts[0]), new Path(parts[1]));
+  }
+
+  // Master needs to know the logs for the given instance
+  public List<Path> getWalsInUse(TServerInstance tsi) throws WalMarkerException {
+    List<Path> result = new ArrayList<>();
+    try {
+      String zpath = root() + "/" + tsi.toString();
+      zoo.sync(zpath);
+      for (String child : zoo.getChildren(zpath)) {
+        Pair<WalState,Path> parts = parse(zoo.getData(zpath + "/" + child, null));
+        if (parts.getFirst() != WalState.UNREFERENCED) {
+          result.add(parts.getSecond());
+        }
+      }
+    } catch (KeeperException.NoNodeException e) {
+      log.debug("{} has no wal entry in zookeeper, assuming no logs", tsi);
+    } catch (KeeperException | InterruptedException e) {
+      throw new WalMarkerException(e);
+    }
+    return result;
+  }
+
+  // garbage collector wants the list of log markers for all servers
+  public Map<TServerInstance,List<UUID>> getAllMarkers() throws WalMarkerException {
+    Map<TServerInstance,List<UUID>> result = new HashMap<>();
+    try {
+      String path = root();
+      for (String child : zoo.getChildren(path)) {
+        TServerInstance inst = new TServerInstance(child);
+        List<UUID> logs = result.get(inst);
+        if (logs == null) {
+          logs = new ArrayList<>();
+          result.put(inst, logs);
+        }
+        for (String idString : zoo.getChildren(path + "/" + child)) {
+          logs.add(UUID.fromString(idString));
+        }
+      }
+    } catch (KeeperException | InterruptedException e) {
+      throw new WalMarkerException(e);
+    }
+    return result;
+  }
+
+  // garbage collector wants to know the state (open/closed) of a log, and the filename to delete
+  public Pair<WalState,Path> state(TServerInstance instance, UUID uuid) throws WalMarkerException {
+    try {
+      String path = root() + "/" + instance.toString() + "/" + uuid.toString();
+      return parse(zoo.getData(path, null));
+    } catch (KeeperException | InterruptedException e) {
+      throw new WalMarkerException(e);
+    }
+  }
+
+  // utility combination of getAllMarkers and state
+  public Map<Path,WalState> getAllState() throws WalMarkerException {
+    Map<Path,WalState> result = new HashMap<>();
+    for (Entry<TServerInstance,List<UUID>> entry : getAllMarkers().entrySet()) {
+      for (UUID id : entry.getValue()) {
+        Pair<WalState,Path> state = state(entry.getKey(), id);
+        result.put(state.getSecond(), state.getFirst());
+      }
+    }
+    return result;
+  }
+
+  // garbage collector knows it's safe to remove the marker for a closed log
+  public void removeWalMarker(TServerInstance instance, UUID uuid) throws WalMarkerException {
+    try {
+      log.debug("Removing {}", uuid);
+      String path = root() + "/" + instance.toString() + "/" + uuid.toString();
+      zoo.delete(path, -1);
+    } catch (InterruptedException | KeeperException e) {
+      throw new WalMarkerException(e);
+    }
+  }
+
+  // garbage collector knows the instance is dead, and its marker subtree can be removed
+  public void forget(TServerInstance instance) throws WalMarkerException {
+    String path = root() + "/" + instance.toString();
+    try {
+      zoo.recursiveDelete(path, NodeMissingPolicy.FAIL);
+    } catch (InterruptedException | KeeperException e) {
+      throw new WalMarkerException(e);
+    }
+  }
+
+  // tablet server can mark the log as closed (but still needed) so replication can begin
+  // master can mark a log as unreferenced after it has made log recovery markers on the tablets that need to be recovered
+  public void closeWal(TServerInstance instance, Path path) throws WalMarkerException {
+    updateState(instance, path, WalState.CLOSED);
+  }
+}
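The new `WalStateManager` stores each marker's znode data as `<STATE>,<path>` and recovers it by splitting on the comma. A hypothetical stand-alone sketch of that round trip (it uses a `split` limit of 2, a defensive variation in case a path ever contains a comma; the patch itself uses a plain `split(",")`):

```java
public class WalMarkerFormat {
    public enum WalState { OPEN, CLOSED, UNREFERENCED }

    // Encode as the patch does: the state, a comma, then the full log path.
    public static String encode(WalState state, String path) {
        return state + "," + path;
    }

    public static WalState parseState(String data) {
        return WalState.valueOf(data.split(",", 2)[0]);
    }

    public static String parsePath(String data) {
        // a limit of 2 keeps any later commas inside the path intact
        return data.split(",", 2)[1];
    }
}
```

This also makes the `NodeExistsPolicy` choice in `updateState` easier to see: an `OPEN` marker must not already exist (`FAIL`), while `CLOSED`/`UNREFERENCED` legitimately overwrite an earlier state.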
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/LiveTServerSet.java b/server/base/src/main/java/org/apache/accumulo/server/master/LiveTServerSet.java
index 0c0cceb..7d1d6e1 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/LiveTServerSet.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/LiveTServerSet.java
@@ -57,6 +57,7 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.net.HostAndPort;
+import org.apache.accumulo.core.tabletserver.thrift.TUnloadTabletGoal;
 
 public class LiveTServerSet implements Watcher {
 
@@ -105,10 +106,10 @@
       }
     }
 
-    public void unloadTablet(ZooLock lock, KeyExtent extent, boolean save) throws TException {
+    public void unloadTablet(ZooLock lock, KeyExtent extent, TUnloadTabletGoal goal, long requestTime) throws TException {
       TabletClientService.Client client = ThriftUtil.getClient(new TabletClientService.Client.Factory(), address, context);
       try {
-        client.unloadTablet(Tracer.traceInfo(), context.rpcCreds(), lockString(lock), extent.toThrift(), save);
+        client.unloadTablet(Tracer.traceInfo(), context.rpcCreds(), lockString(lock), extent.toThrift(), goal, requestTime);
       } finally {
         ThriftUtil.returnClient(client);
       }
@@ -205,15 +206,15 @@
       this.connection = connection;
       this.instance = instance;
     }
-  };
+  }
 
   // The set of active tservers with locks, indexed by their name in zookeeper
-  private Map<String,TServerInfo> current = new HashMap<String,TServerInfo>();
+  private Map<String,TServerInfo> current = new HashMap<>();
   // as above, indexed by TServerInstance
-  private Map<TServerInstance,TServerInfo> currentInstances = new HashMap<TServerInstance,TServerInfo>();
+  private Map<TServerInstance,TServerInfo> currentInstances = new HashMap<>();
 
   // The set of entries in zookeeper without locks, and the first time each was noticed
-  private Map<String,Long> locklessServers = new HashMap<String,Long>();
+  private Map<String,Long> locklessServers = new HashMap<>();
 
   public LiveTServerSet(ClientContext context, Listener cback) {
     this.cback = cback;
@@ -238,12 +239,12 @@
 
   public synchronized void scanServers() {
     try {
-      final Set<TServerInstance> updates = new HashSet<TServerInstance>();
-      final Set<TServerInstance> doomed = new HashSet<TServerInstance>();
+      final Set<TServerInstance> updates = new HashSet<>();
+      final Set<TServerInstance> doomed = new HashSet<>();
 
       final String path = ZooUtil.getRoot(context.getInstance()) + Constants.ZTSERVERS;
 
-      HashSet<String> all = new HashSet<String>(current.keySet());
+      HashSet<String> all = new HashSet<>(current.keySet());
       all.addAll(getZooCache().getChildren(path));
 
       locklessServers.keySet().retainAll(all);
@@ -332,8 +333,8 @@
 
           String server = event.getPath().substring(pos + 1);
 
-          final Set<TServerInstance> updates = new HashSet<TServerInstance>();
-          final Set<TServerInstance> doomed = new HashSet<TServerInstance>();
+          final Set<TServerInstance> updates = new HashSet<>();
+          final Set<TServerInstance> doomed = new HashSet<>();
 
           final String path = ZooUtil.getRoot(context.getInstance()) + Constants.ZTSERVERS;
 
@@ -359,7 +360,7 @@
   }
 
   public synchronized Set<TServerInstance> getCurrentServers() {
-    return new HashSet<TServerInstance>(currentInstances.keySet());
+    return new HashSet<>(currentInstances.keySet());
   }
 
   public synchronized int size() {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancer.java
index fd3dfd8..112fcc1 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancer.java
@@ -69,8 +69,8 @@
       Map<KeyExtent,TServerInstance> assignments) {
     long total = assignments.size() + unassigned.size();
     long avg = (long) Math.ceil(((double) total) / current.size());
-    Map<TServerInstance,Long> toAssign = new HashMap<TServerInstance,Long>();
-    List<TServerInstance> tServerArray = new ArrayList<TServerInstance>();
+    Map<TServerInstance,Long> toAssign = new HashMap<>();
+    List<TServerInstance> tServerArray = new ArrayList<>();
     for (Entry<TServerInstance,TabletServerStatus> e : current.entrySet()) {
       long numTablets = 0;
       for (TableInfo ti : e.getValue().getTableMap().values()) {
@@ -105,8 +105,8 @@
 
   @Override
   public long balance(SortedMap<TServerInstance,TabletServerStatus> current, Set<KeyExtent> migrations, List<TabletMigration> migrationsOut) {
-    Map<TServerInstance,Long> numTablets = new HashMap<TServerInstance,Long>();
-    List<TServerInstance> underCapacityTServer = new ArrayList<TServerInstance>();
+    Map<TServerInstance,Long> numTablets = new HashMap<>();
+    List<TServerInstance> underCapacityTServer = new ArrayList<>();
 
     if (!migrations.isEmpty()) {
       outstandingMigrations.migrations = migrations;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java
index 56b3839..c31eb37 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancer.java
@@ -53,7 +53,7 @@
   }
 
   List<TServerInstance> randomize(Set<TServerInstance> locations) {
-    List<TServerInstance> result = new ArrayList<TServerInstance>(locations);
+    List<TServerInstance> result = new ArrayList<>(locations);
     Collections.shuffle(result);
     return result;
   }
@@ -123,11 +123,11 @@
       if (current.size() < 2) {
         return false;
       }
-      final Map<String,Map<KeyExtent,TabletStats>> donerTabletStats = new HashMap<String,Map<KeyExtent,TabletStats>>();
+      final Map<String,Map<KeyExtent,TabletStats>> donerTabletStats = new HashMap<>();
 
       // Sort by total number of online tablets, per server
       int total = 0;
-      ArrayList<ServerCounts> totals = new ArrayList<ServerCounts>();
+      ArrayList<ServerCounts> totals = new ArrayList<>();
       for (Entry<TServerInstance,TabletServerStatus> entry : current.entrySet()) {
         int serverTotal = 0;
         if (entry.getValue() != null && entry.getValue().tableMap != null) {
@@ -197,7 +197,7 @@
    */
   List<TabletMigration> move(ServerCounts tooMuch, ServerCounts tooLittle, int count, Map<String,Map<KeyExtent,TabletStats>> donerTabletStats) {
 
-    List<TabletMigration> result = new ArrayList<TabletMigration>();
+    List<TabletMigration> result = new ArrayList<>();
     if (count == 0)
       return result;
 
@@ -235,7 +235,7 @@
       Map<KeyExtent,TabletStats> onlineTabletsForTable = donerTabletStats.get(table);
       try {
         if (onlineTabletsForTable == null) {
-          onlineTabletsForTable = new HashMap<KeyExtent,TabletStats>();
+          onlineTabletsForTable = new HashMap<>();
           List<TabletStats> stats = getOnlineTabletsForTable(tooMuch.server, table);
           if (null == stats) {
             log.warn("Unable to find tablets to move");
@@ -271,7 +271,7 @@
   }
 
   static Map<String,Integer> tabletCountsPerTable(TabletServerStatus status) {
-    Map<String,Integer> result = new HashMap<String,Integer>();
+    Map<String,Integer> result = new HashMap<>();
     if (status != null && status.tableMap != null) {
       Map<String,TableInfo> tableMap = status.tableMap;
       for (Entry<String,TableInfo> entry : tableMap.entrySet()) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
index 2ea8e71..1838536 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
@@ -48,7 +48,6 @@
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.master.state.TabletMigration;
 import org.apache.commons.lang.mutable.MutableInt;
-import org.apache.hadoop.io.Text;
 
 import com.google.common.base.Function;
 import com.google.common.collect.HashBasedTable;
@@ -72,7 +71,6 @@
 public abstract class GroupBalancer extends TabletBalancer {
 
   private final String tableId;
-  private final Text textTableId;
   private long lastRun = 0;
 
   /**
@@ -82,7 +80,6 @@
 
   public GroupBalancer(String tableId) {
     this.tableId = tableId;
-    this.textTableId = new Text(tableId);
   }
 
   protected Iterable<Pair<KeyExtent,Location>> getLocationProvider() {
@@ -113,7 +110,7 @@
     }
 
     for (KeyExtent keyExtent : migrations) {
-      if (keyExtent.getTableId().equals(textTableId)) {
+      if (keyExtent.getTableId().equals(tableId)) {
         return false;
       }
     }
@@ -148,7 +145,7 @@
         }
       }
 
-      tabletsByGroup.add(new ComparablePair<String,KeyExtent>(partitioner.apply(entry.getKey()), entry.getKey()));
+      tabletsByGroup.add(new ComparablePair<>(partitioner.apply(entry.getKey()), entry.getKey()));
     }
 
     Collections.sort(tabletsByGroup);
@@ -531,7 +528,7 @@
             if (srcTgi.getExtras().size() <= maxExtraGroups) {
               serversToRemove.add(srcTgi.getTserverInstance());
             } else {
-              serversGroupsToRemove.add(new Pair<String,TServerInstance>(group, srcTgi.getTserverInstance()));
+              serversGroupsToRemove.add(new Pair<>(group, srcTgi.getTserverInstance()));
             }
 
             if (destTgi.getExtras().size() >= maxExtraGroups || moves.size() >= getMaxMigrations()) {
@@ -596,7 +593,7 @@
             moves.move(group, 1, srcTgi, destTgi);
 
             if (num == 2) {
-              serversToRemove.add(new Pair<String,TserverGroupInfo>(group, srcTgi));
+              serversToRemove.add(new Pair<>(group, srcTgi));
             }
 
             if (destTgi.getExtras().size() >= maxExtraGroups || moves.size() >= getMaxMigrations()) {
@@ -658,7 +655,7 @@
             if (srcTgi.getExtras().size() <= expectedExtra) {
               emptyServers.add(srcTgi.getTserverInstance());
             } else if (srcTgi.getExtras().get(group) == null) {
-              emptyServerGroups.add(new Pair<String,TServerInstance>(group, srcTgi.getTserverInstance()));
+              emptyServerGroups.add(new Pair<>(group, srcTgi.getTserverInstance()));
             }
 
             if (destTgi.getExtras().size() >= expectedExtra || moves.size() >= getMaxMigrations()) {
@@ -764,7 +761,7 @@
         }
       }
 
-      return new Pair<KeyExtent,Location>(extent, loc);
+      return new Pair<>(extent, loc);
     }
 
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
index d19ab82..0fd217c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
@@ -89,7 +89,7 @@
   private volatile long lastOOBCheck = System.currentTimeMillis();
   private volatile long lastPoolRecheck = 0;
   private boolean isIpBasedRegex = false;
-  private Map<String,SortedMap<TServerInstance,TabletServerStatus>> pools = new HashMap<String,SortedMap<TServerInstance,TabletServerStatus>>();
+  private Map<String,SortedMap<TServerInstance,TabletServerStatus>> pools = new HashMap<>();
   private int maxTServerMigrations = HOST_BALANCER_REGEX_MAX_MIGRATIONS_DEFAULT;
 
   /**
@@ -103,13 +103,13 @@
   protected synchronized Map<String,SortedMap<TServerInstance,TabletServerStatus>> splitCurrentByRegex(SortedMap<TServerInstance,TabletServerStatus> current) {
     if ((System.currentTimeMillis() - lastPoolRecheck) > poolRecheckMillis) {
       LOG.debug("Performing pool recheck - regrouping tablet servers based on regular expressions");
-      Map<String,SortedMap<TServerInstance,TabletServerStatus>> newPools = new HashMap<String,SortedMap<TServerInstance,TabletServerStatus>>();
+      Map<String,SortedMap<TServerInstance,TabletServerStatus>> newPools = new HashMap<>();
       for (Entry<TServerInstance,TabletServerStatus> e : current.entrySet()) {
         List<String> poolNames = getPoolNamesForHost(e.getKey().host());
         for (String pool : poolNames) {
           SortedMap<TServerInstance,TabletServerStatus> np = newPools.get(pool);
           if (null == np) {
-            np = new TreeMap<TServerInstance,TabletServerStatus>(current.comparator());
+            np = new TreeMap<>(current.comparator());
             newPools.put(pool, np);
           }
           np.put(e.getKey(), e.getValue());
@@ -265,18 +265,18 @@
 
     Map<String,SortedMap<TServerInstance,TabletServerStatus>> pools = splitCurrentByRegex(current);
     // group the unassigned into tables
-    Map<String,Map<KeyExtent,TServerInstance>> groupedUnassigned = new HashMap<String,Map<KeyExtent,TServerInstance>>();
+    Map<String,Map<KeyExtent,TServerInstance>> groupedUnassigned = new HashMap<>();
     for (Entry<KeyExtent,TServerInstance> e : unassigned.entrySet()) {
-      Map<KeyExtent,TServerInstance> tableUnassigned = groupedUnassigned.get(e.getKey().getTableId().toString());
+      Map<KeyExtent,TServerInstance> tableUnassigned = groupedUnassigned.get(e.getKey().getTableId());
       if (tableUnassigned == null) {
-        tableUnassigned = new HashMap<KeyExtent,TServerInstance>();
-        groupedUnassigned.put(e.getKey().getTableId().toString(), tableUnassigned);
+        tableUnassigned = new HashMap<>();
+        groupedUnassigned.put(e.getKey().getTableId(), tableUnassigned);
       }
       tableUnassigned.put(e.getKey(), e.getValue());
     }
     // Send a view of the current servers to the tables tablet balancer
     for (Entry<String,Map<KeyExtent,TServerInstance>> e : groupedUnassigned.entrySet()) {
-      Map<KeyExtent,TServerInstance> newAssignments = new HashMap<KeyExtent,TServerInstance>();
+      Map<KeyExtent,TServerInstance> newAssignments = new HashMap<>();
       String tableName = tableIdToTableName.get(e.getKey());
       String poolName = getPoolNameForTable(tableName);
       SortedMap<TServerInstance,TabletServerStatus> currentView = pools.get(poolName);
@@ -377,7 +377,7 @@
             this.poolRecheckMillis);
         continue;
       }
-      ArrayList<TabletMigration> newMigrations = new ArrayList<TabletMigration>();
+      ArrayList<TabletMigration> newMigrations = new ArrayList<>();
       long tableBalanceTime = getBalancerForTable(s).balance(currentView, migrations, newMigrations);
       if (tableBalanceTime < minBalanceTime) {
         minBalanceTime = tableBalanceTime;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TableLoadBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TableLoadBalancer.java
index 7d76572..de3f4f7 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TableLoadBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TableLoadBalancer.java
@@ -43,7 +43,7 @@
 
   private static final Logger log = LoggerFactory.getLogger(TableLoadBalancer.class);
 
-  Map<String,TabletBalancer> perTableBalancers = new HashMap<String,TabletBalancer>();
+  Map<String,TabletBalancer> perTableBalancers = new HashMap<>();
 
   private TabletBalancer constructNewBalancerForTable(String clazzName, String table) throws Exception {
     String context = null;
@@ -114,17 +114,17 @@
   public void getAssignments(SortedMap<TServerInstance,TabletServerStatus> current, Map<KeyExtent,TServerInstance> unassigned,
       Map<KeyExtent,TServerInstance> assignments) {
     // separate the unassigned into tables
-    Map<String,Map<KeyExtent,TServerInstance>> groupedUnassigned = new HashMap<String,Map<KeyExtent,TServerInstance>>();
+    Map<String,Map<KeyExtent,TServerInstance>> groupedUnassigned = new HashMap<>();
     for (Entry<KeyExtent,TServerInstance> e : unassigned.entrySet()) {
-      Map<KeyExtent,TServerInstance> tableUnassigned = groupedUnassigned.get(e.getKey().getTableId().toString());
+      Map<KeyExtent,TServerInstance> tableUnassigned = groupedUnassigned.get(e.getKey().getTableId());
       if (tableUnassigned == null) {
-        tableUnassigned = new HashMap<KeyExtent,TServerInstance>();
-        groupedUnassigned.put(e.getKey().getTableId().toString(), tableUnassigned);
+        tableUnassigned = new HashMap<>();
+        groupedUnassigned.put(e.getKey().getTableId(), tableUnassigned);
       }
       tableUnassigned.put(e.getKey(), e.getValue());
     }
     for (Entry<String,Map<KeyExtent,TServerInstance>> e : groupedUnassigned.entrySet()) {
-      Map<KeyExtent,TServerInstance> newAssignments = new HashMap<KeyExtent,TServerInstance>();
+      Map<KeyExtent,TServerInstance> newAssignments = new HashMap<>();
       getBalancerForTable(e.getKey()).getAssignments(current, e.getValue(), newAssignments);
       assignments.putAll(newAssignments);
     }
@@ -152,7 +152,7 @@
     if (t == null)
       return minBalanceTime;
     for (String s : t.tableIdMap().values()) {
-      ArrayList<TabletMigration> newMigrations = new ArrayList<TabletMigration>();
+      ArrayList<TabletMigration> newMigrations = new ArrayList<>();
       long tableBalanceTime = getBalancerForTable(s).balance(current, migrations, newMigrations);
       if (tableBalanceTime < minBalanceTime)
         minBalanceTime = tableBalanceTime;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TabletBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TabletBalancer.java
index a199288..a160bb0 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TabletBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/TabletBalancer.java
@@ -213,7 +213,7 @@
   * @return A list of TabletMigration objects that passed sanity checks.
    */
   public static List<TabletMigration> checkMigrationSanity(Set<TServerInstance> current, List<TabletMigration> migrations) {
-    List<TabletMigration> result = new ArrayList<TabletMigration>(migrations.size());
+    List<TabletMigration> result = new ArrayList<>(migrations.size());
     for (TabletMigration m : migrations) {
       if (m.tablet == null) {
         log.warn("Balancer gave back a null tablet " + m);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/DeadServerList.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/DeadServerList.java
index 3cd2517..0be5f45 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/DeadServerList.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/DeadServerList.java
@@ -46,7 +46,7 @@
   }
 
   public List<DeadServer> getList() {
-    List<DeadServer> result = new ArrayList<DeadServer>();
+    List<DeadServer> result = new ArrayList<>();
     IZooReaderWriter zoo = ZooReaderWriter.getInstance();
     try {
       List<String> children = zoo.getChildren(path);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataStateStore.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataStateStore.java
index 7ee6f0c..c549adc 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataStateStore.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataStateStore.java
@@ -17,6 +17,8 @@
 package org.apache.accumulo.server.master.state;
 
 import java.util.Collection;
+import java.util.List;
+import java.util.Map;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
@@ -27,7 +29,9 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.tabletserver.log.LogEntry;
 import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.hadoop.fs.Path;
 
 public class MetaDataStateStore extends TabletStateStore {
 
@@ -59,7 +63,7 @@
 
   @Override
   public ClosableIterator<TabletLocationState> iterator() {
-    return new MetaDataTableScanner(context, MetadataSchema.TabletsSection.getRange(), state);
+    return new MetaDataTableScanner(context, MetadataSchema.TabletsSection.getRange(), state, targetTableName);
   }
 
   @Override
@@ -70,6 +74,7 @@
         Mutation m = new Mutation(assignment.tablet.getMetadataEntry());
         assignment.server.putLocation(m);
         assignment.server.clearFutureLocation(m);
+        SuspendingTServer.clearSuspension(m);
         writer.addMutation(m);
       }
     } catch (Exception ex) {
@@ -101,6 +106,7 @@
     try {
       for (Assignment assignment : assignments) {
         Mutation m = new Mutation(assignment.tablet.getMetadataEntry());
+        SuspendingTServer.clearSuspension(m);
         assignment.server.putFutureLocation(m);
         writer.addMutation(m);
       }
@@ -116,14 +122,35 @@
   }
 
   @Override
-  public void unassign(Collection<TabletLocationState> tablets) throws DistributedStoreException {
+  public void unassign(Collection<TabletLocationState> tablets, Map<TServerInstance,List<Path>> logsForDeadServers) throws DistributedStoreException {
+    suspend(tablets, logsForDeadServers, -1);
+  }
 
+  @Override
+  public void suspend(Collection<TabletLocationState> tablets, Map<TServerInstance,List<Path>> logsForDeadServers, long suspensionTimestamp)
+      throws DistributedStoreException {
     BatchWriter writer = createBatchWriter();
     try {
       for (TabletLocationState tls : tablets) {
         Mutation m = new Mutation(tls.extent.getMetadataEntry());
         if (tls.current != null) {
           tls.current.clearLocation(m);
+          if (logsForDeadServers != null) {
+            List<Path> logs = logsForDeadServers.get(tls.current);
+            if (logs != null) {
+              for (Path log : logs) {
+                LogEntry entry = new LogEntry(tls.extent, 0, tls.current.hostPort(), log.toString());
+                m.put(entry.getColumnFamily(), entry.getColumnQualifier(), entry.getValue());
+              }
+            }
+          }
+          if (suspensionTimestamp >= 0) {
+            SuspendingTServer suspender = new SuspendingTServer(tls.current.getLocation(), suspensionTimestamp);
+            suspender.setSuspension(m);
+          }
+        }
+        if (tls.suspend != null && suspensionTimestamp < 0) {
+          SuspendingTServer.clearSuspension(m);
         }
         if (tls.future != null) {
           tls.future.clearFutureLocation(m);
@@ -142,7 +169,31 @@
   }
 
   @Override
+  public void unsuspend(Collection<TabletLocationState> tablets) throws DistributedStoreException {
+    BatchWriter writer = createBatchWriter();
+    try {
+      for (TabletLocationState tls : tablets) {
+        if (tls.suspend == null) {
+          continue;
+        }
+        Mutation m = new Mutation(tls.extent.getMetadataEntry());
+        SuspendingTServer.clearSuspension(m);
+        writer.addMutation(m);
+      }
+    } catch (Exception ex) {
+      throw new DistributedStoreException(ex);
+    } finally {
+      try {
+        writer.close();
+      } catch (MutationsRejectedException e) {
+        throw new DistributedStoreException(e);
+      }
+    }
+  }
+
+  @Override
   public String name() {
     return "Normal Tablets";
   }
+
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java
index 81eff0f..5d6052f 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/MetaDataTableScanner.java
@@ -78,6 +78,7 @@
     scanner.fetchColumnFamily(TabletsSection.CurrentLocationColumnFamily.NAME);
     scanner.fetchColumnFamily(TabletsSection.FutureLocationColumnFamily.NAME);
     scanner.fetchColumnFamily(TabletsSection.LastLocationColumnFamily.NAME);
+    scanner.fetchColumnFamily(TabletsSection.SuspendLocationColumn.SUSPEND_COLUMN.getColumnFamily());
     scanner.fetchColumnFamily(LogColumnFamily.NAME);
     scanner.fetchColumnFamily(ChoppedColumnFamily.NAME);
     scanner.addScanIterator(new IteratorSetting(1000, "wholeRows", WholeRowIterator.class));
@@ -136,11 +137,13 @@
     TServerInstance future = null;
     TServerInstance current = null;
     TServerInstance last = null;
+    SuspendingTServer suspend = null;
     long lastTimestamp = 0;
-    List<Collection<String>> walogs = new ArrayList<Collection<String>>();
+    List<Collection<String>> walogs = new ArrayList<>();
     boolean chopped = false;
 
     for (Entry<Key,Value> entry : decodedRow.entrySet()) {
+
       Key key = entry.getKey();
       Text row = key.getRow();
       Text cf = key.getColumnFamily();
@@ -170,13 +173,16 @@
         chopped = true;
       } else if (TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.equals(cf, cq)) {
         extent = new KeyExtent(row, entry.getValue());
+      } else if (TabletsSection.SuspendLocationColumn.SUSPEND_COLUMN.equals(cf, cq)) {
+        suspend = SuspendingTServer.fromValue(entry.getValue());
       }
     }
     if (extent == null) {
-      log.warn("No prev-row for key extent: " + decodedRow);
-      return null;
+      String msg = "No prev-row for key extent " + decodedRow;
+      log.error(msg);
+      throw new BadLocationStateException(msg, k.getRow());
     }
-    return new TabletLocationState(extent, future, current, last, walogs, chopped);
+    return new TabletLocationState(extent, future, current, last, suspend, walogs, chopped);
   }
 
   private TabletLocationState fetch() {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/SuspendingTServer.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/SuspendingTServer.java
new file mode 100644
index 0000000..3f4e49e
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/SuspendingTServer.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.server.master.state;
+
+import com.google.common.net.HostAndPort;
+import java.util.Objects;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.SuspendLocationColumn.SUSPEND_COLUMN;
+
+/** For a suspended tablet, the time of suspension and the server it was suspended from. */
+public class SuspendingTServer {
+  public final HostAndPort server;
+  public final long suspensionTime;
+
+  SuspendingTServer(HostAndPort server, long suspensionTime) {
+    this.server = Objects.requireNonNull(server);
+    this.suspensionTime = suspensionTime;
+  }
+
+  public static SuspendingTServer fromValue(Value value) {
+    String valStr = value.toString();
+    String[] parts = valStr.split("[|]", 2);
+    return new SuspendingTServer(HostAndPort.fromString(parts[0]), Long.parseLong(parts[1]));
+  }
+
+  public Value toValue() {
+    return new Value(server.toString() + "|" + suspensionTime);
+  }
+
+  @Override
+  public boolean equals(Object rhsObject) {
+    if (!(rhsObject instanceof SuspendingTServer)) {
+      return false;
+    }
+    SuspendingTServer rhs = (SuspendingTServer) rhsObject;
+    return server.equals(rhs.server) && suspensionTime == rhs.suspensionTime;
+  }
+
+  public void setSuspension(Mutation m) {
+    m.put(SUSPEND_COLUMN.getColumnFamily(), SUSPEND_COLUMN.getColumnQualifier(), toValue());
+  }
+
+  public static void clearSuspension(Mutation m) {
+    m.putDelete(SUSPEND_COLUMN.getColumnFamily(), SUSPEND_COLUMN.getColumnQualifier());
+  }
+
+  @Override
+  public int hashCode() {
+    return Objects.hash(server, suspensionTime);
+  }
+
+  @Override
+  public String toString() {
+    return server.toString() + "[" + suspensionTime + "]";
+  }
+}
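The new `SuspendingTServer` class serializes its state as `host:port|suspensionTime`, splitting on the first `|` when reading it back. A minimal standalone sketch of that round-trip (class and method names here are illustrative, not part of the patch):

```java
// Hypothetical sketch of the "server|suspensionTime" encoding used by
// SuspendingTServer.toValue()/fromValue(). Only the string handling is
// reproduced; HostAndPort and Value are replaced with plain Strings.
public class SuspendEncodingSketch {

  // Mirror toValue(): join the server address and timestamp with '|'.
  static String encode(String hostPort, long suspensionTime) {
    return hostPort + "|" + suspensionTime;
  }

  // Mirror fromValue(): split on the first '|' only, as split("[|]", 2) does.
  static String[] decode(String value) {
    return value.split("[|]", 2);
  }

  public static void main(String[] args) {
    String encoded = encode("tserver1:9997", 1469000000000L);
    String[] parts = decode(encoded);
    assert parts[0].equals("tserver1:9997");
    assert Long.parseLong(parts[1]) == 1469000000000L;
  }
}
```

The limit of 2 in `split` matters: it keeps the timestamp intact even if a future value ever contained another `|`.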
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java
index d2d4f44..ace9f05 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TServerInstance.java
@@ -42,8 +42,8 @@
 
   // HostAndPort is not Serializable
   private transient HostAndPort location;
-  private String session;
-  private String cachedStringRepresentation;
+  private final String session;
+  private final String cachedStringRepresentation;
 
   public TServerInstance(HostAndPort address, String session) {
     this.location = address;
@@ -51,6 +51,16 @@
     this.cachedStringRepresentation = hostPort() + "[" + session + "]";
   }
 
+  public TServerInstance(String formattedString) {
+    int pos = formattedString.indexOf("[");
+    if (pos < 0 || !formattedString.endsWith("]")) {
+      throw new IllegalArgumentException(formattedString);
+    }
+    this.location = HostAndPort.fromString(formattedString.substring(0, pos));
+    this.session = formattedString.substring(pos + 1, formattedString.length() - 1);
+    this.cachedStringRepresentation = hostPort() + "[" + session + "]";
+  }
+
   public TServerInstance(HostAndPort address, long session) {
     this(address, Long.toHexString(session));
   }
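The new `TServerInstance(String)` constructor parses the `host:port[session]` form produced by `toString()`. A self-contained sketch of just that parsing logic (the class name and String-pair return type are illustrative):

```java
// Hypothetical sketch of the parsing in the new TServerInstance(String)
// constructor: the text before the first '[' is the host:port, and the
// bracketed suffix is the session id.
public class TServerInstanceParseSketch {

  // Returns {hostPort, session}; rejects strings without a "[...]" suffix.
  static String[] parse(String formatted) {
    int pos = formatted.indexOf("[");
    if (pos < 0 || !formatted.endsWith("]")) {
      throw new IllegalArgumentException(formatted);
    }
    String hostPort = formatted.substring(0, pos);
    String session = formatted.substring(pos + 1, formatted.length() - 1);
    return new String[] {hostPort, session};
  }

  public static void main(String[] args) {
    String[] parts = parse("tserver1:9997[1a2b3c4d]");
    assert parts[0].equals("tserver1:9997");
    assert parts[1].equals("1a2b3c4d");
  }
}
```

This round-trips with the cached representation built in the other constructors, `hostPort() + "[" + session + "]"`.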
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletLocationState.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletLocationState.java
index fb30440..784bd33 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletLocationState.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletLocationState.java
@@ -46,12 +46,13 @@
     }
   }
 
-  public TabletLocationState(KeyExtent extent, TServerInstance future, TServerInstance current, TServerInstance last, Collection<Collection<String>> walogs,
-      boolean chopped) throws BadLocationStateException {
+  public TabletLocationState(KeyExtent extent, TServerInstance future, TServerInstance current, TServerInstance last, SuspendingTServer suspend,
+      Collection<Collection<String>> walogs, boolean chopped) throws BadLocationStateException {
     this.extent = extent;
     this.future = future;
     this.current = current;
     this.last = last;
+    this.suspend = suspend;
     if (walogs == null)
       walogs = Collections.emptyList();
     this.walogs = walogs;
@@ -65,9 +66,17 @@
   final public TServerInstance future;
   final public TServerInstance current;
   final public TServerInstance last;
+  final public SuspendingTServer suspend;
   final public Collection<Collection<String>> walogs;
   final public boolean chopped;
 
+  public TServerInstance futureOrCurrent() {
+    if (current != null) {
+      return current;
+    }
+    return future;
+  }
+
   @Override
   public String toString() {
     return extent + "@(" + future + "," + current + "," + last + ")" + (chopped ? " chopped" : "");
@@ -85,23 +94,29 @@
     return result;
   }
 
-  public TabletState getState(Set<TServerInstance> liveServers) {
-    TServerInstance server = getServer();
-    if (server == null)
-      return TabletState.UNASSIGNED;
-    if (server.equals(current) || server.equals(future)) {
-      if (liveServers.contains(server))
-        if (server.equals(future)) {
-          return TabletState.ASSIGNED;
-        } else {
-          return TabletState.HOSTED;
-        }
-      else {
-        return TabletState.ASSIGNED_TO_DEAD_SERVER;
-      }
-    }
-    // server == last
-    return TabletState.UNASSIGNED;
-  }
+  private static final int _HAS_CURRENT = 1 << 0;
+  private static final int _HAS_FUTURE = 1 << 1;
+  private static final int _HAS_SUSPEND = 1 << 2;
 
+  public TabletState getState(Set<TServerInstance> liveServers) {
+    switch ((current == null ? 0 : _HAS_CURRENT) | (future == null ? 0 : _HAS_FUTURE) | (suspend == null ? 0 : _HAS_SUSPEND)) {
+      case 0:
+        return TabletState.UNASSIGNED;
+
+      case _HAS_SUSPEND:
+        return TabletState.SUSPENDED;
+
+      case _HAS_FUTURE:
+      case (_HAS_FUTURE | _HAS_SUSPEND):
+        return liveServers.contains(future) ? TabletState.ASSIGNED : TabletState.ASSIGNED_TO_DEAD_SERVER;
+
+      case _HAS_CURRENT:
+      case (_HAS_CURRENT | _HAS_SUSPEND):
+        return liveServers.contains(current) ? TabletState.HOSTED : TabletState.ASSIGNED_TO_DEAD_SERVER;
+
+      default:
+        // Both current and future are set, which is prevented by constructor.
+        throw new IllegalStateException();
+    }
+  }
 }
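The rewritten `getState()` replaces nested conditionals with a bit-flag switch: `current`, `future`, and `suspend` each contribute one bit, and the switch enumerates the legal combinations. A minimal sketch of that dispatch, with the server references reduced to booleans (everything except the enum values and flag layout is simplified for illustration):

```java
// Sketch of the bit-flag state dispatch in TabletLocationState.getState().
// serverLive stands in for liveServers.contains(current-or-future).
public class TabletStateSketch {
  enum TabletState { UNASSIGNED, ASSIGNED, HOSTED, ASSIGNED_TO_DEAD_SERVER, SUSPENDED }

  static final int HAS_CURRENT = 1 << 0;
  static final int HAS_FUTURE = 1 << 1;
  static final int HAS_SUSPEND = 1 << 2;

  static TabletState state(boolean current, boolean future, boolean suspend, boolean serverLive) {
    switch ((current ? HAS_CURRENT : 0) | (future ? HAS_FUTURE : 0) | (suspend ? HAS_SUSPEND : 0)) {
      case 0:
        return TabletState.UNASSIGNED;
      case HAS_SUSPEND:
        return TabletState.SUSPENDED;
      case HAS_FUTURE:
      case HAS_FUTURE | HAS_SUSPEND:
        return serverLive ? TabletState.ASSIGNED : TabletState.ASSIGNED_TO_DEAD_SERVER;
      case HAS_CURRENT:
      case HAS_CURRENT | HAS_SUSPEND:
        return serverLive ? TabletState.HOSTED : TabletState.ASSIGNED_TO_DEAD_SERVER;
      default:
        // current and future set together is prevented by the constructor.
        throw new IllegalStateException();
    }
  }

  public static void main(String[] args) {
    assert state(false, false, true, false) == TabletState.SUSPENDED;
    assert state(true, false, false, true) == TabletState.HOSTED;
    assert state(false, true, false, false) == TabletState.ASSIGNED_TO_DEAD_SERVER;
  }
}
```

Note that a suspension marker is ignored whenever a current or future location exists; `SUSPENDED` is reported only when the suspend bit is set alone.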
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletServerState.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletServerState.java
index 942eabf..d618d8b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletServerState.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletServerState.java
@@ -45,8 +45,8 @@
   private static HashSet<TabletServerState> badStates;
 
   static {
-    mapping = new HashMap<Byte,TabletServerState>(TabletServerState.values().length);
-    badStates = new HashSet<TabletServerState>();
+    mapping = new HashMap<>(TabletServerState.values().length);
+    badStates = new HashSet<>();
     for (TabletServerState state : TabletServerState.values()) {
       mapping.put(state.id, state);
       if (state.id > 99)
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletState.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletState.java
index d69ca19..bd0e885 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletState.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletState.java
@@ -17,5 +17,5 @@
 package org.apache.accumulo.server.master.state;
 
 public enum TabletState {
-  UNASSIGNED, ASSIGNED, HOSTED, ASSIGNED_TO_DEAD_SERVER
+  UNASSIGNED, ASSIGNED, HOSTED, ASSIGNED_TO_DEAD_SERVER, SUSPENDED
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
index 236b602..00f86c6 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateChangeIterator.java
@@ -42,7 +42,6 @@
 import org.apache.accumulo.server.master.state.TabletLocationState.BadLocationStateException;
 import org.apache.hadoop.io.DataInputBuffer;
 import org.apache.hadoop.io.DataOutputBuffer;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -61,7 +60,7 @@
 
   private Set<TServerInstance> current;
   private Set<String> onlineTables;
-  private Map<Text,MergeInfo> merges;
+  private Map<String,MergeInfo> merges;
   private boolean debug = false;
   private Set<KeyExtent> migrations;
   private MasterState masterState = MasterState.NORMAL;
@@ -91,7 +90,7 @@
     if (migrations == null)
       return Collections.emptySet();
     try {
-      Set<KeyExtent> result = new HashSet<KeyExtent>();
+      Set<KeyExtent> result = new HashSet<>();
       DataInputBuffer buffer = new DataInputBuffer();
       byte[] data = Base64.decodeBase64(migrations.getBytes(UTF_8));
       buffer.reset(data, data.length);
@@ -109,7 +108,7 @@
   private Set<String> parseTables(String tables) {
     if (tables == null)
       return null;
-    Set<String> result = new HashSet<String>();
+    Set<String> result = new HashSet<>();
     for (String table : tables.split(","))
       result.add(table);
     return result;
@@ -119,7 +118,7 @@
     if (servers == null)
       return null;
     // parse "host:port[INSTANCE]"
-    Set<TServerInstance> result = new HashSet<TServerInstance>();
+    Set<TServerInstance> result = new HashSet<>();
     if (servers.length() > 0) {
       for (String part : servers.split(",")) {
         String parts[] = part.split("\\[", 2);
@@ -133,11 +132,11 @@
     return result;
   }
 
-  private Map<Text,MergeInfo> parseMerges(String merges) {
+  private Map<String,MergeInfo> parseMerges(String merges) {
     if (merges == null)
       return null;
     try {
-      Map<Text,MergeInfo> result = new HashMap<Text,MergeInfo>();
+      Map<String,MergeInfo> result = new HashMap<>();
       DataInputBuffer buffer = new DataInputBuffer();
       byte[] data = Base64.decodeBase64(merges.getBytes(UTF_8));
       buffer.reset(data, data.length);
@@ -182,7 +181,7 @@
       }
 
       // is the table supposed to be online or offline?
-      boolean shouldBeOnline = onlineTables.contains(tls.extent.getTableId().toString());
+      boolean shouldBeOnline = onlineTables.contains(tls.extent.getTableId());
 
       if (debug) {
         log.debug(tls.extent + " is " + tls.getState(current) + " and should be " + (shouldBeOnline ? "on" : "off") + "line");
@@ -197,6 +196,7 @@
           break;
         case ASSIGNED_TO_DEAD_SERVER:
           return;
+        case SUSPENDED:
         case UNASSIGNED:
           if (shouldBeOnline)
             return;
@@ -216,7 +216,7 @@
 
   public static void setCurrentServers(IteratorSetting cfg, Set<TServerInstance> goodServers) {
     if (goodServers != null) {
-      List<String> servers = new ArrayList<String>();
+      List<String> servers = new ArrayList<>();
       for (TServerInstance server : goodServers)
         servers.add(server.toString());
       cfg.addOption(SERVERS_OPTION, Joiner.on(",").join(servers));
@@ -263,7 +263,7 @@
 
   public static void setShuttingDown(IteratorSetting cfg, Set<TServerInstance> servers) {
     if (servers != null) {
-      List<String> serverList = new ArrayList<String>();
+      List<String> serverList = new ArrayList<>();
       for (TServerInstance server : servers) {
         serverList.add(server.toString());
       }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java
index 5413e31..6872466 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/TabletStateStore.java
@@ -18,8 +18,12 @@
 
 import java.util.Collection;
 import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import org.apache.accumulo.core.data.impl.KeyExtent;
 
 import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.hadoop.fs.Path;
 
 /**
  * Interface for storing information about tablet assignments. There are three implementations:
@@ -56,31 +60,43 @@
    *
    * @param tablets
    *          the tablets' current information
+   * @param logsForDeadServers
+   *          a cache of logs in use by servers when they died
    */
-  abstract public void unassign(Collection<TabletLocationState> tablets) throws DistributedStoreException;
+  abstract public void unassign(Collection<TabletLocationState> tablets, Map<TServerInstance,List<Path>> logsForDeadServers) throws DistributedStoreException;
 
-  public static void unassign(AccumuloServerContext context, TabletLocationState tls) throws DistributedStoreException {
-    TabletStateStore store;
-    if (tls.extent.isRootTablet()) {
-      store = new ZooTabletStateStore();
-    } else if (tls.extent.isMeta()) {
-      store = new RootTabletStateStore(context);
-    } else {
-      store = new MetaDataStateStore(context);
-    }
-    store.unassign(Collections.singletonList(tls));
+  /**
+   * Mark tablets as having no known or future location, but desiring to be returned to their previous tserver.
+   */
+  abstract public void suspend(Collection<TabletLocationState> tablets, Map<TServerInstance,List<Path>> logsForDeadServers, long suspensionTimestamp)
+      throws DistributedStoreException;
+
+  /**
+   * Remove the suspension marker from a collection of tablets, leaving them simply unassigned.
+   */
+  abstract public void unsuspend(Collection<TabletLocationState> tablets) throws DistributedStoreException;
+
+  public static void unassign(AccumuloServerContext context, TabletLocationState tls, Map<TServerInstance,List<Path>> logsForDeadServers)
+      throws DistributedStoreException {
+    getStoreForTablet(tls.extent, context).unassign(Collections.singletonList(tls), logsForDeadServers);
+  }
+
+  public static void suspend(AccumuloServerContext context, TabletLocationState tls, Map<TServerInstance,List<Path>> logsForDeadServers,
+      long suspensionTimestamp) throws DistributedStoreException {
+    getStoreForTablet(tls.extent, context).suspend(Collections.singletonList(tls), logsForDeadServers, suspensionTimestamp);
   }
 
   public static void setLocation(AccumuloServerContext context, Assignment assignment) throws DistributedStoreException {
-    TabletStateStore store;
-    if (assignment.tablet.isRootTablet()) {
-      store = new ZooTabletStateStore();
-    } else if (assignment.tablet.isMeta()) {
-      store = new RootTabletStateStore(context);
-    } else {
-      store = new MetaDataStateStore(context);
-    }
-    store.setLocations(Collections.singletonList(assignment));
+    getStoreForTablet(assignment.tablet, context).setLocations(Collections.singletonList(assignment));
   }
 
+  protected static TabletStateStore getStoreForTablet(KeyExtent extent, AccumuloServerContext context) throws DistributedStoreException {
+    if (extent.isRootTablet()) {
+      return new ZooTabletStateStore();
+    } else if (extent.isMeta()) {
+      return new RootTabletStateStore(context);
+    } else {
+      return new MetaDataStateStore(context);
+    }
+  }
 }
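The patch factors the repeated store-selection logic into `getStoreForTablet()`: the root tablet's state lives in ZooKeeper, metadata tablets' state in the root table, and user tablets' state in the metadata table. A toy sketch of that three-way dispatch (store names returned as strings purely for illustration):

```java
// Sketch of the getStoreForTablet() dispatch added by the patch. The two
// booleans stand in for KeyExtent.isRootTablet() and KeyExtent.isMeta().
public class StoreDispatchSketch {

  static String storeFor(boolean isRootTablet, boolean isMeta) {
    if (isRootTablet) {
      return "ZooTabletStateStore";   // root tablet: state kept in ZooKeeper
    } else if (isMeta) {
      return "RootTabletStateStore";  // metadata tablets: state kept in root table
    } else {
      return "MetaDataStateStore";    // user tablets: state kept in metadata table
    }
  }

  public static void main(String[] args) {
    assert storeFor(true, true).equals("ZooTabletStateStore");
    assert storeFor(false, true).equals("RootTabletStateStore");
    assert storeFor(false, false).equals("MetaDataStateStore");
  }
}
```

Centralizing this keeps `unassign`, `suspend`, and `setLocation` from each re-implementing the same chain, which is exactly the duplication the diff removes.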
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java b/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java
index ab99396..148b6cc 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/state/ZooTabletStateStore.java
@@ -21,12 +21,15 @@
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Map;
 
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
 import org.apache.commons.lang.NotImplementedException;
+import org.apache.hadoop.fs.Path;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -81,17 +84,16 @@
             currentSession = parse(current);
             futureSession = null;
           }
-          List<Collection<String>> logs = new ArrayList<Collection<String>>();
+          List<Collection<String>> logs = new ArrayList<>();
           for (String entry : store.getChildren(RootTable.ZROOT_TABLET_WALOGS)) {
             byte[] logInfo = store.get(RootTable.ZROOT_TABLET_WALOGS + "/" + entry);
             if (logInfo != null) {
-              LogEntry logEntry = new LogEntry();
-              logEntry.fromBytes(logInfo);
-              logs.add(logEntry.logSet);
-              log.debug("root tablet logSet " + logEntry.logSet);
+              LogEntry logEntry = LogEntry.fromBytes(logInfo);
+              logs.add(Collections.singleton(logEntry.filename));
+              log.debug("root tablet log " + logEntry.filename);
             }
           }
-          TabletLocationState result = new TabletLocationState(RootTable.EXTENT, futureSession, currentSession, lastSession, logs, false);
+          TabletLocationState result = new TabletLocationState(RootTable.EXTENT, futureSession, currentSession, lastSession, null, logs, false);
           log.debug("Returning root tablet state: " + result);
           return result;
         } catch (Exception ex) {
@@ -161,20 +163,46 @@
   }
 
   @Override
-  public void unassign(Collection<TabletLocationState> tablets) throws DistributedStoreException {
+  public void unassign(Collection<TabletLocationState> tablets, Map<TServerInstance,List<Path>> logsForDeadServers) throws DistributedStoreException {
     if (tablets.size() != 1)
       throw new IllegalArgumentException("There is only one root tablet");
     TabletLocationState tls = tablets.iterator().next();
     if (tls.extent.compareTo(RootTable.EXTENT) != 0)
       throw new IllegalArgumentException("You can only store the root tablet location");
+    if (logsForDeadServers != null) {
+      List<Path> logs = logsForDeadServers.get(tls.futureOrCurrent());
+      if (logs != null) {
+        for (Path entry : logs) {
+          LogEntry logEntry = new LogEntry(RootTable.EXTENT, System.currentTimeMillis(), tls.futureOrCurrent().getLocation().toString(), entry.toString());
+          byte[] value;
+          try {
+            value = logEntry.toBytes();
+          } catch (IOException ex) {
+            throw new DistributedStoreException(ex);
+          }
+          store.put(RootTable.ZROOT_TABLET_WALOGS + "/" + logEntry.getUniqueID(), value);
+        }
+      }
+    }
     store.remove(RootTable.ZROOT_TABLET_LOCATION);
     store.remove(RootTable.ZROOT_TABLET_FUTURE_LOCATION);
     log.debug("unassign root tablet location");
   }
 
   @Override
+  public void suspend(Collection<TabletLocationState> tablets, Map<TServerInstance,List<Path>> logsForDeadServers, long suspensionTimestamp)
+      throws DistributedStoreException {
+    // No support for suspending root tablet.
+    unassign(tablets, logsForDeadServers);
+  }
+
+  @Override
+  public void unsuspend(Collection<TabletLocationState> tablets) throws DistributedStoreException {
+    // No support for suspending root tablet.
+  }
+
+  @Override
   public String name() {
     return "Root Table";
   }
-
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/tableOps/UserCompactionConfig.java b/server/base/src/main/java/org/apache/accumulo/server/master/tableOps/UserCompactionConfig.java
index 02c6ac3..def0696 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/tableOps/UserCompactionConfig.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/tableOps/UserCompactionConfig.java
@@ -89,7 +89,7 @@
     }
 
     int num = in.readInt();
-    iterators = new ArrayList<IteratorSetting>(num);
+    iterators = new ArrayList<>(num);
 
     for (int i = 0; i < num; i++) {
       iterators.add(new IteratorSetting(in));
diff --git a/server/base/src/main/java/org/apache/accumulo/server/metrics/AbstractMetricsImpl.java b/server/base/src/main/java/org/apache/accumulo/server/metrics/AbstractMetricsImpl.java
index 42d2d00..24a9750 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/metrics/AbstractMetricsImpl.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/metrics/AbstractMetricsImpl.java
@@ -93,7 +93,7 @@
 
   static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(AbstractMetricsImpl.class);
 
-  private static ConcurrentHashMap<String,Metric> registry = new ConcurrentHashMap<String,Metric>();
+  private static ConcurrentHashMap<String,Metric> registry = new ConcurrentHashMap<>();
 
   private boolean currentlyLogging = false;
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/monitor/LogService.java b/server/base/src/main/java/org/apache/accumulo/server/monitor/LogService.java
index 8acc764..d59267a 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/monitor/LogService.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/monitor/LogService.java
@@ -94,7 +94,7 @@
    */
   public static void startLogListener(AccumuloConfiguration conf, String instanceId, String hostAddress) {
     try {
-      SocketServer server = new SocketServer(conf.getPort(Property.MONITOR_LOG4J_PORT));
+      SocketServer server = new SocketServer(conf.getPort(Property.MONITOR_LOG4J_PORT)[0]);
 
       // getLocalPort will return the actual ephemeral port used when '0' was provided.
       String logForwardingAddr = hostAddress + ":" + server.getLocalPort();
@@ -181,7 +181,7 @@
   }
 
   synchronized public List<DedupedLogEvent> getEvents() {
-    return new ArrayList<DedupedLogEvent>(events.values());
+    return new ArrayList<>(events.values());
   }
 
   synchronized public void clear() {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReport.java b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReport.java
index 4dc1eed..8685034 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReport.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReport.java
@@ -44,18 +44,18 @@
 import org.apache.zookeeper.KeeperException;
 
 public class ProblemReport {
-  private String tableName;
+  private String tableId;
   private ProblemType problemType;
   private String resource;
   private String exception;
   private String server;
   private long creationTime;
 
-  public ProblemReport(String table, ProblemType problemType, String resource, String server, Throwable e, long creationTime) {
-    requireNonNull(table, "table is null");
+  public ProblemReport(String tableId, ProblemType problemType, String resource, String server, Throwable e, long creationTime) {
+    requireNonNull(tableId, "tableId is null");
     requireNonNull(problemType, "problemType is null");
     requireNonNull(resource, "resource is null");
-    this.tableName = table;
+    this.tableId = tableId;
 
     this.problemType = problemType;
     this.resource = resource;
@@ -76,19 +76,19 @@
     this.creationTime = creationTime;
   }
 
-  public ProblemReport(String table, ProblemType problemType, String resource, String server, Throwable e) {
-    this(table, problemType, resource, server, e, System.currentTimeMillis());
+  public ProblemReport(String tableId, ProblemType problemType, String resource, String server, Throwable e) {
+    this(tableId, problemType, resource, server, e, System.currentTimeMillis());
   }
 
-  public ProblemReport(String table, ProblemType problemType, String resource, Throwable e) {
-    this(table, problemType, resource, null, e);
+  public ProblemReport(String tableId, ProblemType problemType, String resource, Throwable e) {
+    this(tableId, problemType, resource, null, e);
   }
 
   private ProblemReport(String table, ProblemType problemType, String resource, byte enc[]) throws IOException {
     requireNonNull(table, "table is null");
     requireNonNull(problemType, "problemType is null");
     requireNonNull(resource, "resource is null");
-    this.tableName = table;
+    this.tableId = table;
     this.problemType = problemType;
     this.resource = resource;
 
@@ -137,13 +137,13 @@
   }
 
   void removeFromMetadataTable(AccumuloServerContext context) throws Exception {
-    Mutation m = new Mutation(new Text("~err_" + tableName));
+    Mutation m = new Mutation(new Text("~err_" + tableId));
     m.putDelete(new Text(problemType.name()), new Text(resource));
     MetadataTableUtil.getMetadataTable(context).update(m);
   }
 
   void saveToMetadataTable(AccumuloServerContext context) throws Exception {
-    Mutation m = new Mutation(new Text("~err_" + tableName));
+    Mutation m = new Mutation(new Text("~err_" + tableId));
     m.put(new Text(problemType.name()), new Text(resource), new Value(encode()));
     MetadataTableUtil.getMetadataTable(context).update(m);
   }
@@ -188,27 +188,27 @@
     ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
     DataInputStream dis = new DataInputStream(bais);
 
-    String tableName = dis.readUTF();
+    String tableId = dis.readUTF();
     String problemType = dis.readUTF();
     String resource = dis.readUTF();
 
     String zpath = ZooUtil.getRoot(instance) + Constants.ZPROBLEMS + "/" + node;
     byte[] enc = zoorw.getData(zpath, null);
 
-    return new ProblemReport(tableName, ProblemType.valueOf(problemType), resource, enc);
+    return new ProblemReport(tableId, ProblemType.valueOf(problemType), resource, enc);
 
   }
 
   public static ProblemReport decodeMetadataEntry(Entry<Key,Value> entry) throws IOException {
-    String tableName = entry.getKey().getRow().toString().substring("~err_".length());
+    String tableId = entry.getKey().getRow().toString().substring("~err_".length());
     String problemType = entry.getKey().getColumnFamily().toString();
     String resource = entry.getKey().getColumnQualifier().toString();
 
-    return new ProblemReport(tableName, ProblemType.valueOf(problemType), resource, entry.getValue().get());
+    return new ProblemReport(tableId, ProblemType.valueOf(problemType), resource, entry.getValue().get());
   }
 
   public String getTableName() {
-    return tableName;
+    return tableId;
   }
 
   public ProblemType getProblemType() {
@@ -233,14 +233,14 @@
 
   @Override
   public int hashCode() {
-    return tableName.hashCode() + problemType.hashCode() + resource.hashCode();
+    return tableId.hashCode() + problemType.hashCode() + resource.hashCode();
   }
 
   @Override
   public boolean equals(Object o) {
     if (o instanceof ProblemReport) {
       ProblemReport opr = (ProblemReport) o;
-      return tableName.equals(opr.tableName) && problemType.equals(opr.problemType) && resource.equals(opr.resource);
+      return tableId.equals(opr.tableId) && problemType.equals(opr.problemType) && resource.equals(opr.resource);
     }
     return false;
   }
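
The `equals`/`hashCode` pair above keys a report on the triple (tableId, problemType, resource). A minimal sketch of that same contract on a simplified, hypothetical value class (names here are illustrative, not Accumulo's):

```java
import java.util.Objects;

public class ReportKeyDemo {
  // Simplified stand-in for ProblemReport: identity is (tableId, type, resource).
  static final class ReportKey {
    final String tableId, type, resource;

    ReportKey(String tableId, String type, String resource) {
      this.tableId = tableId;
      this.type = type;
      this.resource = resource;
    }

    @Override
    public boolean equals(Object o) {
      if (!(o instanceof ReportKey)) return false;
      ReportKey r = (ReportKey) o;
      return tableId.equals(r.tableId) && type.equals(r.type) && resource.equals(r.resource);
    }

    @Override
    public int hashCode() {
      // Objects.hash mixes all identity fields; equal keys always hash equally.
      return Objects.hash(tableId, type, resource);
    }
  }

  public static void main(String[] args) {
    ReportKey a = new ReportKey("2a", "FILE_READ", "hdfs://f1");
    ReportKey b = new ReportKey("2a", "FILE_READ", "hdfs://f1");
    System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
  }
}
```

Keeping the two methods over the exact same fields is what lets deduplication via hash-based collections (as `ProblemReports` does) behave correctly.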
diff --git a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReportingIterator.java b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReportingIterator.java
index 349ed20..83b4615 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReportingIterator.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReportingIterator.java
@@ -35,13 +35,13 @@
   private boolean sawError = false;
   private boolean continueOnError;
   private String resource;
-  private String table;
+  private String tableId;
   private final AccumuloServerContext context;
 
-  public ProblemReportingIterator(AccumuloServerContext context, String table, String resource, boolean continueOnError,
+  public ProblemReportingIterator(AccumuloServerContext context, String tableId, String resource, boolean continueOnError,
       SortedKeyValueIterator<Key,Value> source) {
     this.context = context;
-    this.table = table;
+    this.tableId = tableId;
     this.resource = resource;
     this.continueOnError = continueOnError;
     this.source = source;
@@ -49,7 +49,7 @@
 
   @Override
   public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
-    return new ProblemReportingIterator(context, table, resource, continueOnError, source.deepCopy(env));
+    return new ProblemReportingIterator(context, tableId, resource, continueOnError, source.deepCopy(env));
   }
 
   @Override
@@ -81,7 +81,7 @@
       source.next();
     } catch (IOException ioe) {
       sawError = true;
-      ProblemReports.getInstance(context).report(new ProblemReport(table, ProblemType.FILE_READ, resource, ioe));
+      ProblemReports.getInstance(context).report(new ProblemReport(tableId, ProblemType.FILE_READ, resource, ioe));
       if (!continueOnError) {
         throw ioe;
       }
@@ -98,7 +98,7 @@
       source.seek(range, columnFamilies, inclusive);
     } catch (IOException ioe) {
       sawError = true;
-      ProblemReports.getInstance(context).report(new ProblemReport(table, ProblemType.FILE_READ, resource, ioe));
+      ProblemReports.getInstance(context).report(new ProblemReport(tableId, ProblemType.FILE_READ, resource, ioe));
       if (!continueOnError) {
         throw ioe;
       }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java
index d44efb1..82e9f5b 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/problems/ProblemReports.java
@@ -305,12 +305,12 @@
 
   public Map<String,Map<ProblemType,Integer>> summarize() {
 
-    TreeMap<String,Map<ProblemType,Integer>> summary = new TreeMap<String,Map<ProblemType,Integer>>();
+    TreeMap<String,Map<ProblemType,Integer>> summary = new TreeMap<>();
 
     for (ProblemReport pr : this) {
       Map<ProblemType,Integer> tableProblems = summary.get(pr.getTableName());
       if (tableProblems == null) {
-        tableProblems = new EnumMap<ProblemType,Integer>(ProblemType.class);
+        tableProblems = new EnumMap<>(ProblemType.class);
         summary.put(pr.getTableName(), tableProblems);
       }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/replication/StatusCombiner.java b/server/base/src/main/java/org/apache/accumulo/server/replication/StatusCombiner.java
index aacf64d..84e4742 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/replication/StatusCombiner.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/replication/StatusCombiner.java
@@ -45,7 +45,7 @@
 public class StatusCombiner extends TypedValueCombiner<Status> {
   private static final Logger log = LoggerFactory.getLogger(StatusCombiner.class);
 
-  public static class StatusEncoder extends AbstractEncoder<Status> implements Encoder<Status> {
+  public static class StatusEncoder extends AbstractEncoder<Status> {
     private static final Logger log = LoggerFactory.getLogger(StatusEncoder.class);
 
     @Override
diff --git a/server/base/src/main/java/org/apache/accumulo/server/replication/StatusFormatter.java b/server/base/src/main/java/org/apache/accumulo/server/replication/StatusFormatter.java
index a674802..6a2d66a 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/replication/StatusFormatter.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/replication/StatusFormatter.java
@@ -22,7 +22,6 @@
 import java.util.Iterator;
 import java.util.Map.Entry;
 import java.util.Set;
-
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
@@ -32,8 +31,8 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.util.format.DefaultFormatter;
-import org.apache.accumulo.core.util.format.DefaultFormatter.DefaultDateFormat;
 import org.apache.accumulo.core.util.format.Formatter;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
@@ -52,7 +51,7 @@
       WorkSection.NAME, OrderSection.NAME));
 
   private Iterator<Entry<Key,Value>> iterator;
-  private boolean printTimestamps;
+  private FormatterConfig config;
 
   /* so a new date object doesn't get created for every record in the scan result */
   private static ThreadLocal<Date> tmpDate = new ThreadLocal<Date>() {
@@ -62,13 +61,6 @@
     }
   };
 
-  private static final ThreadLocal<DateFormat> formatter = new ThreadLocal<DateFormat>() {
-    @Override
-    protected DateFormat initialValue() {
-      return new DefaultDateFormat();
-    }
-  };
-
   @Override
   public boolean hasNext() {
     return iterator.hasNext();
@@ -77,7 +69,7 @@
   @Override
   public String next() {
     Entry<Key,Value> entry = iterator.next();
-    DateFormat timestampFormat = printTimestamps ? formatter.get() : null;
+    DateFormat timestampFormat = config.willPrintTimestamps() ? config.getDateFormatSupplier().get() : null;
 
     // If we expected this to be a protobuf, try to parse it, adding a message when it fails to parse
     if (REPLICATION_COLFAMS.contains(entry.getKey().getColumnFamily())) {
@@ -157,9 +149,9 @@
   }
 
   @Override
-  public void initialize(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps) {
+  public void initialize(Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
     this.iterator = scanner.iterator();
-    this.printTimestamps = printTimestamps;
+    this.config = new FormatterConfig(config);
   }
 
 }
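
The formatter above keeps one mutable `Date` per thread in a `ThreadLocal` so a scan does not allocate a fresh object for every record. A minimal sketch of that reuse pattern (the helper name is illustrative):

```java
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ThreadLocalDateDemo {
  // One mutable Date per thread, reused across records, as StatusFormatter does.
  private static final ThreadLocal<Date> tmpDate = ThreadLocal.withInitial(Date::new);

  static String formatTimestamp(long millis, DateFormat fmt) {
    Date d = tmpDate.get();
    d.setTime(millis); // mutate the cached instance instead of allocating a new one
    return fmt.format(d);
  }

  public static void main(String[] args) {
    DateFormat fmt = new SimpleDateFormat("yyyy/MM/dd");
    fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
    System.out.println(formatTimestamp(0L, fmt)); // 1970/01/01
  }
}
```

`DateFormat` itself is also not thread-safe, which is why the real code pulls its formatter from a per-thread supplier (`config.getDateFormatSupplier().get()`) rather than sharing one instance.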
diff --git a/server/base/src/main/java/org/apache/accumulo/server/replication/StatusUtil.java b/server/base/src/main/java/org/apache/accumulo/server/replication/StatusUtil.java
index e6e3cfd..ad892b8 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/replication/StatusUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/replication/StatusUtil.java
@@ -153,6 +153,19 @@
   /**
    * @return A {@link Status} for an open file of unspecified length, all of which needs replicating.
    */
+  public static Status openWithUnknownLength(long timeCreated) {
+    Builder builder = Status.newBuilder();
+    builder.setBegin(0);
+    builder.setEnd(0);
+    builder.setInfiniteEnd(true);
+    builder.setClosed(false);
+    builder.setCreatedTime(timeCreated);
+    return builder.build();
+  }
+
+  /**
+   * @return A {@link Status} for an open file of unspecified length, all of which needs replicating.
+   */
   public static Status openWithUnknownLength() {
     return INF_END_REPLICATION_STATUS;
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/rpc/CustomNonBlockingServer.java b/server/base/src/main/java/org/apache/accumulo/server/rpc/CustomNonBlockingServer.java
index c81913f..ae65c1e 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/rpc/CustomNonBlockingServer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/rpc/CustomNonBlockingServer.java
@@ -17,253 +17,94 @@
 package org.apache.accumulo.server.rpc;
 
 import java.io.IOException;
+import java.lang.reflect.Field;
 import java.net.Socket;
 import java.nio.channels.SelectionKey;
-import java.util.Iterator;
 
+import org.apache.accumulo.server.rpc.TServerUtils;
 import org.apache.thrift.server.THsHaServer;
 import org.apache.thrift.server.TNonblockingServer;
 import org.apache.thrift.transport.TNonblockingServerTransport;
 import org.apache.thrift.transport.TNonblockingSocket;
 import org.apache.thrift.transport.TNonblockingTransport;
-import org.apache.thrift.transport.TTransportException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 /**
- * This class implements a custom non-blocking thrift server, incorporating the {@link THsHaServer} features, and overriding the underlying
- * {@link TNonblockingServer} methods, especially {@link org.apache.thrift.server.TNonblockingServer.SelectAcceptThread}, in order to override the
- * {@link org.apache.thrift.server.AbstractNonblockingServer.FrameBuffer} and {@link org.apache.thrift.server.AbstractNonblockingServer.AsyncFrameBuffer} with
- * one that reveals the client address from its transport.
- *
- * <p>
- * The justification for this is explained in https://issues.apache.org/jira/browse/ACCUMULO-1691, and is needed due to the repeated regressions:
- * <ul>
- * <li>https://issues.apache.org/jira/browse/THRIFT-958</li>
- * <li>https://issues.apache.org/jira/browse/THRIFT-1464</li>
- * <li>https://issues.apache.org/jira/browse/THRIFT-2173</li>
- * </ul>
- *
- * <p>
- * This class contains a copy of {@link org.apache.thrift.server.TNonblockingServer.SelectAcceptThread} from Thrift 0.9.1, with the slight modification of
- * instantiating a custom FrameBuffer, rather than the {@link org.apache.thrift.server.AbstractNonblockingServer.FrameBuffer} and
- * {@link org.apache.thrift.server.AbstractNonblockingServer.AsyncFrameBuffer}. Because of this, any change in the implementation upstream will require a review
- * of this implementation here, to ensure any new bugfixes/features in the upstream Thrift class are also applied here, at least until
- * https://issues.apache.org/jira/browse/THRIFT-2173 is implemented. In the meantime, the maven-enforcer-plugin ensures that Thrift remains at version 0.9.1,
- * which has been reviewed and tested.
+ * This class implements a custom non-blocking thrift server that stores the client address in thread-local storage for the invocation.
  */
 public class CustomNonBlockingServer extends THsHaServer {
 
-  private static final Logger LOGGER = LoggerFactory.getLogger(CustomNonBlockingServer.class);
-  private SelectAcceptThread selectAcceptThread_;
-  private volatile boolean stopped_ = false;
+  private final Field selectAcceptThreadField;
 
   public CustomNonBlockingServer(Args args) {
     super(args);
-  }
 
-  @Override
-  protected Runnable getRunnable(final FrameBuffer frameBuffer) {
-    return new Runnable() {
-      @Override
-      public void run() {
-        if (frameBuffer instanceof CustomNonblockingFrameBuffer) {
-          TNonblockingTransport trans = ((CustomNonblockingFrameBuffer) frameBuffer).getTransport();
-          if (trans instanceof TNonblockingSocket) {
-            TNonblockingSocket tsock = (TNonblockingSocket) trans;
-            Socket sock = tsock.getSocketChannel().socket();
-            TServerUtils.clientAddress.set(sock.getInetAddress().getHostAddress() + ":" + sock.getPort());
-          }
-        }
-        frameBuffer.invoke();
-      }
-    };
+    try {
+      selectAcceptThreadField = TNonblockingServer.class.getDeclaredField("selectAcceptThread_");
+      selectAcceptThreadField.setAccessible(true);
+    } catch (Exception e) {
+      throw new RuntimeException("Failed to access required field in Thrift code.", e);
+    }
   }
 
   @Override
   protected boolean startThreads() {
+    // Yet another dirty/gross hack to get access to the client's address.
+
     // start the selector
     try {
-      selectAcceptThread_ = new SelectAcceptThread((TNonblockingServerTransport) serverTransport_);
+      // Hack in our SelectAcceptThread impl
+      SelectAcceptThread selectAcceptThread_ = new CustomSelectAcceptThread((TNonblockingServerTransport) serverTransport_);
+      // Set the private field before continuing.
+      selectAcceptThreadField.set(this, selectAcceptThread_);
+
       selectAcceptThread_.start();
       return true;
     } catch (IOException e) {
       LOGGER.error("Failed to start selector thread!", e);
       return false;
+    } catch (IllegalAccessException | IllegalArgumentException e) {
+      throw new RuntimeException("Exception setting custom select thread in Thrift", e);
     }
   }
 
-  @Override
-  public void stop() {
-    stopped_ = true;
-    if (selectAcceptThread_ != null) {
-      selectAcceptThread_.wakeupSelector();
-    }
-  }
+  /**
+   * Custom wrapper around {@link org.apache.thrift.server.TNonblockingServer.SelectAcceptThread} to create our {@link CustomFrameBuffer}.
+   */
+  private class CustomSelectAcceptThread extends SelectAcceptThread {
 
-  @Override
-  public boolean isStopped() {
-    return selectAcceptThread_.isStopped();
-  }
-
-  @Override
-  protected void joinSelector() {
-    // wait until the selector thread exits
-    try {
-      selectAcceptThread_.join();
-    } catch (InterruptedException e) {
-      // for now, just silently ignore. technically this means we'll have less of
-      // a graceful shutdown as a result.
-    }
-  }
-
-  private interface CustomNonblockingFrameBuffer {
-    TNonblockingTransport getTransport();
-  }
-
-  private class CustomAsyncFrameBuffer extends AsyncFrameBuffer implements CustomNonblockingFrameBuffer {
-    private TNonblockingTransport trans;
-
-    public CustomAsyncFrameBuffer(TNonblockingTransport trans, SelectionKey selectionKey, AbstractSelectThread selectThread) {
-      super(trans, selectionKey, selectThread);
-      this.trans = trans;
+    public CustomSelectAcceptThread(TNonblockingServerTransport serverTransport) throws IOException {
+      super(serverTransport);
     }
 
     @Override
-    public TNonblockingTransport getTransport() {
-      return trans;
+    protected FrameBuffer createFrameBuffer(final TNonblockingTransport trans, final SelectionKey selectionKey, final AbstractSelectThread selectThread) {
+      if (processorFactory_.isAsyncProcessor()) {
+        throw new IllegalStateException("This implementation does not support AsyncProcessors");
+      }
+
+      return new CustomFrameBuffer(trans, selectionKey, selectThread);
     }
   }
 
-  private class CustomFrameBuffer extends FrameBuffer implements CustomNonblockingFrameBuffer {
-    private TNonblockingTransport trans;
+  /**
+   * Custom wrapper around {@link org.apache.thrift.server.AbstractNonblockingServer.FrameBuffer} to extract the client's network location before accepting the
+   * request.
+   */
+  private class CustomFrameBuffer extends FrameBuffer {
 
     public CustomFrameBuffer(TNonblockingTransport trans, SelectionKey selectionKey, AbstractSelectThread selectThread) {
       super(trans, selectionKey, selectThread);
-      this.trans = trans;
     }
 
     @Override
-    public TNonblockingTransport getTransport() {
-      return trans;
+    public void invoke() {
+      if (trans_ instanceof TNonblockingSocket) {
+        TNonblockingSocket tsock = (TNonblockingSocket) trans_;
+        Socket sock = tsock.getSocketChannel().socket();
+        TServerUtils.clientAddress.set(sock.getInetAddress().getHostAddress() + ":" + sock.getPort());
+      }
+      super.invoke();
     }
   }
 
-  // @formatter:off
-  private class SelectAcceptThread extends AbstractSelectThread {
-
-    // The server transport on which new client transports will be accepted
-    private final TNonblockingServerTransport serverTransport;
-
-    /**
-     * Set up the thread that will handle the non-blocking accepts, reads, and
-     * writes.
-     */
-    public SelectAcceptThread(final TNonblockingServerTransport serverTransport)
-    throws IOException {
-      this.serverTransport = serverTransport;
-      serverTransport.registerSelector(selector);
-    }
-
-    public boolean isStopped() {
-      return stopped_;
-    }
-
-    /**
-     * The work loop. Handles both selecting (all IO operations) and managing
-     * the selection preferences of all existing connections.
-     */
-    @Override
-    public void run() {
-      try {
-        if (eventHandler_ != null) {
-          eventHandler_.preServe();
-        }
-
-        while (!stopped_) {
-          select();
-          processInterestChanges();
-        }
-        for (SelectionKey selectionKey : selector.keys()) {
-          cleanupSelectionKey(selectionKey);
-        }
-      } catch (Throwable t) {
-        LOGGER.error("run() exiting due to uncaught error", t);
-      } finally {
-        stopped_ = true;
-      }
-    }
-
-    /**
-     * Select and process IO events appropriately:
-     * If there are connections to be accepted, accept them.
-     * If there are existing connections with data waiting to be read, read it,
-     * buffering until a whole frame has been read.
-     * If there are any pending responses, buffer them until their target client
-     * is available, and then send the data.
-     */
-    private void select() {
-      try {
-        // wait for io events.
-        selector.select();
-
-        // process the io events we received
-        Iterator<SelectionKey> selectedKeys = selector.selectedKeys().iterator();
-        while (!stopped_ && selectedKeys.hasNext()) {
-          SelectionKey key = selectedKeys.next();
-          selectedKeys.remove();
-
-          // skip if not valid
-          if (!key.isValid()) {
-            cleanupSelectionKey(key);
-            continue;
-          }
-
-          // if the key is marked Accept, then it has to be the server
-          // transport.
-          if (key.isAcceptable()) {
-            handleAccept();
-          } else if (key.isReadable()) {
-            // deal with reads
-            handleRead(key);
-          } else if (key.isWritable()) {
-            // deal with writes
-            handleWrite(key);
-          } else {
-            LOGGER.warn("Unexpected state in select! " + key.interestOps());
-          }
-        }
-      } catch (IOException e) {
-        LOGGER.warn("Got an IOException while selecting!", e);
-      }
-    }
-
-    /**
-     * Accept a new connection.
-     */
-    @SuppressWarnings("unused")
-    private void handleAccept() throws IOException {
-      SelectionKey clientKey = null;
-      TNonblockingTransport client = null;
-      try {
-        // accept the connection
-        client = (TNonblockingTransport)serverTransport.accept();
-        clientKey = client.registerSelector(selector, SelectionKey.OP_READ);
-
-        // add this key to the map
-          FrameBuffer frameBuffer =
-              processorFactory_.isAsyncProcessor() ? new CustomAsyncFrameBuffer(client, clientKey,SelectAcceptThread.this) :
-                  new CustomFrameBuffer(client, clientKey,SelectAcceptThread.this);
-
-          clientKey.attach(frameBuffer);
-      } catch (TTransportException tte) {
-        // something went wrong accepting.
-        LOGGER.warn("Exception trying to accept!", tte);
-        tte.printStackTrace();
-        if (clientKey != null) cleanupSelectionKey(clientKey);
-        if (client != null) client.close();
-      }
-    }
-  } // SelectAcceptThread
-  // @formatter:on
 }
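
The rewritten server above works by using reflection to overwrite Thrift's private `selectAcceptThread_` field with a custom subclass. A standalone sketch of that reflection pattern, on a hypothetical `Holder` class rather than Thrift itself:

```java
import java.lang.reflect.Field;

public class PrivateFieldDemo {
  // Stand-in for a third-party class whose private field we cannot set directly.
  static class Holder {
    private String worker = "default";

    String worker() {
      return worker;
    }
  }

  public static void main(String[] args) throws Exception {
    Holder holder = new Holder();
    // Look up the private field on the declaring class, as the server does
    // with TNonblockingServer.selectAcceptThread_.
    Field f = Holder.class.getDeclaredField("worker");
    f.setAccessible(true);   // required before writing a private field
    f.set(holder, "custom"); // swap in our own implementation
    System.out.println(holder.worker()); // custom
  }
}
```

The trade-off is the same one the constructor's try/catch acknowledges: if a future Thrift release renames or removes the field, the failure surfaces at startup as a `RuntimeException` rather than at compile time.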
diff --git a/server/base/src/main/java/org/apache/accumulo/server/rpc/RpcWrapper.java b/server/base/src/main/java/org/apache/accumulo/server/rpc/RpcWrapper.java
index ec68166..b942913 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/rpc/RpcWrapper.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/rpc/RpcWrapper.java
@@ -106,7 +106,7 @@
     isOnewayMethod.setAccessible(true);
 
     try {
-      final Set<String> onewayMethods = new HashSet<String>();
+      final Set<String> onewayMethods = new HashSet<>();
       for (Entry<String,?> entry : processorView.entrySet()) {
         try {
           if ((Boolean) isOnewayMethod.invoke(entry.getValue())) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingWrapper.java b/server/base/src/main/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingWrapper.java
index 698cf9e..54f61dd 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingWrapper.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingWrapper.java
@@ -29,7 +29,7 @@
 public class TCredentialsUpdatingWrapper {
 
   public static <T> T service(final T instance, final Class<? extends T> originalClass, AccumuloConfiguration conf) {
-    InvocationHandler handler = new TCredentialsUpdatingInvocationHandler<T>(instance, conf);
+    InvocationHandler handler = new TCredentialsUpdatingInvocationHandler<>(instance, conf);
 
     @SuppressWarnings("unchecked")
     T proxiedInstance = (T) Proxy.newProxyInstance(originalClass.getClassLoader(), originalClass.getInterfaces(), handler);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/rpc/TNonblockingServerSocket.java b/server/base/src/main/java/org/apache/accumulo/server/rpc/TNonblockingServerSocket.java
deleted file mode 100644
index a209e69..0000000
--- a/server/base/src/main/java/org/apache/accumulo/server/rpc/TNonblockingServerSocket.java
+++ /dev/null
@@ -1,156 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package org.apache.accumulo.server.rpc;
-
-import java.io.IOException;
-import java.net.InetSocketAddress;
-import java.net.ServerSocket;
-import java.net.SocketException;
-import java.nio.channels.ClosedChannelException;
-import java.nio.channels.SelectionKey;
-import java.nio.channels.Selector;
-import java.nio.channels.ServerSocketChannel;
-import java.nio.channels.SocketChannel;
-
-import org.apache.thrift.transport.TNonblockingServerTransport;
-import org.apache.thrift.transport.TNonblockingSocket;
-import org.apache.thrift.transport.TTransportException;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Wrapper around ServerSocketChannel.
- *
- * This class is copied from org.apache.thrift.transport.TNonblockingServerSocket version 0.9. The only change (apart from the logging statements) is the
- * addition of the {@link #getPort()} method to retrieve the port used by the ServerSocket.
- */
-public class TNonblockingServerSocket extends TNonblockingServerTransport {
-  private static final Logger log = LoggerFactory.getLogger(TNonblockingServerSocket.class);
-
-  /**
-   * This channel is where all the nonblocking magic happens.
-   */
-  private ServerSocketChannel serverSocketChannel = null;
-
-  /**
-   * Underlying ServerSocket object
-   */
-  private ServerSocket serverSocket_ = null;
-
-  /**
-   * Timeout for client sockets from accept
-   */
-  private int clientTimeout_ = 0;
-
-  /**
-   * Creates just a port listening server socket
-   */
-  public TNonblockingServerSocket(int port, int clientTimeout) throws TTransportException {
-    this(new InetSocketAddress(port), clientTimeout);
-  }
-
-  public TNonblockingServerSocket(InetSocketAddress bindAddr) throws TTransportException {
-    this(bindAddr, 0);
-  }
-
-  public TNonblockingServerSocket(InetSocketAddress bindAddr, int clientTimeout) throws TTransportException {
-    clientTimeout_ = clientTimeout;
-    try {
-      serverSocketChannel = ServerSocketChannel.open();
-      serverSocketChannel.configureBlocking(false);
-
-      // Make server socket
-      serverSocket_ = serverSocketChannel.socket();
-      // Prevent 2MSL delay problem on server restarts
-      serverSocket_.setReuseAddress(true);
-      // Bind to listening port
-      serverSocket_.bind(bindAddr);
-    } catch (IOException ioe) {
-      serverSocket_ = null;
-      throw new TTransportException("Could not create ServerSocket on address " + bindAddr.toString() + ".");
-    }
-  }
-
-  @Override
-  public void listen() throws TTransportException {
-    // Make sure not to block on accept
-    if (serverSocket_ != null) {
-      try {
-        serverSocket_.setSoTimeout(0);
-      } catch (SocketException sx) {
-        log.error("SocketException caused by serverSocket in listen()", sx);
-      }
-    }
-  }
-
-  @Override
-  protected TNonblockingSocket acceptImpl() throws TTransportException {
-    if (serverSocket_ == null) {
-      throw new TTransportException(TTransportException.NOT_OPEN, "No underlying server socket.");
-    }
-    try {
-      SocketChannel socketChannel = serverSocketChannel.accept();
-      if (socketChannel == null) {
-        return null;
-      }
-
-      TNonblockingSocket tsocket = new TNonblockingSocket(socketChannel);
-      tsocket.setTimeout(clientTimeout_);
-      return tsocket;
-    } catch (IOException iox) {
-      throw new TTransportException(iox);
-    }
-  }
-
-  @Override
-  public void registerSelector(Selector selector) {
-    try {
-      // Register the server socket channel, indicating an interest in
-      // accepting new connections
-      serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
-    } catch (ClosedChannelException e) {
-      // this shouldn't happen, ideally...
-      // TODO: decide what to do with this.
-    }
-  }
-
-  @Override
-  public void close() {
-    if (serverSocket_ != null) {
-      try {
-        serverSocket_.close();
-      } catch (IOException iox) {
-        log.warn("WARNING: Could not close server socket: {}", iox.getMessage());
-      }
-      serverSocket_ = null;
-    }
-  }
-
-  @Override
-  public void interrupt() {
-    // The thread-safeness of this is dubious, but Java documentation suggests
-    // that it is safe to do this from a different thread context
-    close();
-  }
-
-  public int getPort() {
-    return serverSocket_.getLocalPort();
-  }
-}
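Note on the deletion above: the copied class existed solely to add `getPort()` on top of Thrift's `TNonblockingServerSocket`; once upstream Thrift exposed the bound port, the local fork could be dropped (see the import swap in `TServerUtils` below). The accessor matters most when binding to port 0, where the kernel picks the port. A minimal sketch of that situation, using plain NIO rather than Thrift:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class EphemeralPortDemo {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel channel = ServerSocketChannel.open();
        channel.configureBlocking(false);
        // Avoid the 2MSL TIME_WAIT delay on quick restarts, as the deleted class did
        channel.socket().setReuseAddress(true);
        // Port 0 asks the OS for any free ephemeral port
        channel.socket().bind(new InetSocketAddress(0));
        // Without a getPort()-style accessor, callers cannot learn
        // which port the kernel actually assigned
        int boundPort = channel.socket().getLocalPort();
        System.out.println(boundPort > 0);
        channel.close();
    }
}
```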
diff --git a/server/base/src/main/java/org/apache/accumulo/server/rpc/TServerUtils.java b/server/base/src/main/java/org/apache/accumulo/server/rpc/TServerUtils.java
index 08ef944..49995d6 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/rpc/TServerUtils.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/rpc/TServerUtils.java
@@ -20,14 +20,12 @@
 
 import java.io.IOException;
 import java.lang.reflect.Field;
-import java.net.BindException;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.ServerSocket;
 import java.net.UnknownHostException;
 import java.util.Arrays;
 import java.util.HashSet;
-import java.util.Random;
 import java.util.Set;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -41,7 +39,6 @@
 import org.apache.accumulo.core.rpc.UGIAssumingTransportFactory;
 import org.apache.accumulo.core.util.Daemon;
 import org.apache.accumulo.core.util.SimpleThreadPool;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.util.LoggingRunnable;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.util.Halt;
@@ -53,6 +50,7 @@
 import org.apache.thrift.protocol.TProtocolFactory;
 import org.apache.thrift.server.TServer;
 import org.apache.thrift.server.TThreadPoolServer;
+import org.apache.thrift.transport.TNonblockingServerSocket;
 import org.apache.thrift.transport.TSSLTransportFactory;
 import org.apache.thrift.transport.TSaslServerTransport;
 import org.apache.thrift.transport.TServerSocket;
@@ -73,7 +71,23 @@
   /**
    * Static instance, passed to {@link ClientInfoProcessorFactory}, which will contain the client address of any incoming RPC.
    */
-  public static final ThreadLocal<String> clientAddress = new ThreadLocal<String>();
+  public static final ThreadLocal<String> clientAddress = new ThreadLocal<>();
+
+  /**
+   *
+   * @param hostname
+   *          name of the host
+   * @param ports
+   *          array of ports
+   * @return array of HostAndPort objects
+   */
+  public static HostAndPort[] getHostAndPorts(String hostname, int[] ports) {
+    HostAndPort[] addresses = new HostAndPort[ports.length];
+    for (int i = 0; i < ports.length; i++) {
+      addresses[i] = HostAndPort.fromParts(hostname, ports[i]);
+    }
+    return addresses;
+  }
 
   /**
    * Start a server, at the given port, or higher, if that port is not available.
@@ -105,7 +119,7 @@
       throws UnknownHostException {
     final AccumuloConfiguration config = service.getConfiguration();
 
-    final int portHint = config.getPort(portHintProperty);
+    final int[] portHint = config.getPort(portHintProperty);
 
     int minThreads = 2;
     if (minThreadProperty != null)
@@ -133,42 +147,35 @@
     // create the TimedProcessor outside the port search loop so we don't try to register the same metrics mbean more than once
     TimedProcessor timedProcessor = new TimedProcessor(config, processor, serverName, threadName);
 
-    Random random = new Random();
-    for (int j = 0; j < 100; j++) {
-
-      // Are we going to slide around, looking for an open port?
-      int portsToSearch = 1;
-      if (portSearch)
-        portsToSearch = 1000;
-
-      for (int i = 0; i < portsToSearch; i++) {
-        int port = portHint + i;
-        if (portHint != 0 && i > 0)
-          port = 1024 + random.nextInt(65535 - 1024);
-        if (port > 65535)
-          port = 1024 + port % (65535 - 1024);
-        try {
-          HostAndPort addr = HostAndPort.fromParts(hostname, port);
-          return TServerUtils.startTServer(addr, serverType, timedProcessor, serverName, threadName, minThreads, simpleTimerThreadpoolSize,
-              timeBetweenThreadChecks, maxMessageSize, service.getServerSslParams(), service.getSaslParams(), service.getClientTimeoutInMillis());
-        } catch (TTransportException ex) {
-          log.error("Unable to start TServer", ex);
-          if (ex.getCause() == null || ex.getCause().getClass() == BindException.class) {
-            // Note: with a TNonblockingServerSocket a "port taken" exception is a cause-less
-            // TTransportException, and with a TSocket created by TSSLTransportFactory, it
-            // comes through as caused by a BindException.
-            log.info("Unable to use port {}, retrying. (Thread Name = {})", port, threadName);
-            UtilWaitThread.sleep(250);
-          } else {
-            // thrift is passing up a nested exception that isn't a BindException,
-            // so no reason to believe retrying on a different port would help.
-            log.error("Unable to start TServer", ex);
+    HostAndPort[] addresses = getHostAndPorts(hostname, portHint);
+    try {
+      return TServerUtils.startTServer(serverType, timedProcessor, serverName, threadName, minThreads, simpleTimerThreadpoolSize, timeBetweenThreadChecks,
+          maxMessageSize, service.getServerSslParams(), service.getSaslParams(), service.getClientTimeoutInMillis(), addresses);
+    } catch (TTransportException e) {
+      if (portSearch) {
+        HostAndPort last = addresses[addresses.length - 1];
+        // Attempt to allocate a port outside of the specified port property
+        // Search sequentially over the next 1000 ports
+        for (int i = last.getPort() + 1; i < last.getPort() + 1001; i++) {
+          int port = i;
+          if (port > 65535) {
             break;
           }
+          try {
+            HostAndPort addr = HostAndPort.fromParts(hostname, port);
+            return TServerUtils.startTServer(serverType, timedProcessor, serverName, threadName, minThreads, simpleTimerThreadpoolSize,
+                timeBetweenThreadChecks, maxMessageSize, service.getServerSslParams(), service.getSaslParams(), service.getClientTimeoutInMillis(), addr);
+          } catch (TTransportException tte) {
+            log.info("Unable to use port {}, retrying. (Thread Name = {})", port, threadName);
+          }
         }
+        log.error("Unable to start TServer", e);
+        throw new UnknownHostException("Unable to find a listen port");
+      } else {
+        log.error("Unable to start TServer", e);
+        throw new UnknownHostException("Unable to find a listen port");
       }
     }
-    throw new UnknownHostException("Unable to find a listen port");
   }
 
   /**
@@ -332,7 +339,7 @@
 
       // Be nice for the user and automatically remove protocols that might not exist in their JVM. Keeps us from forcing config alterations too
       // e.g. TLSv1.1 and TLSv1.2 don't exist in JDK6
-      Set<String> socketEnabledProtocols = new HashSet<String>(Arrays.asList(sslServerSock.getEnabledProtocols()));
+      Set<String> socketEnabledProtocols = new HashSet<>(Arrays.asList(sslServerSock.getEnabledProtocols()));
       // Keep only the enabled protocols that were specified by the configuration
       socketEnabledProtocols.retainAll(Arrays.asList(protocols));
       if (socketEnabledProtocols.isEmpty()) {
@@ -393,6 +400,7 @@
       hostname = InetAddress.getByName(address.getHostText()).getCanonicalHostName();
       fqdn = InetAddress.getLocalHost().getCanonicalHostName();
     } catch (UnknownHostException e) {
+      transport.close();
       throw new TTransportException(e);
     }
 
@@ -408,6 +416,7 @@
       log.error(
           "Expected hostname of '{}' but got '{}'. Ensure the entries in the Accumulo hosts files (e.g. masters, slaves) are the FQDN for each host when using SASL.",
           fqdn, hostname);
+      transport.close();
       throw new RuntimeException("SASL requires that the address the thrift server listens on is the same as the FQDN for this host");
     }
 
@@ -415,6 +424,7 @@
     try {
       serverUser = UserGroupInformation.getLoginUser();
     } catch (IOException e) {
+      transport.close();
       throw new TTransportException(e);
     }
 
@@ -451,27 +461,27 @@
     return new ServerAddress(server, address);
   }
 
-  public static ServerAddress startTServer(AccumuloConfiguration conf, HostAndPort address, ThriftServerType serverType, TProcessor processor,
-      String serverName, String threadName, int numThreads, int numSTThreads, long timeBetweenThreadChecks, long maxMessageSize, SslConnectionParams sslParams,
-      SaslServerConnectionParams saslParams, long serverSocketTimeout) throws TTransportException {
+  public static ServerAddress startTServer(AccumuloConfiguration conf, ThriftServerType serverType, TProcessor processor, String serverName, String threadName,
+      int numThreads, int numSTThreads, long timeBetweenThreadChecks, long maxMessageSize, SslConnectionParams sslParams,
+      SaslServerConnectionParams saslParams, long serverSocketTimeout, HostAndPort... addresses) throws TTransportException {
 
     if (ThriftServerType.SASL == serverType) {
       processor = updateSaslProcessor(serverType, processor);
     }
 
-    return startTServer(address, serverType, new TimedProcessor(conf, processor, serverName, threadName), serverName, threadName, numThreads, numSTThreads,
-        timeBetweenThreadChecks, maxMessageSize, sslParams, saslParams, serverSocketTimeout);
+    return startTServer(serverType, new TimedProcessor(conf, processor, serverName, threadName), serverName, threadName, numThreads, numSTThreads,
+        timeBetweenThreadChecks, maxMessageSize, sslParams, saslParams, serverSocketTimeout, addresses);
   }
 
   /**
-   * @see #startTServer(HostAndPort, ThriftServerType, TimedProcessor, TProtocolFactory, String, String, int, int, long, long, SslConnectionParams,
-   *      SaslServerConnectionParams, long)
+   * @see #startTServer(ThriftServerType, TimedProcessor, TProtocolFactory, String, String, int, int, long, long, SslConnectionParams,
+   *      SaslServerConnectionParams, long, HostAndPort...)
    */
-  public static ServerAddress startTServer(HostAndPort address, ThriftServerType serverType, TimedProcessor processor, String serverName, String threadName,
-      int numThreads, int numSTThreads, long timeBetweenThreadChecks, long maxMessageSize, SslConnectionParams sslParams,
-      SaslServerConnectionParams saslParams, long serverSocketTimeout) throws TTransportException {
-    return startTServer(address, serverType, processor, ThriftUtil.protocolFactory(), serverName, threadName, numThreads, numSTThreads,
-        timeBetweenThreadChecks, maxMessageSize, sslParams, saslParams, serverSocketTimeout);
+  public static ServerAddress startTServer(ThriftServerType serverType, TimedProcessor processor, String serverName, String threadName, int numThreads,
+      int numSTThreads, long timeBetweenThreadChecks, long maxMessageSize, SslConnectionParams sslParams, SaslServerConnectionParams saslParams,
+      long serverSocketTimeout, HostAndPort... addresses) throws TTransportException {
+    return startTServer(serverType, processor, ThriftUtil.protocolFactory(), serverName, threadName, numThreads, numSTThreads, timeBetweenThreadChecks,
+        maxMessageSize, sslParams, saslParams, serverSocketTimeout, addresses);
   }
 
   /**
@@ -479,35 +489,46 @@
    *
    * @return A ServerAddress encapsulating the Thrift server created and the host/port which it is bound to.
    */
-  public static ServerAddress startTServer(HostAndPort address, ThriftServerType serverType, TimedProcessor processor, TProtocolFactory protocolFactory,
-      String serverName, String threadName, int numThreads, int numSTThreads, long timeBetweenThreadChecks, long maxMessageSize, SslConnectionParams sslParams,
-      SaslServerConnectionParams saslParams, long serverSocketTimeout) throws TTransportException {
+  public static ServerAddress startTServer(ThriftServerType serverType, TimedProcessor processor, TProtocolFactory protocolFactory, String serverName,
+      String threadName, int numThreads, int numSTThreads, long timeBetweenThreadChecks, long maxMessageSize, SslConnectionParams sslParams,
+      SaslServerConnectionParams saslParams, long serverSocketTimeout, HostAndPort... addresses) throws TTransportException {
 
     // This is presently not supported. It's hypothetically possible, I believe, to work, but it would require changes in how the transports
     // work at the Thrift layer to ensure that both the SSL and SASL handshakes function. SASL's quality of protection addresses privacy issues.
     checkArgument(!(sslParams != null && saslParams != null), "Cannot start a Thrift server using both SSL and SASL");
 
-    ServerAddress serverAddress;
-    switch (serverType) {
-      case SSL:
-        log.debug("Instantiating SSL Thrift server");
-        serverAddress = createSslThreadPoolServer(address, processor, protocolFactory, serverSocketTimeout, sslParams, serverName, numThreads, numSTThreads,
-            timeBetweenThreadChecks);
+    ServerAddress serverAddress = null;
+    for (HostAndPort address : addresses) {
+      try {
+        switch (serverType) {
+          case SSL:
+            log.debug("Instantiating SSL Thrift server");
+            serverAddress = createSslThreadPoolServer(address, processor, protocolFactory, serverSocketTimeout, sslParams, serverName, numThreads,
+                numSTThreads, timeBetweenThreadChecks);
+            break;
+          case SASL:
+            log.debug("Instantiating SASL Thrift server");
+            serverAddress = createSaslThreadPoolServer(address, processor, protocolFactory, serverSocketTimeout, saslParams, serverName, threadName,
+                numThreads, numSTThreads, timeBetweenThreadChecks);
+            break;
+          case THREADPOOL:
+            log.debug("Instantiating unsecure TThreadPool Thrift server");
+            serverAddress = createBlockingServer(address, processor, protocolFactory, maxMessageSize, serverName, numThreads, numSTThreads,
+                timeBetweenThreadChecks);
+            break;
+          case CUSTOM_HS_HA: // Intentional passthrough -- Our custom wrapper around HsHa is the default
+          default:
+            log.debug("Instantiating default, unsecure custom half-async Thrift server");
+            serverAddress = createNonBlockingServer(address, processor, protocolFactory, serverName, threadName, numThreads, numSTThreads,
+                timeBetweenThreadChecks, maxMessageSize);
+        }
         break;
-      case SASL:
-        log.debug("Instantiating SASL Thrift server");
-        serverAddress = createSaslThreadPoolServer(address, processor, protocolFactory, serverSocketTimeout, saslParams, serverName, threadName, numThreads,
-            numSTThreads, timeBetweenThreadChecks);
-        break;
-      case THREADPOOL:
-        log.debug("Instantiating unsecure TThreadPool Thrift server");
-        serverAddress = createBlockingServer(address, processor, protocolFactory, maxMessageSize, serverName, numThreads, numSTThreads, timeBetweenThreadChecks);
-        break;
-      case CUSTOM_HS_HA: // Intentional passthrough -- Our custom wrapper around HsHa is the default
-      default:
-        log.debug("Instantiating default, unsecure custom half-async Thrift server");
-        serverAddress = createNonBlockingServer(address, processor, protocolFactory, serverName, threadName, numThreads, numSTThreads, timeBetweenThreadChecks,
-            maxMessageSize);
+      } catch (TTransportException e) {
+        log.warn("Error attempting to create server at {}. Error: {}", address.toString(), e.getMessage());
+      }
+    }
+    if (null == serverAddress) {
+      throw new TTransportException("Unable to create server on addresses: " + Arrays.toString(addresses));
     }
 
     final TServer finalServer = serverAddress.server;
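The `TServerUtils` refactor above replaces the old randomized retry loop with a deterministic strategy: try every configured address first, then (when port search is enabled) scan sequentially through the next 1000 ports, stopping at 65535. A self-contained sketch of that fallback, using a plain `ServerSocket` and a hypothetical `tryBind` helper in place of the Thrift server start:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortSearchDemo {
    // Hypothetical stand-in for starting the Thrift server on a port
    static ServerSocket tryBind(int port) throws IOException {
        return new ServerSocket(port);
    }

    // Sequentially search up to `range` ports at or above `start`,
    // mirroring the fallback loop in the refactored startServer
    static ServerSocket bindWithSearch(int start, int range) throws IOException {
        for (int port = start; port <= Math.min(start + range, 65535); port++) {
            try {
                return tryBind(port);
            } catch (IOException e) {
                // port taken; keep scanning
            }
        }
        throw new IOException("Unable to find a listen port");
    }

    public static void main(String[] args) throws IOException {
        // Occupy a port, then show the search settles on a later one
        ServerSocket taken = new ServerSocket(0);
        int start = taken.getLocalPort();
        ServerSocket found = bindWithSearch(start, 1000);
        System.out.println(found.getLocalPort() > start);
        found.close();
        taken.close();
    }
}
```

Unlike the removed randomized approach, a failure here is reproducible: the same set of occupied ports always yields the same chosen port or the same error.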
diff --git a/server/base/src/main/java/org/apache/accumulo/server/rpc/UGIAssumingProcessor.java b/server/base/src/main/java/org/apache/accumulo/server/rpc/UGIAssumingProcessor.java
index 48d18f4..27f15c7 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/rpc/UGIAssumingProcessor.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/rpc/UGIAssumingProcessor.java
@@ -40,8 +40,8 @@
 public class UGIAssumingProcessor implements TProcessor {
   private static final Logger log = LoggerFactory.getLogger(UGIAssumingProcessor.class);
 
-  public static final ThreadLocal<String> rpcPrincipal = new ThreadLocal<String>();
-  public static final ThreadLocal<SaslMechanism> rpcMechanism = new ThreadLocal<SaslMechanism>();
+  public static final ThreadLocal<String> rpcPrincipal = new ThreadLocal<>();
+  public static final ThreadLocal<SaslMechanism> rpcMechanism = new ThreadLocal<>();
 
   private final TProcessor wrapped;
   private final UserGroupInformation loginUser;
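The `rpcPrincipal` and `rpcMechanism` fields above carry per-request SASL identity through a `ThreadLocal`, which is safe here because Thrift dispatches each RPC on a single worker thread. A small sketch of the isolation this provides:

```java
public class RpcContextDemo {
    // Per-thread slot, as used for rpcPrincipal: each worker thread
    // sees only the value set during its own request
    static final ThreadLocal<String> principal = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        principal.set("alice");
        Thread worker = new Thread(() -> {
            // A freshly started thread sees no inherited value
            System.out.println(principal.get() == null);
            principal.set("bob");
            System.out.println(principal.get());
        });
        worker.start();
        worker.join();
        // The main thread's value is untouched by the worker
        System.out.println(principal.get());
    }
}
```

This pattern breaks down if a request hops threads (e.g. hand-off to an executor), which is one reason the server types above keep request processing on one thread.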
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/AuditedSecurityOperation.java b/server/base/src/main/java/org/apache/accumulo/server/security/AuditedSecurityOperation.java
index 7fc9d81..e5ca006 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/AuditedSecurityOperation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/AuditedSecurityOperation.java
@@ -130,7 +130,7 @@
   private static final int MAX_ELEMENTS_TO_LOG = 10;
 
   private static List<String> truncate(Collection<?> list) {
-    List<String> result = new ArrayList<String>();
+    List<String> result = new ArrayList<>();
     int i = 0;
     for (Object obj : list) {
       if (i++ > MAX_ELEMENTS_TO_LOG) {
@@ -173,7 +173,7 @@
       @SuppressWarnings({"unchecked", "rawtypes"})
       Map<KeyExtent,List<Range>> convertedBatch = Translator.translate(tbatch, new Translator.TKeyExtentTranslator(), new Translator.ListTranslator(
           new Translator.TRangeTranslator()));
-      Map<KeyExtent,List<String>> truncated = new HashMap<KeyExtent,List<String>>();
+      Map<KeyExtent,List<String>> truncated = new HashMap<>();
       for (Entry<KeyExtent,List<Range>> entry : convertedBatch.entrySet()) {
         truncated.put(entry.getKey(), truncate(entry.getValue()));
       }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java b/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
index 0b44727..9473aca 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/SystemCredentials.java
@@ -52,7 +52,7 @@
 
   private final TCredentials AS_THRIFT;
 
-  SystemCredentials(Instance instance, String principal, AuthenticationToken token) {
+  public SystemCredentials(Instance instance, String principal, AuthenticationToken token) {
     super(principal, token);
     AS_THRIFT = super.toThrift(instance);
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/delegation/AuthenticationTokenSecretManager.java b/server/base/src/main/java/org/apache/accumulo/server/security/delegation/AuthenticationTokenSecretManager.java
index 99a088a..a648cb5 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/delegation/AuthenticationTokenSecretManager.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/delegation/AuthenticationTokenSecretManager.java
@@ -58,7 +58,7 @@
 
   private final Instance instance;
   private final long tokenMaxLifetime;
-  private final ConcurrentHashMap<Integer,AuthenticationKey> allKeys = new ConcurrentHashMap<Integer,AuthenticationKey>();
+  private final ConcurrentHashMap<Integer,AuthenticationKey> allKeys = new ConcurrentHashMap<>();
   private AuthenticationKey currentKey;
 
   /**
@@ -168,7 +168,7 @@
     }
     // The use of the ServiceLoader inside Token doesn't work to automatically get the Identifier
     // Explicitly returning the identifier also saves an extra deserialization
-    Token<AuthenticationTokenIdentifier> token = new Token<AuthenticationTokenIdentifier>(id.getBytes(), password, id.getKind(), new Text(svcName.toString()));
+    Token<AuthenticationTokenIdentifier> token = new Token<>(id.getBytes(), password, id.getKind(), new Text(svcName.toString()));
     return Maps.immutableEntry(token, id);
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/InsecureAuthenticator.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/InsecureAuthenticator.java
index a57608c..8170ea3 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/InsecureAuthenticator.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/InsecureAuthenticator.java
@@ -73,7 +73,7 @@
 
   @Override
   public Set<Class<? extends AuthenticationToken>> getSupportedTokenTypes() {
-    Set<Class<? extends AuthenticationToken>> cs = new HashSet<Class<? extends AuthenticationToken>>();
+    Set<Class<? extends AuthenticationToken>> cs = new HashSet<>();
     cs.add(NullToken.class);
     return cs;
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthenticator.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthenticator.java
index a1616c1..6623fc6 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthenticator.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthenticator.java
@@ -103,7 +103,7 @@
 
   @Override
   public Set<String> listUsers() {
-    return new TreeSet<String>(zooCache.getChildren(ZKUserPath));
+    return new TreeSet<>(zooCache.getChildren(ZKUserPath));
   }
 
   @Override
@@ -200,7 +200,7 @@
 
   @Override
   public Set<Class<? extends AuthenticationToken>> getSupportedTokenTypes() {
-    Set<Class<? extends AuthenticationToken>> cs = new HashSet<Class<? extends AuthenticationToken>>();
+    Set<Class<? extends AuthenticationToken>> cs = new HashSet<>();
     cs.add(PasswordToken.class);
     return cs;
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthorizor.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthorizor.java
index 2d7f7bb..2803627 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthorizor.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKAuthorizor.java
@@ -86,10 +86,10 @@
     IZooReaderWriter zoo = ZooReaderWriter.getInstance();
 
     // create the root user with all system privileges, no table privileges, and no record-level authorizations
-    Set<SystemPermission> rootPerms = new TreeSet<SystemPermission>();
+    Set<SystemPermission> rootPerms = new TreeSet<>();
     for (SystemPermission p : SystemPermission.values())
       rootPerms.add(p);
-    Map<String,Set<TablePermission>> tablePerms = new HashMap<String,Set<TablePermission>>();
+    Map<String,Set<TablePermission>> tablePerms = new HashMap<>();
     // Allow the root user to flush the metadata tables
     tablePerms.put(MetadataTable.ID, Collections.singleton(TablePermission.ALTER_TABLE));
     tablePerms.put(RootTable.ID, Collections.singleton(TablePermission.ALTER_TABLE));
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKPermHandler.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKPermHandler.java
index 06433c4..cf43aee 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKPermHandler.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKPermHandler.java
@@ -180,7 +180,7 @@
       byte[] permBytes = zooCache.get(ZKUserPath + "/" + user + ZKUserSysPerms);
       Set<SystemPermission> perms;
       if (permBytes == null) {
-        perms = new TreeSet<SystemPermission>();
+        perms = new TreeSet<>();
       } else {
         perms = ZKSecurityTool.convertSystemPermissions(permBytes);
       }
@@ -208,7 +208,7 @@
     if (serializedPerms != null)
       tablePerms = ZKSecurityTool.convertTablePermissions(serializedPerms);
     else
-      tablePerms = new TreeSet<TablePermission>();
+      tablePerms = new TreeSet<>();
 
     try {
       if (tablePerms.add(permission)) {
@@ -234,7 +234,7 @@
     if (serializedPerms != null)
       namespacePerms = ZKSecurityTool.convertNamespacePermissions(serializedPerms);
     else
-      namespacePerms = new TreeSet<NamespacePermission>();
+      namespacePerms = new TreeSet<>();
 
     try {
       if (namespacePerms.add(permission)) {
@@ -377,15 +377,15 @@
     IZooReaderWriter zoo = ZooReaderWriter.getInstance();
 
     // create the root user with all system privileges, no table privileges, and no record-level authorizations
-    Set<SystemPermission> rootPerms = new TreeSet<SystemPermission>();
+    Set<SystemPermission> rootPerms = new TreeSet<>();
     for (SystemPermission p : SystemPermission.values())
       rootPerms.add(p);
-    Map<String,Set<TablePermission>> tablePerms = new HashMap<String,Set<TablePermission>>();
+    Map<String,Set<TablePermission>> tablePerms = new HashMap<>();
     // Allow the root user to flush the system tables
     tablePerms.put(RootTable.ID, Collections.singleton(TablePermission.ALTER_TABLE));
     tablePerms.put(MetadataTable.ID, Collections.singleton(TablePermission.ALTER_TABLE));
     // essentially the same but on the system namespace, the ALTER_TABLE permission is now redundant
-    Map<String,Set<NamespacePermission>> namespacePerms = new HashMap<String,Set<NamespacePermission>>();
+    Map<String,Set<NamespacePermission>> namespacePerms = new HashMap<>();
     namespacePerms.put(Namespaces.ACCUMULO_NAMESPACE_ID, Collections.singleton(NamespacePermission.ALTER_NAMESPACE));
     namespacePerms.put(Namespaces.ACCUMULO_NAMESPACE_ID, Collections.singleton(NamespacePermission.ALTER_TABLE));
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKSecurityTool.java b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKSecurityTool.java
index 6401190..a3da0d8 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKSecurityTool.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/handler/ZKSecurityTool.java
@@ -120,7 +120,7 @@
   public static Set<SystemPermission> convertSystemPermissions(byte[] systempermissions) {
     ByteArrayInputStream bytes = new ByteArrayInputStream(systempermissions);
     DataInputStream in = new DataInputStream(bytes);
-    Set<SystemPermission> toReturn = new HashSet<SystemPermission>();
+    Set<SystemPermission> toReturn = new HashSet<>();
     try {
       while (in.available() > 0)
         toReturn.add(SystemPermission.getPermissionById(in.readByte()));
@@ -145,7 +145,7 @@
   }
 
   public static Set<TablePermission> convertTablePermissions(byte[] tablepermissions) {
-    Set<TablePermission> toReturn = new HashSet<TablePermission>();
+    Set<TablePermission> toReturn = new HashSet<>();
     for (byte b : tablepermissions)
       toReturn.add(TablePermission.getPermissionById(b));
     return toReturn;
@@ -165,7 +165,7 @@
   }
 
   public static Set<NamespacePermission> convertNamespacePermissions(byte[] namespacepermissions) {
-    Set<NamespacePermission> toReturn = new HashSet<NamespacePermission>();
+    Set<NamespacePermission> toReturn = new HashSet<>();
     for (byte b : namespacepermissions)
       toReturn.add(NamespacePermission.getPermissionById(b));
     return toReturn;
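The `convert*Permissions` methods touched above all follow the same scheme: each permission enum has a stable one-byte id, and a permission set is stored in ZooKeeper as the concatenation of those bytes. A hedged round-trip sketch with a hypothetical `Perm` enum (the real code uses `SystemPermission`, `TablePermission`, and `NamespacePermission`):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class PermBytesDemo {
    // Hypothetical permission enum with a stable one-byte id
    enum Perm {
        READ((byte) 0), WRITE((byte) 1), ADMIN((byte) 2);

        final byte id;
        Perm(byte id) { this.id = id; }

        static Perm byId(byte id) {
            for (Perm p : values()) {
                if (p.id == id) return p;
            }
            throw new IllegalArgumentException("unknown permission id " + id);
        }
    }

    // Serialize a permission set as one byte per permission
    static byte[] convert(Set<Perm> perms) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        for (Perm p : perms) out.writeByte(p.id);
        return bytes.toByteArray();
    }

    // Rebuild the set, consuming one byte at a time until exhausted
    static Set<Perm> convertBack(byte[] raw) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(raw));
        Set<Perm> perms = new HashSet<>();
        while (in.available() > 0) perms.add(Perm.byId(in.readByte()));
        return perms;
    }

    public static void main(String[] args) throws IOException {
        Set<Perm> original = new HashSet<>();
        original.add(Perm.READ);
        original.add(Perm.ADMIN);
        System.out.println(convertBack(convert(original)).equals(original));
    }
}
```

Stable ids are the load-bearing assumption: reordering enum constants without preserving ids would silently corrupt every stored permission set.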
diff --git a/server/base/src/main/java/org/apache/accumulo/server/tabletserver/LargestFirstMemoryManager.java b/server/base/src/main/java/org/apache/accumulo/server/tabletserver/LargestFirstMemoryManager.java
index e49e1af..01faacd 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/tabletserver/LargestFirstMemoryManager.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/tabletserver/LargestFirstMemoryManager.java
@@ -28,7 +28,6 @@
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.server.conf.ServerConfiguration;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -52,7 +51,7 @@
   // The fraction of memory that needs to be used before we begin flushing.
   private double compactionThreshold;
   private long maxObserved;
-  private final HashMap<Text,Long> mincIdleThresholds = new HashMap<Text,Long>();
+  private final HashMap<String,Long> mincIdleThresholds = new HashMap<>();
   private ServerConfiguration config = null;
 
   private static class TabletInfo {
@@ -72,7 +71,7 @@
   // A little map that will hold the "largest" N tablets, where largest is a result of the timeMemoryLoad function
   private static class LargestMap {
     final int max;
-    final TreeMap<Long,List<TabletInfo>> map = new TreeMap<Long,List<TabletInfo>>();
+    final TreeMap<Long,List<TabletInfo>> map = new TreeMap<>();
 
     LargestMap(int n) {
       max = n;
@@ -99,7 +98,7 @@
       if (lst != null) {
         lst.add(value);
       } else {
-        lst = new ArrayList<TabletInfo>();
+        lst = new ArrayList<>();
         lst.add(value);
         map.put(key, lst);
       }
@@ -140,9 +139,9 @@
   }
 
   protected long getMinCIdleThreshold(KeyExtent extent) {
-    Text tableId = extent.getTableId();
+    String tableId = extent.getTableId();
     if (!mincIdleThresholds.containsKey(tableId))
-      mincIdleThresholds.put(tableId, config.getTableConfiguration(tableId.toString()).getTimeInMillis(Property.TABLE_MINC_COMPACT_IDLETIME));
+      mincIdleThresholds.put(tableId, config.getTableConfiguration(tableId).getTimeInMillis(Property.TABLE_MINC_COMPACT_IDLETIME));
     return mincIdleThresholds.get(tableId);
   }
 
@@ -160,7 +159,7 @@
 
     mincIdleThresholds.clear();
     final MemoryManagementActions result = new MemoryManagementActions();
-    result.tabletsToMinorCompact = new ArrayList<KeyExtent>();
+    result.tabletsToMinorCompact = new ArrayList<>();
 
     LargestMap largestMemTablets = new LargestMap(maxMinCs);
     final LargestMap largestIdleMemTablets = new LargestMap(maxConcurrentMincs);
@@ -173,7 +172,7 @@
     // find the largest and most idle tablets
     for (TabletState ts : tablets) {
       // Make sure that the table still exists
-      if (!tableExists(instance, ts.getExtent().getTableId().toString())) {
+      if (!tableExists(instance, ts.getExtent().getTableId())) {
         log.trace("Ignoring extent for deleted table: {}", ts.getExtent());
         continue;
       }
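
Most of the hunks above are mechanical conversions to the Java 7 diamond operator, which lets the compiler infer the generic type arguments from the left-hand side of the declaration. A tiny self-contained illustration (class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

class DiamondDemo {
  static List<String> makeList() {
    // Before Java 7: new ArrayList<String>()
    // With the diamond operator, the element type is inferred:
    return new ArrayList<>();
  }

  static TreeMap<Long, List<String>> makeMap() {
    // Works for nested generics too; the full type is inferred from the
    // declared return type.
    return new TreeMap<>();
  }
}
```
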
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java b/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java
index 9a8f6ed..ea2f458 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java
@@ -90,13 +90,13 @@
   @Parameters(commandDescription = "stop the tablet server on the given hosts")
   static class StopCommand {
     @Parameter(description = "<host> {<host> ... }")
-    List<String> args = new ArrayList<String>();
+    List<String> args = new ArrayList<>();
   }
 
   @Parameters(commandDescription = "Ping tablet servers.  If no arguments, pings all.")
   static class PingCommand {
     @Parameter(description = "{<host> ... }")
-    List<String> args = new ArrayList<String>();
+    List<String> args = new ArrayList<>();
   }
 
   @Parameters(commandDescription = "print tablets that are offline in online tables")
@@ -105,7 +105,7 @@
     boolean fixFiles = false;
 
     @Parameter(names = {"-t", "--table"}, description = "Table to check, if not set checks all tables")
-    String table = null;
+    String tableName = null;
   }
 
   @Parameters(commandDescription = "stop the master")
@@ -139,7 +139,7 @@
     @Parameter(names = {"-n", "--namespaces"}, description = "print the namespace configuration")
     boolean namespaceConfiguration = false;
     @Parameter(names = {"-t", "--tables"}, description = "print per-table configuration")
-    List<String> tables = new ArrayList<String>();
+    List<String> tables = new ArrayList<>();
     @Parameter(names = {"-u", "--users"}, description = "print users and their authorizations and permissions")
     boolean users = false;
   }
@@ -147,7 +147,7 @@
   @Parameters(commandDescription = "redistribute tablet directories across the current volume list")
   static class RandomizeVolumesCommand {
     @Parameter(names = {"-t"}, description = "table to update", required = true)
-    String table = null;
+    String tableName = null;
   }
 
   public static void main(String[] args) {
@@ -231,14 +231,14 @@
           rc = 4;
       } else if (cl.getParsedCommand().equals("checkTablets")) {
         System.out.println("\n*** Looking for offline tablets ***\n");
-        if (FindOfflineTablets.findOffline(context, checkTabletsCommand.table) != 0)
+        if (FindOfflineTablets.findOffline(context, checkTabletsCommand.tableName) != 0)
           rc = 5;
         System.out.println("\n*** Looking for missing files ***\n");
-        if (checkTabletsCommand.table == null) {
+        if (checkTabletsCommand.tableName == null) {
           if (RemoveEntriesForMissingFiles.checkAllTables(context, checkTabletsCommand.fixFiles) != 0)
             rc = 6;
         } else {
-          if (RemoveEntriesForMissingFiles.checkTable(context, checkTabletsCommand.table, checkTabletsCommand.fixFiles) != 0)
+          if (RemoveEntriesForMissingFiles.checkTable(context, checkTabletsCommand.tableName, checkTabletsCommand.fixFiles) != 0)
             rc = 6;
         }
 
@@ -249,7 +249,7 @@
       } else if (cl.getParsedCommand().equals("volumes")) {
         ListVolumesUsed.listVolumes(context);
       } else if (cl.getParsedCommand().equals("randomizeVolumes")) {
-        rc = RandomizeVolumes.randomize(context.getConnector(), randomizeVolumesOpts.table);
+        rc = RandomizeVolumes.randomize(context.getConnector(), randomizeVolumesOpts.tableName);
       } else {
         everything = cl.getParsedCommand().equals("stopAll");
 
@@ -374,15 +374,17 @@
     final String zTServerRoot = getTServersZkPath(instance);
     final ZooCache zc = new ZooCacheFactory().getZooCache(instance.getZooKeepers(), instance.getZooKeepersSessionTimeOut());
     for (String server : servers) {
-      HostAndPort address = AddressUtil.parseAddress(server, context.getConfiguration().getPort(Property.TSERV_CLIENTPORT));
-      final String finalServer = qualifyWithZooKeeperSessionId(zTServerRoot, zc, address.toString());
-      log.info("Stopping server " + finalServer);
-      MasterClient.execute(context, new ClientExec<MasterClientService.Client>() {
-        @Override
-        public void execute(MasterClientService.Client client) throws Exception {
-          client.shutdownTabletServer(Tracer.traceInfo(), context.rpcCreds(), finalServer, force);
-        }
-      });
+      for (int port : context.getConfiguration().getPort(Property.TSERV_CLIENTPORT)) {
+        HostAndPort address = AddressUtil.parseAddress(server, port);
+        final String finalServer = qualifyWithZooKeeperSessionId(zTServerRoot, zc, address.toString());
+        log.info("Stopping server " + finalServer);
+        MasterClient.execute(context, new ClientExec<MasterClientService.Client>() {
+          @Override
+          public void execute(MasterClientService.Client client) throws Exception {
+            client.shutdownTabletServer(Tracer.traceInfo(), context.rpcCreds(), finalServer, force);
+          }
+        });
+      }
     }
   }
 
@@ -516,7 +518,7 @@
     File namespaceScript = new File(outputDirectory, namespace + NS_FILE_SUFFIX);
     FileWriter nsWriter = new FileWriter(namespaceScript);
     nsWriter.write(createNsFormat.format(new String[] {namespace}));
-    TreeMap<String,String> props = new TreeMap<String,String>();
+    TreeMap<String,String> props = new TreeMap<>();
     for (Entry<String,String> p : connector.namespaceOperations().getProperties(namespace)) {
       props.put(p.getKey(), p.getValue());
     }
@@ -563,14 +565,14 @@
 
   private void printSystemConfiguration(Connector connector, File outputDirectory) throws IOException, AccumuloException, AccumuloSecurityException {
     Configuration conf = new Configuration(false);
-    TreeMap<String,String> site = new TreeMap<String,String>(siteConfig);
+    TreeMap<String,String> site = new TreeMap<>(siteConfig);
     for (Entry<String,String> prop : site.entrySet()) {
       String defaultValue = getDefaultConfigValue(prop.getKey());
       if (!prop.getValue().equals(defaultValue) && !systemConfig.containsKey(prop.getKey())) {
         conf.set(prop.getKey(), prop.getValue());
       }
     }
-    TreeMap<String,String> system = new TreeMap<String,String>(systemConfig);
+    TreeMap<String,String> system = new TreeMap<>(systemConfig);
     for (Entry<String,String> prop : system.entrySet()) {
       String defaultValue = getDefaultConfigValue(prop.getKey());
       if (!prop.getValue().equals(defaultValue)) {
@@ -591,7 +593,7 @@
     File tableBackup = new File(outputDirectory, tableName + ".cfg");
     FileWriter writer = new FileWriter(tableBackup);
     writer.write(createTableFormat.format(new String[] {tableName}));
-    TreeMap<String,String> props = new TreeMap<String,String>();
+    TreeMap<String,String> props = new TreeMap<>();
     for (Entry<String,String> p : connector.tableOperations().getProperties(tableName)) {
       props.put(p.getKey(), p.getValue());
     }
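
The `stopTabletServer` change in `Admin.java` above reworks the loop because `getPort(Property.TSERV_CLIENTPORT)` now yields multiple ports, so each server must be resolved once per configured port. A sketch of that nested-loop shape (the `qualify` helper is a hypothetical stand-in for building a `HostAndPort` and qualifying it with the ZooKeeper session id):

```java
import java.util.ArrayList;
import java.util.List;

class PortLoopDemo {
  // Hypothetical stand-in for HostAndPort + ZooKeeper session qualification
  static String qualify(String host, int port) {
    return host + ":" + port;
  }

  static List<String> resolveAll(List<String> servers, int[] ports) {
    List<String> resolved = new ArrayList<>();
    for (String server : servers) {
      for (int port : ports) { // the property now expands to several ports
        resolved.add(qualify(server, port));
      }
    }
    return resolved;
  }
}
```
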
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java b/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java
index a9ecf47..43863b5 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ChangeSecret.java
@@ -54,7 +54,7 @@
 
   public static void main(String[] args) throws Exception {
     Opts opts = new Opts();
-    List<String> argsList = new ArrayList<String>(args.length + 2);
+    List<String> argsList = new ArrayList<>(args.length + 2);
     argsList.add("--old");
     argsList.add("--new");
     argsList.addAll(Arrays.asList(args));
@@ -90,7 +90,7 @@
   private static boolean verifyAccumuloIsDown(Instance inst, String oldPassword) {
     ZooReader zooReader = new ZooReaderWriter(inst.getZooKeepers(), inst.getZooKeepersSessionTimeOut(), oldPassword);
     String root = ZooUtil.getRoot(inst);
-    final List<String> ephemerals = new ArrayList<String>();
+    final List<String> ephemerals = new ArrayList<>();
     recurse(zooReader, root, new Visitor() {
       @Override
       public void visit(ZooReader zoo, String path) throws Exception {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/CheckForMetadataProblems.java b/server/base/src/main/java/org/apache/accumulo/server/util/CheckForMetadataProblems.java
index b081a60..4f69c72 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/CheckForMetadataProblems.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/CheckForMetadataProblems.java
@@ -90,7 +90,7 @@
 
   public static void checkMetadataAndRootTableEntries(String tableNameToCheck, ClientOpts opts, VolumeManager fs) throws Exception {
     System.out.println("Checking table: " + tableNameToCheck);
-    Map<String,TreeSet<KeyExtent>> tables = new HashMap<String,TreeSet<KeyExtent>>();
+    Map<String,TreeSet<KeyExtent>> tables = new HashMap<>();
 
     Scanner scanner;
 
@@ -112,7 +112,7 @@
 
       count++;
 
-      String tableName = (new KeyExtent(entry.getKey().getRow(), (Text) null)).getTableId().toString();
+      String tableName = (new KeyExtent(entry.getKey().getRow(), (Text) null)).getTableId();
 
       TreeSet<KeyExtent> tablets = tables.get(tableName);
       if (tablets == null) {
@@ -124,7 +124,7 @@
 
         tables.clear();
 
-        tablets = new TreeSet<KeyExtent>();
+        tablets = new TreeSet<>();
         tables.put(tableName, tablets);
       }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java b/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java
index ba27733..0c4578c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/DeleteZooInstance.java
@@ -60,8 +60,8 @@
 
     IZooReaderWriter zk = ZooReaderWriter.getInstance();
     // try instance name:
-    Set<String> instances = new HashSet<String>(zk.getChildren(Constants.ZROOT + Constants.ZINSTANCES));
-    Set<String> uuids = new HashSet<String>(zk.getChildren(Constants.ZROOT));
+    Set<String> instances = new HashSet<>(zk.getChildren(Constants.ZROOT + Constants.ZINSTANCES));
+    Set<String> uuids = new HashSet<>(zk.getChildren(Constants.ZROOT));
     uuids.remove("instances");
     if (instances.contains(opts.instance)) {
       String path = Constants.ZROOT + Constants.ZINSTANCES + "/" + opts.instance;
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/FileSystemMonitor.java b/server/base/src/main/java/org/apache/accumulo/server/util/FileSystemMonitor.java
index 11d0e0f..5e9959c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/FileSystemMonitor.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/FileSystemMonitor.java
@@ -56,7 +56,7 @@
       mountPoint = tokens[1].trim();
       filesystemType = tokens[2].trim().toLowerCase();
 
-      options = new HashSet<String>(Arrays.asList(tokens[3].split(",")));
+      options = new HashSet<>(Arrays.asList(tokens[3].split(",")));
     }
 
     @Override
@@ -80,7 +80,7 @@
   }
 
   static List<Mount> getMountsFromFile(BufferedReader br) throws IOException {
-    List<Mount> mounts = new ArrayList<Mount>();
+    List<Mount> mounts = new ArrayList<>();
     String line;
     while ((line = br.readLine()) != null) {
       Mount mount = new Mount(line);
@@ -94,7 +94,7 @@
     return mounts;
   }
 
-  private Map<String,Boolean> readWriteFilesystems = new HashMap<String,Boolean>();
+  private Map<String,Boolean> readWriteFilesystems = new HashMap<>();
 
   public FileSystemMonitor(final String procFile, long period) throws IOException {
     List<Mount> mounts = parse(procFile);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java
index 04e17d5..a686bae 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/FileUtil.java
@@ -109,7 +109,7 @@
 
   public static Collection<String> reduceFiles(AccumuloConfiguration acuConf, Configuration conf, VolumeManager fs, Text prevEndRow, Text endRow,
       Collection<String> mapFiles, int maxFiles, Path tmpDir, int pass) throws IOException {
-    ArrayList<String> paths = new ArrayList<String>(mapFiles);
+    ArrayList<String> paths = new ArrayList<>(mapFiles);
 
     if (paths.size() <= maxFiles)
       return paths;
@@ -118,7 +118,7 @@
 
     int start = 0;
 
-    ArrayList<String> outFiles = new ArrayList<String>();
+    ArrayList<String> outFiles = new ArrayList<>();
 
     int count = 0;
 
@@ -132,15 +132,15 @@
 
       outFiles.add(newMapFile);
       FileSystem ns = fs.getVolumeByPath(new Path(newMapFile)).getFileSystem();
-      FileSKVWriter writer = new RFileOperations().openWriter(newMapFile.toString(), ns, ns.getConf(), acuConf);
+      FileSKVWriter writer = new RFileOperations().newWriterBuilder().forFile(newMapFile.toString(), ns, ns.getConf()).withTableConfiguration(acuConf).build();
       writer.startDefaultLocalityGroup();
-      List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<SortedKeyValueIterator<Key,Value>>(inFiles.size());
+      List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<>(inFiles.size());
 
       FileSKVIterator reader = null;
       try {
         for (String s : inFiles) {
           ns = fs.getVolumeByPath(new Path(s)).getFileSystem();
-          reader = FileOperations.getInstance().openIndex(s, ns, ns.getConf(), acuConf);
+          reader = FileOperations.getInstance().newIndexReaderBuilder().forFile(s, ns, ns.getConf()).withTableConfiguration(acuConf).build();
           iters.add(reader);
         }
 
@@ -202,7 +202,7 @@
     Path tmpDir = null;
 
     int maxToOpen = acuconf.getCount(Property.TSERV_TABLET_SPLIT_FINDMIDPOINT_MAXOPEN);
-    ArrayList<FileSKVIterator> readers = new ArrayList<FileSKVIterator>(mapFiles.size());
+    ArrayList<FileSKVIterator> readers = new ArrayList<>(mapFiles.size());
 
     try {
       if (mapFiles.size() > maxToOpen) {
@@ -275,7 +275,7 @@
     Path tmpDir = null;
 
     int maxToOpen = acuConf.getCount(Property.TSERV_TABLET_SPLIT_FINDMIDPOINT_MAXOPEN);
-    ArrayList<FileSKVIterator> readers = new ArrayList<FileSKVIterator>(mapFiles.size());
+    ArrayList<FileSKVIterator> readers = new ArrayList<>(mapFiles.size());
 
     try {
       if (mapFiles.size() > maxToOpen) {
@@ -319,7 +319,7 @@
         mmfi.next();
 
       // read half of the keys in the index
-      TreeMap<Double,Key> ret = new TreeMap<Double,Key>();
+      TreeMap<Double,Key> ret = new TreeMap<>();
       Key lastKey = null;
       long keysRead = 0;
 
@@ -401,10 +401,10 @@
       FileSystem ns = fs.getVolumeByPath(path).getFileSystem();
       try {
         if (useIndex)
-          reader = FileOperations.getInstance().openIndex(path.toString(), ns, ns.getConf(), acuConf);
+          reader = FileOperations.getInstance().newIndexReaderBuilder().forFile(path.toString(), ns, ns.getConf()).withTableConfiguration(acuConf).build();
         else
-          reader = FileOperations.getInstance().openReader(path.toString(), new Range(prevEndRow, false, null, true), LocalityGroupUtil.EMPTY_CF_SET, false,
-              ns, ns.getConf(), acuConf);
+          reader = FileOperations.getInstance().newScanReaderBuilder().forFile(path.toString(), ns, ns.getConf()).withTableConfiguration(acuConf)
+              .overRange(new Range(prevEndRow, false, null, true), LocalityGroupUtil.EMPTY_CF_SET, false).build();
 
         while (reader.hasTop()) {
           Key key = reader.getTopKey();
@@ -425,10 +425,10 @@
       }
 
       if (useIndex)
-        readers.add(FileOperations.getInstance().openIndex(path.toString(), ns, ns.getConf(), acuConf));
+        readers.add(FileOperations.getInstance().newIndexReaderBuilder().forFile(path.toString(), ns, ns.getConf()).withTableConfiguration(acuConf).build());
       else
-        readers.add(FileOperations.getInstance().openReader(path.toString(), new Range(prevEndRow, false, null, true), LocalityGroupUtil.EMPTY_CF_SET, false,
-            ns, ns.getConf(), acuConf));
+        readers.add(FileOperations.getInstance().newScanReaderBuilder().forFile(path.toString(), ns, ns.getConf()).withTableConfiguration(acuConf)
+            .overRange(new Range(prevEndRow, false, null, true), LocalityGroupUtil.EMPTY_CF_SET, false).build());
 
     }
     return numKeys;
@@ -436,7 +436,7 @@
 
   public static Map<FileRef,FileInfo> tryToGetFirstAndLastRows(VolumeManager fs, AccumuloConfiguration acuConf, Set<FileRef> mapfiles) {
 
-    HashMap<FileRef,FileInfo> mapFilesInfo = new HashMap<FileRef,FileInfo>();
+    HashMap<FileRef,FileInfo> mapFilesInfo = new HashMap<>();
 
     long t1 = System.currentTimeMillis();
 
@@ -445,7 +445,7 @@
       FileSKVIterator reader = null;
       FileSystem ns = fs.getVolumeByPath(mapfile.path()).getFileSystem();
       try {
-        reader = FileOperations.getInstance().openReader(mapfile.toString(), false, ns, ns.getConf(), acuConf);
+        reader = FileOperations.getInstance().newReaderBuilder().forFile(mapfile.toString(), ns, ns.getConf()).withTableConfiguration(acuConf).build();
 
         Key firstKey = reader.getFirstKey();
         if (firstKey != null) {
@@ -479,7 +479,8 @@
     for (FileRef ref : mapFiles) {
       Path path = ref.path();
       FileSystem ns = fs.getVolumeByPath(path).getFileSystem();
-      FileSKVIterator reader = FileOperations.getInstance().openReader(path.toString(), true, ns, ns.getConf(), acuConf);
+      FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder().forFile(path.toString(), ns, ns.getConf()).withTableConfiguration(acuConf)
+          .seekToBeginning().build();
 
       try {
         if (!reader.hasTop())
@@ -516,13 +517,14 @@
       VolumeManager fs) throws IOException {
 
     long totalIndexEntries = 0;
-    Map<KeyExtent,MLong> counts = new TreeMap<KeyExtent,MLong>();
+    Map<KeyExtent,MLong> counts = new TreeMap<>();
     for (KeyExtent keyExtent : extents)
       counts.put(keyExtent, new MLong(0));
 
     Text row = new Text();
     FileSystem ns = fs.getVolumeByPath(mapFile).getFileSystem();
-    FileSKVIterator index = FileOperations.getInstance().openIndex(mapFile.toString(), ns, ns.getConf(), acuConf);
+    FileSKVIterator index = FileOperations.getInstance().newIndexReaderBuilder().forFile(mapFile.toString(), ns, ns.getConf()).withTableConfiguration(acuConf)
+        .build();
 
     try {
       while (index.hasTop()) {
@@ -546,7 +548,7 @@
       }
     }
 
-    Map<KeyExtent,Long> results = new TreeMap<KeyExtent,Long>();
+    Map<KeyExtent,Long> results = new TreeMap<>();
     for (KeyExtent keyExtent : extents) {
       double numEntries = counts.get(keyExtent).l;
       if (numEntries == 0)
@@ -558,7 +560,7 @@
   }
 
   public static Collection<String> toPathStrings(Collection<FileRef> refs) {
-    ArrayList<String> ret = new ArrayList<String>();
+    ArrayList<String> ret = new ArrayList<>();
     for (FileRef fileRef : refs) {
       ret.add(fileRef.path().toString());
     }
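
The `FileUtil.java` hunks above migrate from positional `openReader`/`openIndex` calls to a fluent builder, which makes optional settings like `seekToBeginning()` explicit instead of boolean flags. A toy builder showing the same shape (names mimic the new calls but this is not the real `FileOperations` API):

```java
class ReaderBuilderDemo {
  static final class Reader {
    final String file;
    final boolean seekToBeginning;

    Reader(String file, boolean seek) {
      this.file = file;
      this.seekToBeginning = seek;
    }
  }

  static final class Builder {
    private String file;
    private boolean seek;

    Builder forFile(String f) { this.file = f; return this; }

    // Replaces an anonymous boolean parameter with a named, optional step
    Builder seekToBeginning() { this.seek = true; return this; }

    Reader build() { return new Reader(file, seek); }
  }

  static Builder newReaderBuilder() {
    return new Builder();
  }
}
```
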
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java b/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java
index 1db53a4..98101ec 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/FindOfflineTablets.java
@@ -42,7 +42,6 @@
 import org.apache.accumulo.server.master.state.TabletState;
 import org.apache.accumulo.server.master.state.ZooTabletStateStore;
 import org.apache.accumulo.server.tables.TableManager;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -101,7 +100,7 @@
     Range range = MetadataSchema.TabletsSection.getRange();
     if (tableName != null) {
       String tableId = Tables.getTableId(context.getInstance(), tableName);
-      range = new KeyExtent(new Text(tableId), null, null).toMetadataRange();
+      range = new KeyExtent(tableId, null, null).toMetadataRange();
     }
 
     MetaDataTableScanner metaScanner = new MetaDataTableScanner(context, range, MetadataTable.NAME);
@@ -118,8 +117,7 @@
     while (scanner.hasNext() && !System.out.checkError()) {
       TabletLocationState locationState = scanner.next();
       TabletState state = locationState.getState(tservers.getCurrentServers());
-      if (state != null && state != TabletState.HOSTED
-          && TableManager.getInstance().getTableState(locationState.extent.getTableId().toString()) != TableState.OFFLINE) {
+      if (state != null && state != TabletState.HOSTED && TableManager.getInstance().getTableState(locationState.extent.getTableId()) != TableState.OFFLINE) {
         System.out.println(locationState + " is " + state + "  #walogs:" + locationState.walogs.size());
         offline++;
       }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/Halt.java b/server/base/src/main/java/org/apache/accumulo/server/util/Halt.java
index 7f57687..cbd8510 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/Halt.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/Halt.java
@@ -16,11 +16,14 @@
  */
 package org.apache.accumulo.server.util;
 
+import java.util.concurrent.TimeUnit;
+
 import org.apache.accumulo.core.util.Daemon;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class Halt {
   static private final Logger log = LoggerFactory.getLogger(Halt.class);
 
@@ -49,7 +52,7 @@
       new Daemon() {
         @Override
         public void run() {
-          UtilWaitThread.sleep(100);
+          sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
           Runtime.getRuntime().halt(status);
         }
       }.start();
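
The `Halt.java` change swaps `UtilWaitThread.sleep` for Guava's `sleepUninterruptibly`, which keeps sleeping through interrupts and restores the thread's interrupt flag before returning. The idiom it implements looks roughly like this (our own minimal re-implementation, for illustration only):

```java
import java.util.concurrent.TimeUnit;

class SleepDemo {
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, keep sleeping
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore the flag for callers
      }
    }
  }
}
```

This matters in shutdown paths like `Halt`: an interrupt must not cut the grace period short, but callers further up still deserve to see the interrupt status.
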
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java b/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java
index 9dc1251..0674bea 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ListInstances.java
@@ -172,7 +172,7 @@
 
     String instancesPath = Constants.ZROOT + Constants.ZINSTANCES;
 
-    TreeMap<String,UUID> tm = new TreeMap<String,UUID>();
+    TreeMap<String,UUID> tm = new TreeMap<>();
 
     List<String> names;
 
@@ -198,7 +198,7 @@
   }
 
   private static TreeSet<UUID> getInstanceIDs(ZooReader zk, boolean printErrors) {
-    TreeSet<UUID> ts = new TreeSet<UUID>();
+    TreeSet<UUID> ts = new TreeSet<>();
 
     try {
       List<String> children = zk.getChildren(Constants.ZROOT);
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java b/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java
index e90d1dd..3e30aaf 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ListVolumesUsed.java
@@ -34,6 +34,8 @@
 import org.apache.accumulo.server.client.HdfsZooInstance;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.fs.VolumeManager.FileType;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
 import org.apache.hadoop.fs.Path;
 
 /**
@@ -61,17 +63,14 @@
 
   private static void getLogURIs(TreeSet<String> volumes, LogEntry logEntry) {
     volumes.add(getLogURI(logEntry.filename));
-    for (String logSet : logEntry.logSet) {
-      volumes.add(getLogURI(logSet));
-    }
   }
 
   private static void listZookeeper() throws Exception {
     System.out.println("Listing volumes referenced in zookeeper");
-    TreeSet<String> volumes = new TreeSet<String>();
+    TreeSet<String> volumes = new TreeSet<>();
 
     volumes.add(getTableURI(MetadataTableUtil.getRootTabletDir()));
-    ArrayList<LogEntry> result = new ArrayList<LogEntry>();
+    ArrayList<LogEntry> result = new ArrayList<>();
     MetadataTableUtil.getRootLogEntries(result);
     for (LogEntry logEntry : result) {
       getLogURIs(volumes, logEntry);
@@ -93,7 +92,7 @@
     scanner.fetchColumnFamily(MetadataSchema.TabletsSection.LogColumnFamily.NAME);
     MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.fetch(scanner);
 
-    TreeSet<String> volumes = new TreeSet<String>();
+    TreeSet<String> volumes = new TreeSet<>();
 
     for (Entry<Key,Value> entry : scanner) {
       if (entry.getKey().getColumnFamily().equals(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME)) {
@@ -123,6 +122,18 @@
 
     for (String volume : volumes)
       System.out.println("\tVolume : " + volume);
+
+    volumes.clear();
+
+    WalStateManager wals = new WalStateManager(conn.getInstance(), ZooReaderWriter.getInstance());
+    for (Path path : wals.getAllState().keySet()) {
+      volumes.add(getLogURI(path.toString()));
+    }
+
+    System.out.println("Listing volumes referenced in " + name + " current logs");
+
+    for (String volume : volumes)
+      System.out.println("\tVolume : " + volume);
   }
 
   public static void listVolumes(ClientContext context) throws Exception {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/LocalityCheck.java b/server/base/src/main/java/org/apache/accumulo/server/util/LocalityCheck.java
index 5d49fa7..2e43505 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/LocalityCheck.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/LocalityCheck.java
@@ -53,9 +53,9 @@
     scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
     scanner.setRange(MetadataSchema.TabletsSection.getRange());
 
-    Map<String,Long> totalBlocks = new HashMap<String,Long>();
-    Map<String,Long> localBlocks = new HashMap<String,Long>();
-    ArrayList<String> files = new ArrayList<String>();
+    Map<String,Long> totalBlocks = new HashMap<>();
+    Map<String,Long> localBlocks = new HashMap<>();
+    ArrayList<String> files = new ArrayList<>();
 
     for (Entry<Key,Value> entry : scanner) {
       Key key = entry.getKey();
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java b/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java
index 8ffe586..7c102e6 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java
@@ -46,7 +46,7 @@
     Authenticator authenticator = AccumuloVFSClassLoader.getClassLoader().loadClass(config.get(Property.INSTANCE_SECURITY_AUTHENTICATOR))
         .asSubclass(Authenticator.class).newInstance();
 
-    List<Set<TokenProperty>> tokenProps = new ArrayList<Set<TokenProperty>>();
+    List<Set<TokenProperty>> tokenProps = new ArrayList<>();
 
     for (Class<? extends AuthenticationToken> tokenType : authenticator.getSupportedTokenTypes()) {
       tokenProps.add(tokenType.newInstance().getProperties());
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java
index 14eba68..1bdd255 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/MasterMetadataUtil.java
@@ -16,10 +16,12 @@
  */
 package org.apache.accumulo.server.util;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -27,6 +29,7 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.Scanner;
@@ -46,7 +49,6 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.ColumnFQ;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
 import org.apache.accumulo.server.fs.FileRef;
@@ -68,7 +70,7 @@
   private static final Logger log = LoggerFactory.getLogger(MasterMetadataUtil.class);
 
   public static void addNewTablet(ClientContext context, KeyExtent extent, String path, TServerInstance location, Map<FileRef,DataFileValue> datafileSizes,
-      Map<FileRef,Long> bulkLoadedFiles, String time, long lastFlushID, long lastCompactID, ZooLock zooLock) {
+      Map<Long,? extends Collection<FileRef>> bulkLoadedFiles, String time, long lastFlushID, long lastCompactID, ZooLock zooLock) {
     Mutation m = extent.getPrevRowUpdateMutation();
 
     TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value(path.getBytes(UTF_8)));
@@ -87,9 +89,11 @@
       m.put(DataFileColumnFamily.NAME, entry.getKey().meta(), new Value(entry.getValue().encode()));
     }
 
-    for (Entry<FileRef,Long> entry : bulkLoadedFiles.entrySet()) {
-      byte[] tidBytes = Long.toString(entry.getValue()).getBytes();
-      m.put(TabletsSection.BulkFileColumnFamily.NAME, entry.getKey().meta(), new Value(tidBytes));
+    for (Entry<Long,? extends Collection<FileRef>> entry : bulkLoadedFiles.entrySet()) {
+      Value tidBytes = new Value(Long.toString(entry.getKey()).getBytes());
+      for (FileRef ref : entry.getValue()) {
+        m.put(TabletsSection.BulkFileColumnFamily.NAME, ref.meta(), new Value(tidBytes));
+      }
     }
 
     MetadataTableUtil.update(context, zooLock, m, extent);
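
The `addNewTablet` signature change above inverts the bulk-file map from file-to-tid (`Map<FileRef,Long>`) to tid-to-files (`Map<Long,? extends Collection<FileRef>>`), so the loop encodes each transaction id once and writes one mutation entry per file. A sketch of that iteration with `String` stand-ins for `FileRef` and the mutation (names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;

class BulkFileDemo {
  // Produces one "file -> tid" entry per file, mirroring the m.put(...) calls
  static List<String> mutations(Map<Long, ? extends Collection<String>> bulkLoadedFiles) {
    List<String> puts = new ArrayList<>();
    for (Map.Entry<Long, ? extends Collection<String>> entry : bulkLoadedFiles.entrySet()) {
      String tid = Long.toString(entry.getKey()); // encode the tid once per group
      for (String file : entry.getValue()) {
        puts.add(file + " -> " + tid);
      }
    }
    return puts;
  }
}
```
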
@@ -131,12 +135,12 @@
 
     Text metadataPrevEndRow = KeyExtent.decodePrevEndRow(prevEndRowIBW);
 
-    Text table = (new KeyExtent(metadataEntry, (Text) null)).getTableId();
+    String table = (new KeyExtent(metadataEntry, (Text) null)).getTableId();
 
     return fixSplit(context, table, metadataEntry, metadataPrevEndRow, oper, splitRatio, tserver, time.toString(), initFlushID, initCompactID, lock);
   }
 
-  private static KeyExtent fixSplit(ClientContext context, Text table, Text metadataEntry, Text metadataPrevEndRow, Value oper, double splitRatio,
+  private static KeyExtent fixSplit(ClientContext context, String table, Text metadataEntry, Text metadataPrevEndRow, Value oper, double splitRatio,
       TServerInstance tserver, String time, long initFlushID, long initCompactID, ZooLock lock) throws AccumuloException, IOException {
     if (metadataPrevEndRow == null)
       // something is wrong, this should not happen... if a tablet is split, it will always have a
@@ -146,42 +150,44 @@
     // check to see if prev tablet exist in metadata tablet
     Key prevRowKey = new Key(new Text(KeyExtent.getMetadataEntry(table, metadataPrevEndRow)));
 
-    ScannerImpl scanner2 = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY);
-    scanner2.setRange(new Range(prevRowKey, prevRowKey.followingKey(PartialKey.ROW)));
+    try (ScannerImpl scanner2 = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY)) {
+      scanner2.setRange(new Range(prevRowKey, prevRowKey.followingKey(PartialKey.ROW)));
 
-    VolumeManager fs = VolumeManagerImpl.get();
-    if (!scanner2.iterator().hasNext()) {
-      log.info("Rolling back incomplete split " + metadataEntry + " " + metadataPrevEndRow);
-      MetadataTableUtil.rollBackSplit(metadataEntry, KeyExtent.decodePrevEndRow(oper), context, lock);
-      return new KeyExtent(metadataEntry, KeyExtent.decodePrevEndRow(oper));
-    } else {
-      log.info("Finishing incomplete split " + metadataEntry + " " + metadataPrevEndRow);
+      VolumeManager fs = VolumeManagerImpl.get();
+      if (!scanner2.iterator().hasNext()) {
+        log.info("Rolling back incomplete split " + metadataEntry + " " + metadataPrevEndRow);
+        MetadataTableUtil.rollBackSplit(metadataEntry, KeyExtent.decodePrevEndRow(oper), context, lock);
+        return new KeyExtent(metadataEntry, KeyExtent.decodePrevEndRow(oper));
+      } else {
+        log.info("Finishing incomplete split " + metadataEntry + " " + metadataPrevEndRow);
 
-      List<FileRef> highDatafilesToRemove = new ArrayList<FileRef>();
+        List<FileRef> highDatafilesToRemove = new ArrayList<>();
 
-      Scanner scanner3 = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY);
-      Key rowKey = new Key(metadataEntry);
+        SortedMap<FileRef,DataFileValue> origDatafileSizes = new TreeMap<>();
+        SortedMap<FileRef,DataFileValue> highDatafileSizes = new TreeMap<>();
+        SortedMap<FileRef,DataFileValue> lowDatafileSizes = new TreeMap<>();
 
-      SortedMap<FileRef,DataFileValue> origDatafileSizes = new TreeMap<FileRef,DataFileValue>();
-      SortedMap<FileRef,DataFileValue> highDatafileSizes = new TreeMap<FileRef,DataFileValue>();
-      SortedMap<FileRef,DataFileValue> lowDatafileSizes = new TreeMap<FileRef,DataFileValue>();
-      scanner3.fetchColumnFamily(DataFileColumnFamily.NAME);
-      scanner3.setRange(new Range(rowKey, rowKey.followingKey(PartialKey.ROW)));
+        try (Scanner scanner3 = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY)) {
+          Key rowKey = new Key(metadataEntry);
 
-      for (Entry<Key,Value> entry : scanner3) {
-        if (entry.getKey().compareColumnFamily(DataFileColumnFamily.NAME) == 0) {
-          origDatafileSizes.put(new FileRef(fs, entry.getKey()), new DataFileValue(entry.getValue().get()));
+          scanner3.fetchColumnFamily(DataFileColumnFamily.NAME);
+          scanner3.setRange(new Range(rowKey, rowKey.followingKey(PartialKey.ROW)));
+
+          for (Entry<Key,Value> entry : scanner3) {
+            if (entry.getKey().compareColumnFamily(DataFileColumnFamily.NAME) == 0) {
+              origDatafileSizes.put(new FileRef(fs, entry.getKey()), new DataFileValue(entry.getValue().get()));
+            }
+          }
         }
+
+        MetadataTableUtil.splitDatafiles(table, metadataPrevEndRow, splitRatio, new HashMap<FileRef,FileUtil.FileInfo>(), origDatafileSizes, lowDatafileSizes,
+            highDatafileSizes, highDatafilesToRemove);
+
+        MetadataTableUtil.finishSplit(metadataEntry, highDatafileSizes, highDatafilesToRemove, context, lock);
+
+        return new KeyExtent(metadataEntry, KeyExtent.encodePrevEndRow(metadataPrevEndRow));
       }
-
-      MetadataTableUtil.splitDatafiles(table, metadataPrevEndRow, splitRatio, new HashMap<FileRef,FileUtil.FileInfo>(), origDatafileSizes, lowDatafileSizes,
-          highDatafileSizes, highDatafilesToRemove);
-
-      MetadataTableUtil.finishSplit(metadataEntry, highDatafileSizes, highDatafilesToRemove, context, lock);
-
-      return new KeyExtent(metadataEntry, KeyExtent.encodePrevEndRow(metadataPrevEndRow));
     }
-
   }
 
   private static TServerInstance getTServerInstance(String address, ZooLock zooLock) {
@@ -193,7 +199,7 @@
       } catch (InterruptedException e) {
         log.error("{}", e.getMessage(), e);
       }
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
   }
 
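
The hunks above replace `UtilWaitThread.sleep(1000)` with Guava's `sleepUninterruptibly(1, TimeUnit.SECONDS)`. A minimal self-contained sketch of what that helper does (this is an illustrative stand-in, not the Guava source): it keeps sleeping through interrupts so retry loops keep their fixed pacing, and it restores the thread's interrupt flag before returning so the signal is not silently swallowed.

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
  // Illustrative equivalent of Guava's Uninterruptibles.sleepUninterruptibly.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return; // slept the full duration
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, then resume sleeping
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore the flag for callers
      }
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt(); // a pending interrupt must survive
    sleepUninterruptibly(10, TimeUnit.MILLISECONDS);
    System.out.println(Thread.currentThread().isInterrupted());
  }
}
```

This matters in the retry loops above: an `InterruptedException` mid-sleep no longer shortens the backoff, and the caller can still observe the interrupt afterward.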
@@ -248,35 +254,27 @@
       if (unusedWalLogs != null) {
         updateRootTabletDataFile(extent, path, mergeFile, dfv, time, filesInUseByScans, address, zooLock, unusedWalLogs, lastLocation, flushId);
       }
-
       return;
     }
-
     Mutation m = getUpdateForTabletDataFile(extent, path, mergeFile, dfv, time, filesInUseByScans, address, zooLock, unusedWalLogs, lastLocation, flushId);
-
     MetadataTableUtil.update(context, zooLock, m, extent);
-
   }
 
   /**
    * Update the data file for the root tablet
    */
-  protected static void updateRootTabletDataFile(KeyExtent extent, FileRef path, FileRef mergeFile, DataFileValue dfv, String time,
+  private static void updateRootTabletDataFile(KeyExtent extent, FileRef path, FileRef mergeFile, DataFileValue dfv, String time,
       Set<FileRef> filesInUseByScans, String address, ZooLock zooLock, Set<String> unusedWalLogs, TServerInstance lastLocation, long flushId) {
     IZooReaderWriter zk = ZooReaderWriter.getInstance();
-    // unusedWalLogs will contain the location/name of each log in a log set
-    // the log set is stored under one of the log names, but not both
-    // find the entry under one of the names and delete it.
     String root = MetadataTableUtil.getZookeeperLogLocation();
-    boolean foundEntry = false;
     for (String entry : unusedWalLogs) {
       String[] parts = entry.split("/");
       String zpath = root + "/" + parts[parts.length - 1];
       while (true) {
         try {
           if (zk.exists(zpath)) {
+            log.debug("Removing WAL reference for root table " + zpath);
             zk.recursiveDelete(zpath, NodeMissingPolicy.SKIP);
-            foundEntry = true;
           }
           break;
         } catch (KeeperException e) {
@@ -284,11 +282,9 @@
         } catch (InterruptedException e) {
           log.error("{}", e.getMessage(), e);
         }
-        UtilWaitThread.sleep(1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
     }
-    if (unusedWalLogs.size() > 0 && !foundEntry)
-      log.warn("WALog entry for root tablet did not exist " + unusedWalLogs);
   }
 
   /**
@@ -296,7 +292,7 @@
    *
    * @return A Mutation to update a tablet from the given information
    */
-  protected static Mutation getUpdateForTabletDataFile(KeyExtent extent, FileRef path, FileRef mergeFile, DataFileValue dfv, String time,
+  private static Mutation getUpdateForTabletDataFile(KeyExtent extent, FileRef path, FileRef mergeFile, DataFileValue dfv, String time,
       Set<FileRef> filesInUseByScans, String address, ZooLock zooLock, Set<String> unusedWalLogs, TServerInstance lastLocation, long flushId) {
     Mutation m = new Mutation(extent.getMetadataEntry());
 
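
Several hunks above wrap `ScannerImpl` and `BatchWriterImpl` in try-with-resources so the underlying session is released even when iteration throws. A minimal sketch of the pattern with a hypothetical `AutoCloseable` resource (`FakeScanner` is invented for illustration; Accumulo's scanners work the same way once they implement `AutoCloseable`):

```java
import java.util.Arrays;
import java.util.Iterator;

public class ScanSketch {
  // Stand-in for a scanner whose close() frees server-side resources.
  static class FakeScanner implements AutoCloseable, Iterable<String> {
    boolean closed = false;

    @Override
    public Iterator<String> iterator() {
      return Arrays.asList("a", "b").iterator();
    }

    @Override
    public void close() {
      closed = true;
    }
  }

  public static void main(String[] args) {
    FakeScanner probe = new FakeScanner();
    try (FakeScanner scanner = probe) {
      for (String row : scanner) {
        System.out.println(row);
      }
    } // close() runs here, even if the loop had thrown
    System.out.println(probe.closed);
  }
}
```

Multiple resources can be declared in one `try (...)`, as the `deleteTable` hunk does with its scanner and batch writer; they are closed in reverse declaration order.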
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
index 5e74aac..b38083f 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/MetadataTableUtil.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.server.util;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN;
 import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.TIME_COLUMN;
@@ -23,8 +24,7 @@
 
 import java.io.IOException;
 import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Comparator;
+import java.util.Collection;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
@@ -75,7 +75,6 @@
 import org.apache.accumulo.core.util.ColumnFQ;
 import org.apache.accumulo.core.util.FastFormat;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
@@ -96,6 +95,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Optional;
 
 /**
@@ -105,8 +105,8 @@
 
   private static final Text EMPTY_TEXT = new Text();
   private static final byte[] EMPTY_BYTES = new byte[0];
-  private static Map<Credentials,Writer> root_tables = new HashMap<Credentials,Writer>();
-  private static Map<Credentials,Writer> metadata_tables = new HashMap<Credentials,Writer>();
+  private static Map<Credentials,Writer> root_tables = new HashMap<>();
+  private static Map<Credentials,Writer> metadata_tables = new HashMap<>();
   private static final Logger log = LoggerFactory.getLogger(MetadataTableUtil.class);
 
   private MetadataTableUtil() {}
@@ -121,7 +121,7 @@
     return metadataTable;
   }
 
-  private synchronized static Writer getRootTable(ClientContext context) {
+  public synchronized static Writer getRootTable(ClientContext context) {
     Credentials credentials = context.getCredentials();
     Writer rootTable = root_tables.get(credentials);
     if (rootTable == null) {
@@ -131,7 +131,7 @@
     return rootTable;
   }
 
-  private static void putLockID(ZooLock zooLock, Mutation m) {
+  public static void putLockID(ZooLock zooLock, Mutation m) {
     TabletsSection.ServerColumnFamily.LOCK_COLUMN.put(m, new Value(zooLock.getLockID().serialize(ZooUtil.getRoot(HdfsZooInstance.getInstance()) + "/")
         .getBytes(UTF_8)));
   }
@@ -163,7 +163,7 @@
       } catch (TableNotFoundException e) {
         log.error("{}", e.getMessage(), e);
       }
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
   }
 
@@ -223,7 +223,7 @@
 
       // add before removing in case of process death
       for (LogEntry logEntry : logsToAdd)
-        addLogEntry(context, logEntry, zooLock);
+        addRootLogEntry(context, zooLock, logEntry);
 
       removeUnusedWALEntries(context, extent, logsToRemove, zooLock);
     } else {
@@ -248,27 +248,57 @@
     }
   }
 
-  public static SortedMap<FileRef,DataFileValue> getDataFileSizes(KeyExtent extent, ClientContext context) throws IOException {
-    TreeMap<FileRef,DataFileValue> sizes = new TreeMap<FileRef,DataFileValue>();
+  private static interface ZooOperation {
+    void run(IZooReaderWriter rw) throws KeeperException, InterruptedException, IOException;
+  }
 
-    Scanner mdScanner = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY);
-    mdScanner.fetchColumnFamily(DataFileColumnFamily.NAME);
-    Text row = extent.getMetadataEntry();
-    VolumeManager fs = VolumeManagerImpl.get();
-
-    Key endKey = new Key(row, DataFileColumnFamily.NAME, new Text(""));
-    endKey = endKey.followingKey(PartialKey.ROW_COLFAM);
-
-    mdScanner.setRange(new Range(new Key(row), endKey));
-    for (Entry<Key,Value> entry : mdScanner) {
-
-      if (!entry.getKey().getRow().equals(row))
+  private static void retryZooKeeperUpdate(ClientContext context, ZooLock zooLock, ZooOperation op) {
+    while (true) {
+      try {
+        IZooReaderWriter zoo = ZooReaderWriter.getInstance();
+        if (zoo.isLockHeld(zooLock.getLockID())) {
+          op.run(zoo);
+        }
         break;
-      DataFileValue dfv = new DataFileValue(entry.getValue().get());
-      sizes.put(new FileRef(fs, entry.getKey()), dfv);
+      } catch (Exception e) {
+        log.error("Unexpected exception {}", e.getMessage(), e);
+      }
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
+  }
 
-    return sizes;
+  private static void addRootLogEntry(AccumuloServerContext context, ZooLock zooLock, final LogEntry entry) {
+    retryZooKeeperUpdate(context, zooLock, new ZooOperation() {
+      @Override
+      public void run(IZooReaderWriter rw) throws KeeperException, InterruptedException, IOException {
+        String root = getZookeeperLogLocation();
+        rw.putPersistentData(root + "/" + entry.getUniqueID(), entry.toBytes(), NodeExistsPolicy.OVERWRITE);
+      }
+    });
+  }
+
+  public static SortedMap<FileRef,DataFileValue> getDataFileSizes(KeyExtent extent, ClientContext context) throws IOException {
+    TreeMap<FileRef,DataFileValue> sizes = new TreeMap<>();
+
+    try (Scanner mdScanner = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY)) {
+      mdScanner.fetchColumnFamily(DataFileColumnFamily.NAME);
+      Text row = extent.getMetadataEntry();
+      VolumeManager fs = VolumeManagerImpl.get();
+
+      Key endKey = new Key(row, DataFileColumnFamily.NAME, new Text(""));
+      endKey = endKey.followingKey(PartialKey.ROW_COLFAM);
+
+      mdScanner.setRange(new Range(new Key(row), endKey));
+      for (Entry<Key,Value> entry : mdScanner) {
+
+        if (!entry.getKey().getRow().equals(row))
+          break;
+        DataFileValue dfv = new DataFileValue(entry.getValue().get());
+        sizes.put(new FileRef(fs, entry.getKey()), dfv);
+      }
+
+      return sizes;
+    }
   }
 
   public static void rollBackSplit(Text metadataEntry, Text oldPrevEndRow, ClientContext context, ZooLock zooLock) {
@@ -314,7 +344,7 @@
 
   public static void addDeleteEntries(KeyExtent extent, Set<FileRef> datafilesToDelete, ClientContext context) throws IOException {
 
-    String tableId = extent.getTableId().toString();
+    String tableId = extent.getTableId();
 
     // TODO could use batch writer, would need to handle failure and retry like update does - ACCUMULO-1294
     for (FileRef pathToRemove : datafilesToDelete) {
@@ -323,7 +353,7 @@
   }
 
   public static void addDeleteEntry(AccumuloServerContext context, String tableId, String path) throws IOException {
-    update(context, createDeleteMutation(tableId, path), new KeyExtent(new Text(tableId), null, null));
+    update(context, createDeleteMutation(tableId, path), new KeyExtent(tableId, null, null));
   }
 
   public static Mutation createDeleteMutation(String tableId, String pathToRemove) throws IOException {
@@ -342,7 +372,7 @@
     update(context, zooLock, m, extent);
   }
 
-  public static void splitDatafiles(Text table, Text midRow, double splitRatio, Map<FileRef,FileUtil.FileInfo> firstAndLastRows,
+  public static void splitDatafiles(String tableId, Text midRow, double splitRatio, Map<FileRef,FileUtil.FileInfo> firstAndLastRows,
       SortedMap<FileRef,DataFileValue> datafiles, SortedMap<FileRef,DataFileValue> lowDatafileSizes, SortedMap<FileRef,DataFileValue> highDatafileSizes,
       List<FileRef> highDatafilesToRemove) {
 
@@ -386,95 +416,65 @@
   }
 
   public static void deleteTable(String tableId, boolean insertDeletes, ClientContext context, ZooLock lock) throws AccumuloException, IOException {
-    Scanner ms = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY);
-    Text tableIdText = new Text(tableId);
-    BatchWriter bw = new BatchWriterImpl(context, MetadataTable.ID, new BatchWriterConfig().setMaxMemory(1000000).setMaxLatency(120000l, TimeUnit.MILLISECONDS)
-        .setMaxWriteThreads(2));
+    try (Scanner ms = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY);
+        BatchWriter bw = new BatchWriterImpl(context, MetadataTable.ID, new BatchWriterConfig().setMaxMemory(1000000)
+            .setMaxLatency(120000l, TimeUnit.MILLISECONDS).setMaxWriteThreads(2))) {
 
-    // scan metadata for our table and delete everything we find
-    Mutation m = null;
-    ms.setRange(new KeyExtent(tableIdText, null, null).toMetadataRange());
+      // scan metadata for our table and delete everything we find
+      Mutation m = null;
+      ms.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
 
-    // insert deletes before deleting data from metadata... this makes the code fault tolerant
-    if (insertDeletes) {
+      // insert deletes before deleting data from metadata... this makes the code fault tolerant
+      if (insertDeletes) {
 
-      ms.fetchColumnFamily(DataFileColumnFamily.NAME);
-      TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.fetch(ms);
+        ms.fetchColumnFamily(DataFileColumnFamily.NAME);
+        TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.fetch(ms);
+
+        for (Entry<Key,Value> cell : ms) {
+          Key key = cell.getKey();
+
+          if (key.getColumnFamily().equals(DataFileColumnFamily.NAME)) {
+            FileRef ref = new FileRef(VolumeManagerImpl.get(), key);
+            bw.addMutation(createDeleteMutation(tableId, ref.meta().toString()));
+          }
+
+          if (TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.hasColumns(key)) {
+            bw.addMutation(createDeleteMutation(tableId, cell.getValue().toString()));
+          }
+        }
+
+        bw.flush();
+
+        ms.clearColumns();
+      }
 
       for (Entry<Key,Value> cell : ms) {
         Key key = cell.getKey();
 
-        if (key.getColumnFamily().equals(DataFileColumnFamily.NAME)) {
-          FileRef ref = new FileRef(VolumeManagerImpl.get(), key);
-          bw.addMutation(createDeleteMutation(tableId, ref.meta().toString()));
+        if (m == null) {
+          m = new Mutation(key.getRow());
+          if (lock != null)
+            putLockID(lock, m);
         }
 
-        if (TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.hasColumns(key)) {
-          bw.addMutation(createDeleteMutation(tableId, cell.getValue().toString()));
+        if (key.getRow().compareTo(m.getRow(), 0, m.getRow().length) != 0) {
+          bw.addMutation(m);
+          m = new Mutation(key.getRow());
+          if (lock != null)
+            putLockID(lock, m);
         }
+        m.putDelete(key.getColumnFamily(), key.getColumnQualifier());
       }
 
-      bw.flush();
-
-      ms.clearColumns();
-    }
-
-    for (Entry<Key,Value> cell : ms) {
-      Key key = cell.getKey();
-
-      if (m == null) {
-        m = new Mutation(key.getRow());
-        if (lock != null)
-          putLockID(lock, m);
-      }
-
-      if (key.getRow().compareTo(m.getRow(), 0, m.getRow().length) != 0) {
+      if (m != null)
         bw.addMutation(m);
-        m = new Mutation(key.getRow());
-        if (lock != null)
-          putLockID(lock, m);
-      }
-      m.putDelete(key.getColumnFamily(), key.getColumnQualifier());
     }
-
-    if (m != null)
-      bw.addMutation(m);
-
-    bw.close();
   }
 
   static String getZookeeperLogLocation() {
     return ZooUtil.getRoot(HdfsZooInstance.getInstance()) + RootTable.ZROOT_TABLET_WALOGS;
   }
 
-  public static void addLogEntry(ClientContext context, LogEntry entry, ZooLock zooLock) {
-    if (entry.extent.isRootTablet()) {
-      String root = getZookeeperLogLocation();
-      while (true) {
-        try {
-          IZooReaderWriter zoo = ZooReaderWriter.getInstance();
-          if (zoo.isLockHeld(zooLock.getLockID())) {
-            String[] parts = entry.filename.split("/");
-            String uniqueId = parts[parts.length - 1];
-            zoo.putPersistentData(root + "/" + uniqueId, entry.toBytes(), NodeExistsPolicy.OVERWRITE);
-          }
-          break;
-        } catch (KeeperException e) {
-          log.error("{}", e.getMessage(), e);
-        } catch (InterruptedException e) {
-          log.error("{}", e.getMessage(), e);
-        } catch (IOException e) {
-          log.error("{}", e.getMessage(), e);
-        }
-        UtilWaitThread.sleep(1000);
-      }
-    } else {
-      Mutation m = new Mutation(entry.getRow());
-      m.put(entry.getColumnFamily(), entry.getColumnQualifier(), entry.getValue());
-      update(context, zooLock, m, entry.extent);
-    }
-  }
-
   public static void setRootTabletDir(String dir) throws IOException {
     IZooReaderWriter zoo = ZooReaderWriter.getInstance();
     String zpath = ZooUtil.getRoot(HdfsZooInstance.getInstance()) + RootTable.ZROOT_TABLET_PATH;
@@ -503,8 +503,8 @@
 
   public static Pair<List<LogEntry>,SortedMap<FileRef,DataFileValue>> getFileAndLogEntries(ClientContext context, KeyExtent extent) throws KeeperException,
       InterruptedException, IOException {
-    ArrayList<LogEntry> result = new ArrayList<LogEntry>();
-    TreeMap<FileRef,DataFileValue> sizes = new TreeMap<FileRef,DataFileValue>();
+    ArrayList<LogEntry> result = new ArrayList<>();
+    TreeMap<FileRef,DataFileValue> sizes = new TreeMap<>();
 
     VolumeManager fs = VolumeManagerImpl.get();
     if (extent.isRootTablet()) {
@@ -521,23 +521,24 @@
 
     } else {
       String systemTableToCheck = extent.isMeta() ? RootTable.ID : MetadataTable.ID;
-      Scanner scanner = new ScannerImpl(context, systemTableToCheck, Authorizations.EMPTY);
-      scanner.fetchColumnFamily(LogColumnFamily.NAME);
-      scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
-      scanner.setRange(extent.toMetadataRange());
+      try (Scanner scanner = new ScannerImpl(context, systemTableToCheck, Authorizations.EMPTY)) {
+        scanner.fetchColumnFamily(LogColumnFamily.NAME);
+        scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
+        scanner.setRange(extent.toMetadataRange());
 
-      for (Entry<Key,Value> entry : scanner) {
-        if (!entry.getKey().getRow().equals(extent.getMetadataEntry())) {
-          throw new RuntimeException("Unexpected row " + entry.getKey().getRow() + " expected " + extent.getMetadataEntry());
-        }
+        for (Entry<Key,Value> entry : scanner) {
+          if (!entry.getKey().getRow().equals(extent.getMetadataEntry())) {
+            throw new RuntimeException("Unexpected row " + entry.getKey().getRow() + " expected " + extent.getMetadataEntry());
+          }
 
-        if (entry.getKey().getColumnFamily().equals(LogColumnFamily.NAME)) {
-          result.add(LogEntry.fromKeyValue(entry.getKey(), entry.getValue()));
-        } else if (entry.getKey().getColumnFamily().equals(DataFileColumnFamily.NAME)) {
-          DataFileValue dfv = new DataFileValue(entry.getValue().get());
-          sizes.put(new FileRef(fs, entry.getKey()), dfv);
-        } else {
-          throw new RuntimeException("Unexpected col fam " + entry.getKey().getColumnFamily());
+          if (entry.getKey().getColumnFamily().equals(LogColumnFamily.NAME)) {
+            result.add(LogEntry.fromKeyValue(entry.getKey(), entry.getValue()));
+          } else if (entry.getKey().getColumnFamily().equals(DataFileColumnFamily.NAME)) {
+            DataFileValue dfv = new DataFileValue(entry.getValue().get());
+            sizes.put(new FileRef(fs, entry.getKey()), dfv);
+          } else {
+            throw new RuntimeException("Unexpected col fam " + entry.getKey().getColumnFamily());
+          }
         }
       }
     }
@@ -547,7 +548,7 @@
 
   public static List<LogEntry> getLogEntries(ClientContext context, KeyExtent extent) throws IOException, KeeperException, InterruptedException {
     log.info("Scanning logging entries for " + extent);
-    ArrayList<LogEntry> result = new ArrayList<LogEntry>();
+    ArrayList<LogEntry> result = new ArrayList<>();
     if (extent.equals(RootTable.EXTENT)) {
       log.info("Getting logs for root tablet from zookeeper");
       getRootLogEntries(result);
@@ -565,22 +566,11 @@
       }
     }
 
-    Collections.sort(result, new Comparator<LogEntry>() {
-      @Override
-      public int compare(LogEntry o1, LogEntry o2) {
-        long diff = o1.timestamp - o2.timestamp;
-        if (diff < 0)
-          return -1;
-        if (diff > 0)
-          return 1;
-        return 0;
-      }
-    });
     log.info("Returning logs " + result + " for extent " + extent);
     return result;
   }
 
-  static void getRootLogEntries(ArrayList<LogEntry> result) throws KeeperException, InterruptedException, IOException {
+  static void getRootLogEntries(final ArrayList<LogEntry> result) throws KeeperException, InterruptedException, IOException {
     IZooReaderWriter zoo = ZooReaderWriter.getInstance();
     String root = getZookeeperLogLocation();
     // there's a little race between getting the children and fetching
@@ -588,11 +578,10 @@
     while (true) {
       result.clear();
       for (String child : zoo.getChildren(root)) {
-        LogEntry e = new LogEntry();
         try {
-          e.fromBytes(zoo.getData(root + "/" + child, null));
+          LogEntry e = LogEntry.fromBytes(zoo.getData(root + "/" + child, null));
           // upgrade from !0;!0<< -> +r<<
-          e.extent = RootTable.EXTENT;
+          e = new LogEntry(RootTable.EXTENT, 0, e.server, e.filename);
           result.add(e);
         } catch (KeeperException.NoNodeException ex) {
           continue;
@@ -623,7 +612,7 @@
 
     LogEntryIterator(ClientContext context) throws IOException, KeeperException, InterruptedException {
       zookeeperEntries = getLogEntries(context, RootTable.EXTENT).iterator();
-      rootTableEntries = getLogEntries(context, new KeyExtent(new Text(MetadataTable.ID), null, null)).iterator();
+      rootTableEntries = getLogEntries(context, new KeyExtent(MetadataTable.ID, null, null)).iterator();
       try {
         Scanner scanner = context.getConnector().createScanner(MetadataTable.NAME, Authorizations.EMPTY);
         log.info("Setting range to " + MetadataSchema.TabletsSection.getRange());
@@ -662,28 +651,23 @@
     return new LogEntryIterator(context);
   }
 
-  public static void removeUnusedWALEntries(AccumuloServerContext context, KeyExtent extent, List<LogEntry> logEntries, ZooLock zooLock) {
+  public static void removeUnusedWALEntries(AccumuloServerContext context, KeyExtent extent, final List<LogEntry> entries, ZooLock zooLock) {
     if (extent.isRootTablet()) {
-      for (LogEntry entry : logEntries) {
-        String root = getZookeeperLogLocation();
-        while (true) {
-          try {
-            IZooReaderWriter zoo = ZooReaderWriter.getInstance();
-            if (zoo.isLockHeld(zooLock.getLockID())) {
-              String parts[] = entry.filename.split("/");
-              zoo.recursiveDelete(root + "/" + parts[parts.length - 1], NodeMissingPolicy.SKIP);
-            }
-            break;
-          } catch (Exception e) {
-            log.error("{}", e.getMessage(), e);
+      retryZooKeeperUpdate(context, zooLock, new ZooOperation() {
+        @Override
+        public void run(IZooReaderWriter rw) throws KeeperException, InterruptedException, IOException {
+          String root = getZookeeperLogLocation();
+          for (LogEntry entry : entries) {
+            String path = root + "/" + entry.getUniqueID();
+            log.debug("Removing " + path + " from zookeeper");
+            rw.recursiveDelete(path, NodeMissingPolicy.SKIP);
           }
-          UtilWaitThread.sleep(1000);
         }
-      }
+      });
     } else {
       Mutation m = new Mutation(extent.getMetadataEntry());
-      for (LogEntry entry : logEntries) {
-        m.putDelete(LogColumnFamily.NAME, new Text(entry.getName()));
+      for (LogEntry entry : entries) {
+        m.putDelete(entry.getColumnFamily(), entry.getColumnQualifier());
       }
       update(context, zooLock, m, extent);
     }
@@ -704,7 +688,7 @@
   private static Mutation createCloneMutation(String srcTableId, String tableId, Map<Key,Value> tablet) {
 
     KeyExtent ke = new KeyExtent(tablet.keySet().iterator().next().getRow(), (Text) null);
-    Mutation m = new Mutation(KeyExtent.getMetadataEntry(new Text(tableId), ke.getEndRow()));
+    Mutation m = new Mutation(KeyExtent.getMetadataEntry(tableId, ke.getEndRow()));
 
     for (Entry<Key,Value> entry : tablet.entrySet()) {
       if (entry.getKey().getColumnFamily().equals(DataFileColumnFamily.NAME)) {
@@ -723,12 +707,9 @@
     return m;
   }
 
-  private static Scanner createCloneScanner(String tableId, Connector conn) throws TableNotFoundException {
-    String tableName = MetadataTable.NAME;
-    if (tableId.equals(MetadataTable.ID))
-      tableName = RootTable.NAME;
+  private static Scanner createCloneScanner(String tableName, String tableId, Connector conn) throws TableNotFoundException {
     Scanner mscanner = new IsolatedScanner(conn.createScanner(tableName, Authorizations.EMPTY));
-    mscanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
+    mscanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
     mscanner.fetchColumnFamily(DataFileColumnFamily.NAME);
     mscanner.fetchColumnFamily(TabletsSection.CurrentLocationColumnFamily.NAME);
     mscanner.fetchColumnFamily(TabletsSection.LastLocationColumnFamily.NAME);
@@ -738,12 +719,14 @@
     return mscanner;
   }
 
-  static void initializeClone(String srcTableId, String tableId, Connector conn, BatchWriter bw) throws TableNotFoundException, MutationsRejectedException {
+  @VisibleForTesting
+  public static void initializeClone(String tableName, String srcTableId, String tableId, Connector conn, BatchWriter bw) throws TableNotFoundException,
+      MutationsRejectedException {
     TabletIterator ti;
     if (srcTableId.equals(MetadataTable.ID))
-      ti = new TabletIterator(createCloneScanner(srcTableId, conn), new Range(), true, true);
+      ti = new TabletIterator(createCloneScanner(tableName, srcTableId, conn), new Range(), true, true);
     else
-      ti = new TabletIterator(createCloneScanner(srcTableId, conn), new KeyExtent(new Text(srcTableId), null, null).toMetadataRange(), true, true);
+      ti = new TabletIterator(createCloneScanner(tableName, srcTableId, conn), new KeyExtent(srcTableId, null, null).toMetadataRange(), true, true);
 
     if (!ti.hasNext())
       throw new RuntimeException(" table deleted during clone?  srcTableId = " + srcTableId);
@@ -755,13 +738,16 @@
   }
 
   private static int compareEndRows(Text endRow1, Text endRow2) {
-    return new KeyExtent(new Text("0"), endRow1, null).compareTo(new KeyExtent(new Text("0"), endRow2, null));
+    return new KeyExtent("0", endRow1, null).compareTo(new KeyExtent("0", endRow2, null));
   }
 
-  static int checkClone(String srcTableId, String tableId, Connector conn, BatchWriter bw) throws TableNotFoundException, MutationsRejectedException {
-    TabletIterator srcIter = new TabletIterator(createCloneScanner(srcTableId, conn), new KeyExtent(new Text(srcTableId), null, null).toMetadataRange(), true,
+  @VisibleForTesting
+  public static int checkClone(String tableName, String srcTableId, String tableId, Connector conn, BatchWriter bw) throws TableNotFoundException,
+      MutationsRejectedException {
+    TabletIterator srcIter = new TabletIterator(createCloneScanner(tableName, srcTableId, conn), new KeyExtent(srcTableId, null, null).toMetadataRange(), true,
         true);
-    TabletIterator cloneIter = new TabletIterator(createCloneScanner(tableId, conn), new KeyExtent(new Text(tableId), null, null).toMetadataRange(), true, true);
+    TabletIterator cloneIter = new TabletIterator(createCloneScanner(tableName, tableId, conn), new KeyExtent(tableId, null, null).toMetadataRange(), true,
+        true);
 
     if (!cloneIter.hasNext() || !srcIter.hasNext())
       throw new RuntimeException(" table deleted during clone?  srcTableId = " + srcTableId + " tableId=" + tableId);
@@ -771,7 +757,7 @@
     while (cloneIter.hasNext()) {
       Map<Key,Value> cloneTablet = cloneIter.next();
       Text cloneEndRow = new KeyExtent(cloneTablet.keySet().iterator().next().getRow(), (Text) null).getEndRow();
-      HashSet<String> cloneFiles = new HashSet<String>();
+      HashSet<String> cloneFiles = new HashSet<>();
 
       boolean cloneSuccessful = false;
       for (Entry<Key,Value> entry : cloneTablet.entrySet()) {
@@ -784,7 +770,7 @@
       if (!cloneSuccessful)
         getFiles(cloneFiles, cloneTablet, null);
 
-      List<Map<Key,Value>> srcTablets = new ArrayList<Map<Key,Value>>();
+      List<Map<Key,Value>> srcTablets = new ArrayList<>();
       Map<Key,Value> srcTablet = srcIter.next();
       srcTablets.add(srcTablet);
 
@@ -794,7 +780,7 @@
       if (cmp < 0)
         throw new TabletIterator.TabletDeletedException("Tablets deleted from src during clone : " + cloneEndRow + " " + srcEndRow);
 
-      HashSet<String> srcFiles = new HashSet<String>();
+      HashSet<String> srcFiles = new HashSet<>();
       if (!cloneSuccessful)
         getFiles(srcFiles, srcTablet, srcTableId);
 
@@ -843,58 +829,56 @@
   public static void cloneTable(ClientContext context, String srcTableId, String tableId, VolumeManager volumeManager) throws Exception {
 
     Connector conn = context.getConnector();
-    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    try (BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig())) {
 
-    while (true) {
+      while (true) {
 
-      try {
-        initializeClone(srcTableId, tableId, conn, bw);
+        try {
+          initializeClone(MetadataTable.NAME, srcTableId, tableId, conn, bw);
 
-        // the following loop looks changes in the file that occurred during the copy.. if files were dereferenced then they could have been GCed
+          // the following loop looks for changes to the files that occurred during the copy; if files were dereferenced then they could have been GCed
 
-        while (true) {
-          int rewrites = checkClone(srcTableId, tableId, conn, bw);
+          while (true) {
+            int rewrites = checkClone(MetadataTable.NAME, srcTableId, tableId, conn, bw);
 
-          if (rewrites == 0)
-            break;
+            if (rewrites == 0)
+              break;
+          }
+
+          bw.flush();
+          break;
+
+        } catch (TabletIterator.TabletDeletedException tde) {
+          // tablets were merged in the src table
+          bw.flush();
+
+          // delete what we have cloned and try again
+          deleteTable(tableId, false, context, null);
+
+          log.debug("Tablets merged in table " + srcTableId + " while attempting to clone, trying again");
+
+          sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
         }
+      }
 
-        bw.flush();
-        break;
+      // delete the clone markers and create directory entries
+      Scanner mscanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
+      mscanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
+      mscanner.fetchColumnFamily(ClonedColumnFamily.NAME);
 
-      } catch (TabletIterator.TabletDeletedException tde) {
-        // tablets were merged in the src table
-        bw.flush();
+      int dirCount = 0;
 
-        // delete what we have cloned and try again
-        deleteTable(tableId, false, context, null);
+      for (Entry<Key,Value> entry : mscanner) {
+        Key k = entry.getKey();
+        Mutation m = new Mutation(k.getRow());
+        m.putDelete(k.getColumnFamily(), k.getColumnQualifier());
+        String dir = volumeManager.choose(Optional.of(tableId), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR + tableId
+            + Path.SEPARATOR + new String(FastFormat.toZeroPaddedString(dirCount++, 8, 16, Constants.CLONE_PREFIX_BYTES));
+        TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value(dir.getBytes(UTF_8)));
 
-        log.debug("Tablets merged in table " + srcTableId + " while attempting to clone, trying again");
-
-        UtilWaitThread.sleep(100);
+        bw.addMutation(m);
       }
     }
-
-    // delete the clone markers and create directory entries
-    Scanner mscanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    mscanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
-    mscanner.fetchColumnFamily(ClonedColumnFamily.NAME);
-
-    int dirCount = 0;
-
-    for (Entry<Key,Value> entry : mscanner) {
-      Key k = entry.getKey();
-      Mutation m = new Mutation(k.getRow());
-      m.putDelete(k.getColumnFamily(), k.getColumnQualifier());
-      String dir = volumeManager.choose(Optional.of(tableId), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR + tableId
-          + Path.SEPARATOR + new String(FastFormat.toZeroPaddedString(dirCount++, 8, 16, Constants.CLONE_PREFIX_BYTES));
-      TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value(dir.getBytes(UTF_8)));
-
-      bw.addMutation(m);
-    }
-
-    bw.close();
-
   }
 
   public static void chopped(AccumuloServerContext context, KeyExtent extent, ZooLock zooLock) {
@@ -904,27 +888,26 @@
   }
 
   public static void removeBulkLoadEntries(Connector conn, String tableId, long tid) throws Exception {
-    Scanner mscanner = new IsolatedScanner(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY));
-    mscanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
-    mscanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
-    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    for (Entry<Key,Value> entry : mscanner) {
-      log.debug("Looking at entry " + entry + " with tid " + tid);
-      if (Long.parseLong(entry.getValue().toString()) == tid) {
-        log.debug("deleting entry " + entry);
-        Mutation m = new Mutation(entry.getKey().getRow());
-        m.putDelete(entry.getKey().getColumnFamily(), entry.getKey().getColumnQualifier());
-        bw.addMutation(m);
+    try (Scanner mscanner = new IsolatedScanner(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY));
+        BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig())) {
+      mscanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
+      mscanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
+      for (Entry<Key,Value> entry : mscanner) {
+        log.debug("Looking at entry " + entry + " with tid " + tid);
+        if (Long.parseLong(entry.getValue().toString()) == tid) {
+          log.debug("deleting entry " + entry);
+          Mutation m = new Mutation(entry.getKey().getRow());
+          m.putDelete(entry.getKey().getColumnFamily(), entry.getKey().getColumnQualifier());
+          bw.addMutation(m);
+        }
       }
     }
-    bw.close();
   }
 
   public static List<FileRef> getBulkFilesLoaded(Connector conn, KeyExtent extent, long tid) throws IOException {
-    List<FileRef> result = new ArrayList<FileRef>();
-    try {
+    List<FileRef> result = new ArrayList<>();
+    try (Scanner mscanner = new IsolatedScanner(conn.createScanner(extent.isMeta() ? RootTable.NAME : MetadataTable.NAME, Authorizations.EMPTY))) {
       VolumeManager fs = VolumeManagerImpl.get();
-      Scanner mscanner = new IsolatedScanner(conn.createScanner(extent.isMeta() ? RootTable.NAME : MetadataTable.NAME, Authorizations.EMPTY));
       mscanner.setRange(extent.toMetadataRange());
       mscanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
       for (Entry<Key,Value> entry : mscanner) {
@@ -932,6 +915,7 @@
           result.add(new FileRef(fs, entry.getKey()));
         }
       }
+
       return result;
     } catch (TableNotFoundException ex) {
       // unlikely
@@ -939,19 +923,24 @@
     }
   }
 
-  public static Map<FileRef,Long> getBulkFilesLoaded(ClientContext context, KeyExtent extent) throws IOException {
+  public static Map<Long,? extends Collection<FileRef>> getBulkFilesLoaded(ClientContext context, KeyExtent extent) throws IOException {
     Text metadataRow = extent.getMetadataEntry();
-    Map<FileRef,Long> ret = new HashMap<FileRef,Long>();
+    Map<Long,List<FileRef>> result = new HashMap<>();
 
     VolumeManager fs = VolumeManagerImpl.get();
-    Scanner scanner = new ScannerImpl(context, extent.isMeta() ? RootTable.ID : MetadataTable.ID, Authorizations.EMPTY);
-    scanner.setRange(new Range(metadataRow));
-    scanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
-    for (Entry<Key,Value> entry : scanner) {
-      Long tid = Long.parseLong(entry.getValue().toString());
-      ret.put(new FileRef(fs, entry.getKey()), tid);
+    try (Scanner scanner = new ScannerImpl(context, extent.isMeta() ? RootTable.ID : MetadataTable.ID, Authorizations.EMPTY)) {
+      scanner.setRange(new Range(metadataRow));
+      scanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
+      for (Entry<Key,Value> entry : scanner) {
+        Long tid = Long.parseLong(entry.getValue().toString());
+        List<FileRef> lst = result.get(tid);
+        if (lst == null) {
+          result.put(tid, lst = new ArrayList<>());
+        }
+        lst.add(new FileRef(fs, entry.getKey()));
+      }
     }
-    return ret;
+    return result;
   }
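The refactored `getBulkFilesLoaded` above inverts the scan results: instead of mapping each file to its transaction id, it groups files under their tid. A standalone sketch of that inversion, using plain strings in place of `FileRef` and invented file names and tids:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupByTidDemo {
    // Invert a file -> tid map into tid -> files, mirroring the new return
    // shape of getBulkFilesLoaded. Uses the same get/put idiom as the diff.
    public static Map<Long, List<String>> groupByTid(Map<String, Long> loaded) {
        Map<Long, List<String>> byTid = new HashMap<>();
        for (Map.Entry<String, Long> e : loaded.entrySet()) {
            List<String> lst = byTid.get(e.getValue());
            if (lst == null) {
                byTid.put(e.getValue(), lst = new ArrayList<>());
            }
            lst.add(e.getKey());
        }
        return byTid;
    }

    public static void main(String[] args) {
        // File names and transaction ids are made up for illustration.
        Map<String, Long> loaded = new LinkedHashMap<>();
        loaded.put("/bulk/f1.rf", 42L);
        loaded.put("/bulk/f2.rf", 42L);
        loaded.put("/bulk/f3.rf", 7L);
        System.out.println(groupByTid(loaded).get(42L)); // [/bulk/f1.rf, /bulk/f2.rf]
    }
}
```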
 
   public static void addBulkLoadInProgressFlag(AccumuloServerContext context, String path) {
@@ -961,7 +950,7 @@
 
     // new KeyExtent is only added to force update to write to the metadata table, not the root table
     // because bulk loads aren't supported to the metadata table
-    update(context, m, new KeyExtent(new Text("anythingNotMetadata"), null, null));
+    update(context, m, new KeyExtent("anythingNotMetadata", null, null));
   }
 
   public static void removeBulkLoadInProgressFlag(AccumuloServerContext context, String path) {
@@ -971,7 +960,7 @@
 
     // new KeyExtent is only added to force update to write to the metadata table, not the root table
     // because bulk loads aren't supported to the metadata table
-    update(context, m, new KeyExtent(new Text("anythingNotMetadata"), null, null));
+    update(context, m, new KeyExtent("anythingNotMetadata", null, null));
   }
 
   /**
@@ -981,7 +970,7 @@
     String dir = VolumeManagerImpl.get().choose(Optional.of(ReplicationTable.ID), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR + Path.SEPARATOR
         + ReplicationTable.ID + Constants.DEFAULT_TABLET_LOCATION;
 
-    Mutation m = new Mutation(new Text(KeyExtent.getMetadataEntry(new Text(ReplicationTable.ID), null)));
+    Mutation m = new Mutation(new Text(KeyExtent.getMetadataEntry(ReplicationTable.ID, null)));
     m.put(DIRECTORY_COLUMN.getColumnFamily(), DIRECTORY_COLUMN.getColumnQualifier(), 0, new Value(dir.getBytes(UTF_8)));
     m.put(TIME_COLUMN.getColumnFamily(), TIME_COLUMN.getColumnQualifier(), 0, new Value((TabletTime.LOGICAL_TIME_ID + "0").getBytes(UTF_8)));
     m.put(PREV_ROW_COLUMN.getColumnFamily(), PREV_ROW_COLUMN.getColumnQualifier(), 0, KeyExtent.encodePrevEndRow(null));
@@ -996,31 +985,33 @@
     Range oldDeletesRange = new Range(oldDeletesPrefix, true, "!!~dem", false);
 
     // move old delete markers to new location, to standardize table schema between all metadata tables
-    Scanner scanner = new ScannerImpl(context, RootTable.ID, Authorizations.EMPTY);
-    scanner.setRange(oldDeletesRange);
-    for (Entry<Key,Value> entry : scanner) {
-      String row = entry.getKey().getRow().toString();
-      if (row.startsWith(oldDeletesPrefix)) {
-        moveDeleteEntry(context, RootTable.OLD_EXTENT, entry, row, oldDeletesPrefix);
-      } else {
-        break;
+    try (Scanner scanner = new ScannerImpl(context, RootTable.ID, Authorizations.EMPTY)) {
+      scanner.setRange(oldDeletesRange);
+      for (Entry<Key,Value> entry : scanner) {
+        String row = entry.getKey().getRow().toString();
+        if (row.startsWith(oldDeletesPrefix)) {
+          moveDeleteEntry(context, RootTable.OLD_EXTENT, entry, row, oldDeletesPrefix);
+        } else {
+          break;
+        }
       }
     }
   }
 
   public static void moveMetaDeleteMarkersFrom14(ClientContext context) {
     // new KeyExtent is only added to force update to write to the metadata table, not the root table
-    KeyExtent notMetadata = new KeyExtent(new Text("anythingNotMetadata"), null, null);
+    KeyExtent notMetadata = new KeyExtent("anythingNotMetadata", null, null);
 
     // move delete markers from the normal delete keyspace to the root tablet delete keyspace if the files are for the !METADATA table
-    Scanner scanner = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY);
-    scanner.setRange(MetadataSchema.DeletesSection.getRange());
-    for (Entry<Key,Value> entry : scanner) {
-      String row = entry.getKey().getRow().toString();
-      if (row.startsWith(MetadataSchema.DeletesSection.getRowPrefix() + "/" + MetadataTable.ID)) {
-        moveDeleteEntry(context, notMetadata, entry, row, MetadataSchema.DeletesSection.getRowPrefix());
-      } else {
-        break;
+    try (Scanner scanner = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY)) {
+      scanner.setRange(MetadataSchema.DeletesSection.getRange());
+      for (Entry<Key,Value> entry : scanner) {
+        String row = entry.getKey().getRow().toString();
+        if (row.startsWith(MetadataSchema.DeletesSection.getRowPrefix() + "/" + MetadataTable.ID)) {
+          moveDeleteEntry(context, notMetadata, entry, row, MetadataSchema.DeletesSection.getRowPrefix());
+        } else {
+          break;
+        }
       }
     }
   }
@@ -1041,11 +1032,11 @@
   }
 
   public static SortedMap<Text,SortedMap<ColumnFQ,Value>> getTabletEntries(SortedMap<Key,Value> tabletKeyValues, List<ColumnFQ> columns) {
-    TreeMap<Text,SortedMap<ColumnFQ,Value>> tabletEntries = new TreeMap<Text,SortedMap<ColumnFQ,Value>>();
+    TreeMap<Text,SortedMap<ColumnFQ,Value>> tabletEntries = new TreeMap<>();
 
     HashSet<ColumnFQ> colSet = null;
     if (columns != null) {
-      colSet = new HashSet<ColumnFQ>(columns);
+      colSet = new HashSet<>(columns);
     }
 
     for (Entry<Key,Value> entry : tabletKeyValues.entrySet()) {
@@ -1058,7 +1049,7 @@
 
       SortedMap<ColumnFQ,Value> colVals = tabletEntries.get(row);
       if (colVals == null) {
-        colVals = new TreeMap<ColumnFQ,Value>();
+        colVals = new TreeMap<>();
         tabletEntries.put(row, colVals);
       }
 
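Several hunks above replace explicit `bw.close()` calls with try-with-resources blocks around the `BatchWriter` and `Scanner`. A minimal self-contained sketch of that pattern; `FakeWriter` is a stand-in for illustration, not an Accumulo API:

```java
// Resources declared in the try header are closed automatically, in reverse
// declaration order, even when the body throws -- which is why the diff can
// drop the trailing bw.close() calls.
public class TryWithResourcesDemo {
    public static final StringBuilder log = new StringBuilder();

    static class FakeWriter implements AutoCloseable {
        private final String name;
        FakeWriter(String name) { this.name = name; log.append("open:").append(name).append(' '); }
        void write(String s) { log.append("write:").append(s).append(' '); }
        @Override public void close() { log.append("close:").append(name).append(' '); }
    }

    public static void main(String[] args) {
        try (FakeWriter scanner = new FakeWriter("scanner");
             FakeWriter bw = new FakeWriter("bw")) {
            bw.write("mutation");
        }
        // Closes run in reverse order: bw first, then scanner.
        System.out.println(log.toString().trim());
        // open:scanner open:bw write:mutation close:bw close:scanner
    }
}
```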
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/RemoveEntriesForMissingFiles.java b/server/base/src/main/java/org/apache/accumulo/server/util/RemoveEntriesForMissingFiles.java
index c9d4dd5..e3f531d 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/RemoveEntriesForMissingFiles.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/RemoveEntriesForMissingFiles.java
@@ -51,7 +51,6 @@
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 
 import com.beust.jcommander.Parameter;
 
@@ -125,7 +124,7 @@
 
     @SuppressWarnings({"rawtypes"})
     Map cache = new LRUMap(100000);
-    Set<Path> processing = new HashSet<Path>();
+    Set<Path> processing = new HashSet<>();
     ExecutorService threadPool = Executors.newFixedThreadPool(16);
 
     System.out.printf("Scanning : %s %s\n", table, range);
@@ -137,7 +136,7 @@
     metadata.fetchColumnFamily(DataFileColumnFamily.NAME);
     int count = 0;
     AtomicInteger missing = new AtomicInteger(0);
-    AtomicReference<Exception> exceptionRef = new AtomicReference<Exception>(null);
+    AtomicReference<Exception> exceptionRef = new AtomicReference<>(null);
     BatchWriter writer = null;
 
     if (fix)
@@ -199,7 +198,7 @@
       return checkTable(context, RootTable.NAME, MetadataSchema.TabletsSection.getRange(), fix);
     } else {
       String tableId = Tables.getTableId(context.getInstance(), tableName);
-      Range range = new KeyExtent(new Text(tableId), null, null).toMetadataRange();
+      Range range = new KeyExtent(tableId, null, null).toMetadataRange();
       return checkTable(context, MetadataTable.NAME, range, fix);
     }
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ReplicationTableUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/ReplicationTableUtil.java
index 8e755a3..f234593 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/ReplicationTableUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ReplicationTableUtil.java
@@ -16,12 +16,14 @@
  */
 package org.apache.accumulo.server.util;
 
-import java.util.Collection;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.util.Collections;
 import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -44,7 +46,6 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
 import org.apache.accumulo.core.protobuf.ProtobufUtil;
 import org.apache.accumulo.core.tabletserver.thrift.ConstraintViolationException;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.server.replication.StatusCombiner;
 import org.apache.accumulo.server.replication.StatusFormatter;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
@@ -58,7 +59,7 @@
  */
 public class ReplicationTableUtil {
 
-  private static Map<Credentials,Writer> writers = new HashMap<Credentials,Writer>();
+  private static Map<Credentials,Writer> writers = new HashMap<>();
   private static final Logger log = LoggerFactory.getLogger(ReplicationTableUtil.class);
 
   public static final String COMBINER_NAME = "replcombiner";
@@ -169,27 +170,21 @@
       } catch (TableNotFoundException e) {
         log.error(e.toString(), e);
       }
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
   }
 
   /**
    * Write replication ingest entries for each provided file with the given {@link Status}.
    */
-  public static void updateFiles(ClientContext context, KeyExtent extent, Collection<String> files, Status stat) {
+  public static void updateFiles(ClientContext context, KeyExtent extent, String file, Status stat) {
     if (log.isDebugEnabled()) {
-      log.debug("Updating replication status for " + extent + " with " + files + " using " + ProtobufUtil.toString(stat));
+      log.debug("Updating replication status for " + extent + " with " + file + " using " + ProtobufUtil.toString(stat));
     }
     // TODO could use batch writer, would need to handle failure and retry like update does - ACCUMULO-1294
-    if (files.isEmpty()) {
-      return;
-    }
 
     Value v = ProtobufUtil.toValue(stat);
-    for (String file : files) {
-      // TODO Can preclude this addition if the extent is for a table we don't need to replicate
-      update(context, createUpdateMutation(new Path(file), v, extent), extent);
-    }
+    update(context, createUpdateMutation(new Path(file), v, extent), extent);
   }
 
   static Mutation createUpdateMutation(Path file, Value v, KeyExtent extent) {
@@ -199,7 +194,7 @@
 
   private static Mutation createUpdateMutation(Text row, Value v, KeyExtent extent) {
     Mutation m = new Mutation(row);
-    m.put(MetadataSchema.ReplicationSection.COLF, extent.getTableId(), v);
+    m.put(MetadataSchema.ReplicationSection.COLF, new Text(extent.getTableId()), v);
     return m;
   }
 }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java b/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
index d4e79d76..8da1ce9 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/RestoreZookeeper.java
@@ -43,7 +43,7 @@
 
   private static class Restore extends DefaultHandler {
     IZooReaderWriter zk = null;
-    Stack<String> cwd = new Stack<String>();
+    Stack<String> cwd = new Stack<>();
     boolean overwrite = false;
 
     Restore(IZooReaderWriter zk, boolean overwrite) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ServerBulkImportStatus.java b/server/base/src/main/java/org/apache/accumulo/server/util/ServerBulkImportStatus.java
new file mode 100644
index 0000000..21815f8
--- /dev/null
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/ServerBulkImportStatus.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.server.util;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.accumulo.core.master.thrift.BulkImportState;
+import org.apache.accumulo.core.master.thrift.BulkImportStatus;
+
+// A small class to hold bulk import status information, used in the Master
+// and in two places in the tablet server.
+public class ServerBulkImportStatus {
+  private final ConcurrentMap<String,BulkImportStatus> status = new ConcurrentHashMap<>();
+
+  public List<BulkImportStatus> getBulkLoadStatus() {
+    return new ArrayList<>(status.values());
+  }
+
+  public void updateBulkImportStatus(List<String> files, BulkImportState state) {
+    for (String file : files) {
+      BulkImportStatus initial = new BulkImportStatus(System.currentTimeMillis(), file, state);
+      status.putIfAbsent(file, initial);
+      initial = status.get(file);
+      if (initial != null) {
+        initial.state = state;
+      }
+    }
+  }
+
+  public void removeBulkImportStatus(List<String> files) {
+    status.keySet().removeAll(files);
+  }
+
+}
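The new `ServerBulkImportStatus` class above leans on `ConcurrentMap.putIfAbsent` so that a file's original start timestamp survives repeated updates while only its state field is refreshed. A simplified, self-contained model of that behavior; the thrift `BulkImportStatus` is stubbed here as a plain inner class:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class BulkStatusDemo {
    public enum State { LOADING, COPYING, DONE }

    public static class Status {
        public final long startTime;
        public final String file;
        public State state;
        Status(long startTime, String file, State state) {
            this.startTime = startTime; this.file = file; this.state = state;
        }
    }

    private final ConcurrentMap<String, Status> status = new ConcurrentHashMap<>();

    public void update(List<String> files, State state) {
        for (String file : files) {
            Status initial = new Status(System.currentTimeMillis(), file, state);
            status.putIfAbsent(file, initial);   // no-op if the file is already tracked
            initial = status.get(file);
            if (initial != null) {
                initial.state = state;           // refresh the state only
            }
        }
    }

    public Status get(String file) {
        return status.get(file);
    }

    public static void main(String[] args) {
        BulkStatusDemo demo = new BulkStatusDemo();
        demo.update(Arrays.asList("/bulk/f1.rf"), State.LOADING);
        long started = demo.get("/bulk/f1.rf").startTime;
        demo.update(Arrays.asList("/bulk/f1.rf"), State.COPYING);
        // The start time survives the second update; the state does not.
        System.out.println(demo.get("/bulk/f1.rf").state);                // COPYING
        System.out.println(demo.get("/bulk/f1.rf").startTime == started); // true
    }
}
```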
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/SystemPropUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/SystemPropUtil.java
index 49a6971..81755ea 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/SystemPropUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/SystemPropUtil.java
@@ -20,6 +20,7 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.conf.PropertyType;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
@@ -33,17 +34,32 @@
   private static final Logger log = LoggerFactory.getLogger(SystemPropUtil.class);
 
   public static boolean setSystemProperty(String property, String value) throws KeeperException, InterruptedException {
-    Property p = Property.getPropertyByKey(property);
-    if ((p != null && !p.getType().isValidFormat(value)) || !Property.isValidZooPropertyKey(property)) {
-      log.warn("Ignoring property {} it is null, an invalid format, or not capable of being changed in zookeeper", property);
-      return false;
+    if (!Property.isValidZooPropertyKey(property)) {
+      IllegalArgumentException iae = new IllegalArgumentException("Zookeeper property is not mutable: " + property);
+      log.debug("Attempted to set zookeeper property.  It is not mutable", iae);
+      throw iae;
+    }
+
+    // Find the property taking prefix into account
+    Property foundProp = null;
+    for (Property prop : Property.values()) {
+      if (PropertyType.PREFIX == prop.getType() && property.startsWith(prop.getKey()) || prop.getKey().equals(property)) {
+        foundProp = prop;
+        break;
+      }
+    }
+
+    if ((foundProp == null || (foundProp.getType() != PropertyType.PREFIX && !foundProp.getType().isValidFormat(value)))) {
+      IllegalArgumentException iae = new IllegalArgumentException("Ignoring property " + property + "; it is either null or in an invalid format");
+      log.debug("Attempted to set zookeeper property.  Value is either null or invalid", iae);
+      throw iae;
     }
 
     // create the zk node for this property and set its data to the specified value
     String zPath = ZooUtil.getRoot(HdfsZooInstance.getInstance()) + Constants.ZCONFIG + "/" + property;
-    ZooReaderWriter.getInstance().putPersistentData(zPath, value.getBytes(UTF_8), NodeExistsPolicy.OVERWRITE);
+    boolean result = ZooReaderWriter.getInstance().putPersistentData(zPath, value.getBytes(UTF_8), NodeExistsPolicy.OVERWRITE);
 
-    return true;
+    return result;
   }
 
   public static void removeSystemProperty(String property) throws InterruptedException, KeeperException {
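The reworked `setSystemProperty` above now resolves a key either by exact match or, for PREFIX-typed namespaces, by prefix. A sketch of that lookup under simplified types; the property names and the `Prop` class here are illustrative stand-ins for Accumulo's `Property` enum:

```java
import java.util.Arrays;
import java.util.List;

public class PrefixLookupDemo {
    public enum Type { PREFIX, STRING }

    public static class Prop {
        public final String key;
        public final Type type;
        public Prop(String key, Type type) { this.key = key; this.type = type; }
    }

    // A key matches either exactly, or by prefix when the candidate property
    // is a PREFIX-typed namespace (e.g. a "table.custom." style prefix).
    public static Prop find(List<Prop> props, String key) {
        for (Prop p : props) {
            if ((p.type == Type.PREFIX && key.startsWith(p.key)) || p.key.equals(key)) {
                return p;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Prop> props = Arrays.asList(
            new Prop("table.custom.", Type.PREFIX),
            new Prop("tserver.port.client", Type.STRING));
        System.out.println(find(props, "table.custom.mykey").key);  // table.custom.
        System.out.println(find(props, "tserver.port.client").key); // tserver.port.client
        System.out.println(find(props, "unknown.key"));             // null
    }
}
```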
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/TableDiskUsage.java b/server/base/src/main/java/org/apache/accumulo/server/util/TableDiskUsage.java
index 235286b..84d76cb 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/TableDiskUsage.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/TableDiskUsage.java
@@ -48,7 +48,6 @@
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -59,10 +58,10 @@
 
   private static final Logger log = LoggerFactory.getLogger(TableDiskUsage.class);
   private int nextInternalId = 0;
-  private Map<String,Integer> internalIds = new HashMap<String,Integer>();
-  private Map<Integer,String> externalIds = new HashMap<Integer,String>();
-  private Map<String,Integer[]> tableFiles = new HashMap<String,Integer[]>();
-  private Map<String,Long> fileSizes = new HashMap<String,Long>();
+  private Map<String,Integer> internalIds = new HashMap<>();
+  private Map<Integer,String> externalIds = new HashMap<>();
+  private Map<String,Integer[]> tableFiles = new HashMap<>();
+  private Map<String,Long> fileSizes = new HashMap<>();
 
   void addTable(String tableId) {
     if (internalIds.containsKey(tableId))
@@ -101,7 +100,7 @@
   Map<List<String>,Long> calculateUsage() {
 
     // Bitset of tables that contain a file and total usage by all files that share that usage
-    Map<List<Integer>,Long> usage = new HashMap<List<Integer>,Long>();
+    Map<List<Integer>,Long> usage = new HashMap<>();
 
     if (log.isTraceEnabled()) {
       log.trace("fileSizes " + fileSizes);
@@ -124,10 +123,10 @@
 
     }
 
-    Map<List<String>,Long> externalUsage = new HashMap<List<String>,Long>();
+    Map<List<String>,Long> externalUsage = new HashMap<>();
 
     for (Entry<List<Integer>,Long> entry : usage.entrySet()) {
-      List<String> externalKey = new ArrayList<String>();
+      List<String> externalKey = new ArrayList<>();
       List<Integer> key = entry.getKey();
       // table bitset
       for (int i = 0; i < key.size(); i++)
@@ -165,9 +164,9 @@
     for (String tableId : tableIds)
       tdu.addTable(tableId);
 
-    HashSet<String> tablesReferenced = new HashSet<String>(tableIds);
-    HashSet<String> emptyTableIds = new HashSet<String>();
-    HashSet<String> nameSpacesReferenced = new HashSet<String>();
+    HashSet<String> tablesReferenced = new HashSet<>(tableIds);
+    HashSet<String> emptyTableIds = new HashSet<>();
+    HashSet<String> nameSpacesReferenced = new HashSet<>();
 
     // For each table ID
     for (String tableId : tableIds) {
@@ -178,7 +177,7 @@
         throw new RuntimeException(e);
       }
       mdScanner.fetchColumnFamily(DataFileColumnFamily.NAME);
-      mdScanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
+      mdScanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
 
       if (!mdScanner.iterator().hasNext()) {
         emptyTableIds.add(tableId);
@@ -223,11 +222,11 @@
     }
 
     // Invert tableId->tableName
-    HashMap<String,String> reverseTableIdMap = new HashMap<String,String>();
+    HashMap<String,String> reverseTableIdMap = new HashMap<>();
     for (Entry<String,String> entry : conn.tableOperations().tableIdMap().entrySet())
       reverseTableIdMap.put(entry.getValue(), entry.getKey());
 
-    TreeMap<TreeSet<String>,Long> usage = new TreeMap<TreeSet<String>,Long>(new Comparator<TreeSet<String>>() {
+    TreeMap<TreeSet<String>,Long> usage = new TreeMap<>(new Comparator<TreeSet<String>>() {
 
       @Override
       public int compare(TreeSet<String> o1, TreeSet<String> o2) {
@@ -258,7 +257,7 @@
     });
 
     for (Entry<List<String>,Long> entry : tdu.calculateUsage().entrySet()) {
-      TreeSet<String> tableNames = new TreeSet<String>();
+      TreeSet<String> tableNames = new TreeSet<>();
       // Convert size shared by each table id into size shared by each table name
       for (String tableId : entry.getKey())
         tableNames.add(reverseTableIdMap.get(tableId));
@@ -268,7 +267,7 @@
     }
 
     if (!emptyTableIds.isEmpty()) {
-      TreeSet<String> emptyTables = new TreeSet<String>();
+      TreeSet<String> emptyTables = new TreeSet<>();
       for (String tableId : emptyTableIds) {
         emptyTables.add(reverseTableIdMap.get(tableId));
       }
@@ -281,7 +280,7 @@
   public static void printDiskUsage(AccumuloConfiguration acuConf, Collection<String> tables, VolumeManager fs, Connector conn, Printer printer,
       boolean humanReadable) throws TableNotFoundException, IOException {
 
-    HashSet<String> tableIds = new HashSet<String>();
+    HashSet<String> tableIds = new HashSet<>();
 
     // Get table IDs for all tables requested to be 'du'
     for (String tableName : tables) {
@@ -303,7 +302,7 @@
 
   static class Opts extends ClientOpts {
     @Parameter(description = " <table> { <table> ... } ")
-    List<String> tables = new ArrayList<String>();
+    List<String> tables = new ArrayList<>();
   }
 
   public static void main(String[] args) throws Exception {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/TableInfoUtil.java b/server/base/src/main/java/org/apache/accumulo/server/util/TableInfoUtil.java
index 6aa937f..d804e1c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/TableInfoUtil.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/TableInfoUtil.java
@@ -71,7 +71,7 @@
   }
 
   public static Map<String,Double> summarizeTableStats(MasterMonitorInfo mmi) {
-    Map<String,Double> compactingByTable = new HashMap<String,Double>();
+    Map<String,Double> compactingByTable = new HashMap<>();
     if (mmi != null && mmi.tServerInfo != null) {
       for (TabletServerStatus status : mmi.tServerInfo) {
         if (status != null && status.tableMap != null) {
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/TabletIterator.java b/server/base/src/main/java/org/apache/accumulo/server/util/TabletIterator.java
index 2137999..9569f49 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/TabletIterator.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/TabletIterator.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.server.util;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Map.Entry;
@@ -23,6 +25,7 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.data.Key;
@@ -32,7 +35,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -117,8 +119,8 @@
         lastEndRow = new KeyExtent(lastTablet, (Text) null).getEndRow();
 
         // do table transition sanity check
-        String lastTable = new KeyExtent(lastTablet, (Text) null).getTableId().toString();
-        String currentTable = new KeyExtent(prevEndRowKey.getRow(), (Text) null).getTableId().toString();
+        String lastTable = new KeyExtent(lastTablet, (Text) null).getTableId();
+        String currentTable = new KeyExtent(prevEndRowKey.getRow(), (Text) null).getTableId();
 
         if (!lastTable.equals(currentTable) && (per != null || lastEndRow != null)) {
           log.info("Metadata inconsistency on table transition : " + lastTable + " " + currentTable + " " + per + " " + lastEndRow);
@@ -126,7 +128,7 @@
           currentTabletKeys = null;
           resetScanner();
 
-          UtilWaitThread.sleep(250);
+          sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
 
           continue;
         }
@@ -141,7 +143,7 @@
         currentTabletKeys = null;
         resetScanner();
 
-        UtilWaitThread.sleep(250);
+        sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
 
         continue;
 
@@ -188,7 +190,7 @@
 
     Text curMetaDataRow = null;
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     boolean sawPrevEndRow = false;
 
@@ -218,7 +220,7 @@
         resetScanner();
         curMetaDataRow = null;
         tm.clear();
-        UtilWaitThread.sleep(250);
+        sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
       } else {
         break;
       }
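The hunks above replace Accumulo's `UtilWaitThread.sleep(250)` with Guava's `sleepUninterruptibly(250, TimeUnit.MILLISECONDS)`. The key property of the Guava method is that it sleeps for the full duration even if the thread is interrupted midway, and restores the interrupt flag before returning. A simplified re-implementation of that behavior, to show what the call site is relying on (this is a sketch, not Guava's actual source):

```java
import java.util.concurrent.TimeUnit;

public class SleepDemo {
    // Sketch of sleepUninterruptibly semantics: keep sleeping through
    // interrupts until the full duration has elapsed, then re-set the
    // thread's interrupt status so callers can still observe it.
    static void sleepUninterruptibly(long duration, TimeUnit unit) {
        boolean interrupted = false;
        long remaining = unit.toNanos(duration);
        long end = System.nanoTime() + remaining;
        try {
            while (remaining > 0) {
                try {
                    TimeUnit.NANOSECONDS.sleep(remaining);
                    remaining = 0;
                } catch (InterruptedException e) {
                    interrupted = true;
                    remaining = end - System.nanoTime();
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
        System.out.println((System.nanoTime() - start) / 1_000_000 >= 50);
    }
}
```

Passing the `TimeUnit` explicitly at each call site also makes the retry delay's unit visible, where the old `sleep(250)` left milliseconds implicit.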
diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/VerifyTabletAssignments.java b/server/base/src/main/java/org/apache/accumulo/server/util/VerifyTabletAssignments.java
index c0c979b..bf072f3 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/util/VerifyTabletAssignments.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/util/VerifyTabletAssignments.java
@@ -87,14 +87,14 @@
     else
       System.out.println("Checking table " + tableName + " again, failures " + check.size());
 
-    TreeMap<KeyExtent,String> tabletLocations = new TreeMap<KeyExtent,String>();
+    TreeMap<KeyExtent,String> tabletLocations = new TreeMap<>();
 
     String tableId = Tables.getNameToIdMap(context.getInstance()).get(tableName);
     MetadataServicer.forTableId(context, tableId).getTabletLocations(tabletLocations);
 
-    final HashSet<KeyExtent> failures = new HashSet<KeyExtent>();
+    final HashSet<KeyExtent> failures = new HashSet<>();
 
-    Map<HostAndPort,List<KeyExtent>> extentsPerServer = new TreeMap<HostAndPort,List<KeyExtent>>();
+    Map<HostAndPort,List<KeyExtent>> extentsPerServer = new TreeMap<>();
 
     for (Entry<KeyExtent,String> entry : tabletLocations.entrySet()) {
       KeyExtent keyExtent = entry.getKey();
@@ -108,7 +108,7 @@
         final HostAndPort parsedLoc = HostAndPort.fromString(loc);
         List<KeyExtent> extentList = extentsPerServer.get(parsedLoc);
         if (extentList == null) {
-          extentList = new ArrayList<KeyExtent>();
+          extentList = new ArrayList<>();
           extentsPerServer.put(parsedLoc, extentList);
         }
 
@@ -156,7 +156,7 @@
       throws ThriftSecurityException, TException, NoSuchScanIDException {
     TabletClientService.Iface client = ThriftUtil.getTServerClient(entry.getKey(), context);
 
-    Map<TKeyExtent,List<TRange>> batch = new TreeMap<TKeyExtent,List<TRange>>();
+    Map<TKeyExtent,List<TRange>> batch = new TreeMap<>();
 
     for (KeyExtent keyExtent : entry.getValue()) {
       Text row = keyExtent.getEndRow();
@@ -189,7 +189,7 @@
     List<IterInfo> emptyListIterInfo = Collections.emptyList();
     List<TColumn> emptyListColumn = Collections.emptyList();
     InitialMultiScan is = client.startMultiScan(tinfo, context.rpcCreds(), batch, emptyListColumn, emptyListIterInfo, emptyMapSMapSS,
-        Authorizations.EMPTY.getAuthorizationsBB(), false);
+        Authorizations.EMPTY.getAuthorizationsBB(), false, null, 0L, null);
     if (is.result.more) {
       MultiScanResult result = client.continueMultiScan(tinfo, is.scanID);
       checkFailures(entry.getKey(), failures, result);
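The `VerifyTabletAssignments` hunk above keeps the pre-Java 8 get-or-create idiom for grouping extents by server: look up the list for a key, create and insert it when absent, then append. A self-contained sketch of that grouping pattern with plain strings standing in for `HostAndPort` and `KeyExtent` (names and data are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class GroupByDemo {
    // Manual get-or-create grouping, matching the style in the diff.
    static Map<String, List<String>> group(String[][] pairs) {
        Map<String, List<String>> byServer = new TreeMap<>();
        for (String[] pair : pairs) {
            List<String> list = byServer.get(pair[0]);
            if (list == null) {
                list = new ArrayList<>();
                byServer.put(pair[0], list);
            }
            list.add(pair[1]);
        }
        return byServer;
    }

    public static void main(String[] args) {
        String[][] pairs = {{"host1:9997", "e1"}, {"host1:9997", "e2"}, {"host2:9997", "e3"}};
        System.out.println(group(pairs)); // {host1:9997=[e1, e2], host2:9997=[e3]}
    }
}
```

On Java 8+ the loop body could collapse to `byServer.computeIfAbsent(pair[0], k -> new ArrayList<>()).add(pair[1])`; the diff only modernizes the generics, not the idiom.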
diff --git a/server/base/src/main/java/org/apache/accumulo/server/zookeeper/DistributedWorkQueue.java b/server/base/src/main/java/org/apache/accumulo/server/zookeeper/DistributedWorkQueue.java
index 0f298b4..4faa7ad 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/zookeeper/DistributedWorkQueue.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/zookeeper/DistributedWorkQueue.java
@@ -233,7 +233,7 @@
   }
 
   public List<String> getWorkQueued() throws KeeperException, InterruptedException {
-    ArrayList<String> children = new ArrayList<String>(zoo.getChildren(path));
+    ArrayList<String> children = new ArrayList<>(zoo.getChildren(path));
     children.remove(LOCKS_NODE);
     return children;
   }
diff --git a/server/base/src/main/java/org/apache/accumulo/server/zookeeper/TransactionWatcher.java b/server/base/src/main/java/org/apache/accumulo/server/zookeeper/TransactionWatcher.java
index 0e1cdfd..da94a3c 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/zookeeper/TransactionWatcher.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/zookeeper/TransactionWatcher.java
@@ -16,8 +16,13 @@
  */
 package org.apache.accumulo.server.zookeeper;
 
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
+import org.apache.accumulo.fate.zookeeper.IZooReader;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooReader;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
@@ -59,6 +64,22 @@
       writer.recursiveDelete(ZooUtil.getRoot(instance) + "/" + type + "/" + tid + "-running", NodeMissingPolicy.SKIP);
     }
 
+    public static Set<Long> allTransactionsAlive(String type) throws KeeperException, InterruptedException {
+      final Instance instance = HdfsZooInstance.getInstance();
+      final IZooReader reader = ZooReaderWriter.getInstance();
+      final Set<Long> result = new HashSet<>();
+      final String parent = ZooUtil.getRoot(instance) + "/" + type;
+      reader.sync(parent);
+      List<String> children = reader.getChildren(parent);
+      for (String child : children) {
+        if (child.endsWith("-running")) {
+          continue;
+        }
+        result.add(Long.parseLong(child));
+      }
+      return result;
+    }
+
     @Override
     public boolean transactionComplete(String type, long tid) throws Exception {
       String path = ZooUtil.getRoot(instance) + "/" + type + "/" + tid + "-running";
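The new `allTransactionsAlive` method above lists the children of a ZooKeeper node, skips the `-running` marker nodes, and parses the remaining child names as transaction ids. The filtering step can be exercised in isolation with plain strings (no ZooKeeper; class and method names here are illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TxFilterDemo {
    // Mirrors the loop in allTransactionsAlive: "-running" children are
    // markers for in-progress work, not transaction ids, so they are
    // skipped; everything else is parsed as a numeric id.
    static Set<Long> aliveIds(List<String> children) {
        Set<Long> result = new HashSet<>();
        for (String child : children) {
            if (child.endsWith("-running")) {
                continue;
            }
            result.add(Long.parseLong(child));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("42", "42-running", "7");
        System.out.println(aliveIds(children)); // the set {42, 7}
    }
}
```

Note the real method also calls `reader.sync(parent)` first, so the child listing reflects the latest committed ZooKeeper state rather than a possibly stale local view.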
diff --git a/server/base/src/test/java/org/apache/accumulo/server/AccumuloServerContextTest.java b/server/base/src/test/java/org/apache/accumulo/server/AccumuloServerContextTest.java
index a596d9f..521fd7b 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/AccumuloServerContextTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/AccumuloServerContextTest.java
@@ -26,9 +26,9 @@
 
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
+import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.impl.ClientContext;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
@@ -68,7 +68,7 @@
     testUser.doAs(new PrivilegedExceptionAction<Void>() {
       @Override
       public Void run() throws Exception {
-        MockInstance instance = new MockInstance();
+        Instance instance = EasyMock.createMock(Instance.class);
 
         ClientConfiguration clientConf = ClientConfiguration.loadDefault();
         clientConf.setProperty(ClientProperty.INSTANCE_RPC_SASL_ENABLED, "true");
diff --git a/server/base/src/test/java/org/apache/accumulo/server/ServerConstantsTest.java b/server/base/src/test/java/org/apache/accumulo/server/ServerConstantsTest.java
index 05f0a47..15c23be 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/ServerConstantsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/ServerConstantsTest.java
@@ -94,7 +94,7 @@
 
     LocalFileSystem fs = FileSystem.getLocal(new Configuration());
 
-    ArrayList<String> accumuloPaths = new ArrayList<String>();
+    ArrayList<String> accumuloPaths = new ArrayList<>();
 
     for (int i = 0; i < uuids.size(); i++) {
       String volume = "v" + i;
diff --git a/server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java b/server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
index 1c6a48f..cb2bd75 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/client/BulkImporterTest.java
@@ -25,15 +25,12 @@
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.impl.ClientContext;
-import org.apache.accumulo.core.client.impl.Credentials;
 import org.apache.accumulo.core.client.impl.TabletLocator;
 import org.apache.accumulo.core.client.impl.TabletLocator.TabletLocation;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
@@ -48,13 +45,15 @@
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
+import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Test;
 
 public class BulkImporterTest {
 
-  static final SortedSet<KeyExtent> fakeMetaData = new TreeSet<KeyExtent>();
-  static final Text tableId = new Text("1");
+  static final SortedSet<KeyExtent> fakeMetaData = new TreeSet<>();
+  static final String tableId = "1";
+
   static {
     fakeMetaData.add(new KeyExtent(tableId, new Text("a"), null));
     for (String part : new String[] {"b", "bm", "c", "cm", "d", "dm", "e", "em", "f", "g", "h", "i", "j", "k", "l"}) {
@@ -109,10 +108,13 @@
   public void testFindOverlappingTablets() throws Exception {
     MockTabletLocator locator = new MockTabletLocator();
     FileSystem fs = FileSystem.getLocal(CachedConfiguration.getInstance());
-    ClientContext context = new ClientContext(new MockInstance(), new Credentials("root", new PasswordToken("")), new ClientConfiguration());
+    ClientContext context = EasyMock.createMock(ClientContext.class);
+    EasyMock.expect(context.getConfiguration()).andReturn(DefaultConfiguration.getInstance()).anyTimes();
+    EasyMock.replay(context);
     String file = "target/testFile.rf";
     fs.delete(new Path(file), true);
-    FileSKVWriter writer = FileOperations.getInstance().openWriter(file, fs, fs.getConf(), context.getConfiguration());
+    FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder().forFile(file, fs, fs.getConf()).withTableConfiguration(context.getConfiguration())
+        .build();
     writer.startDefaultLocalityGroup();
     Value empty = new Value(new byte[] {});
     writer.append(new Key("a", "cf", "cq"), empty);
@@ -161,19 +163,19 @@
     // a correct startRow so that findOverlappingTablets works as intended.
 
     // 1;2;1
-    KeyExtent extent = new KeyExtent(new Text("1"), new Text("2"), new Text("1"));
+    KeyExtent extent = new KeyExtent("1", new Text("2"), new Text("1"));
     Assert.assertEquals(new Text("1\0"), BulkImporter.getStartRowForExtent(extent));
 
     // 1;2<
-    extent = new KeyExtent(new Text("1"), new Text("2"), null);
+    extent = new KeyExtent("1", new Text("2"), null);
     Assert.assertEquals(null, BulkImporter.getStartRowForExtent(extent));
 
     // 1<<
-    extent = new KeyExtent(new Text("1"), null, null);
+    extent = new KeyExtent("1", null, null);
     Assert.assertEquals(null, BulkImporter.getStartRowForExtent(extent));
 
     // 1;8;7777777
-    extent = new KeyExtent(new Text("1"), new Text("8"), new Text("7777777"));
+    extent = new KeyExtent("1", new Text("8"), new Text("7777777"));
     Assert.assertEquals(new Text("7777777\0"), BulkImporter.getStartRowForExtent(extent));
   }
 }
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
index 742ebb2..071e9c0 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/NamespaceConfigurationTest.java
@@ -118,10 +118,10 @@
   @Test
   public void testGetProperties() {
     Predicate<String> all = Predicates.alwaysTrue();
-    Map<String,String> props = new java.util.HashMap<String,String>();
+    Map<String,String> props = new java.util.HashMap<>();
     parent.getProperties(props, all);
     replay(parent);
-    List<String> children = new java.util.ArrayList<String>();
+    List<String> children = new java.util.ArrayList<>();
     children.add("foo");
     children.add("ding");
     expect(zc.getChildren(ZooUtil.getRoot(iid) + Constants.ZNAMESPACES + "/" + NSID + Constants.ZNAMESPACE_CONF)).andReturn(children);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
index 1eb933a..34d6905 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/TableConfigurationTest.java
@@ -102,10 +102,10 @@
   @Test
   public void testGetProperties() {
     Predicate<String> all = Predicates.alwaysTrue();
-    Map<String,String> props = new java.util.HashMap<String,String>();
+    Map<String,String> props = new java.util.HashMap<>();
     parent.getProperties(props, all);
     replay(parent);
-    List<String> children = new java.util.ArrayList<String>();
+    List<String> children = new java.util.ArrayList<>();
     children.add("foo");
     children.add("ding");
     expect(zc.getChildren(ZooUtil.getRoot(iid) + Constants.ZTABLES + "/" + TID + Constants.ZTABLE_CONF)).andReturn(children);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java b/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java
index e90e921..9bd7b90 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/conf/ZooCachePropertyAccessorTest.java
@@ -105,7 +105,7 @@
 
   @Test
   public void testGetProperties() {
-    Map<String,String> props = new java.util.HashMap<String,String>();
+    Map<String,String> props = new java.util.HashMap<>();
     AccumuloConfiguration parent = createMock(AccumuloConfiguration.class);
     @SuppressWarnings("unchecked")
     Predicate<String> filter = createMock(Predicate.class);
@@ -113,7 +113,7 @@
     replay(parent);
     String child1 = "child1";
     String child2 = "child2";
-    List<String> children = new java.util.ArrayList<String>();
+    List<String> children = new java.util.ArrayList<>();
     children.add(child1);
     children.add(child2);
     expect(zc.getChildren(PATH)).andReturn(children);
@@ -132,7 +132,7 @@
 
   @Test
   public void testGetProperties_NoChildren() {
-    Map<String,String> props = new java.util.HashMap<String,String>();
+    Map<String,String> props = new java.util.HashMap<>();
     AccumuloConfiguration parent = createMock(AccumuloConfiguration.class);
     @SuppressWarnings("unchecked")
     Predicate<String> filter = createMock(Predicate.class);
@@ -147,14 +147,14 @@
 
   @Test
   public void testGetProperties_Filter() {
-    Map<String,String> props = new java.util.HashMap<String,String>();
+    Map<String,String> props = new java.util.HashMap<>();
     AccumuloConfiguration parent = createMock(AccumuloConfiguration.class);
     @SuppressWarnings("unchecked")
     Predicate<String> filter = createMock(Predicate.class);
     parent.getProperties(props, filter);
     replay(parent);
     String child1 = "child1";
-    List<String> children = new java.util.ArrayList<String>();
+    List<String> children = new java.util.ArrayList<>();
     children.add(child1);
     expect(zc.getChildren(PATH)).andReturn(children);
     replay(zc);
@@ -167,7 +167,7 @@
 
   @Test
   public void testGetProperties_ParentFilter() {
-    Map<String,String> props = new java.util.HashMap<String,String>();
+    Map<String,String> props = new java.util.HashMap<>();
     AccumuloConfiguration parent = createMock(AccumuloConfiguration.class);
     @SuppressWarnings("unchecked")
     Predicate<String> filter = createMock(Predicate.class);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/fs/FileRefTest.java b/server/base/src/test/java/org/apache/accumulo/server/fs/FileRefTest.java
index 402f689..14ca20c 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/fs/FileRefTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/fs/FileRefTest.java
@@ -60,7 +60,7 @@
     Assert.assertNotEquals(new FileRef("hdfs://1.2.3.4/accumulo/tables/2a/t-0003/C0005.rf"), new FileRef("hdfs://nn1/accumulo/tables/2a/t-0003/C0004.rf"));
     Assert.assertNotEquals(new FileRef("hdfs://nn1/accumulo/tables/2a/t-0003/C0005.rf"), new FileRef("hdfs://nn1/accumulo/tables/2a/t-0003/C0004.rf"));
 
-    HashMap<FileRef,String> refMap = new HashMap<FileRef,String>();
+    HashMap<FileRef,String> refMap = new HashMap<>();
     refMap.put(new FileRef("hdfs://1.2.3.4/accumulo/tables/2a/t-0003/C0004.rf"), "7");
     refMap.put(new FileRef("hdfs://nn1/accumulo/tables/2a/t-0003/C0005.rf"), "8");
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeUtilTest.java b/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeUtilTest.java
index 9fcd3e0..d04b124 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeUtilTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/fs/VolumeUtilTest.java
@@ -42,10 +42,10 @@
 
   @Test
   public void testSwitchVolume() {
-    List<Pair<Path,Path>> replacements = new ArrayList<Pair<Path,Path>>();
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1/accumulo"), new Path("viewfs:/a/accumulo")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1:9000/accumulo"), new Path("viewfs:/a/accumulo")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn2/accumulo"), new Path("viewfs:/b/accumulo")));
+    List<Pair<Path,Path>> replacements = new ArrayList<>();
+    replacements.add(new Pair<>(new Path("hdfs://nn1/accumulo"), new Path("viewfs:/a/accumulo")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1:9000/accumulo"), new Path("viewfs:/a/accumulo")));
+    replacements.add(new Pair<>(new Path("hdfs://nn2/accumulo"), new Path("viewfs:/b/accumulo")));
 
     Assert.assertEquals("viewfs:/a/accumulo/tables/t-00000/C000.rf",
         VolumeUtil.switchVolume("hdfs://nn1/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
@@ -57,9 +57,9 @@
     Assert.assertNull(VolumeUtil.switchVolume("file:/nn1/a/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
 
     replacements.clear();
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1/d1/accumulo"), new Path("viewfs:/a/accumulo")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1:9000/d1/accumulo"), new Path("viewfs:/a/accumulo")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn2/d2/accumulo"), new Path("viewfs:/b/accumulo")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1/d1/accumulo"), new Path("viewfs:/a/accumulo")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1:9000/d1/accumulo"), new Path("viewfs:/a/accumulo")));
+    replacements.add(new Pair<>(new Path("hdfs://nn2/d2/accumulo"), new Path("viewfs:/b/accumulo")));
 
     Assert.assertEquals("viewfs:/a/accumulo/tables/t-00000/C000.rf",
         VolumeUtil.switchVolume("hdfs://nn1/d1/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
@@ -74,10 +74,10 @@
 
   @Test
   public void testSwitchVolumesDifferentSourceDepths() {
-    List<Pair<Path,Path>> replacements = new ArrayList<Pair<Path,Path>>();
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1/accumulo"), new Path("viewfs:/a")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1:9000/accumulo"), new Path("viewfs:/a")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn2/accumulo"), new Path("viewfs:/b")));
+    List<Pair<Path,Path>> replacements = new ArrayList<>();
+    replacements.add(new Pair<>(new Path("hdfs://nn1/accumulo"), new Path("viewfs:/a")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1:9000/accumulo"), new Path("viewfs:/a")));
+    replacements.add(new Pair<>(new Path("hdfs://nn2/accumulo"), new Path("viewfs:/b")));
 
     Assert
         .assertEquals("viewfs:/a/tables/t-00000/C000.rf", VolumeUtil.switchVolume("hdfs://nn1/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
@@ -89,9 +89,9 @@
     Assert.assertNull(VolumeUtil.switchVolume("file:/nn1/a/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
 
     replacements.clear();
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1/d1/accumulo"), new Path("viewfs:/a")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1:9000/d1/accumulo"), new Path("viewfs:/a")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn2/d2/accumulo"), new Path("viewfs:/b")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1/d1/accumulo"), new Path("viewfs:/a")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1:9000/d1/accumulo"), new Path("viewfs:/a")));
+    replacements.add(new Pair<>(new Path("hdfs://nn2/d2/accumulo"), new Path("viewfs:/b")));
 
     Assert.assertEquals("viewfs:/a/tables/t-00000/C000.rf",
         VolumeUtil.switchVolume("hdfs://nn1/d1/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
@@ -106,10 +106,10 @@
 
   @Test
   public void testSwitchVolumesDifferentTargetDepths() {
-    List<Pair<Path,Path>> replacements = new ArrayList<Pair<Path,Path>>();
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1/accumulo"), new Path("viewfs:/path1/path2")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1:9000/accumulo"), new Path("viewfs:/path1/path2")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn2/accumulo"), new Path("viewfs:/path3")));
+    List<Pair<Path,Path>> replacements = new ArrayList<>();
+    replacements.add(new Pair<>(new Path("hdfs://nn1/accumulo"), new Path("viewfs:/path1/path2")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1:9000/accumulo"), new Path("viewfs:/path1/path2")));
+    replacements.add(new Pair<>(new Path("hdfs://nn2/accumulo"), new Path("viewfs:/path3")));
 
     Assert.assertEquals("viewfs:/path1/path2/tables/t-00000/C000.rf",
         VolumeUtil.switchVolume("hdfs://nn1/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
@@ -121,9 +121,9 @@
     Assert.assertNull(VolumeUtil.switchVolume("file:/nn1/a/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
 
     replacements.clear();
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1/d1/accumulo"), new Path("viewfs:/path1/path2")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn1:9000/d1/accumulo"), new Path("viewfs:/path1/path2")));
-    replacements.add(new Pair<Path,Path>(new Path("hdfs://nn2/d2/accumulo"), new Path("viewfs:/path3")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1/d1/accumulo"), new Path("viewfs:/path1/path2")));
+    replacements.add(new Pair<>(new Path("hdfs://nn1:9000/d1/accumulo"), new Path("viewfs:/path1/path2")));
+    replacements.add(new Pair<>(new Path("hdfs://nn2/d2/accumulo"), new Path("viewfs:/path3")));
 
     Assert.assertEquals("viewfs:/path1/path2/tables/t-00000/C000.rf",
         VolumeUtil.switchVolume("hdfs://nn1/d1/accumulo/tables/t-00000/C000.rf", FileType.TABLE, replacements));
@@ -199,9 +199,9 @@
 
   @Test
   public void testRootTableReplacement() throws IOException {
-    List<Pair<Path,Path>> replacements = new ArrayList<Pair<Path,Path>>();
-    replacements.add(new Pair<Path,Path>(new Path("file:/foo/v1"), new Path("file:/foo/v8")));
-    replacements.add(new Pair<Path,Path>(new Path("file:/foo/v2"), new Path("file:/foo/v9")));
+    List<Pair<Path,Path>> replacements = new ArrayList<>();
+    replacements.add(new Pair<>(new Path("file:/foo/v1"), new Path("file:/foo/v8")));
+    replacements.add(new Pair<>(new Path("file:/foo/v2"), new Path("file:/foo/v9")));
 
     FileType ft = FileType.TABLE;
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilterTest.java b/server/base/src/test/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilterTest.java
index ed662a5..a9a1f7b 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilterTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/iterators/MetadataBulkLoadFilterTest.java
@@ -21,19 +21,16 @@
 import java.util.HashMap;
 import java.util.TreeMap;
 
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.SortedMapIterator;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.ColumnFQ;
 import org.apache.accumulo.fate.zookeeper.TransactionWatcher.Arbitrator;
 import org.apache.hadoop.io.Text;
@@ -81,8 +78,8 @@
 
   @Test
   public void testBasic() throws IOException {
-    TreeMap<Key,Value> tm1 = new TreeMap<Key,Value>();
-    TreeMap<Key,Value> expected = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm1 = new TreeMap<>();
+    TreeMap<Key,Value> expected = new TreeMap<>();
 
     // following should not be deleted by filter
     put(tm1, "2;m", TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN, "/t1");
@@ -105,20 +102,7 @@
     put(tm1, "2<", TabletsSection.BulkFileColumnFamily.NAME, "/t2/fileA", "2");
 
     TestMetadataBulkLoadFilter iter = new TestMetadataBulkLoadFilter();
-    iter.init(new SortedMapIterator(tm1), new HashMap<String,String>(), new IteratorEnvironment() {
-
-      @Override
-      public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
-        return null;
-      }
-
-      @Override
-      public void registerSideChannel(SortedKeyValueIterator<Key,Value> iter) {}
-
-      @Override
-      public Authorizations getAuthorizations() {
-        return null;
-      }
+    iter.init(new SortedMapIterator(tm1), new HashMap<String,String>(), new BaseIteratorEnvironment() {
 
       @Override
       public boolean isFullMajorCompaction() {
@@ -129,16 +113,11 @@
       public IteratorScope getIteratorScope() {
         return IteratorScope.majc;
       }
-
-      @Override
-      public AccumuloConfiguration getConfig() {
-        return null;
-      }
     });
 
     iter.seek(new Range(), new ArrayList<ByteSequence>(), false);
 
-    TreeMap<Key,Value> actual = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> actual = new TreeMap<>();
 
     while (iter.hasTop()) {
       actual.put(iter.getTopKey(), iter.getTopValue());
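The `MetadataBulkLoadFilterTest` hunk above replaces a bulky anonymous `IteratorEnvironment` (which had to stub every interface method) with `BaseIteratorEnvironment`, a base class that supplies defaults so the test overrides only the two methods it cares about. A generic sketch of that adapter pattern, with illustrative names rather than the actual Accumulo API:

```java
public class AdapterDemo {
    interface Env {
        boolean isFullMajorCompaction();
        String scope();
    }

    // Base implementation that stubs every method; tests subclass it and
    // override only what they need, as BaseIteratorEnvironment does above.
    static class BaseEnv implements Env {
        public boolean isFullMajorCompaction() {
            throw new UnsupportedOperationException();
        }
        public String scope() {
            throw new UnsupportedOperationException();
        }
    }

    public static void main(String[] args) {
        Env env = new BaseEnv() {
            @Override
            public boolean isFullMajorCompaction() {
                return false;
            }
            @Override
            public String scope() {
                return "majc";
            }
        };
        System.out.println(env.scope()); // prints majc
    }
}
```

Throwing from the base class's unused methods, rather than returning `null`, makes a test fail loudly if the code under test unexpectedly calls something the test did not stub.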
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
index 144d1fc..7738c3a 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/BaseHostRegexTableLoadBalancerTest.java
@@ -217,23 +217,23 @@
     allTabletServers.put(new TServerInstance("192.168.0.15:9997", 1), new TabletServerStatus());
 
     tableExtents.put(FOO.getTableName(), new ArrayList<KeyExtent>());
-    tableExtents.get(FOO.getTableName()).add(new KeyExtent(new Text(FOO.getId()), new Text("1"), new Text("0")));
-    tableExtents.get(FOO.getTableName()).add(new KeyExtent(new Text(FOO.getId()), new Text("2"), new Text("1")));
-    tableExtents.get(FOO.getTableName()).add(new KeyExtent(new Text(FOO.getId()), new Text("3"), new Text("2")));
-    tableExtents.get(FOO.getTableName()).add(new KeyExtent(new Text(FOO.getId()), new Text("4"), new Text("3")));
-    tableExtents.get(FOO.getTableName()).add(new KeyExtent(new Text(FOO.getId()), new Text("5"), new Text("4")));
+    tableExtents.get(FOO.getTableName()).add(new KeyExtent(FOO.getId(), new Text("1"), new Text("0")));
+    tableExtents.get(FOO.getTableName()).add(new KeyExtent(FOO.getId(), new Text("2"), new Text("1")));
+    tableExtents.get(FOO.getTableName()).add(new KeyExtent(FOO.getId(), new Text("3"), new Text("2")));
+    tableExtents.get(FOO.getTableName()).add(new KeyExtent(FOO.getId(), new Text("4"), new Text("3")));
+    tableExtents.get(FOO.getTableName()).add(new KeyExtent(FOO.getId(), new Text("5"), new Text("4")));
     tableExtents.put(BAR.getTableName(), new ArrayList<KeyExtent>());
-    tableExtents.get(BAR.getTableName()).add(new KeyExtent(new Text(BAR.getId()), new Text("11"), new Text("10")));
-    tableExtents.get(BAR.getTableName()).add(new KeyExtent(new Text(BAR.getId()), new Text("12"), new Text("11")));
-    tableExtents.get(BAR.getTableName()).add(new KeyExtent(new Text(BAR.getId()), new Text("13"), new Text("12")));
-    tableExtents.get(BAR.getTableName()).add(new KeyExtent(new Text(BAR.getId()), new Text("14"), new Text("13")));
-    tableExtents.get(BAR.getTableName()).add(new KeyExtent(new Text(BAR.getId()), new Text("15"), new Text("14")));
+    tableExtents.get(BAR.getTableName()).add(new KeyExtent(BAR.getId(), new Text("11"), new Text("10")));
+    tableExtents.get(BAR.getTableName()).add(new KeyExtent(BAR.getId(), new Text("12"), new Text("11")));
+    tableExtents.get(BAR.getTableName()).add(new KeyExtent(BAR.getId(), new Text("13"), new Text("12")));
+    tableExtents.get(BAR.getTableName()).add(new KeyExtent(BAR.getId(), new Text("14"), new Text("13")));
+    tableExtents.get(BAR.getTableName()).add(new KeyExtent(BAR.getId(), new Text("15"), new Text("14")));
     tableExtents.put(BAZ.getTableName(), new ArrayList<KeyExtent>());
-    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(new Text(BAZ.getId()), new Text("21"), new Text("20")));
-    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(new Text(BAZ.getId()), new Text("22"), new Text("21")));
-    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(new Text(BAZ.getId()), new Text("23"), new Text("22")));
-    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(new Text(BAZ.getId()), new Text("24"), new Text("23")));
-    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(new Text(BAZ.getId()), new Text("25"), new Text("24")));
+    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(BAZ.getId(), new Text("21"), new Text("20")));
+    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(BAZ.getId(), new Text("22"), new Text("21")));
+    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(BAZ.getId(), new Text("23"), new Text("22")));
+    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(BAZ.getId(), new Text("24"), new Text("23")));
+    tableExtents.get(BAZ.getTableName()).add(new KeyExtent(BAZ.getId(), new Text("25"), new Text("24")));
   }
 
   protected boolean tabletInBounds(KeyExtent ke, TServerInstance tsi) {
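The hunks above all follow one mechanical migration: in 1.8, `KeyExtent` takes the table ID as a plain `String` rather than a `Text`, so callers drop the `new Text(...)` wrapper on construction and the `.toString()` after `getTableId()`. A minimal stand-in sketch of the new shape (`KeyExtentSketch` is a hypothetical class, not the real Accumulo API):

```java
import java.util.Objects;

// Hypothetical stand-in for the 1.8 KeyExtent signature change: the table
// ID is a plain String, so no Text wrapping or unwrapping is needed.
class KeyExtentSketch {
  private final String tableId;   // was org.apache.hadoop.io.Text in 1.7
  private final String endRow;    // null means "no end row" (last tablet)
  private final String prevEndRow;

  KeyExtentSketch(String tableId, String endRow, String prevEndRow) {
    this.tableId = Objects.requireNonNull(tableId);
    this.endRow = endRow;
    this.prevEndRow = prevEndRow;
  }

  String getTableId() {
    return tableId;   // 1.7 returned Text, forcing .toString() on callers
  }
}

KeyExtentSketch ke = new KeyExtentSketch("a1", "5", "4");
String id = ke.getTableId();   // directly comparable with .equals("a1")
```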
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancerTest.java
index 0fb9182..2697d75 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/ChaoticLoadBalancerTest.java
@@ -44,13 +44,13 @@
 public class ChaoticLoadBalancerTest {
 
   class FakeTServer {
-    List<KeyExtent> extents = new ArrayList<KeyExtent>();
+    List<KeyExtent> extents = new ArrayList<>();
 
     TabletServerStatus getStatus(TServerInstance server) {
       TabletServerStatus result = new TabletServerStatus();
-      result.tableMap = new HashMap<String,TableInfo>();
+      result.tableMap = new HashMap<>();
       for (KeyExtent extent : extents) {
-        String table = extent.getTableId().toString();
+        String table = extent.getTableId();
         TableInfo info = result.tableMap.get(table);
         if (info == null)
           result.tableMap.put(table, info = new TableInfo());
@@ -63,15 +63,15 @@
     }
   }
 
-  Map<TServerInstance,FakeTServer> servers = new HashMap<TServerInstance,FakeTServer>();
+  Map<TServerInstance,FakeTServer> servers = new HashMap<>();
 
   class TestChaoticLoadBalancer extends ChaoticLoadBalancer {
 
     @Override
     public List<TabletStats> getOnlineTabletsForTable(TServerInstance tserver, String table) throws ThriftSecurityException, TException {
-      List<TabletStats> result = new ArrayList<TabletStats>();
+      List<TabletStats> result = new ArrayList<>();
       for (KeyExtent extent : servers.get(tserver).extents) {
-        if (extent.getTableId().toString().equals(table)) {
+        if (extent.getTableId().equals(table)) {
           result.add(new TabletStats(extent.toThrift(), null, null, null, 0l, 0., 0., 0));
         }
       }
@@ -85,7 +85,7 @@
     servers.put(new TServerInstance(HostAndPort.fromParts("127.0.0.1", 1234), "a"), new FakeTServer());
     servers.put(new TServerInstance(HostAndPort.fromParts("127.0.0.1", 1235), "b"), new FakeTServer());
     servers.put(new TServerInstance(HostAndPort.fromParts("127.0.0.1", 1236), "c"), new FakeTServer());
-    Map<KeyExtent,TServerInstance> metadataTable = new TreeMap<KeyExtent,TServerInstance>();
+    Map<KeyExtent,TServerInstance> metadataTable = new TreeMap<>();
     String table = "t1";
     metadataTable.put(makeExtent(table, null, null), null);
     table = "t2";
@@ -101,19 +101,19 @@
 
     TestChaoticLoadBalancer balancer = new TestChaoticLoadBalancer();
 
-    SortedMap<TServerInstance,TabletServerStatus> current = new TreeMap<TServerInstance,TabletServerStatus>();
+    SortedMap<TServerInstance,TabletServerStatus> current = new TreeMap<>();
     for (Entry<TServerInstance,FakeTServer> entry : servers.entrySet()) {
       current.put(entry.getKey(), entry.getValue().getStatus(entry.getKey()));
     }
 
-    Map<KeyExtent,TServerInstance> assignments = new HashMap<KeyExtent,TServerInstance>();
+    Map<KeyExtent,TServerInstance> assignments = new HashMap<>();
     balancer.getAssignments(getAssignments(servers), metadataTable, assignments);
 
     assertEquals(assignments.size(), metadataTable.size());
   }
 
   SortedMap<TServerInstance,TabletServerStatus> getAssignments(Map<TServerInstance,FakeTServer> servers) {
-    SortedMap<TServerInstance,TabletServerStatus> result = new TreeMap<TServerInstance,TabletServerStatus>();
+    SortedMap<TServerInstance,TabletServerStatus> result = new TreeMap<>();
     for (Entry<TServerInstance,FakeTServer> entry : servers.entrySet()) {
       result.put(entry.getKey(), entry.getValue().getStatus(entry.getKey()));
     }
@@ -147,14 +147,14 @@
     Set<KeyExtent> migrations = Collections.emptySet();
 
     // Just want to make sure it gets some migrations, randomness prevents guarantee of a defined amount, or even expected amount
-    List<TabletMigration> migrationsOut = new ArrayList<TabletMigration>();
+    List<TabletMigration> migrationsOut = new ArrayList<>();
     while (migrationsOut.size() != 0) {
       balancer.balance(getAssignments(servers), migrations, migrationsOut);
     }
   }
 
   private static KeyExtent makeExtent(String table, String end, String prev) {
-    return new KeyExtent(new Text(table), toText(end), toText(prev));
+    return new KeyExtent(table, toText(end), toText(prev));
   }
 
   private static Text toText(String value) {
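One pre-existing wrinkle in the ChaoticLoadBalancerTest hunk above: the migration loop guards on `migrationsOut.size() != 0`, but `migrationsOut` is created empty on the line immediately before, so the guard is false on the first check and `balancer.balance(...)` is never invoked; the adjacent comment suggests the intended guard was `== 0` (loop until some migrations appear). A self-contained demonstration of the never-entered loop:

```java
import java.util.ArrayList;
import java.util.List;

// migrationsOut is created empty, so a guard of size() != 0 is false on
// the first check and the loop body never executes.
List<String> migrationsOut = new ArrayList<>();
int calls = 0;
while (migrationsOut.size() != 0) {
  calls++;   // stands in for balancer.balance(...)
}
// calls remains 0: the balancer is never exercised by this loop
```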
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancerTest.java
index aee1795..e0bd2d1 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/DefaultLoadBalancerTest.java
@@ -48,13 +48,13 @@
 public class DefaultLoadBalancerTest {
 
   class FakeTServer {
-    List<KeyExtent> extents = new ArrayList<KeyExtent>();
+    List<KeyExtent> extents = new ArrayList<>();
 
     TabletServerStatus getStatus(TServerInstance server) {
       TabletServerStatus result = new TabletServerStatus();
-      result.tableMap = new HashMap<String,TableInfo>();
+      result.tableMap = new HashMap<>();
       for (KeyExtent extent : extents) {
-        String table = extent.getTableId().toString();
+        String table = extent.getTableId();
         TableInfo info = result.tableMap.get(table);
         if (info == null)
           result.tableMap.put(table, info = new TableInfo());
@@ -67,16 +67,16 @@
     }
   }
 
-  Map<TServerInstance,FakeTServer> servers = new HashMap<TServerInstance,FakeTServer>();
-  Map<KeyExtent,TServerInstance> last = new HashMap<KeyExtent,TServerInstance>();
+  Map<TServerInstance,FakeTServer> servers = new HashMap<>();
+  Map<KeyExtent,TServerInstance> last = new HashMap<>();
 
   class TestDefaultLoadBalancer extends DefaultLoadBalancer {
 
     @Override
     public List<TabletStats> getOnlineTabletsForTable(TServerInstance tserver, String table) throws ThriftSecurityException, TException {
-      List<TabletStats> result = new ArrayList<TabletStats>();
+      List<TabletStats> result = new ArrayList<>();
       for (KeyExtent extent : servers.get(tserver).extents) {
-        if (extent.getTableId().toString().equals(table)) {
+        if (extent.getTableId().equals(table)) {
           result.add(new TabletStats(extent.toThrift(), null, null, null, 0l, 0., 0., 0));
         }
       }
@@ -95,7 +95,7 @@
     servers.put(new TServerInstance(HostAndPort.fromParts("127.0.0.1", 1234), "a"), new FakeTServer());
     servers.put(new TServerInstance(HostAndPort.fromParts("127.0.0.2", 1234), "b"), new FakeTServer());
     servers.put(new TServerInstance(HostAndPort.fromParts("127.0.0.3", 1234), "c"), new FakeTServer());
-    List<KeyExtent> metadataTable = new ArrayList<KeyExtent>();
+    List<KeyExtent> metadataTable = new ArrayList<>();
     String table = "t1";
     metadataTable.add(makeExtent(table, null, null));
     table = "t2";
@@ -112,14 +112,14 @@
 
     TestDefaultLoadBalancer balancer = new TestDefaultLoadBalancer();
 
-    SortedMap<TServerInstance,TabletServerStatus> current = new TreeMap<TServerInstance,TabletServerStatus>();
+    SortedMap<TServerInstance,TabletServerStatus> current = new TreeMap<>();
     for (Entry<TServerInstance,FakeTServer> entry : servers.entrySet()) {
       current.put(entry.getKey(), entry.getValue().getStatus(entry.getKey()));
     }
     assignTablets(metadataTable, servers, current, balancer);
 
     // Verify that the counts on the tables are correct
-    Map<String,Integer> expectedCounts = new HashMap<String,Integer>();
+    Map<String,Integer> expectedCounts = new HashMap<>();
     expectedCounts.put("t1", 1);
     expectedCounts.put("t2", 1);
     expectedCounts.put("t3", 2);
@@ -131,7 +131,7 @@
     }
 
     // Nothing should happen, we are balanced
-    ArrayList<TabletMigration> out = new ArrayList<TabletMigration>();
+    ArrayList<TabletMigration> out = new ArrayList<>();
     balancer.getMigrations(current, out);
     assertEquals(out.size(), 0);
 
@@ -158,7 +158,7 @@
   }
 
   SortedMap<TServerInstance,TabletServerStatus> getAssignments(Map<TServerInstance,FakeTServer> servers) {
-    SortedMap<TServerInstance,TabletServerStatus> result = new TreeMap<TServerInstance,TabletServerStatus>();
+    SortedMap<TServerInstance,TabletServerStatus> result = new TreeMap<>();
     for (Entry<TServerInstance,FakeTServer> entry : servers.entrySet()) {
       result.put(entry.getKey(), entry.getValue().getStatus(entry.getKey()));
     }
@@ -169,7 +169,7 @@
   public void testUnevenAssignment() {
     for (char c : "abcdefghijklmnopqrstuvwxyz".toCharArray()) {
       String cString = Character.toString(c);
-      HostAndPort fakeAddress = HostAndPort.fromParts("127.0.0.1", (int) c);
+      HostAndPort fakeAddress = HostAndPort.fromParts("127.0.0.1", c);
       String fakeInstance = cString;
       TServerInstance tsi = new TServerInstance(fakeAddress, fakeInstance);
       FakeTServer fakeTServer = new FakeTServer();
@@ -192,7 +192,7 @@
     int moved = 0;
     // balance until we can't balance no more!
     while (true) {
-      List<TabletMigration> migrationsOut = new ArrayList<TabletMigration>();
+      List<TabletMigration> migrationsOut = new ArrayList<>();
       balancer.balance(getAssignments(servers), migrations, migrationsOut);
       if (migrationsOut.size() == 0)
         break;
@@ -210,14 +210,14 @@
     // make 26 servers
     for (char c : "abcdefghijklmnopqrstuvwxyz".toCharArray()) {
       String cString = Character.toString(c);
-      HostAndPort fakeAddress = HostAndPort.fromParts("127.0.0.1", (int) c);
+      HostAndPort fakeAddress = HostAndPort.fromParts("127.0.0.1", c);
       String fakeInstance = cString;
       TServerInstance tsi = new TServerInstance(fakeAddress, fakeInstance);
       FakeTServer fakeTServer = new FakeTServer();
       servers.put(tsi, fakeTServer);
     }
     // put 60 tablets on 25 of them
-    List<Entry<TServerInstance,FakeTServer>> shortList = new ArrayList<Entry<TServerInstance,FakeTServer>>(servers.entrySet());
+    List<Entry<TServerInstance,FakeTServer>> shortList = new ArrayList<>(servers.entrySet());
     Entry<TServerInstance,FakeTServer> shortServer = shortList.remove(0);
     int c = 0;
     for (int i = 0; i < 60; i++) {
@@ -235,7 +235,7 @@
     int moved = 0;
     // balance until we can't balance no more!
     while (true) {
-      List<TabletMigration> migrationsOut = new ArrayList<TabletMigration>();
+      List<TabletMigration> migrationsOut = new ArrayList<>();
       balancer.balance(getAssignments(servers), migrations, migrationsOut);
       if (migrationsOut.size() == 0)
         break;
@@ -264,9 +264,9 @@
 
     if (expectedCounts != null) {
       for (FakeTServer server : servers.values()) {
-        Map<String,Integer> counts = new HashMap<String,Integer>();
+        Map<String,Integer> counts = new HashMap<>();
         for (KeyExtent extent : server.extents) {
-          String t = extent.getTableId().toString();
+          String t = extent.getTableId();
           if (counts.get(t) == null)
             counts.put(t, 0);
           counts.put(t, counts.get(t) + 1);
@@ -279,7 +279,7 @@
   }
 
   private static KeyExtent makeExtent(String table, String end, String prev) {
-    return new KeyExtent(new Text(table), toText(end), toText(prev));
+    return new KeyExtent(table, toText(end), toText(prev));
   }
 
   private static Text toText(String value) {
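The `HostAndPort.fromParts("127.0.0.1", (int) c)` → `fromParts("127.0.0.1", c)` changes in the hunks above rely on the fact that a `char` converts to `int` implicitly via a widening primitive conversion, so the explicit cast added nothing:

```java
// A char widens to int implicitly, so (int) c is redundant when the
// target parameter type is int.
char c = 'a';
int port = c;         // implicit widening, value 97
int cast = (int) c;   // same value; the cast adds nothing
```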
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
index 5de4923..f6c4e0d 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
@@ -67,7 +67,7 @@
 
     public void addTablet(String er, String location) {
       TServerInstance tsi = new TServerInstance(location, 6);
-      tabletLocs.put(new KeyExtent(new Text("b"), er == null ? null : new Text(er), null), new TServerInstance(location, 6));
+      tabletLocs.put(new KeyExtent("b", er == null ? null : new Text(er), null), new TServerInstance(location, 6));
       tservers.add(tsi);
     }
 
@@ -84,7 +84,7 @@
 
             @Override
             public Pair<KeyExtent,Location> apply(final Entry<KeyExtent,TServerInstance> input) {
-              return new Pair<KeyExtent,Location>(input.getKey(), new Location(input.getValue()));
+              return new Pair<>(input.getKey(), new Location(input.getValue()));
             }
           });
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java
index eff9a11..504b39b 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerReconfigurationTest.java
@@ -70,8 +70,8 @@
         Assert.fail("tablet not in bounds: " + e.getKey() + " -> " + e.getValue().host());
       }
     }
-    Set<KeyExtent> migrations = new HashSet<KeyExtent>();
-    List<TabletMigration> migrationsOut = new ArrayList<TabletMigration>();
+    Set<KeyExtent> migrations = new HashSet<>();
+    List<TabletMigration> migrationsOut = new ArrayList<>();
     // Wait to trigger the out of bounds check which will call our version of getOnlineTabletsForTable
     UtilWaitThread.sleep(11000);
     this.balance(Collections.unmodifiableSortedMap(allTabletServers), migrations, migrationsOut);
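The bulk of the remaining one-line hunks swap explicit generic arguments for the Java 7 diamond operator. The compiler infers the type parameters from the declaration's left-hand side, so both forms construct identical collections:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Pre-Java-7 form: type arguments repeated on the constructor.
Map<String,List<Integer>> verbose = new HashMap<String,List<Integer>>();
// Diamond form: the compiler infers <String,List<Integer>> from the target.
Map<String,List<Integer>> diamond = new HashMap<>();
diamond.put("t1", new ArrayList<>());
```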
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
index 868ac0a..c0ccc48 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancerTest.java
@@ -325,7 +325,7 @@
     }
     SortedMap<TServerInstance,TabletServerStatus> current = createCurrent(15);
     // Remove the BAR tablet servers from current
-    List<TServerInstance> removals = new ArrayList<TServerInstance>();
+    List<TServerInstance> removals = new ArrayList<>();
     for (Entry<TServerInstance,TabletServerStatus> e : current.entrySet()) {
       if (e.getKey().host().equals("192.168.0.6") || e.getKey().host().equals("192.168.0.7") || e.getKey().host().equals("192.168.0.8")
           || e.getKey().host().equals("192.168.0.9") || e.getKey().host().equals("192.168.0.10")) {
@@ -350,8 +350,8 @@
     init((ServerConfiguration) factory);
     // Wait to trigger the out of bounds check which will call our version of getOnlineTabletsForTable
     UtilWaitThread.sleep(11000);
-    Set<KeyExtent> migrations = new HashSet<KeyExtent>();
-    List<TabletMigration> migrationsOut = new ArrayList<TabletMigration>();
+    Set<KeyExtent> migrations = new HashSet<>();
+    List<TabletMigration> migrationsOut = new ArrayList<>();
     this.balance(createCurrent(15), migrations, migrationsOut);
     Assert.assertEquals(2, migrationsOut.size());
   }
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java
index c957009..f34cc3d 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/TableLoadBalancerTest.java
@@ -24,12 +24,11 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.UUID;
 
-import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.impl.KeyExtent;
@@ -42,20 +41,24 @@
 import org.apache.accumulo.server.master.state.TabletMigration;
 import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
+import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Test;
 
+import com.google.common.collect.ImmutableMap;
 import com.google.common.net.HostAndPort;
 
 public class TableLoadBalancerTest {
 
+  private static Map<String,String> TABLE_ID_MAP = ImmutableMap.of("t1", "a1", "t2", "b12", "t3", "c4");
+
   static private TServerInstance mkts(String address, String session) throws Exception {
     return new TServerInstance(HostAndPort.fromParts(address, 1234), session);
   }
 
   static private TabletServerStatus status(Object... config) {
     TabletServerStatus result = new TabletServerStatus();
-    result.tableMap = new HashMap<String,TableInfo>();
+    result.tableMap = new HashMap<>();
     String tablename = null;
     for (Object c : config) {
       if (c instanceof String) {
@@ -71,18 +74,16 @@
     return result;
   }
 
-  static MockInstance instance = new MockInstance("mockamatic");
-
   static SortedMap<TServerInstance,TabletServerStatus> state;
 
   static List<TabletStats> generateFakeTablets(TServerInstance tserver, String tableId) {
-    List<TabletStats> result = new ArrayList<TabletStats>();
+    List<TabletStats> result = new ArrayList<>();
     TabletServerStatus tableInfo = state.get(tserver);
     // generate some fake tablets
     for (int i = 0; i < tableInfo.tableMap.get(tableId).onlineTablets; i++) {
       TabletStats stats = new TabletStats();
-      stats.extent = new KeyExtent(new Text(tableId), new Text(tserver.host() + String.format("%03d", i + 1)), new Text(tserver.host()
-          + String.format("%03d", i))).toThrift();
+      stats.extent = new KeyExtent(tableId, new Text(tserver.host() + String.format("%03d", i + 1)), new Text(tserver.host() + String.format("%03d", i)))
+          .toThrift();
       result.add(stats);
     }
     return result;
@@ -95,6 +96,9 @@
     }
 
     @Override
+    public void init(ServerConfigurationFactory conf) {}
+
+    @Override
     public List<TabletStats> getOnlineTabletsForTable(TServerInstance tserver, String tableId) throws ThriftSecurityException, TException {
       return generateFakeTablets(tserver, tableId);
     }
@@ -107,6 +111,9 @@
       super();
     }
 
+    @Override
+    public void init(ServerConfigurationFactory conf) {}
+
     // use our new classname to test class loading
     @Override
     protected String getLoadBalancerClassNameForTable(String table) {
@@ -118,15 +125,26 @@
     public List<TabletStats> getOnlineTabletsForTable(TServerInstance tserver, String tableId) throws ThriftSecurityException, TException {
       return generateFakeTablets(tserver, tableId);
     }
+
+    @Override
+    protected TableOperations getTableOperations() {
+      TableOperations tops = EasyMock.createMock(TableOperations.class);
+      EasyMock.expect(tops.tableIdMap()).andReturn(TABLE_ID_MAP).anyTimes();
+      EasyMock.replay(tops);
+      return tops;
+    }
   }
 
   @Test
   public void test() throws Exception {
-    Connector c = instance.getConnector("user", new PasswordToken("pass"));
-    ServerConfigurationFactory confFactory = new ServerConfigurationFactory(instance) {
+    final Instance inst = EasyMock.createMock(Instance.class);
+    EasyMock.expect(inst.getInstanceID()).andReturn(UUID.nameUUIDFromBytes(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 0}).toString()).anyTimes();
+    EasyMock.replay(inst);
+
+    ServerConfigurationFactory confFactory = new ServerConfigurationFactory(inst) {
       @Override
       public TableConfiguration getTableConfiguration(String tableId) {
-        return new TableConfiguration(instance, tableId, null) {
+        return new TableConfiguration(inst, tableId, null) {
           @Override
           public String get(Property property) {
             // fake the get table configuration so the test doesn't try to look in zookeeper for per-table classpath stuff
@@ -135,17 +153,14 @@
         };
       }
     };
-    TableOperations tops = c.tableOperations();
-    tops.create("t1");
-    tops.create("t2");
-    tops.create("t3");
-    String t1Id = tops.tableIdMap().get("t1"), t2Id = tops.tableIdMap().get("t2"), t3Id = tops.tableIdMap().get("t3");
-    state = new TreeMap<TServerInstance,TabletServerStatus>();
+
+    String t1Id = TABLE_ID_MAP.get("t1"), t2Id = TABLE_ID_MAP.get("t2"), t3Id = TABLE_ID_MAP.get("t3");
+    state = new TreeMap<>();
     TServerInstance svr = mkts("10.0.0.1", "0x01020304");
     state.put(svr, status(t1Id, 10, t2Id, 10, t3Id, 10));
 
     Set<KeyExtent> migrations = Collections.emptySet();
-    List<TabletMigration> migrationsOut = new ArrayList<TabletMigration>();
+    List<TabletMigration> migrationsOut = new ArrayList<>();
     TableLoadBalancer tls = new TableLoadBalancer();
     tls.init(confFactory);
     tls.balance(state, migrations, migrationsOut);
@@ -156,14 +171,14 @@
     tls.init(confFactory);
     tls.balance(state, migrations, migrationsOut);
     int count = 0;
-    Map<String,Integer> movedByTable = new HashMap<String,Integer>();
+    Map<String,Integer> movedByTable = new HashMap<>();
     movedByTable.put(t1Id, Integer.valueOf(0));
     movedByTable.put(t2Id, Integer.valueOf(0));
     movedByTable.put(t3Id, Integer.valueOf(0));
     for (TabletMigration migration : migrationsOut) {
       if (migration.oldServer.equals(svr))
         count++;
-      String key = migration.tablet.getTableId().toString();
+      String key = migration.tablet.getTableId();
       movedByTable.put(key, movedByTable.get(key) + 1);
     }
     Assert.assertEquals(15, count);
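The TableLoadBalancerTest hunks above drop the deprecated `MockInstance` in favor of EasyMock stubs plus a fixed `TABLE_ID_MAP`. The essential move, replacing a live table-ID lookup with canned data, can be sketched without any mocking library by using a functional interface stand-in (the names below are hypothetical, not the real Accumulo API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical single-method stand-in for TableOperations.tableIdMap().
interface TableOps {
  Map<String,String> tableIdMap();
}

Map<String,String> canned = new HashMap<>();
canned.put("t1", "a1");
canned.put("t2", "b12");
canned.put("t3", "c4");

// A lambda plays the role of the EasyMock expect(...).andReturn(...)
// stub in the real test: every call returns the canned map.
TableOps tops = () -> canned;
String t1Id = tops.tableIdMap().get("t1");
```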
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/state/MergeInfoTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/state/MergeInfoTest.java
index 1ce5dd8..ae06e53 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/state/MergeInfoTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/state/MergeInfoTest.java
@@ -68,7 +68,7 @@
 
   @Test
   public void testSerialization() throws Exception {
-    Text table = new Text("table");
+    String table = "table";
     Text endRow = new Text("end");
     Text prevEndRow = new Text("begin");
     keyExtent = new KeyExtent(table, endRow, prevEndRow);
@@ -88,10 +88,10 @@
 
   @Test
   public void testNeedsToBeChopped_DifferentTables() {
-    expect(keyExtent.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent.getTableId()).andReturn("table1");
     replay(keyExtent);
     KeyExtent keyExtent2 = createMock(KeyExtent.class);
-    expect(keyExtent2.getTableId()).andReturn(new Text("table2"));
+    expect(keyExtent2.getTableId()).andReturn("table2");
     replay(keyExtent2);
     mi = new MergeInfo(keyExtent, MergeInfo.Operation.MERGE);
     assertFalse(mi.needsToBeChopped(keyExtent2));
@@ -99,9 +99,9 @@
 
   @Test
   public void testNeedsToBeChopped_NotDelete() {
-    expect(keyExtent.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent.getTableId()).andReturn("table1");
     KeyExtent keyExtent2 = createMock(KeyExtent.class);
-    expect(keyExtent2.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent2.getTableId()).andReturn("table1");
     replay(keyExtent2);
     expect(keyExtent.overlaps(keyExtent2)).andReturn(true);
     replay(keyExtent);
@@ -125,11 +125,11 @@
   }
 
   private void testNeedsToBeChopped_Delete(String prevEndRow, boolean expected) {
-    expect(keyExtent.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent.getTableId()).andReturn("table1");
     expect(keyExtent.getEndRow()).andReturn(new Text("prev"));
     replay(keyExtent);
     KeyExtent keyExtent2 = createMock(KeyExtent.class);
-    expect(keyExtent2.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent2.getTableId()).andReturn("table1");
     expect(keyExtent2.getPrevEndRow()).andReturn(prevEndRow != null ? new Text(prevEndRow) : null);
     expectLastCall().anyTimes();
     replay(keyExtent2);
@@ -150,9 +150,9 @@
   public void testOverlaps_DoesNotNeedChopping() {
     KeyExtent keyExtent2 = createMock(KeyExtent.class);
     expect(keyExtent.overlaps(keyExtent2)).andReturn(false);
-    expect(keyExtent.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent.getTableId()).andReturn("table1");
     replay(keyExtent);
-    expect(keyExtent2.getTableId()).andReturn(new Text("table2"));
+    expect(keyExtent2.getTableId()).andReturn("table2");
     replay(keyExtent2);
     mi = new MergeInfo(keyExtent, MergeInfo.Operation.MERGE);
     assertFalse(mi.overlaps(keyExtent2));
@@ -162,10 +162,10 @@
   public void testOverlaps_NeedsChopping() {
     KeyExtent keyExtent2 = createMock(KeyExtent.class);
     expect(keyExtent.overlaps(keyExtent2)).andReturn(false);
-    expect(keyExtent.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent.getTableId()).andReturn("table1");
     expect(keyExtent.getEndRow()).andReturn(new Text("prev"));
     replay(keyExtent);
-    expect(keyExtent2.getTableId()).andReturn(new Text("table1"));
+    expect(keyExtent2.getTableId()).andReturn("table1");
     expect(keyExtent2.getPrevEndRow()).andReturn(new Text("prev"));
     expectLastCall().anyTimes();
     replay(keyExtent2);
@@ -187,7 +187,7 @@
   }
 
   private static KeyExtent ke(String tableId, String endRow, String prevEndRow) {
-    return new KeyExtent(new Text(tableId), endRow == null ? null : new Text(endRow), prevEndRow == null ? null : new Text(prevEndRow));
+    return new KeyExtent(tableId, endRow == null ? null : new Text(endRow), prevEndRow == null ? null : new Text(prevEndRow));
   }
 
   @Test
diff --git a/server/base/src/test/java/org/apache/accumulo/server/master/state/TabletLocationStateTest.java b/server/base/src/test/java/org/apache/accumulo/server/master/state/TabletLocationStateTest.java
index 0a0afd1..bd81267 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/master/state/TabletLocationStateTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/master/state/TabletLocationStateTest.java
@@ -35,8 +35,8 @@
 import org.junit.Test;
 
 public class TabletLocationStateTest {
-  private static final Collection<String> innerWalogs = new java.util.HashSet<String>();
-  private static final Collection<Collection<String>> walogs = new java.util.HashSet<Collection<String>>();
+  private static final Collection<String> innerWalogs = new java.util.HashSet<>();
+  private static final Collection<Collection<String>> walogs = new java.util.HashSet<>();
 
   @BeforeClass
   public static void setUpClass() {
@@ -60,7 +60,7 @@
 
   @Test
   public void testConstruction_NoFuture() throws Exception {
-    tls = new TabletLocationState(keyExtent, null, current, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, current, last, null, walogs, true);
     assertSame(keyExtent, tls.extent);
     assertNull(tls.future);
     assertSame(current, tls.current);
@@ -71,7 +71,7 @@
 
   @Test
   public void testConstruction_NoCurrent() throws Exception {
-    tls = new TabletLocationState(keyExtent, future, null, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, future, null, last, null, walogs, true);
     assertSame(keyExtent, tls.extent);
     assertSame(future, tls.future);
     assertNull(tls.current);
@@ -85,7 +85,7 @@
     expect(keyExtent.getMetadataEntry()).andReturn(new Text("entry"));
     replay(keyExtent);
     try {
-      new TabletLocationState(keyExtent, future, current, last, walogs, true);
+      new TabletLocationState(keyExtent, future, current, last, null, walogs, true);
     } catch (TabletLocationState.BadLocationStateException e) {
       assertEquals(new Text("entry"), e.getEncodedEndRow());
       throw (e);
@@ -94,76 +94,76 @@
 
   @Test
   public void testConstruction_NoFuture_NoWalogs() throws Exception {
-    tls = new TabletLocationState(keyExtent, null, current, last, null, true);
+    tls = new TabletLocationState(keyExtent, null, current, last, null, null, true);
     assertNotNull(tls.walogs);
     assertEquals(0, tls.walogs.size());
   }
 
   @Test
   public void testGetServer_Current() throws Exception {
-    tls = new TabletLocationState(keyExtent, null, current, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, current, last, null, walogs, true);
     assertSame(current, tls.getServer());
   }
 
   @Test
   public void testGetServer_Future() throws Exception {
-    tls = new TabletLocationState(keyExtent, future, null, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, future, null, last, null, walogs, true);
     assertSame(future, tls.getServer());
   }
 
   @Test
   public void testGetServer_Last() throws Exception {
-    tls = new TabletLocationState(keyExtent, null, null, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, null, last, null, walogs, true);
     assertSame(last, tls.getServer());
   }
 
   @Test
   public void testGetServer_None() throws Exception {
-    tls = new TabletLocationState(keyExtent, null, null, null, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, null, null, null, walogs, true);
     assertNull(tls.getServer());
   }
 
   @Test
   public void testGetState_Unassigned1() throws Exception {
-    tls = new TabletLocationState(keyExtent, null, null, null, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, null, null, null, walogs, true);
     assertEquals(TabletState.UNASSIGNED, tls.getState(null));
   }
 
   @Test
   public void testGetState_Unassigned2() throws Exception {
-    tls = new TabletLocationState(keyExtent, null, null, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, null, last, null, walogs, true);
     assertEquals(TabletState.UNASSIGNED, tls.getState(null));
   }
 
   @Test
   public void testGetState_Assigned() throws Exception {
-    Set<TServerInstance> liveServers = new java.util.HashSet<TServerInstance>();
+    Set<TServerInstance> liveServers = new java.util.HashSet<>();
     liveServers.add(future);
-    tls = new TabletLocationState(keyExtent, future, null, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, future, null, last, null, walogs, true);
     assertEquals(TabletState.ASSIGNED, tls.getState(liveServers));
   }
 
   @Test
   public void testGetState_Hosted() throws Exception {
-    Set<TServerInstance> liveServers = new java.util.HashSet<TServerInstance>();
+    Set<TServerInstance> liveServers = new java.util.HashSet<>();
     liveServers.add(current);
-    tls = new TabletLocationState(keyExtent, null, current, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, current, last, null, walogs, true);
     assertEquals(TabletState.HOSTED, tls.getState(liveServers));
   }
 
   @Test
   public void testGetState_Dead1() throws Exception {
-    Set<TServerInstance> liveServers = new java.util.HashSet<TServerInstance>();
+    Set<TServerInstance> liveServers = new java.util.HashSet<>();
     liveServers.add(current);
-    tls = new TabletLocationState(keyExtent, future, null, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, future, null, last, null, walogs, true);
     assertEquals(TabletState.ASSIGNED_TO_DEAD_SERVER, tls.getState(liveServers));
   }
 
   @Test
   public void testGetState_Dead2() throws Exception {
-    Set<TServerInstance> liveServers = new java.util.HashSet<TServerInstance>();
+    Set<TServerInstance> liveServers = new java.util.HashSet<>();
     liveServers.add(future);
-    tls = new TabletLocationState(keyExtent, null, current, last, walogs, true);
+    tls = new TabletLocationState(keyExtent, null, current, last, null, walogs, true);
     assertEquals(TabletState.ASSIGNED_TO_DEAD_SERVER, tls.getState(liveServers));
   }
 }
diff --git a/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportTest.java b/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportTest.java
index 1ca3e8d..3a9cbc0 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportTest.java
@@ -43,7 +43,7 @@
 import org.junit.Test;
 
 public class ProblemReportTest {
-  private static final String TABLE = "table";
+  private static final String TABLE_ID = "table";
   private static final String RESOURCE = "resource";
   private static final String SERVER = "server";
 
@@ -63,8 +63,8 @@
   @Test
   public void testGetters() {
     long now = System.currentTimeMillis();
-    r = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null, now);
-    assertEquals(TABLE, r.getTableName());
+    r = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null, now);
+    assertEquals(TABLE_ID, r.getTableName());
     assertSame(ProblemType.FILE_READ, r.getProblemType());
     assertEquals(RESOURCE, r.getResource());
     assertEquals(SERVER, r.getServer());
@@ -75,43 +75,43 @@
   @Test
   public void testWithException() {
     Exception e = new IllegalArgumentException("Oh noes");
-    r = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, e);
+    r = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, e);
     assertEquals("Oh noes", r.getException());
   }
 
   @Test
   public void testEquals() {
-    r = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null);
+    r = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null);
     assertTrue(r.equals(r));
-    ProblemReport r2 = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null);
+    ProblemReport r2 = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null);
     assertTrue(r.equals(r2));
     assertTrue(r2.equals(r));
-    ProblemReport rx1 = new ProblemReport(TABLE + "x", ProblemType.FILE_READ, RESOURCE, SERVER, null);
+    ProblemReport rx1 = new ProblemReport(TABLE_ID + "x", ProblemType.FILE_READ, RESOURCE, SERVER, null);
     assertFalse(r.equals(rx1));
-    ProblemReport rx2 = new ProblemReport(TABLE, ProblemType.FILE_WRITE, RESOURCE, SERVER, null);
+    ProblemReport rx2 = new ProblemReport(TABLE_ID, ProblemType.FILE_WRITE, RESOURCE, SERVER, null);
     assertFalse(r.equals(rx2));
-    ProblemReport rx3 = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE + "x", SERVER, null);
+    ProblemReport rx3 = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE + "x", SERVER, null);
     assertFalse(r.equals(rx3));
-    ProblemReport re1 = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER + "x", null);
+    ProblemReport re1 = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER + "x", null);
     assertTrue(r.equals(re1));
-    ProblemReport re2 = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, new IllegalArgumentException("yikes"));
+    ProblemReport re2 = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, new IllegalArgumentException("yikes"));
     assertTrue(r.equals(re2));
   }
 
   @Test
   public void testEqualsNull() {
-    r = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null);
+    r = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null);
     assertFalse(r.equals(null));
   }
 
   @Test
   public void testHashCode() {
-    r = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null);
-    ProblemReport r2 = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null);
+    r = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null);
+    ProblemReport r2 = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null);
     assertEquals(r.hashCode(), r2.hashCode());
-    ProblemReport re1 = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER + "x", null);
+    ProblemReport re1 = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER + "x", null);
     assertEquals(r.hashCode(), re1.hashCode());
-    ProblemReport re2 = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, new IllegalArgumentException("yikes"));
+    ProblemReport re2 = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, new IllegalArgumentException("yikes"));
     assertEquals(r.hashCode(), re2.hashCode());
   }
 
@@ -143,8 +143,8 @@
 
   @Test
   public void testRemoveFromZooKeeper() throws Exception {
-    r = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null);
-    byte[] zpathFileName = makeZPathFileName(TABLE, ProblemType.FILE_READ, RESOURCE);
+    r = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null);
+    byte[] zpathFileName = makeZPathFileName(TABLE_ID, ProblemType.FILE_READ, RESOURCE);
     String path = ZooUtil.getRoot("instance") + Constants.ZPROBLEMS + "/" + Encoding.encodeAsBase64FileName(new Text(zpathFileName));
     zoorw.recursiveDelete(path, NodeMissingPolicy.SKIP);
     replay(zoorw);
@@ -156,8 +156,8 @@
   @Test
   public void testSaveToZooKeeper() throws Exception {
     long now = System.currentTimeMillis();
-    r = new ProblemReport(TABLE, ProblemType.FILE_READ, RESOURCE, SERVER, null, now);
-    byte[] zpathFileName = makeZPathFileName(TABLE, ProblemType.FILE_READ, RESOURCE);
+    r = new ProblemReport(TABLE_ID, ProblemType.FILE_READ, RESOURCE, SERVER, null, now);
+    byte[] zpathFileName = makeZPathFileName(TABLE_ID, ProblemType.FILE_READ, RESOURCE);
     String path = ZooUtil.getRoot("instance") + Constants.ZPROBLEMS + "/" + Encoding.encodeAsBase64FileName(new Text(zpathFileName));
     byte[] encoded = encodeReportData(now, SERVER, null);
     expect(zoorw.putPersistentData(eq(path), aryEq(encoded), eq(NodeExistsPolicy.OVERWRITE))).andReturn(true);
@@ -169,7 +169,7 @@
 
   @Test
   public void testDecodeZooKeeperEntry() throws Exception {
-    byte[] zpathFileName = makeZPathFileName(TABLE, ProblemType.FILE_READ, RESOURCE);
+    byte[] zpathFileName = makeZPathFileName(TABLE_ID, ProblemType.FILE_READ, RESOURCE);
     String node = Encoding.encodeAsBase64FileName(new Text(zpathFileName));
     long now = System.currentTimeMillis();
     byte[] encoded = encodeReportData(now, SERVER, "excmsg");
@@ -178,7 +178,7 @@
     replay(zoorw);
 
     r = ProblemReport.decodeZooKeeperEntry(node, zoorw, instance);
-    assertEquals(TABLE, r.getTableName());
+    assertEquals(TABLE_ID, r.getTableName());
     assertSame(ProblemType.FILE_READ, r.getProblemType());
     assertEquals(RESOURCE, r.getResource());
     assertEquals(SERVER, r.getServer());
diff --git a/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportingIteratorTest.java b/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportingIteratorTest.java
index 2e0ad0c..ac91bdf 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportingIteratorTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/problems/ProblemReportingIteratorTest.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.server.problems;
 
-import static org.easymock.EasyMock.createMock;
 import static org.easymock.EasyMock.expect;
 import static org.easymock.EasyMock.replay;
 import static org.easymock.EasyMock.verify;
@@ -28,19 +27,17 @@
 import java.util.Collection;
 import java.util.concurrent.atomic.AtomicBoolean;
 
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.system.InterruptibleIterator;
-import org.apache.accumulo.server.AccumuloServerContext;
-import org.apache.accumulo.server.conf.ServerConfigurationFactory;
+import org.easymock.EasyMock;
 import org.junit.Before;
 import org.junit.Test;
 
 public class ProblemReportingIteratorTest {
-  private static final String TABLE = "table";
+  private static final String TABLE_ID = "table";
   private static final String RESOURCE = "resource";
 
   private InterruptibleIterator ii;
@@ -48,16 +45,15 @@
 
   @Before
   public void setUp() throws Exception {
-    ii = createMock(InterruptibleIterator.class);
-    AccumuloServerContext context = new AccumuloServerContext(new ServerConfigurationFactory(new MockInstance()));
-    pri = new ProblemReportingIterator(context, TABLE, RESOURCE, false, ii);
+    ii = EasyMock.createMock(InterruptibleIterator.class);
+    pri = new ProblemReportingIterator(null, TABLE_ID, RESOURCE, false, ii);
   }
 
   @Test
   public void testBasicGetters() {
-    Key key = createMock(Key.class);
+    Key key = EasyMock.createMock(Key.class);
     expect(ii.getTopKey()).andReturn(key);
-    Value value = createMock(Value.class);
+    Value value = EasyMock.createMock(Value.class);
     expect(ii.getTopValue()).andReturn(value);
     expect(ii.hasTop()).andReturn(true);
     replay(ii);
@@ -84,8 +80,8 @@
 
   @Test
   public void testSeek() throws Exception {
-    Range r = createMock(Range.class);
-    Collection<ByteSequence> f = new java.util.HashSet<ByteSequence>();
+    Range r = EasyMock.createMock(Range.class);
+    Collection<ByteSequence> f = new java.util.HashSet<>();
     ii.seek(r, f, true);
     replay(ii);
     pri.seek(r, f, true);
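
The hunks above replace Accumulo's `MockInstance`-backed setup with `EasyMock.createMock(...)` test doubles. For readers unfamiliar with what such mocks do, here is a rough stdlib-only sketch of the idea (a dynamic proxy that returns pre-programmed values); `SimpleMock` and `TopSupplier` are hypothetical names for this illustration, not part of EasyMock's API:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Rough sketch of what a mocking library does under the hood: a dynamic
// proxy that answers each stubbed method with a canned return value.
public class SimpleMock {
  public interface TopSupplier {
    String getTopKey();
    boolean hasTop();
  }

  @SuppressWarnings("unchecked")
  public static <T> T createMock(Class<T> iface, Map<String,Object> stubs) {
    // Look up the canned value by method name; real libraries also record
    // calls so they can be verified later.
    InvocationHandler handler = (proxy, method, args) -> stubs.get(method.getName());
    return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] {iface}, handler);
  }

  public static void main(String[] args) {
    Map<String,Object> stubs = new HashMap<>();
    stubs.put("getTopKey", "row1");
    stubs.put("hasTop", Boolean.TRUE);
    TopSupplier mock = createMock(TopSupplier.class, stubs);
    if (!mock.getTopKey().equals("row1") || !mock.hasTop())
      throw new AssertionError("stubbed values not returned");
    System.out.println("mock returned stubbed values");
  }
}
```

EasyMock's `expect(...).andReturn(...)` / `replay(...)` / `verify(...)` calls seen in these tests layer call-order recording and verification on top of essentially this mechanism.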
diff --git a/server/base/src/test/java/org/apache/accumulo/server/replication/StatusCombinerTest.java b/server/base/src/test/java/org/apache/accumulo/server/replication/StatusCombinerTest.java
index f4d5a9b..7f70a57 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/replication/StatusCombinerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/replication/StatusCombinerTest.java
@@ -24,16 +24,12 @@
 
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.IteratorSetting.Column;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.Combiner;
 import org.apache.accumulo.core.iterators.DevNull;
-import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.junit.Assert;
 import org.junit.Before;
@@ -45,6 +41,13 @@
   private Key key;
   private Status.Builder builder;
 
+  private static class TestIE extends BaseIteratorEnvironment {
+    @Override
+    public IteratorScope getIteratorScope() {
+      return IteratorScope.scan;
+    }
+  }
+
   @Before
   public void initCombiner() throws IOException {
     key = new Key();
@@ -52,38 +55,7 @@
     builder = Status.newBuilder();
     IteratorSetting cfg = new IteratorSetting(50, StatusCombiner.class);
     Combiner.setColumns(cfg, Collections.singletonList(new Column(StatusSection.NAME)));
-    combiner.init(new DevNull(), cfg.getOptions(), new IteratorEnvironment() {
-
-      @Override
-      public AccumuloConfiguration getConfig() {
-        return null;
-      }
-
-      @Override
-      public IteratorScope getIteratorScope() {
-        return null;
-      }
-
-      @Override
-      public boolean isFullMajorCompaction() {
-        return false;
-      }
-
-      @Override
-      public void registerSideChannel(SortedKeyValueIterator<Key,Value> arg0) {
-
-      }
-
-      @Override
-      public Authorizations getAuthorizations() {
-        return null;
-      }
-
-      @Override
-      public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String arg0) throws IOException {
-        return null;
-      }
-    });
+    combiner.init(new DevNull(), cfg.getOptions(), new TestIE());
   }
 
   @Test
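
The `TestIE` change above swaps a six-method anonymous `IteratorEnvironment` for a two-line subclass of `BaseIteratorEnvironment`, which supplies defaults for everything the test does not care about. The same pattern in miniature, using hypothetical names (`BaseEnv`, `ScanEnv`) rather than Accumulo's real classes:

```java
// Minimal illustration of the "base test environment" pattern: a base class
// gives every interface method a default, and a test subclass overrides only
// the one behavior the test exercises.
public class EnvSketch {
  public interface Env {
    String scope();
    boolean isFullMajorCompaction();
  }

  // Defaults that fail loudly if a test accidentally relies on them.
  public static class BaseEnv implements Env {
    @Override public String scope() { throw new UnsupportedOperationException(); }
    @Override public boolean isFullMajorCompaction() { throw new UnsupportedOperationException(); }
  }

  // The only behavior this test needs: report the scan scope.
  public static class ScanEnv extends BaseEnv {
    @Override public String scope() { return "scan"; }
  }

  public static void main(String[] args) {
    Env env = new ScanEnv();
    if (!env.scope().equals("scan")) throw new AssertionError();
    System.out.println("scope=" + env.scope());
  }
}
```

Failing loudly in the base class (rather than returning `null`, as the deleted anonymous class did) makes it obvious when a test starts depending on an environment method it never stubbed.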
diff --git a/server/base/src/test/java/org/apache/accumulo/server/rpc/RpcWrapperTest.java b/server/base/src/test/java/org/apache/accumulo/server/rpc/RpcWrapperTest.java
index 39d3705..d32178e 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/rpc/RpcWrapperTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/rpc/RpcWrapperTest.java
@@ -22,7 +22,6 @@
 import java.util.Set;
 
 import org.apache.accumulo.core.trace.wrappers.RpcServerInvocationHandler;
-import org.apache.accumulo.server.rpc.RpcWrapper;
 import org.apache.thrift.ProcessFunction;
 import org.apache.thrift.TBase;
 import org.apache.thrift.TException;
@@ -49,12 +48,12 @@
    * @return A ProcessFunction.
    */
   private fake_proc<FakeService> createProcessFunction(String methodName, boolean isOneway) {
-    return new fake_proc<FakeService>(methodName, isOneway);
+    return new fake_proc<>(methodName, isOneway);
   }
 
   @Test
   public void testSomeOnewayMethods() {
-    Map<String,ProcessFunction<FakeService,?>> procs = new HashMap<String,ProcessFunction<FakeService,?>>();
+    Map<String,ProcessFunction<FakeService,?>> procs = new HashMap<>();
     procs.put("foo", createProcessFunction("foo", true));
     procs.put("foobar", createProcessFunction("foobar", false));
     procs.put("bar", createProcessFunction("bar", true));
@@ -66,7 +65,7 @@
 
   @Test
   public void testNoOnewayMethods() {
-    Map<String,ProcessFunction<FakeService,?>> procs = new HashMap<String,ProcessFunction<FakeService,?>>();
+    Map<String,ProcessFunction<FakeService,?>> procs = new HashMap<>();
     procs.put("foo", createProcessFunction("foo", false));
     procs.put("foobar", createProcessFunction("foobar", false));
     procs.put("bar", createProcessFunction("bar", false));
@@ -78,7 +77,7 @@
 
   @Test
   public void testAllOnewayMethods() {
-    Map<String,ProcessFunction<FakeService,?>> procs = new HashMap<String,ProcessFunction<FakeService,?>>();
+    Map<String,ProcessFunction<FakeService,?>> procs = new HashMap<>();
     procs.put("foo", createProcessFunction("foo", true));
     procs.put("foobar", createProcessFunction("foobar", true));
     procs.put("bar", createProcessFunction("bar", true));
@@ -196,7 +195,7 @@
     public long barfoo() {
       return 0;
     }
-  };
+  }
 
   /**
    * A fake ProcessFunction implementation for testing that allows injection of method name and oneway.
diff --git a/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java b/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java
index c4340c6..52eee25 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/rpc/TCredentialsUpdatingInvocationHandlerTest.java
@@ -63,7 +63,7 @@
       }
     };
 
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
   }
 
   @After
@@ -123,7 +123,7 @@
     final String proxyServer = "proxy";
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", "*");
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -134,7 +134,7 @@
     final String proxyServer = "proxy";
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION, proxyServer + ":*");
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION, "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -146,7 +146,7 @@
     final String proxyServer = "proxy";
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", "client1,client2");
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client1", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -159,7 +159,7 @@
     final String proxyServer = "proxy";
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION, proxyServer + ":client1,client2");
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION, "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client1", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -174,7 +174,7 @@
     // let "otherproxy" impersonate, but not "proxy"
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy" + ".users", "*");
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy" + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -186,7 +186,7 @@
     // let "otherproxy" impersonate, but not "proxy"
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION, "otherproxy:*");
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION, "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -201,7 +201,7 @@
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy1" + ".hosts", "*");
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy2" + ".users", "client1,client2");
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + "otherproxy2" + ".hosts", "*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client1", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -213,7 +213,7 @@
     // let "otherproxy" impersonate, but not "proxy"
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION, "otherproxy1:*;otherproxy2:client1,client2");
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION, "*;*");
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client1", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     proxy.updateArgs(new Object[] {new Object(), tcreds});
@@ -225,7 +225,7 @@
     final String proxyServer = "proxy", client = "client", host = "host.domain.com";
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", client);
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", host);
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     TServerUtils.clientAddress.set(host);
@@ -237,7 +237,7 @@
     final String proxyServer = "proxy", client = "client", host = "host.domain.com";
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION, proxyServer + ":" + client);
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION, host);
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     TServerUtils.clientAddress.set(host);
@@ -250,7 +250,7 @@
     final String proxyServer = "proxy", client = "client", host = "host.domain.com";
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".users", client);
     cc.set(Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey() + proxyServer + ".hosts", host);
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     // The RPC came from a different host than is allowed
@@ -263,7 +263,7 @@
     final String proxyServer = "proxy", client = "client", host = "host.domain.com";
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION, proxyServer + ":" + client);
     cc.set(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION, host);
-    proxy = new TCredentialsUpdatingInvocationHandler<Object>(new Object(), conf);
+    proxy = new TCredentialsUpdatingInvocationHandler<>(new Object(), conf);
     TCredentials tcreds = new TCredentials("client", KerberosToken.class.getName(), ByteBuffer.allocate(0), UUID.randomUUID().toString());
     UGIAssumingProcessor.rpcPrincipal.set(proxyServer);
     // The RPC came from a different host than is allowed
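
The tests above exercise two impersonation configuration styles; the consolidated properties use entries of the form `proxy:user1,user2` separated by `;`, with `*` as a wildcard (e.g. `otherproxy1:*;otherproxy2:client1,client2`). A rough stdlib-only sketch of parsing and checking that shape, illustrative only and not Accumulo's actual parser:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ImpersonationSketch {
  // Parses "proxy1:*;proxy2:client1,client2" into proxy -> allowed users.
  // "*" means any user may be impersonated by that proxy.
  public static Map<String,Set<String>> parse(String config) {
    Map<String,Set<String>> allowed = new HashMap<>();
    for (String entry : config.split(";")) {
      String[] parts = entry.split(":", 2);
      allowed.put(parts[0], new HashSet<>(Arrays.asList(parts[1].split(","))));
    }
    return allowed;
  }

  public static boolean mayImpersonate(Map<String,Set<String>> allowed, String proxy, String user) {
    Set<String> users = allowed.get(proxy);
    return users != null && (users.contains("*") || users.contains(user));
  }

  public static void main(String[] args) {
    Map<String,Set<String>> allowed = parse("otherproxy1:*;otherproxy2:client1,client2");
    if (!mayImpersonate(allowed, "otherproxy1", "anyone")) throw new AssertionError();
    // "proxy" is not configured at all, so impersonation must be denied.
    if (mayImpersonate(allowed, "proxy", "client")) throw new AssertionError();
    System.out.println("impersonation checks passed");
  }
}
```

This mirrors why `testDisallowedImpersonation*` expects a failure: the configured proxies are `otherproxy*`, while the RPC principal is `proxy`. The real checks also match the caller's host against a parallel `.hosts` list, which this sketch omits.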
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index 274ec76..a29e3dc 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@ -23,24 +23,27 @@
 import java.io.IOException;
 import java.util.UUID;
 
+import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.impl.ConnectorImpl;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.security.SystemCredentials.SystemToken;
+import org.easymock.EasyMock;
+import org.junit.Before;
 import org.junit.BeforeClass;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.TestName;
 
-/**
- *
- */
 public class SystemCredentialsTest {
 
-  private static MockInstance inst;
+  @Rule
+  public TestName test = new TestName();
+
+  private Instance inst;
 
   @BeforeClass
   public static void setUp() throws IOException {
-    inst = new MockInstance();
     File testInstanceId = new File(new File(new File(new File("target"), "instanceTest"), ServerConstants.INSTANCE_ID_DIR), UUID.fromString(
         "00000000-0000-0000-0000-000000000000").toString());
     if (!testInstanceId.exists()) {
@@ -55,6 +58,13 @@
     }
   }
 
+  @Before
+  public void setupInstance() {
+    inst = EasyMock.createMock(Instance.class);
+    EasyMock.expect(inst.getInstanceID()).andReturn(UUID.nameUUIDFromBytes(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 0}).toString()).anyTimes();
+    EasyMock.replay(inst);
+  }
+
   /**
    * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(org.apache.accumulo.core.client.impl.ClientContext)} is kept up-to-date
    * if we move the {@link SystemToken}<br>
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/handler/ZKAuthenticatorTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/handler/ZKAuthenticatorTest.java
index 827e772..1b0970d 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/handler/ZKAuthenticatorTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/handler/ZKAuthenticatorTest.java
@@ -55,7 +55,7 @@
   }
 
   public void testSystemConversion() {
-    Set<SystemPermission> perms = new TreeSet<SystemPermission>();
+    Set<SystemPermission> perms = new TreeSet<>();
     for (SystemPermission s : SystemPermission.values())
       perms.add(s);
 
@@ -66,7 +66,7 @@
   }
 
   public void testTableConversion() {
-    Set<TablePermission> perms = new TreeSet<TablePermission>();
+    Set<TablePermission> perms = new TreeSet<>();
     for (TablePermission s : TablePermission.values())
       perms.add(s);
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java b/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java
index b491f0e..1076249 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/tablets/LogicalTimeTest.java
@@ -57,7 +57,7 @@
 
   @Test
   public void testSetUpdateTimes() {
-    List<Mutation> ms = new java.util.ArrayList<Mutation>();
+    List<Mutation> ms = new java.util.ArrayList<>();
     ServerMutation m = createMock(ServerMutation.class);
     ServerMutation m2 = createMock(ServerMutation.class);
     m.setSystemTimestamp(1235L);
@@ -74,7 +74,7 @@
 
   @Test
   public void testSetUpdateTimes_NoMutations() {
-    List<Mutation> ms = new java.util.ArrayList<Mutation>();
+    List<Mutation> ms = new java.util.ArrayList<>();
     assertEquals(TIME, ltime.setUpdateTimes(ms));
   }
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java b/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java
index 3ee220e..49f5913 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/tablets/MillisTimeTest.java
@@ -59,7 +59,7 @@
 
   @Test
   public void testSetUpdateTimes() {
-    List<Mutation> ms = new java.util.ArrayList<Mutation>();
+    List<Mutation> ms = new java.util.ArrayList<>();
     ServerMutation m = createMock(ServerMutation.class);
     m.setSystemTimestamp(anyLong());
     replay(m);
@@ -71,7 +71,7 @@
 
   @Test
   public void testSetUpdateTimes_NoMutations() {
-    List<Mutation> ms = new java.util.ArrayList<Mutation>();
+    List<Mutation> ms = new java.util.ArrayList<>();
     assertEquals(TIME, mtime.setUpdateTimes(ms));
   }
 
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/AdminCommandsTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/AdminCommandsTest.java
index ab799ec..7a665b3 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/AdminCommandsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/AdminCommandsTest.java
@@ -39,7 +39,7 @@
   public void testCheckTabletsCommand() {
     Admin.CheckTabletsCommand cmd = new Admin.CheckTabletsCommand();
     assertFalse(cmd.fixFiles);
-    assertNull(cmd.table);
+    assertNull(cmd.tableName);
   }
 
   @Test
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/DefaultMapTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/DefaultMapTest.java
index 9ca2596..88aaa71 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/DefaultMapTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/DefaultMapTest.java
@@ -38,7 +38,7 @@
     assertNotNull(canConstruct);
     assertEquals(new AtomicInteger(0).get(), canConstruct.get());
 
-    DefaultMap<String,String> map = new DefaultMap<String,String>("");
+    DefaultMap<String,String> map = new DefaultMap<>("");
     assertEquals(map.get("foo"), "");
     map.put("foo", "bar");
     assertEquals(map.get("foo"), "bar");
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/FileSystemMonitorTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/FileSystemMonitorTest.java
index 7387035..95167d6 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/FileSystemMonitorTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/FileSystemMonitorTest.java
@@ -74,7 +74,7 @@
     List<Mount> mounts = FileSystemMonitor.getMountsFromFile(reader);
     log.info("Filtered mount points: " + mounts);
     assertEquals(2, mounts.size());
-    Set<String> expectedCheckedMountPoints = new HashSet<String>();
+    Set<String> expectedCheckedMountPoints = new HashSet<>();
     expectedCheckedMountPoints.add("/");
     expectedCheckedMountPoints.add("/grid/0");
     for (Mount mount : mounts) {
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java
index 7bded1a..a826acf 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/FileUtilTest.java
@@ -66,7 +66,7 @@
 
   @Test
   public void testToPathStrings() {
-    Collection<FileRef> c = new java.util.ArrayList<FileRef>();
+    Collection<FileRef> c = new java.util.ArrayList<>();
     FileRef r1 = createMock(FileRef.class);
     expect(r1.path()).andReturn(new Path("/foo"));
     replay(r1);
@@ -91,7 +91,7 @@
     assertTrue(tmp1.mkdirs() || tmp1.isDirectory());
     Path tmpPath1 = new Path(tmp1.toURI());
 
-    HashMap<Property,String> testProps = new HashMap<Property,String>();
+    HashMap<Property,String> testProps = new HashMap<>();
     testProps.put(Property.INSTANCE_DFS_DIR, accumuloDir.getAbsolutePath());
 
     AccumuloConfiguration testConf = new FileUtilTestConfiguration(testProps);
@@ -118,7 +118,7 @@
     assertTrue(tmp2.mkdirs() || tmp2.isDirectory());
     Path tmpPath1 = new Path(tmp1.toURI()), tmpPath2 = new Path(tmp2.toURI());
 
-    HashMap<Property,String> testProps = new HashMap<Property,String>();
+    HashMap<Property,String> testProps = new HashMap<>();
     testProps.put(Property.INSTANCE_VOLUMES, v1.toURI().toString() + "," + v2.toURI().toString());
 
     AccumuloConfiguration testConf = new FileUtilTestConfiguration(testProps);
@@ -150,7 +150,7 @@
     assertTrue(tmp2.mkdirs() || tmp2.isDirectory());
     Path tmpPath1 = new Path(tmp1.toURI()), tmpPath2 = new Path(tmp2.toURI());
 
-    HashMap<Property,String> testProps = new HashMap<Property,String>();
+    HashMap<Property,String> testProps = new HashMap<>();
     testProps.put(Property.INSTANCE_VOLUMES, v1.toURI().toString() + "," + v2.toURI().toString());
 
     AccumuloConfiguration testConf = new FileUtilTestConfiguration(testProps);
@@ -178,7 +178,7 @@
     assertTrue(tmp2.mkdirs() || tmp2.isDirectory());
     Path tmpPath1 = new Path(tmp1.toURI()), tmpPath2 = new Path(tmp2.toURI());
 
-    HashMap<Property,String> testProps = new HashMap<Property,String>();
+    HashMap<Property,String> testProps = new HashMap<>();
     testProps.put(Property.INSTANCE_VOLUMES, v1.toURI().toString() + "," + v2.toURI().toString());
 
     AccumuloConfiguration testConf = new FileUtilTestConfiguration(testProps);
@@ -207,7 +207,7 @@
     assertTrue(tmp2.mkdirs() || tmp2.isDirectory());
     Path tmpPath1 = new Path(tmp1.toURI()), tmpPath2 = new Path(tmp2.toURI());
 
-    HashMap<Property,String> testProps = new HashMap<Property,String>();
+    HashMap<Property,String> testProps = new HashMap<>();
     testProps.put(Property.INSTANCE_VOLUMES, v1.toURI().toString() + "," + v2.toURI().toString());
 
     AccumuloConfiguration testConf = new FileUtilTestConfiguration(testProps);
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/ReplicationTableUtilTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/ReplicationTableUtilTest.java
index 3983bde..e135c36 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/ReplicationTableUtilTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/ReplicationTableUtilTest.java
@@ -31,7 +31,6 @@
 import java.util.Map.Entry;
 import java.util.UUID;
 
-import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.IteratorSetting.Column;
@@ -39,7 +38,6 @@
 import org.apache.accumulo.core.client.impl.ClientContext;
 import org.apache.accumulo.core.client.impl.Credentials;
 import org.apache.accumulo.core.client.impl.Writer;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ColumnUpdate;
@@ -67,7 +65,7 @@
   public void properPathInRow() throws Exception {
     Writer writer = EasyMock.createNiceMock(Writer.class);
     writer.update(EasyMock.anyObject(Mutation.class));
-    final List<Mutation> mutations = new ArrayList<Mutation>();
+    final List<Mutation> mutations = new ArrayList<>();
 
     // Mock a Writer to just add the mutation to a list
     EasyMock.expectLastCall().andAnswer(new IAnswer<Object>() {
@@ -81,7 +79,9 @@
     EasyMock.replay(writer);
 
     Credentials creds = new Credentials("root", new PasswordToken(""));
-    ClientContext context = new ClientContext(new MockInstance(), creds, new ClientConfiguration());
+    ClientContext context = EasyMock.createMock(ClientContext.class);
+    EasyMock.expect(context.getCredentials()).andReturn(creds).anyTimes();
+    EasyMock.replay(context);
 
     // Magic hook to create a Writer
     ReplicationTableUtil.addWriter(creds, writer);
@@ -91,7 +91,7 @@
     String myFile = "file:////home/user/accumulo/wal/server+port/" + uuid;
 
     long createdTime = System.currentTimeMillis();
-    ReplicationTableUtil.updateFiles(context, new KeyExtent(new Text("1"), null, null), Collections.singleton(myFile), StatusUtil.fileCreated(createdTime));
+    ReplicationTableUtil.updateFiles(context, new KeyExtent("1", null, null), myFile, StatusUtil.fileCreated(createdTime));
 
     verify(writer);
 
@@ -116,7 +116,7 @@
     String file = "file:///accumulo/wal/127.0.0.1+9997" + UUID.randomUUID();
     Path filePath = new Path(file);
     Text row = new Text(filePath.toString());
-    KeyExtent extent = new KeyExtent(new Text("1"), new Text("b"), new Text("a"));
+    KeyExtent extent = new KeyExtent("1", new Text("b"), new Text("a"));
 
     Mutation m = ReplicationTableUtil.createUpdateMutation(filePath, ProtobufUtil.toValue(stat), extent);
 
@@ -125,7 +125,7 @@
     ColumnUpdate col = m.getUpdates().get(0);
 
     Assert.assertEquals(MetadataSchema.ReplicationSection.COLF, new Text(col.getColumnFamily()));
-    Assert.assertEquals(extent.getTableId(), new Text(col.getColumnQualifier()));
+    Assert.assertEquals(extent.getTableId(), new Text(col.getColumnQualifier()).toString());
     Assert.assertEquals(0, col.getColumnVisibility().length);
     Assert.assertArrayEquals(stat.toByteArray(), col.getValue());
   }
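The final assertion change above follows from `KeyExtent.getTableId()` now returning a `String` rather than a Hadoop `Text`: `equals()` between unrelated types always fails, so the column qualifier must be converted with `toString()` before comparing. A plain-Java sketch of that pitfall, using `StringBuilder` as a stand-in for a non-`String` wrapper type (the stand-in is our assumption; the real code compares against `Text`):

```java
public class TypeMismatchDemo {
    public static void main(String[] args) {
        // getTableId() now yields a plain String ("1" in the test above)
        String tableId = "1";
        // Stand-in for a CharSequence wrapper like org.apache.hadoop.io.Text
        StringBuilder wrapped = new StringBuilder("1");

        // String.equals(Object) returns false for any non-String argument,
        // even when the character content matches:
        System.out.println(tableId.equals(wrapped));            // prints "false"
        // Converting the wrapper first makes the comparison meaningful:
        System.out.println(tableId.equals(wrapped.toString())); // prints "true"
    }
}
```

This is why the assertion compares `extent.getTableId()` to `new Text(col.getColumnQualifier()).toString()` instead of to the `Text` itself.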
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
index 218d82c..37d127a 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/util/TServerUtilsTest.java
@@ -21,16 +21,124 @@
 import static org.easymock.EasyMock.expect;
 import static org.easymock.EasyMock.replay;
 import static org.easymock.EasyMock.verify;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.ServerSocket;
+import java.net.UnknownHostException;
+import java.nio.ByteBuffer;
+import java.util.List;
 import java.util.concurrent.ExecutorService;
 
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.thrift.ClientService.Iface;
+import org.apache.accumulo.core.client.impl.thrift.ClientService.Processor;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.accumulo.server.client.ClientServiceHandler;
+import org.apache.accumulo.server.conf.ServerConfigurationFactory;
+import org.apache.accumulo.server.rpc.RpcWrapper;
+import org.apache.accumulo.server.rpc.ServerAddress;
 import org.apache.accumulo.server.rpc.TServerUtils;
 import org.apache.thrift.server.TServer;
 import org.apache.thrift.transport.TServerSocket;
+import org.junit.After;
 import org.junit.Test;
 
 public class TServerUtilsTest {
+
+  protected static class TestInstance implements Instance {
+
+    @Override
+    public String getRootTabletLocation() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public List<String> getMasterLocations() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public String getInstanceID() {
+      return "1111";
+    }
+
+    @Override
+    public String getInstanceName() {
+      return "test";
+    }
+
+    @Override
+    public String getZooKeepers() {
+      return "";
+    }
+
+    @Override
+    public int getZooKeepersSessionTimeOut() {
+      return 30;
+    }
+
+    @Deprecated
+    @Override
+    public Connector getConnector(String user, byte[] pass) throws AccumuloException, AccumuloSecurityException {
+      throw new UnsupportedOperationException();
+    }
+
+    @Deprecated
+    @Override
+    public Connector getConnector(String user, ByteBuffer pass) throws AccumuloException, AccumuloSecurityException {
+      throw new UnsupportedOperationException();
+    }
+
+    @Deprecated
+    @Override
+    public Connector getConnector(String user, CharSequence pass) throws AccumuloException, AccumuloSecurityException {
+      throw new UnsupportedOperationException();
+    }
+
+    @Deprecated
+    @Override
+    public AccumuloConfiguration getConfiguration() {
+      throw new UnsupportedOperationException();
+    }
+
+    @Deprecated
+    @Override
+    public void setConfiguration(AccumuloConfiguration conf) {}
+
+    @Override
+    public Connector getConnector(String principal, AuthenticationToken token) throws AccumuloException, AccumuloSecurityException {
+      throw new UnsupportedOperationException();
+    }
+
+  }
+
+  protected static class TestServerConfigurationFactory extends ServerConfigurationFactory {
+
+    private ConfigurationCopy conf = null;
+
+    public TestServerConfigurationFactory(Instance instance) {
+      super(instance);
+      conf = new ConfigurationCopy(AccumuloConfiguration.getDefaultConfiguration());
+    }
+
+    @Override
+    public synchronized AccumuloConfiguration getConfiguration() {
+      return conf;
+    }
+
+  }
+
   private static class TServerWithoutES extends TServer {
     boolean stopCalled;
 
@@ -81,4 +189,167 @@
     TServerUtils.stopTServer(null);
     // not dying is enough
   }
+
+  private static final TestInstance instance = new TestInstance();
+  private static final TestServerConfigurationFactory factory = new TestServerConfigurationFactory(instance);
+
+  @After
+  public void resetProperty() {
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_CLIENTPORT, Property.TSERV_CLIENTPORT.getDefaultValue());
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_PORTSEARCH, Property.TSERV_PORTSEARCH.getDefaultValue());
+  }
+
+  @Test
+  public void testStartServerZeroPort() throws Exception {
+    TServer server = null;
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_CLIENTPORT, "0");
+    try {
+      ServerAddress address = startServer();
+      assertNotNull(address);
+      server = address.getServer();
+      assertNotNull(server);
+      assertTrue(address.getAddress().getPort() > 1024);
+    } finally {
+      if (null != server) {
+        TServerUtils.stopTServer(server);
+      }
+    }
+  }
+
+  @Test
+  public void testStartServerFreePort() throws Exception {
+    TServer server = null;
+    int port = getFreePort(1024);
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_CLIENTPORT, Integer.toString(port));
+    try {
+      ServerAddress address = startServer();
+      assertNotNull(address);
+      server = address.getServer();
+      assertNotNull(server);
+      assertEquals(port, address.getAddress().getPort());
+    } finally {
+      if (null != server) {
+        TServerUtils.stopTServer(server);
+      }
+    }
+  }
+
+  @Test(expected = UnknownHostException.class)
+  public void testStartServerUsedPort() throws Exception {
+    int port = getFreePort(1024);
+    InetAddress addr = InetAddress.getByName("localhost");
+    // Bind to the port
+    ServerSocket s = new ServerSocket(port, 50, addr);
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_CLIENTPORT, Integer.toString(port));
+    try {
+      startServer();
+    } finally {
+      s.close();
+    }
+  }
+
+  @Test
+  public void testStartServerUsedPortWithSearch() throws Exception {
+    TServer server = null;
+    int[] port = findTwoFreeSequentialPorts(1024);
+    // Bind to the port
+    InetAddress addr = InetAddress.getByName("localhost");
+    ServerSocket s = new ServerSocket(port[0], 50, addr);
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_CLIENTPORT, Integer.toString(port[0]));
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_PORTSEARCH, "true");
+    try {
+      ServerAddress address = startServer();
+      assertNotNull(address);
+      server = address.getServer();
+      assertNotNull(server);
+      assertEquals(port[1], address.getAddress().getPort());
+    } finally {
+      if (null != server) {
+        TServerUtils.stopTServer(server);
+      }
+      s.close();
+    }
+  }
+
+  @Test
+  public void testStartServerPortRange() throws Exception {
+    TServer server = null;
+    int[] port = findTwoFreeSequentialPorts(1024);
+    String portRange = Integer.toString(port[0]) + "-" + Integer.toString(port[1]);
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_CLIENTPORT, portRange);
+    try {
+      ServerAddress address = startServer();
+      assertNotNull(address);
+      server = address.getServer();
+      assertNotNull(server);
+      assertTrue(port[0] == address.getAddress().getPort() || port[1] == address.getAddress().getPort());
+    } finally {
+      if (null != server) {
+        TServerUtils.stopTServer(server);
+      }
+    }
+  }
+
+  @Test
+  public void testStartServerPortRangeFirstPortUsed() throws Exception {
+    TServer server = null;
+    InetAddress addr = InetAddress.getByName("localhost");
+    int[] port = findTwoFreeSequentialPorts(1024);
+    String portRange = Integer.toString(port[0]) + "-" + Integer.toString(port[1]);
+    // Bind to the port
+    ServerSocket s = new ServerSocket(port[0], 50, addr);
+    ((ConfigurationCopy) factory.getConfiguration()).set(Property.TSERV_CLIENTPORT, portRange);
+    try {
+      ServerAddress address = startServer();
+      assertNotNull(address);
+      server = address.getServer();
+      assertNotNull(server);
+      assertTrue(port[1] == address.getAddress().getPort());
+    } finally {
+      if (null != server) {
+        TServerUtils.stopTServer(server);
+      }
+      s.close();
+    }
+  }
+
+  private int[] findTwoFreeSequentialPorts(int startingAddress) throws UnknownHostException {
+    boolean sequential = false;
+    int low = startingAddress;
+    int high = 0;
+    do {
+      low = getFreePort(low);
+      high = getFreePort(low + 1);
+      sequential = ((high - low) == 1);
+    } while (!sequential);
+    return new int[] {low, high};
+  }
+
+  private int getFreePort(int startingAddress) throws UnknownHostException {
+    final InetAddress addr = InetAddress.getByName("localhost");
+    for (int i = startingAddress; i < 65535; i++) {
+      try {
+        ServerSocket s = new ServerSocket(i, 50, addr);
+        int port = s.getLocalPort();
+        s.close();
+        return port;
+      } catch (IOException e) {
+        // keep trying
+      }
+    }
+    throw new RuntimeException("Unable to find open port");
+  }
+
+  private ServerAddress startServer() throws Exception {
+    AccumuloServerContext ctx = new AccumuloServerContext(factory);
+    ClientServiceHandler clientHandler = new ClientServiceHandler(ctx, null, null);
+    Iface rpcProxy = RpcWrapper.service(clientHandler, new Processor<Iface>(clientHandler));
+    Processor<Iface> processor = new Processor<>(rpcProxy);
+    // "localhost" explicitly to make sure we can always bind to that interface (avoids DNS misconfiguration)
+    String hostname = "localhost";
+
+    return TServerUtils.startServer(ctx, hostname, Property.TSERV_CLIENTPORT, processor, "TServerUtilsTest", "TServerUtilsTestThread",
+        Property.TSERV_PORTSEARCH, Property.TSERV_MINTHREADS, Property.TSERV_THREADCHECK, Property.GENERAL_MAX_MESSAGE_SIZE);
+
+  }
 }
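The `getFreePort` helper added to `TServerUtilsTest` probes for an open port by binding a `ServerSocket` and closing it. A standalone sketch of the same idea, using try-with-resources so the probe socket is closed even on unexpected failures (class and method names here are ours):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class FreePortFinder {
    // Walk upward from a starting port and return the first one we can
    // bind on the localhost interface.
    static int getFreePort(int start) throws IOException {
        InetAddress addr = InetAddress.getByName("localhost");
        for (int port = start; port < 65535; port++) {
            // try-with-resources closes the probe socket whether or not
            // the bind succeeds
            try (ServerSocket s = new ServerSocket(port, 50, addr)) {
                return s.getLocalPort();
            } catch (IOException e) {
                // port already in use; keep scanning
            }
        }
        throw new IOException("no open port found above " + start);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(getFreePort(1024) >= 1024); // prints "true"
    }
}
```

Note that this probe is inherently racy: another process can grab the port between the probe closing and the server under test binding it, so tests built on it can fail spuriously under contention. That is one reason the suite above also exercises the `TSERV_PORTSEARCH` path, which tolerates an occupied first choice.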
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/TabletIteratorTest.java b/server/base/src/test/java/org/apache/accumulo/server/util/TabletIteratorTest.java
deleted file mode 100644
index 9c6cee1..0000000
--- a/server/base/src/test/java/org/apache/accumulo/server/util/TabletIteratorTest.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.server.util;
-
-import java.util.Map.Entry;
-
-import junit.framework.TestCase;
-
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.data.impl.KeyExtent;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.server.util.TabletIterator.TabletDeletedException;
-import org.apache.hadoop.io.Text;
-
-public class TabletIteratorTest extends TestCase {
-
-  class TestTabletIterator extends TabletIterator {
-
-    private Connector conn;
-
-    public TestTabletIterator(Connector conn) throws Exception {
-      super(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY), MetadataSchema.TabletsSection.getRange(), true, true);
-      this.conn = conn;
-    }
-
-    @Override
-    protected void resetScanner() {
-      try {
-        Scanner ds = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-        Text tablet = new KeyExtent(new Text("0"), new Text("m"), null).getMetadataEntry();
-        ds.setRange(new Range(tablet, true, tablet, true));
-
-        Mutation m = new Mutation(tablet);
-
-        BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-        for (Entry<Key,Value> entry : ds) {
-          Key k = entry.getKey();
-          m.putDelete(k.getColumnFamily(), k.getColumnQualifier(), k.getTimestamp());
-        }
-
-        bw.addMutation(m);
-
-        bw.close();
-
-      } catch (Exception e) {
-        throw new RuntimeException(e);
-      }
-
-      super.resetScanner();
-    }
-
-  }
-
-  // simulate a merge happening while iterating over tablets
-  public void testMerge() throws Exception {
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
-
-    KeyExtent ke1 = new KeyExtent(new Text("0"), new Text("m"), null);
-    Mutation mut1 = ke1.getPrevRowUpdateMutation();
-    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mut1, new Value("/d1".getBytes()));
-
-    KeyExtent ke2 = new KeyExtent(new Text("0"), null, null);
-    Mutation mut2 = ke2.getPrevRowUpdateMutation();
-    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mut2, new Value("/d2".getBytes()));
-
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    bw1.addMutation(mut1);
-    bw1.addMutation(mut2);
-    bw1.close();
-
-    TestTabletIterator tabIter = new TestTabletIterator(conn);
-
-    try {
-      while (tabIter.hasNext()) {
-        tabIter.next();
-      }
-      assertTrue(false);
-    } catch (TabletDeletedException tde) {}
-  }
-}
diff --git a/server/gc/pom.xml b/server/gc/pom.xml
index 9ece3e0..4aa31f7 100644
--- a/server/gc/pom.xml
+++ b/server/gc/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-gc</artifactId>
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java b/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java
index b57b8fc..8803a40 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogs.java
@@ -18,7 +18,7 @@
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.util.ArrayList;
+import java.util.Collection;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
@@ -27,65 +27,56 @@
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.UUID;
-import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.gc.thrift.GCStatus;
 import org.apache.accumulo.core.gc.thrift.GcCycleStats;
 import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
-import org.apache.accumulo.core.protobuf.ProtobufUtil;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.replication.ReplicationTableOfflineException;
-import org.apache.accumulo.core.rpc.ThriftUtil;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.tabletserver.log.LogEntry;
-import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
-import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Client;
 import org.apache.accumulo.core.trace.Span;
 import org.apache.accumulo.core.trace.Trace;
-import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.AddressUtil;
-import org.apache.accumulo.core.zookeeper.ZooUtil;
+import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.server.AccumuloServerContext;
-import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.fs.VolumeManager;
-import org.apache.accumulo.server.replication.StatusUtil;
-import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.accumulo.server.util.MetadataTableUtil;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalMarkerException;
+import org.apache.accumulo.server.log.WalStateManager.WalState;
+import org.apache.accumulo.server.master.LiveTServerSet;
+import org.apache.accumulo.server.master.LiveTServerSet.Listener;
+import org.apache.accumulo.server.master.state.MetaDataStateStore;
+import org.apache.accumulo.server.master.state.RootTabletStateStore;
+import org.apache.accumulo.server.master.state.TServerInstance;
+import org.apache.accumulo.server.master.state.TabletLocationState;
+import org.apache.accumulo.server.master.state.TabletState;
 import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
-import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
-import org.apache.thrift.TException;
+import org.apache.hadoop.io.Text;
 import org.apache.zookeeper.KeeperException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.collect.Iterables;
-import com.google.common.net.HostAndPort;
-import com.google.protobuf.InvalidProtocolBufferException;
+import com.google.common.collect.Iterators;
 
 public class GarbageCollectWriteAheadLogs {
   private static final Logger log = LoggerFactory.getLogger(GarbageCollectWriteAheadLogs.class);
 
   private final AccumuloServerContext context;
   private final VolumeManager fs;
-  private final Map<HostAndPort,Long> firstSeenDead = new HashMap<HostAndPort,Long>();
-
-  private boolean useTrash;
+  private final boolean useTrash;
+  private final LiveTServerSet liveServers;
+  private final WalStateManager walMarker;
+  private final Iterable<TabletLocationState> store;
 
   /**
    * Creates a new GC WAL object.
@@ -97,69 +88,79 @@
    * @param useTrash
    *          true to move files to trash rather than delete them
    */
-  GarbageCollectWriteAheadLogs(AccumuloServerContext context, VolumeManager fs, boolean useTrash) throws IOException {
+  GarbageCollectWriteAheadLogs(final AccumuloServerContext context, VolumeManager fs, boolean useTrash) throws IOException {
     this.context = context;
     this.fs = fs;
     this.useTrash = useTrash;
+    this.liveServers = new LiveTServerSet(context, new Listener() {
+      @Override
+      public void update(LiveTServerSet current, Set<TServerInstance> deleted, Set<TServerInstance> added) {
+        log.debug("New tablet servers noticed: " + added);
+        log.debug("Tablet servers removed: " + deleted);
+      }
+    });
+    liveServers.startListeningForTabletServerChanges();
+    this.walMarker = new WalStateManager(context.getInstance(), ZooReaderWriter.getInstance());
+    this.store = new Iterable<TabletLocationState>() {
+      @Override
+      public Iterator<TabletLocationState> iterator() {
+        return Iterators.concat(new RootTabletStateStore(context).iterator(), new MetaDataStateStore(context).iterator());
+      }
+    };
   }
 
   /**
-   * Gets the instance used by this object.
+   * Creates a new GC WAL object. Meant for testing -- allows mocked objects.
    *
-   * @return instance
+   * @param context
+   *          the collection server's context
+   * @param fs
+   *          volume manager to use
+   * @param useTrash
+   *          true to move files to trash rather than delete them
+   * @param liveTServerSet
+   *          a started LiveTServerSet instance
    */
-  Instance getInstance() {
-    return context.getInstance();
+  @VisibleForTesting
+  GarbageCollectWriteAheadLogs(AccumuloServerContext context, VolumeManager fs, boolean useTrash, LiveTServerSet liveTServerSet, WalStateManager walMarker,
+      Iterable<TabletLocationState> store) throws IOException {
+    this.context = context;
+    this.fs = fs;
+    this.useTrash = useTrash;
+    this.liveServers = liveTServerSet;
+    this.walMarker = walMarker;
+    this.store = store;
   }
 
-  /**
-   * Gets the volume manager used by this object.
-   *
-   * @return volume manager
-   */
-  VolumeManager getVolumeManager() {
-    return fs;
-  }
-
-  /**
-   * Checks if the volume manager should move files to the trash rather than delete them.
-   *
-   * @return true if trash is used
-   */
-  boolean isUsingTrash() {
-    return useTrash;
-  }
-
-  /**
-   * Removes all the WAL files that are no longer used.
-   * <p>
-   *
-   * This method is not Threadsafe. SimpleGarbageCollector#run does not invoke collect in a concurrent manner.
-   *
-   * @param status
-   *          GCStatus object
-   */
   public void collect(GCStatus status) {
 
-    Span span = Trace.start("scanServers");
+    Span span = Trace.start("getCandidates");
     try {
-
-      Map<String,Path> sortedWALogs = getSortedWALogs();
-
       status.currentLog.started = System.currentTimeMillis();
 
-      Map<Path,String> fileToServerMap = new HashMap<Path,String>();
-      Map<String,Path> nameToFileMap = new HashMap<String,Path>();
-      int count = scanServers(fileToServerMap, nameToFileMap);
+      Map<TServerInstance,Set<UUID>> logsByServer = new HashMap<>();
+      Map<UUID,Pair<WalState,Path>> logsState = new HashMap<>();
+      // Scan for log file info first: the order is important
+      // Consider:
+      // * get live servers
+      // * new server gets a lock, creates a log
+      // * get logs
+      // * the log appears to belong to a dead server
+      long count = getCurrent(logsByServer, logsState);
       long fileScanStop = System.currentTimeMillis();
-      log.info(String.format("Fetched %d files from %d servers in %.2f seconds", fileToServerMap.size(), count,
-          (fileScanStop - status.currentLog.started) / 1000.));
-      status.currentLog.candidates = fileToServerMap.size();
+
+      log.info(String.format("Fetched %d files for %d servers in %.2f seconds", count, logsByServer.size(), (fileScanStop - status.currentLog.started) / 1000.));
+      status.currentLog.candidates = count;
       span.stop();
 
-      span = Trace.start("removeMetadataEntries");
+      // now it's safe to get the liveServers
+      Set<TServerInstance> currentServers = liveServers.getCurrentServers();
+
+      Map<UUID,TServerInstance> uuidToTServer;
+      span = Trace.start("removeEntriesInUse");
       try {
-        count = removeMetadataEntries(nameToFileMap, sortedWALogs, status);
+        uuidToTServer = removeEntriesInUse(logsByServer, currentServers, logsState);
+        count = uuidToTServer.size();
       } catch (Exception ex) {
         log.error("Unable to scan metadata table", ex);
         return;
@@ -172,7 +173,7 @@
 
       span = Trace.start("removeReplicationEntries");
       try {
-        count = removeReplicationEntries(nameToFileMap, sortedWALogs, status);
+        count = removeReplicationEntries(uuidToTServer);
       } catch (Exception ex) {
         log.error("Unable to scan replication table", ex);
         return;
@@ -184,16 +185,23 @@
       log.info(String.format("%d replication entries scanned in %.2f seconds", count, (replicationEntryScanStop - logEntryScanStop) / 1000.));
 
       span = Trace.start("removeFiles");
-      Map<String,ArrayList<Path>> serverToFileMap = mapServersToFiles(fileToServerMap, nameToFileMap);
 
-      count = removeFiles(nameToFileMap, serverToFileMap, sortedWALogs, status);
+      logsState.keySet().retainAll(uuidToTServer.keySet());
+      count = removeFiles(logsState.values(), status);
 
       long removeStop = System.currentTimeMillis();
-      log.info(String.format("%d total logs removed from %d servers in %.2f seconds", count, serverToFileMap.size(), (removeStop - logEntryScanStop) / 1000.));
+      log.info(String.format("%d total logs removed from %d servers in %.2f seconds", count, logsByServer.size(), (removeStop - logEntryScanStop) / 1000.));
+      span.stop();
+
+      span = Trace.start("removeMarkers");
+      count = removeTabletServerMarkers(uuidToTServer, logsByServer, currentServers);
+      long removeMarkersStop = System.currentTimeMillis();
+      log.info(String.format("%d markers removed in %.2f seconds", count, (removeMarkersStop - removeStop) / 1000.));
+      span.stop();
+
       status.currentLog.finished = removeStop;
       status.lastLog = status.currentLog;
       status.currentLog = new GcCycleStats();
-      span.stop();
 
     } catch (Exception e) {
      log.error("exception occurred while garbage collecting write ahead logs", e);
@@ -202,208 +210,38 @@
     }
   }
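The refactor above replaces the old path-keyed maps with id-keyed ones: log ids are grouped per server, inverted into a uuid-to-server map, and every id still in use is dropped before deletion. A minimal sketch of that bookkeeping, with plain `String` server names standing in for Accumulo's `TServerInstance` (an illustrative simplification):

```java
import java.util.*;

// Sketch of the WAL-GC candidate bookkeeping: invert the per-server log id
// map, then drop a whole server's ids when its logs must be kept.
public class WalGcSketch {

    // Invert server -> set of log ids into log id -> server.
    static Map<UUID, String> invert(Map<String, Set<UUID>> logsByServer) {
        Map<UUID, String> result = new HashMap<>();
        for (Map.Entry<String, Set<UUID>> e : logsByServer.entrySet()) {
            for (UUID id : e.getValue()) {
                result.put(id, e.getKey());
            }
        }
        return result;
    }

    // Remove every id belonging to one server from both maps; this mirrors
    // the candidates.remove(...) / result.remove(id) pattern in the diff.
    static void dropServer(Map<String, Set<UUID>> logsByServer,
                           Map<UUID, String> uuidToServer, String server) {
        Set<UUID> ids = logsByServer.remove(server);
        if (ids != null) {
            for (UUID id : ids) {
                uuidToServer.remove(id);
            }
        }
    }
}
```

The comment block in the diff explains why the snapshot of log ids is taken before the live-server set: a server that registers after the log scan would otherwise look dead while owning live logs.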
 
-  boolean holdsLock(HostAndPort addr) {
+  private long removeTabletServerMarkers(Map<UUID,TServerInstance> uidMap, Map<TServerInstance,Set<UUID>> candidates, Set<TServerInstance> liveServers) {
+    long result = 0;
+    // remove markers for files removed
     try {
-      String zpath = ZooUtil.getRoot(context.getInstance()) + Constants.ZTSERVERS + "/" + addr.toString();
-      List<String> children = ZooReaderWriter.getInstance().getChildren(zpath);
-      return !(children == null || children.isEmpty());
-    } catch (KeeperException.NoNodeException ex) {
-      return false;
+      for (Entry<UUID,TServerInstance> entry : uidMap.entrySet()) {
+        walMarker.removeWalMarker(entry.getValue(), entry.getKey());
+        result++;
+      }
     } catch (Exception ex) {
-      log.debug(ex.toString(), ex);
-      return true;
+      throw new RuntimeException(ex);
     }
-  }
-
-  private AccumuloConfiguration getConfig() {
-    return context.getServerConfigurationFactory().getConfiguration();
-  }
-
-  /**
-   * Top level method for removing WAL files.
-   * <p>
-   * Loops over all the gathered WAL and sortedWAL entries and calls the appropriate methods for removal
-   *
-   * @param nameToFileMap
-   *          Map of filename to Path
-   * @param serverToFileMap
-   *          Map of HostAndPort string to a list of Paths
-   * @param sortedWALogs
-   *          Map of sorted WAL names to Path
-   * @param status
-   *          GCStatus object for tracking what is done
-   * @return 0 always
-   */
-  @VisibleForTesting
-  int removeFiles(Map<String,Path> nameToFileMap, Map<String,ArrayList<Path>> serverToFileMap, Map<String,Path> sortedWALogs, final GCStatus status) {
-    // TODO: remove nameToFileMap from method signature, not used here I don't think
-    AccumuloConfiguration conf = getConfig();
-    for (Entry<String,ArrayList<Path>> entry : serverToFileMap.entrySet()) {
-      if (entry.getKey().isEmpty()) {
-        removeOldStyleWAL(entry, status);
-      } else {
-        removeWALFile(entry, conf, status);
-      }
-    }
-    for (Path swalog : sortedWALogs.values()) {
-      removeSortedWAL(swalog);
-    }
-    return 0;
-  }
-
-  /**
-   * Removes sortedWALs.
-   * <p>
-   * Sorted WALs are WALs that are in the recovery directory and have already been used.
-   *
-   * @param swalog
-   *          Path to the WAL
-   */
-  @VisibleForTesting
-  void removeSortedWAL(Path swalog) {
-    log.debug("Removing sorted WAL " + swalog);
-    try {
-      if (!useTrash || !fs.moveToTrash(swalog)) {
-        fs.deleteRecursively(swalog);
-      }
-    } catch (FileNotFoundException ex) {
-      // ignored
-    } catch (IOException ioe) {
-      try {
-        if (fs.exists(swalog)) {
-          log.error("Unable to delete sorted walog " + swalog + ": " + ioe);
-        }
-      } catch (IOException ex) {
-        log.error("Unable to check for the existence of " + swalog, ex);
-      }
-    }
-  }
-
-  /**
-   * A wrapper method to check if the tserver using the WAL is still alive
-   * <p>
-   * Delegates the deletion to #removeWALfromDownTserver if the ZK lock is gone or to #askTserverToRemoveWAL if the server is known to still be alive
-   *
-   * @param entry
-   *          WAL information gathered
-   * @param conf
-   *          AccumuloConfiguration object
-   * @param status
-   *          GCStatus object
-   */
-  void removeWALFile(Entry<String,ArrayList<Path>> entry, AccumuloConfiguration conf, final GCStatus status) {
-    HostAndPort address = AddressUtil.parseAddress(entry.getKey(), false);
-    if (!holdsLock(address)) {
-      removeWALfromDownTserver(address, conf, entry, status);
-    } else {
-      askTserverToRemoveWAL(address, conf, entry, status);
-    }
-  }
-
-  /**
-   * Asks a currently running tserver to remove its WALs.
-   * <p>
-   * A tserver has more information about whether a WAL is still being used for current mutations. It is safer to ask the tserver to remove the file instead of
-   * just relying on information in the metadata table.
-   *
-   * @param address
-   *          HostAndPort of the tserver
-   * @param conf
-   *          AccumuloConfiguration entry
-   * @param entry
-   *          WAL information gathered
-   * @param status
-   *          GCStatus object
-   */
-  @VisibleForTesting
-  void askTserverToRemoveWAL(HostAndPort address, AccumuloConfiguration conf, Entry<String,ArrayList<Path>> entry, final GCStatus status) {
-    firstSeenDead.remove(address);
-    Client tserver = null;
-    try {
-      tserver = ThriftUtil.getClient(new TabletClientService.Client.Factory(), address, context);
-      tserver.removeLogs(Tracer.traceInfo(), context.rpcCreds(), paths2strings(entry.getValue()));
-      log.debug("asked tserver to delete " + entry.getValue() + " from " + entry.getKey());
-      status.currentLog.deleted += entry.getValue().size();
-    } catch (TException e) {
-      log.warn("Error talking to " + address + ": " + e);
-    } finally {
-      if (tserver != null)
-        ThriftUtil.returnClient(tserver);
-    }
-  }
-
-  /**
-   * Get the configured wait period a server has to be dead.
-   * <p>
-   * The property is "gc.wal.dead.server.wait" defined in Property.GC_WAL_DEAD_SERVER_WAIT and is a duration. Valid values include a unit with no space like
-   * 3600s, 5m or 2h.
-   *
-   * @param conf
-   *          AccumuloConfiguration
-   * @return long that represents the millis to wait
-   */
-  @VisibleForTesting
-  long getGCWALDeadServerWaitTime(AccumuloConfiguration conf) {
-    return conf.getTimeInMillis(Property.GC_WAL_DEAD_SERVER_WAIT);
-  }
-
-  /**
-   * Remove walogs associated with a tserver that no longer has a lock.
-   * <p>
-   * There is a configuration option, see #getGCWALDeadServerWaitTime, that defines how long a server must be "dead" before removing the associated write ahead
-   * log files. The intent is to ensure that recovery succeeds for the tablets that were hosted on that tserver.
-   *
-   * @param address
-   *          HostAndPort of the tserver with no lock
-   * @param conf
-   *          AccumuloConfiguration to get that gc.wal.dead.server.wait info
-   * @param entry
-   *          The WALOG path
-   * @param status
-   *          GCStatus for tracking changes
-   */
-  @VisibleForTesting
-  void removeWALfromDownTserver(HostAndPort address, AccumuloConfiguration conf, Entry<String,ArrayList<Path>> entry, final GCStatus status) {
-    // tserver is down, only delete once configured time has passed
-    if (timeToDelete(address, getGCWALDeadServerWaitTime(conf))) {
-      for (Path path : entry.getValue()) {
-        log.debug("Removing WAL for offline server " + address + " at " + path);
+    // remove parent znode for dead tablet servers
+    for (Entry<TServerInstance,Set<UUID>> entry : candidates.entrySet()) {
+      if (!liveServers.contains(entry.getKey())) {
+        log.info("Removing znode for " + entry.getKey());
         try {
-          if (!useTrash || !fs.moveToTrash(path)) {
-            fs.deleteRecursively(path);
-          }
-          status.currentLog.deleted++;
-        } catch (FileNotFoundException ex) {
-          // ignored
-        } catch (IOException ex) {
-          log.error("Unable to delete wal " + path + ": " + ex);
+          walMarker.forget(entry.getKey());
+        } catch (WalMarkerException ex) {
+          log.info("Error removing znode for " + entry.getKey() + " " + ex.toString());
         }
       }
-      firstSeenDead.remove(address);
-    } else {
-      log.debug("Not removing " + entry.getValue().size() + " WAL(s) for offline server since it has not been down long enough: " + address);
     }
+    return result;
   }
 
-  /**
-   * Removes old style WAL entries.
-   * <p>
-   * The format for storing WAL info in the metadata table changed at some point, maybe the 1.5 release. Once that is known for sure and we no longer support
-   * upgrading from that version, this code should be removed
-   *
-   * @param entry
-   *          Map of empty server address to List of Paths
-   * @param status
-   *          GCStatus object
-   */
-  @VisibleForTesting
-  void removeOldStyleWAL(Entry<String,ArrayList<Path>> entry, final GCStatus status) {
-    // old-style log entry, just remove it
-    for (Path path : entry.getValue()) {
-      log.debug("Removing old-style WAL " + path);
+  private long removeFiles(Collection<Pair<WalState,Path>> collection, final GCStatus status) {
+    for (Pair<WalState,Path> stateFile : collection) {
+      Path path = stateFile.getSecond();
+      log.debug("Removing " + stateFile.getFirst() + " WAL " + path);
       try {
-        if (!useTrash || !fs.moveToTrash(path))
+        if (!useTrash || !fs.moveToTrash(path)) {
           fs.deleteRecursively(path);
+        }
         status.currentLog.deleted++;
       } catch (FileNotFoundException ex) {
         // ignored
@@ -411,312 +249,126 @@
         log.error("Unable to delete wal " + path + ": " + ex);
       }
     }
+    return status.currentLog.deleted;
   }
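The new `removeFiles` keeps the existing trash-then-delete fallback: `moveToTrash` is attempted only when the trash is enabled, and a recursive delete is used when it is disabled or fails. The decision logic can be sketched with the two filesystem operations injected as predicates (an illustrative simplification of Accumulo's `VolumeManager`, not its real API):

```java
import java.util.function.Predicate;

// Sketch of the "trash first, then hard delete" fallback used when removing WALs.
public class TrashOrDelete {

    // Returns true if the file ended up removed by either mechanism.
    static boolean remove(String path, boolean useTrash,
                          Predicate<String> moveToTrash,
                          Predicate<String> deleteRecursively) {
        if (useTrash && moveToTrash.test(path)) {
            return true;                      // trashed successfully
        }
        return deleteRecursively.test(path);  // trash disabled or failed
    }
}
```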
 
-  /**
-   * Converts a list of paths to their corresponding strings.
-   *
-   * @param paths
-   *          list of paths
-   * @return string forms of paths
-   */
-  static List<String> paths2strings(List<Path> paths) {
-    List<String> result = new ArrayList<String>(paths.size());
-    for (Path path : paths)
-      result.add(path.toString());
-    return result;
+  private UUID path2uuid(Path path) {
+    return UUID.fromString(path.getName());
   }
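The `path2uuid` helper above relies on WAL files being named by their UUID. A plain-Java sketch of the same extraction, taking the final segment of a string path rather than using Hadoop's `Path` (an assumption made here to keep the example self-contained):

```java
import java.util.UUID;

// Sketch: a WAL path's file name is its UUID, so parsing the last path
// segment recovers the log id (and throws if the name is not a UUID).
public class WalPathUuid {

    static UUID path2uuid(String path) {
        String name = path.substring(path.lastIndexOf('/') + 1);
        return UUID.fromString(name);
    }
}
```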
 
-  /**
-   * Reverses the given mapping of file paths to servers. The returned map provides a list of file paths for each server. Any path whose name is not in the
-   * mapping of file names to paths is skipped.
-   *
-   * @param fileToServerMap
-   *          map of file paths to servers
-   * @param nameToFileMap
-   *          map of file names to paths
-   * @return map of servers to lists of file paths
-   */
-  static Map<String,ArrayList<Path>> mapServersToFiles(Map<Path,String> fileToServerMap, Map<String,Path> nameToFileMap) {
-    Map<String,ArrayList<Path>> result = new HashMap<String,ArrayList<Path>>();
-    for (Entry<Path,String> fileServer : fileToServerMap.entrySet()) {
-      if (!nameToFileMap.containsKey(fileServer.getKey().getName()))
-        continue;
-      ArrayList<Path> files = result.get(fileServer.getValue());
-      if (files == null) {
-        files = new ArrayList<Path>();
-        result.put(fileServer.getValue(), files);
+  private Map<UUID,TServerInstance> removeEntriesInUse(Map<TServerInstance,Set<UUID>> candidates, Set<TServerInstance> liveServers,
+      Map<UUID,Pair<WalState,Path>> logsState) throws IOException, KeeperException, InterruptedException {
+
+    Map<UUID,TServerInstance> result = new HashMap<>();
+    for (Entry<TServerInstance,Set<UUID>> entry : candidates.entrySet()) {
+      for (UUID id : entry.getValue()) {
+        result.put(id, entry.getKey());
       }
-      files.add(fileServer.getKey());
+    }
+
+    // remove any entries if there's a log reference (recovery hasn't finished)
+    Iterator<TabletLocationState> states = store.iterator();
+    while (states.hasNext()) {
+      TabletLocationState state = states.next();
+
+      // Tablet is still assigned to a dead server. Master has moved markers and reassigned it
+      // Easiest to just ignore all the WALs for the dead server.
+      if (state.getState(liveServers) == TabletState.ASSIGNED_TO_DEAD_SERVER) {
+        Set<UUID> idsToIgnore = candidates.remove(state.current);
+        if (idsToIgnore != null) {
+          for (UUID id : idsToIgnore) {
+            result.remove(id);
+          }
+        }
+      }
+      // Tablet is being recovered and still references WALs: keep (skip GC of) all the
+      // WALs for the dead server that wrote them.
+      for (Collection<String> wals : state.walogs) {
+        for (String wal : wals) {
+          UUID walUUID = path2uuid(new Path(wal));
+          TServerInstance dead = result.get(walUUID);
+          // There's a reference to a log file, so skip that server's logs
+          Set<UUID> idsToIgnore = candidates.remove(dead);
+          if (idsToIgnore != null) {
+            for (UUID id : idsToIgnore) {
+              result.remove(id);
+            }
+          }
+        }
+      }
+    }
+
+    // Remove OPEN and CLOSED logs for live servers: they are still in use
+    for (TServerInstance liveServer : liveServers) {
+      Set<UUID> idsForServer = candidates.get(liveServer);
+      // Server may not have any logs yet
+      if (idsForServer != null) {
+        for (UUID id : idsForServer) {
+          Pair<WalState,Path> stateFile = logsState.get(id);
+          if (stateFile.getFirst() != WalState.UNREFERENCED) {
+            result.remove(id);
+          }
+        }
+      }
     }
     return result;
   }
 
-  @VisibleForTesting
-  int removeMetadataEntries(Map<String,Path> nameToFileMap, Map<String,Path> sortedWALogs, GCStatus status) throws IOException, KeeperException,
-      InterruptedException {
-    int count = 0;
-    Iterator<LogEntry> iterator = MetadataTableUtil.getLogEntries(context);
-
-    // For each WAL reference in the metadata table
-    while (iterator.hasNext()) {
-      // Each metadata reference has at least one WAL file
-      for (String entry : iterator.next().logSet) {
-        // old style WALs will have the IP:Port of their logger and new style will either be a Path either absolute or relative, in all cases
-        // the last "/" will mark a UUID file name.
-        String uuid = entry.substring(entry.lastIndexOf("/") + 1);
-        if (!isUUID(uuid)) {
-          // fully expect this to be a uuid, if it's not then something is wrong and walog GC should not proceed!
-          throw new IllegalArgumentException("Expected uuid, but got " + uuid + " from " + entry);
-        }
-
-        Path pathFromNN = nameToFileMap.remove(uuid);
-        if (pathFromNN != null) {
-          status.currentLog.inUse++;
-          sortedWALogs.remove(uuid);
-        }
-
-        count++;
-      }
-    }
-
-    return count;
-  }
-
-  protected int removeReplicationEntries(Map<String,Path> nameToFileMap, Map<String,Path> sortedWALogs, GCStatus status) throws IOException, KeeperException,
-      InterruptedException {
+  protected int removeReplicationEntries(Map<UUID,TServerInstance> candidates) throws IOException, KeeperException, InterruptedException {
     Connector conn;
     try {
       conn = context.getConnector();
-    } catch (AccumuloException | AccumuloSecurityException e) {
-      log.error("Failed to get connector", e);
+      try {
+        final Scanner s = ReplicationTable.getScanner(conn);
+        StatusSection.limit(s);
+        for (Entry<Key,Value> entry : s) {
+          UUID id = path2uuid(new Path(entry.getKey().getRow().toString()));
+          candidates.remove(id);
+          log.info("Ignore closed log " + id + " because it is being replicated");
+        }
+      } catch (ReplicationTableOfflineException ex) {
+        return candidates.size();
+      }
+
+      final Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
+      scanner.fetchColumnFamily(MetadataSchema.ReplicationSection.COLF);
+      scanner.setRange(MetadataSchema.ReplicationSection.getRange());
+      for (Entry<Key,Value> entry : scanner) {
+        Text file = new Text();
+        MetadataSchema.ReplicationSection.getFile(entry.getKey(), file);
+        UUID id = path2uuid(new Path(file.toString()));
+        candidates.remove(id);
+        log.info("Ignore closed log " + id + " because it is being replicated");
+      }
+
+      return candidates.size();
+    } catch (AccumuloException | AccumuloSecurityException | TableNotFoundException e) {
+      log.error("Failed to scan metadata table", e);
       throw new IllegalArgumentException(e);
     }
-
-    int count = 0;
-
-    Iterator<Entry<String,Path>> walIter = nameToFileMap.entrySet().iterator();
-
-    while (walIter.hasNext()) {
-      Entry<String,Path> wal = walIter.next();
-      String fullPath = wal.getValue().toString();
-      if (neededByReplication(conn, fullPath)) {
-        log.debug("Removing WAL from candidate deletion as it is still needed for replication: {}", fullPath);
-        // If we haven't already removed it, check to see if this WAL is
-        // "in use" by replication (needed for replication purposes)
-        status.currentLog.inUse++;
-
-        walIter.remove();
-        sortedWALogs.remove(wal.getKey());
-      } else {
-        log.debug("WAL not needed for replication {}", fullPath);
-      }
-      count++;
-    }
-
-    return count;
   }
 
   /**
-   * Determine if the given WAL is needed for replication
+   * Scans the WAL markers in ZooKeeper. The maps passed in are populated with the log ids and their states.
    *
-   * @param wal
-   *          The full path (URI)
-   * @return True if the WAL is still needed by replication (not a candidate for deletion)
+   * @param logsByServer
+   *          map of tablet server to its log file ids
+   * @return total number of log files
    */
-  protected boolean neededByReplication(Connector conn, String wal) {
-    log.info("Checking replication table for " + wal);
+  private long getCurrent(Map<TServerInstance,Set<UUID>> logsByServer, Map<UUID,Pair<WalState,Path>> logState) throws Exception {
 
-    Iterable<Entry<Key,Value>> iter = getReplicationStatusForFile(conn, wal);
-
-    // TODO Push down this filter to the tserver to only return records
-    // that are not completely replicated and convert this loop into a
-    // `return s.iterator.hasNext()` statement
-    for (Entry<Key,Value> entry : iter) {
-      try {
-        Status status = Status.parseFrom(entry.getValue().get());
-        log.info("Checking if {} is safe for removal with {}", wal, ProtobufUtil.toString(status));
-        if (!StatusUtil.isSafeForRemoval(status)) {
-          return true;
-        }
-      } catch (InvalidProtocolBufferException e) {
-        log.error("Could not deserialize Status protobuf for " + entry.getKey(), e);
+    // get all the unused WALs in zookeeper
+    long result = 0;
+    Map<TServerInstance,List<UUID>> markers = walMarker.getAllMarkers();
+    for (Entry<TServerInstance,List<UUID>> entry : markers.entrySet()) {
+      HashSet<UUID> ids = new HashSet<>(entry.getValue().size());
+      for (UUID id : entry.getValue()) {
+        ids.add(id);
+        logState.put(id, walMarker.state(entry.getKey(), id));
+        result++;
       }
-    }
-
-    return false;
-  }
-
-  protected Iterable<Entry<Key,Value>> getReplicationStatusForFile(Connector conn, String wal) {
-    Scanner metaScanner;
-    try {
-      metaScanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    } catch (TableNotFoundException e) {
-      throw new RuntimeException(e);
-    }
-
-    // Need to add in the replication section prefix
-    metaScanner.setRange(Range.exact(ReplicationSection.getRowPrefix() + wal));
-    // Limit the column family to be sure
-    metaScanner.fetchColumnFamily(ReplicationSection.COLF);
-
-    try {
-      Scanner replScanner = ReplicationTable.getScanner(conn);
-
-      // Scan only the Status records
-      StatusSection.limit(replScanner);
-
-      // Only look for this specific WAL
-      replScanner.setRange(Range.exact(wal));
-
-      return Iterables.concat(metaScanner, replScanner);
-    } catch (ReplicationTableOfflineException e) {
-      // do nothing
-    }
-
-    return metaScanner;
-  }
-
-  @VisibleForTesting
-  int scanServers(Map<Path,String> fileToServerMap, Map<String,Path> nameToFileMap) throws Exception {
-    return scanServers(ServerConstants.getWalDirs(), fileToServerMap, nameToFileMap);
-  }
-
-  /**
-   * Scans write-ahead log directories for logs. The maps passed in are populated with scan information.
-   *
-   * @param walDirs
-   *          write-ahead log directories
-   * @param fileToServerMap
-   *          map of file paths to servers
-   * @param nameToFileMap
-   *          map of file names to paths
-   * @return number of servers located (including those with no logs present)
-   */
-  int scanServers(String[] walDirs, Map<Path,String> fileToServerMap, Map<String,Path> nameToFileMap) throws Exception {
-    Set<String> servers = new HashSet<String>();
-    for (String walDir : walDirs) {
-      Path walRoot = new Path(walDir);
-      FileStatus[] listing = null;
-      try {
-        listing = fs.listStatus(walRoot);
-      } catch (FileNotFoundException e) {
-        // ignore dir
-      }
-
-      if (listing == null)
-        continue;
-      for (FileStatus status : listing) {
-        String server = status.getPath().getName();
-        if (status.isDirectory()) {
-          servers.add(server);
-          for (FileStatus file : fs.listStatus(new Path(walRoot, server))) {
-            if (isUUID(file.getPath().getName())) {
-              fileToServerMap.put(file.getPath(), server);
-              nameToFileMap.put(file.getPath().getName(), file.getPath());
-            } else {
-              log.info("Ignoring file " + file.getPath() + " because it doesn't look like a uuid");
-            }
-          }
-        } else if (isUUID(server)) {
-          // old-style WAL are not under a directory
-          servers.add("");
-          fileToServerMap.put(status.getPath(), "");
-          nameToFileMap.put(server, status.getPath());
-        } else {
-          log.info("Ignoring file " + status.getPath() + " because it doesn't look like a uuid");
-        }
-      }
-    }
-    return servers.size();
-  }
-
-  @VisibleForTesting
-  Map<String,Path> getSortedWALogs() throws IOException {
-    return getSortedWALogs(ServerConstants.getRecoveryDirs());
-  }
-
-  /**
-   * Looks for write-ahead logs in recovery directories.
-   *
-   * @param recoveryDirs
-   *          recovery directories
-   * @return map of log file names to paths
-   */
-  Map<String,Path> getSortedWALogs(String[] recoveryDirs) throws IOException {
-    Map<String,Path> result = new HashMap<String,Path>();
-
-    for (String dir : recoveryDirs) {
-      Path recoveryDir = new Path(dir);
-
-      if (fs.exists(recoveryDir)) {
-        for (FileStatus status : fs.listStatus(recoveryDir)) {
-          String name = status.getPath().getName();
-          if (isUUID(name)) {
-            result.put(name, status.getPath());
-          } else {
-            log.debug("Ignoring file " + status.getPath() + " because it doesn't look like a uuid");
-          }
-        }
-      }
+      logsByServer.put(entry.getKey(), ids);
     }
     return result;
   }
-
-  /**
-   * Checks if a string is a valid UUID.
-   *
-   * @param name
-   *          string to check
-   * @return true if string is a UUID
-   */
-  static boolean isUUID(String name) {
-    if (name == null || name.length() != 36) {
-      return false;
-    }
-    try {
-      UUID.fromString(name);
-      return true;
-    } catch (IllegalArgumentException ex) {
-      return false;
-    }
-  }
-
-  /**
-   * Determine if TServer has been dead long enough to remove associated WALs.
-   * <p>
-   * Uses a map where the key is the address and the value is the time first seen dead. If the address is not in the map, it is added with the current system
-   * nanoTime. When the passed in wait time has elapsed, this method returns true and removes the key and value from the map.
-   *
-   * @param address
-   *          HostAndPort of dead tserver
-   * @param wait
-   *          long value of elapsed millis to wait
-   * @return boolean whether enough time elapsed since the server was first seen as dead.
-   */
-  @VisibleForTesting
-  protected boolean timeToDelete(HostAndPort address, long wait) {
-    // check whether the tserver has been dead long enough
-    Long firstSeen = firstSeenDead.get(address);
-    if (firstSeen != null) {
-      long elapsedTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - firstSeen);
-      log.trace("Elapsed milliseconds since " + address + " first seen dead: " + elapsedTime);
-      return elapsedTime > wait;
-    } else {
-      log.trace("Adding server to firstSeenDead map " + address);
-      firstSeenDead.put(address, System.nanoTime());
-      return false;
-    }
-  }
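The removed `timeToDelete` tracked when each dead server was first observed and only allowed WAL deletion after the configured wait had elapsed. A sketch of that bookkeeping with the clock value injected, so the wait logic can be exercised without sleeping (class and method names here are illustrative, not Accumulo's):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Sketch of the first-seen-dead bookkeeping: record the nano timestamp on the
// first sighting, then report true once the configured wait has elapsed.
public class DeadServerTimer {

    private final Map<String, Long> firstSeenDead = new HashMap<>();

    boolean timeToDelete(String address, long waitMillis, long nowNanos) {
        Long firstSeen = firstSeenDead.get(address);
        if (firstSeen == null) {
            firstSeenDead.put(address, nowNanos); // first sighting: start the clock
            return false;
        }
        long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(nowNanos - firstSeen);
        return elapsedMillis > waitMillis;
    }
}
```

The production version also cleared the entry once deletion was approved or the server came back; this sketch keeps only the timing decision.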
-
-  /**
-   * Method to clear the map used in timeToDelete.
-   * <p>
-   * Useful for testing.
-   */
-  @VisibleForTesting
-  void clearFirstSeenDead() {
-    firstSeenDead.clear();
-  }
-
 }
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectionAlgorithm.java b/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectionAlgorithm.java
index 9f94622..1e15324 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectionAlgorithm.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/GarbageCollectionAlgorithm.java
@@ -81,7 +81,7 @@
     }
 
     if (containsEmpty) {
-      ArrayList<String> tmp = new ArrayList<String>();
+      ArrayList<String> tmp = new ArrayList<>();
       for (String token : tokens) {
         if (!token.equals("")) {
           tmp.add(token);
@@ -112,7 +112,7 @@
 
   private SortedMap<String,String> makeRelative(Collection<String> candidates) {
 
-    SortedMap<String,String> ret = new TreeMap<String,String>();
+    SortedMap<String,String> ret = new TreeMap<>();
 
     for (String candidate : candidates) {
       String relPath;
@@ -243,7 +243,7 @@
   }
 
   private void cleanUpDeletedTableDirs(GarbageCollectionEnvironment gce, SortedMap<String,String> candidateMap) throws IOException {
-    HashSet<String> tableIdsWithDeletes = new HashSet<String>();
+    HashSet<String> tableIdsWithDeletes = new HashSet<>();
 
     // find the table ids that had dirs deleted
     for (String delete : candidateMap.keySet()) {
@@ -305,7 +305,7 @@
 
     boolean outOfMemory = true;
     while (outOfMemory) {
-      List<String> candidates = new ArrayList<String>();
+      List<String> candidates = new ArrayList<>();
 
       outOfMemory = getCandidates(gce, lastCandidate, candidates);
 
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
index da25d55..1593c75 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.gc;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.UnknownHostException;
@@ -73,7 +75,6 @@
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.ServerServices;
 import org.apache.accumulo.core.util.ServerServices.Service;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.volume.Volume;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooLock.LockLossReason;
@@ -91,6 +92,7 @@
 import org.apache.accumulo.server.fs.VolumeUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.apache.accumulo.server.rpc.RpcWrapper;
+import org.apache.accumulo.server.rpc.ServerAddress;
 import org.apache.accumulo.server.rpc.TCredentialsUpdatingWrapper;
 import org.apache.accumulo.server.rpc.TServerUtils;
 import org.apache.accumulo.server.rpc.ThriftServerType;
@@ -269,6 +271,7 @@
 
     @Override
     public Iterator<String> getBlipIterator() throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
+      @SuppressWarnings("resource")
       IsolatedScanner scanner = new IsolatedScanner(getConnector().createScanner(tableName, Authorizations.EMPTY));
 
       scanner.setRange(MetadataSchema.BlipSection.getRange());
@@ -571,7 +574,6 @@
         replSpan.stop();
       }
 
-      // Clean up any unused write-ahead logs
       Span waLogs = Trace.start("walogs");
       try {
         GarbageCollectWriteAheadLogs walogCollector = new GarbageCollectWriteAheadLogs(this, fs, isUsingTrash());
@@ -708,7 +710,7 @@
         return;
       }
       log.debug("Failed to get GC ZooKeeper lock, will retry");
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
   }
 
@@ -717,17 +719,19 @@
     final Processor<Iface> processor;
     if (ThriftServerType.SASL == getThriftServerType()) {
       Iface tcProxy = TCredentialsUpdatingWrapper.service(rpcProxy, getClass(), getConfiguration());
-      processor = new Processor<Iface>(tcProxy);
+      processor = new Processor<>(tcProxy);
     } else {
-      processor = new Processor<Iface>(rpcProxy);
+      processor = new Processor<>(rpcProxy);
     }
-    int port = getConfiguration().getPort(Property.GC_PORT);
+    int[] port = getConfiguration().getPort(Property.GC_PORT);
+    HostAndPort[] addresses = TServerUtils.getHostAndPorts(this.opts.getAddress(), port);
     long maxMessageSize = getConfiguration().getMemoryInBytes(Property.GENERAL_MAX_MESSAGE_SIZE);
-    HostAndPort result = HostAndPort.fromParts(opts.getAddress(), port);
-    log.debug("Starting garbage collector listening on " + result);
     try {
-      return TServerUtils.startTServer(getConfiguration(), result, getThriftServerType(), processor, this.getClass().getSimpleName(), "GC Monitor Service", 2,
-          getConfiguration().getCount(Property.GENERAL_SIMPLETIMER_THREADPOOL_SIZE), 1000, maxMessageSize, getServerSslParams(), getSaslParams(), 0).address;
+      ServerAddress server = TServerUtils.startTServer(getConfiguration(), getThriftServerType(), processor, this.getClass().getSimpleName(),
+          "GC Monitor Service", 2, getConfiguration().getCount(Property.GENERAL_SIMPLETIMER_THREADPOOL_SIZE), 1000, maxMessageSize, getServerSslParams(),
+          getSaslParams(), 0, addresses);
+      log.debug("Starting garbage collector listening on " + server.address);
+      return server.address;
     } catch (Exception ex) {
       // ACCUMULO-3651 Level changed to error and FATAL added to message for slf4j compatibility
       log.error("FATAL:", ex);
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/replication/CloseWriteAheadLogReferences.java b/server/gc/src/main/java/org/apache/accumulo/gc/replication/CloseWriteAheadLogReferences.java
index 78ac4ac..0c09396 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/replication/CloseWriteAheadLogReferences.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/replication/CloseWriteAheadLogReferences.java
@@ -21,7 +21,6 @@
 import java.util.List;
 import java.util.Map.Entry;
 import java.util.Set;
-import java.util.concurrent.ExecutionException;
 
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
@@ -38,20 +37,20 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.rpc.ThriftUtil;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.tabletserver.log.LogEntry;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.trace.Span;
 import org.apache.accumulo.core.trace.Trace;
-import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.core.trace.thrift.TInfo;
 import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalMarkerException;
+import org.apache.accumulo.server.log.WalStateManager.WalState;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
@@ -60,9 +59,6 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.base.Stopwatch;
-import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
-import com.google.common.cache.LoadingCache;
 import com.google.common.net.HostAndPort;
 import com.google.protobuf.InvalidProtocolBufferException;
 
@@ -104,54 +100,23 @@
     }
 
     Span findWalsSpan = Trace.start("findReferencedWals");
-    HashSet<String> referencedWals = null;
+    HashSet<String> closed = null;
     try {
       sw.start();
-      referencedWals = getReferencedWals(conn);
+      closed = getClosedLogs(conn);
     } finally {
       sw.stop();
       findWalsSpan.stop();
     }
 
-    log.info("Found " + referencedWals.size() + " WALs referenced in metadata in " + sw.toString());
-    log.debug("Referenced WALs: " + referencedWals);
+    log.info("Found " + closed.size() + " closed and unreferenced WALs in zookeeper in " + sw.toString());
     sw.reset();
 
-    // ACCUMULO-3320 WALs cannot be closed while a TabletServer may still use it later.
-    //
-    // In addition to the WALs that are actively referenced in the metadata table, tservers can also hold on to a WAL that is not presently referenced by any
-    // tablet. For example, a tablet could MinC which would end in all logs for that tablet being removed. However, if more data was ingested into the table,
-    // the same WAL could be re-used again by that tserver.
-    //
-    // If this code happened to run after the compaction but before the log is again referenced by a tabletserver, we might delete the WAL reference, only to
-    // have it recreated again which causes havoc with the replication status for a table.
-    final TInfo tinfo = Tracer.traceInfo();
-    Set<String> activeWals;
-    Span findActiveWalsSpan = Trace.start("findActiveWals");
-    try {
-      sw.start();
-      activeWals = getActiveWals(tinfo);
-    } finally {
-      sw.stop();
-      findActiveWalsSpan.stop();
-    }
-
-    if (null == activeWals) {
-      log.warn("Could not compute the set of currently active WALs. Not closing any files");
-      return;
-    }
-
-    log.debug("Got active WALs from all tservers " + activeWals);
-
-    referencedWals.addAll(activeWals);
-
-    log.info("Found " + activeWals.size() + " WALs actively in use by TabletServers in " + sw.toString());
-
     Span updateReplicationSpan = Trace.start("updateReplicationTable");
     long recordsClosed = 0;
     try {
       sw.start();
-      recordsClosed = updateReplicationEntries(conn, referencedWals);
+      recordsClosed = updateReplicationEntries(conn, closed);
     } finally {
       sw.stop();
       updateReplicationSpan.stop();
@@ -161,57 +126,28 @@
   }
 
   /**
-   * Construct the set of referenced WALs from the metadata table
+   * Construct the set of closed or unreferenced WALs from zookeeper
    *
    * @param conn
    *          Connector
-   * @return The Set of WALs that are referenced in the metadata table
+   * @return The Set of WALs that are marked closed or unreferenced in zookeeper
    */
-  protected HashSet<String> getReferencedWals(Connector conn) {
-    // Make a bounded cache to alleviate repeatedly creating the same Path object
-    final LoadingCache<String,String> normalizedWalPaths = CacheBuilder.newBuilder().maximumSize(1024).concurrencyLevel(1)
-        .build(new CacheLoader<String,String>() {
+  protected HashSet<String> getClosedLogs(Connector conn) {
+    WalStateManager wals = new WalStateManager(conn.getInstance(), ZooReaderWriter.getInstance());
 
-          @Override
-          public String load(String key) {
-            return new Path(key).toString();
-          }
-
-        });
-
-    HashSet<String> referencedWals = new HashSet<>();
-    BatchScanner bs = null;
+    HashSet<String> result = new HashSet<>();
     try {
-      // TODO Configurable number of threads
-      bs = conn.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 4);
-      bs.setRanges(Collections.singleton(TabletsSection.getRange()));
-      bs.fetchColumnFamily(LogColumnFamily.NAME);
-
-      // For each log key/value in the metadata table
-      for (Entry<Key,Value> entry : bs) {
-        // The value may contain multiple WALs
-        LogEntry logEntry = LogEntry.fromKeyValue(entry.getKey(), entry.getValue());
-
-        log.debug("Found WALs for table(" + logEntry.extent.getTableId() + "): " + logEntry.logSet);
-
-        // Normalize each log file (using Path) and add it to the set
-        for (String logFile : logEntry.logSet) {
-          referencedWals.add(normalizedWalPaths.get(logFile));
+      for (Entry<Path,WalState> entry : wals.getAllState().entrySet()) {
+        if (entry.getValue() == WalState.UNREFERENCED || entry.getValue() == WalState.CLOSED) {
+          Path path = entry.getKey();
+          log.debug("Found closed WAL " + path.toString());
+          result.add(path.toString());
         }
       }
-    } catch (TableNotFoundException e) {
-      // uhhhh
+    } catch (WalMarkerException e) {
       throw new RuntimeException(e);
-    } catch (ExecutionException e) {
-      log.error("Failed to normalize WAL file path", e);
-      throw new RuntimeException(e);
-    } finally {
-      if (null != bs) {
-        bs.close();
-      }
     }
-
-    return referencedWals;
+    return result;
   }
 
   /**
@@ -219,10 +155,10 @@
    *
    * @param conn
    *          Connector
-   * @param referencedWals
-   *          {@link Set} of paths to WALs that are referenced in the tablets section of the metadata table
+   * @param closedWals
+   *          {@link Set} of paths to WALs that are marked as closed or unreferenced in zookeeper
    */
-  protected long updateReplicationEntries(Connector conn, Set<String> referencedWals) {
+  protected long updateReplicationEntries(Connector conn, Set<String> closedWals) {
     BatchScanner bs = null;
     BatchWriter bw = null;
     long recordsClosed = 0;
@@ -245,11 +181,11 @@
         // Ignore things that aren't completely replicated as we can't delete those anyways
         MetadataSchema.ReplicationSection.getFile(entry.getKey(), replFileText);
         String replFile = replFileText.toString();
-        boolean isReferenced = referencedWals.contains(replFile);
+        boolean isClosed = closedWals.contains(replFile);
 
         // We only want to clean up WALs (which is everything but rfiles) and only when
-        // metadata doesn't have a reference to the given WAL
+        // zookeeper marks the given WAL as closed or unreferenced
-        if (!status.getClosed() && !replFile.endsWith(RFILE_SUFFIX) && !isReferenced) {
+        if (!status.getClosed() && !replFile.endsWith(RFILE_SUFFIX) && isClosed) {
           try {
             closeWal(bw, entry.getKey());
             recordsClosed++;
@@ -321,39 +257,6 @@
   }
 
   /**
-   * Fetch the set of WALs in use by tabletservers
-   *
-   * @return Set of WALs in use by tservers, null if they cannot be computed for some reason
-   */
-  protected Set<String> getActiveWals(TInfo tinfo) {
-    List<String> tservers = getActiveTservers(tinfo);
-
-    // Compute the total set of WALs used by tservers
-    Set<String> walogs = null;
-    if (null != tservers) {
-      walogs = new HashSet<String>();
-      // TODO If we have a lot of tservers, this might start to take a fair amount of time
-      // Consider adding a threadpool to parallelize the requests.
-      // Alternatively, we might have to move to a solution that doesn't involve tserver RPC
-      for (String tserver : tservers) {
-        HostAndPort address = HostAndPort.fromString(tserver);
-        List<String> activeWalsForServer = getActiveWalsForServer(tinfo, address);
-        if (null == activeWalsForServer) {
-          log.debug("Could not fetch active wals from " + address);
-          return null;
-        }
-        log.debug("Got raw active wals for " + address + ", " + activeWalsForServer);
-        for (String activeWal : activeWalsForServer) {
-          // Normalize the WAL URI
-          walogs.add(new Path(activeWal).toString());
-        }
-      }
-    }
-
-    return walogs;
-  }
-
-  /**
    * Get the active tabletservers as seen by the master.
    *
    * @return The active tabletservers, null if they can't be computed.
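The change above replaces the metadata scan plus tserver RPC round-trip with a single pass over zookeeper WAL markers: `getClosedLogs` keeps only `UNREFERENCED` and `CLOSED` entries, and `updateReplicationEntries` flips its guard from "not referenced" to "known closed". A self-contained sketch of that filter and predicate, with plain strings standing in for `Path`, a local enum mirroring `WalStateManager.WalState`, and the `".rf"` suffix assumed as the rfile extension:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ClosedWalFilter {
  enum WalState { OPEN, UNREFERENCED, CLOSED }

  // Keep only WALs whose marker says they will not be written to again,
  // mirroring getClosedLogs over the zookeeper state map.
  static Set<String> closedLogs(Map<String,WalState> markers) {
    Set<String> result = new HashSet<>();
    for (Map.Entry<String,WalState> e : markers.entrySet()) {
      if (e.getValue() == WalState.UNREFERENCED || e.getValue() == WalState.CLOSED) {
        result.add(e.getKey());
      }
    }
    return result;
  }

  // The updated guard in updateReplicationEntries: only close a replication
  // record for a WAL (not an rfile) that zookeeper marks closed/unreferenced.
  static boolean shouldClose(boolean statusClosed, String file, Set<String> closedWals) {
    return !statusClosed && !file.endsWith(".rf") && closedWals.contains(file);
  }

  public static void main(String[] args) {
    Map<String,WalState> markers = new HashMap<>();
    markers.put("/wals/a", WalState.OPEN);
    markers.put("/wals/b", WalState.CLOSED);
    markers.put("/wals/c", WalState.UNREFERENCED);
    Set<String> closed = closedLogs(markers);
    if (closed.size() != 2 || closed.contains("/wals/a"))
      throw new AssertionError(closed);
    // Closed WAL with an open status record: close it.
    if (!shouldClose(false, "/wals/b", closed))
      throw new AssertionError();
    // Still-open WAL: leave the replication record alone.
    if (shouldClose(false, "/wals/a", closed))
      throw new AssertionError();
    System.out.println("ok");
  }
}
```

Note the polarity flip: the old code retained anything a tserver might still reference, while the new code acts only on positive evidence of closure, which avoids the re-use race the removed ACCUMULO-3320 comment described.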
diff --git a/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogsTest.java b/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogsTest.java
index bc9fca3..4665836 100644
--- a/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogsTest.java
+++ b/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectWriteAheadLogsTest.java
@@ -16,856 +16,222 @@
  */
 package org.apache.accumulo.gc;
 
-import com.google.common.net.HostAndPort;
-
-import static org.easymock.EasyMock.expect;
-import static org.easymock.EasyMock.replay;
-
-import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.Iterator;
-import java.util.LinkedList;
-import java.util.ArrayList;
-import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.UUID;
 
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.conf.ConfigurationCopy;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.conf.SiteConfiguration;
+import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
-import org.apache.accumulo.core.protobuf.ProtobufUtil;
-import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
-import org.apache.accumulo.core.replication.ReplicationTable;
-import org.apache.accumulo.server.AccumuloServerContext;
-import org.apache.accumulo.server.conf.ServerConfigurationFactory;
-import org.apache.accumulo.server.fs.VolumeManager;
-import org.apache.accumulo.server.replication.StatusUtil;
-import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.hadoop.io.Text;
-import org.easymock.EasyMock;
-import org.easymock.IAnswer;
-import org.junit.Assert;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.gc.thrift.GCStatus;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.Path;
-
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TestName;
-
-import com.google.common.collect.Iterables;
-import com.google.common.collect.Maps;
-
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.gc.thrift.GcCycleStats;
-import org.apache.accumulo.server.fs.VolumeManagerImpl;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.replication.ReplicationSchema;
+import org.apache.accumulo.core.replication.ReplicationTable;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.accumulo.server.fs.VolumeManager;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalState;
+import org.apache.accumulo.server.master.LiveTServerSet;
+import org.apache.accumulo.server.master.state.TServerInstance;
+import org.apache.accumulo.server.master.state.TabletLocationState;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
 import org.apache.zookeeper.KeeperException;
-
-import java.io.File;
-import java.util.Arrays;
-import java.util.LinkedHashMap;
-import java.util.Map.Entry;
-
-import static org.easymock.EasyMock.createMock;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertSame;
-import static org.junit.Assert.assertTrue;
-import static java.lang.Thread.sleep;
-
-import java.io.FileOutputStream;
-
-import org.apache.commons.io.FileUtils;
-
-import java.util.concurrent.TimeUnit;
+import org.easymock.EasyMock;
+import org.junit.Test;
 
 public class GarbageCollectWriteAheadLogsTest {
-  private static final long BLOCK_SIZE = 64000000L;
 
-  private static final Path DIR_1_PATH = new Path("/dir1");
-  private static final Path DIR_2_PATH = new Path("/dir2");
-  private static final Path DIR_3_PATH = new Path("/dir3");
-  private static final String UUID1 = UUID.randomUUID().toString();
-  private static final String UUID2 = UUID.randomUUID().toString();
-  private static final String UUID3 = UUID.randomUUID().toString();
-
-  private Instance instance;
-  private AccumuloConfiguration systemConfig;
-  private VolumeManager volMgr;
-  private GarbageCollectWriteAheadLogs gcwal;
-  private long modTime;
-
-  @Rule
-  public TestName testName = new TestName();
-
-  @Before
-  public void setUp() throws Exception {
-    SiteConfiguration siteConfig = EasyMock.createMock(SiteConfiguration.class);
-    instance = createMock(Instance.class);
-    expect(instance.getInstanceID()).andReturn("mock").anyTimes();
-    expect(instance.getZooKeepers()).andReturn("localhost").anyTimes();
-    expect(instance.getZooKeepersSessionTimeOut()).andReturn(30000).anyTimes();
-    systemConfig = new ConfigurationCopy(new HashMap<String,String>());
-    volMgr = createMock(VolumeManager.class);
-    ServerConfigurationFactory factory = createMock(ServerConfigurationFactory.class);
-    expect(factory.getConfiguration()).andReturn(systemConfig).anyTimes();
-    expect(factory.getInstance()).andReturn(instance).anyTimes();
-    expect(factory.getSiteConfiguration()).andReturn(siteConfig).anyTimes();
-
-    // Just make the SiteConfiguration delegate to our AccumuloConfiguration
-    // Presently, we only need get(Property) and iterator().
-    EasyMock.expect(siteConfig.get(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<String>() {
-      @Override
-      public String answer() {
-        Object[] args = EasyMock.getCurrentArguments();
-        return systemConfig.get((Property) args[0]);
-      }
-    }).anyTimes();
-    EasyMock.expect(siteConfig.getBoolean(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<Boolean>() {
-      @Override
-      public Boolean answer() {
-        Object[] args = EasyMock.getCurrentArguments();
-        return systemConfig.getBoolean((Property) args[0]);
-      }
-    }).anyTimes();
-
-    EasyMock.expect(siteConfig.iterator()).andAnswer(new IAnswer<Iterator<Entry<String,String>>>() {
-      @Override
-      public Iterator<Entry<String,String>> answer() {
-        return systemConfig.iterator();
-      }
-    }).anyTimes();
-
-    replay(instance, factory, siteConfig);
-    AccumuloServerContext context = new AccumuloServerContext(factory);
-    gcwal = new GarbageCollectWriteAheadLogs(context, volMgr, false);
-    modTime = System.currentTimeMillis();
-  }
-
-  @Test
-  public void testGetters() {
-    assertSame(instance, gcwal.getInstance());
-    assertSame(volMgr, gcwal.getVolumeManager());
-    assertFalse(gcwal.isUsingTrash());
-  }
-
-  @Test
-  public void testPathsToStrings() {
-    ArrayList<Path> paths = new ArrayList<Path>();
-    paths.add(new Path(DIR_1_PATH, "file1"));
-    paths.add(DIR_2_PATH);
-    paths.add(new Path(DIR_3_PATH, "file3"));
-    List<String> strings = GarbageCollectWriteAheadLogs.paths2strings(paths);
-    int len = 3;
-    assertEquals(len, strings.size());
-    for (int i = 0; i < len; i++) {
-      assertEquals(paths.get(i).toString(), strings.get(i));
-    }
-  }
-
-  @Test
-  public void testMapServersToFiles() {
-    // @formatter:off
-    /*
-     * Test fileToServerMap:
-     * /dir1/server1/uuid1 -> server1 (new-style)
-     * /dir1/uuid2 -> "" (old-style)
-     * /dir3/server3/uuid3 -> server3 (new-style)
-     */
-    // @formatter:on
-    Map<Path,String> fileToServerMap = new java.util.HashMap<Path,String>();
-    Path path1 = new Path(new Path(DIR_1_PATH, "server1"), UUID1);
-    fileToServerMap.put(path1, "server1"); // new-style
-    Path path2 = new Path(DIR_1_PATH, UUID2);
-    fileToServerMap.put(path2, ""); // old-style
-    Path path3 = new Path(new Path(DIR_3_PATH, "server3"), UUID3);
-    fileToServerMap.put(path3, "server3"); // old-style
-    // @formatter:off
-    /*
-     * Test nameToFileMap:
-     * uuid1 -> /dir1/server1/uuid1
-     * uuid3 -> /dir3/server3/uuid3
-     */
-    // @formatter:on
-    Map<String,Path> nameToFileMap = new java.util.HashMap<String,Path>();
-    nameToFileMap.put(UUID1, path1);
-    nameToFileMap.put(UUID3, path3);
-
-    // @formatter:off
-    /*
-     * Expected map:
-     * server1 -> [ /dir1/server1/uuid1 ]
-     * server3 -> [ /dir3/server3/uuid3 ]
-     */
-    // @formatter:on
-    Map<String,ArrayList<Path>> result = GarbageCollectWriteAheadLogs.mapServersToFiles(fileToServerMap, nameToFileMap);
-    assertEquals(2, result.size());
-    ArrayList<Path> list1 = result.get("server1");
-    assertEquals(1, list1.size());
-    assertTrue(list1.contains(path1));
-    ArrayList<Path> list3 = result.get("server3");
-    assertEquals(1, list3.size());
-    assertTrue(list3.contains(path3));
-  }
-
-  private FileStatus makeFileStatus(int size, Path path) {
-    boolean isDir = (size == 0);
-    return new FileStatus(size, isDir, 3, BLOCK_SIZE, modTime, path);
-  }
-
-  private void mockListStatus(Path dir, FileStatus... fileStatuses) throws Exception {
-    expect(volMgr.listStatus(dir)).andReturn(fileStatuses);
-  }
-
-  @Test
-  public void testScanServers_NewStyle() throws Exception {
-    String[] walDirs = new String[] {"/dir1", "/dir2", "/dir3"};
-    // @formatter:off
-    /*
-     * Test directory layout:
-     * /dir1/
-     *   server1/
-     *     uuid1
-     *     file2
-     *   subdir2/
-     * /dir2/ missing
-     * /dir3/
-     *   server3/
-     *     uuid3
-     */
-    // @formatter:on
-    Path serverDir1Path = new Path(DIR_1_PATH, "server1");
-    FileStatus serverDir1 = makeFileStatus(0, serverDir1Path);
-    Path subDir2Path = new Path(DIR_1_PATH, "subdir2");
-    FileStatus serverDir2 = makeFileStatus(0, subDir2Path);
-    mockListStatus(DIR_1_PATH, serverDir1, serverDir2);
-    Path path1 = new Path(serverDir1Path, UUID1);
-    FileStatus file1 = makeFileStatus(100, path1);
-    FileStatus file2 = makeFileStatus(200, new Path(serverDir1Path, "file2"));
-    mockListStatus(serverDir1Path, file1, file2);
-    mockListStatus(subDir2Path);
-    expect(volMgr.listStatus(DIR_2_PATH)).andThrow(new FileNotFoundException());
-    Path serverDir3Path = new Path(DIR_3_PATH, "server3");
-    FileStatus serverDir3 = makeFileStatus(0, serverDir3Path);
-    mockListStatus(DIR_3_PATH, serverDir3);
-    Path path3 = new Path(serverDir3Path, UUID3);
-    FileStatus file3 = makeFileStatus(300, path3);
-    mockListStatus(serverDir3Path, file3);
-    replay(volMgr);
-
-    Map<Path,String> fileToServerMap = new java.util.HashMap<Path,String>();
-    Map<String,Path> nameToFileMap = new java.util.HashMap<String,Path>();
-    int count = gcwal.scanServers(walDirs, fileToServerMap, nameToFileMap);
-    assertEquals(3, count);
-    // @formatter:off
-    /*
-     * Expected fileToServerMap:
-     * /dir1/server1/uuid1 -> server1
-     * /dir3/server3/uuid3 -> server3
-     */
-    // @formatter:on
-    assertEquals(2, fileToServerMap.size());
-    assertEquals("server1", fileToServerMap.get(path1));
-    assertEquals("server3", fileToServerMap.get(path3));
-    // @formatter:off
-    /*
-     * Expected nameToFileMap:
-     * uuid1 -> /dir1/server1/uuid1
-     * uuid3 -> /dir3/server3/uuid3
-     */
-    // @formatter:on
-    assertEquals(2, nameToFileMap.size());
-    assertEquals(path1, nameToFileMap.get(UUID1));
-    assertEquals(path3, nameToFileMap.get(UUID3));
-  }
-
-  @Test
-  public void testScanServers_OldStyle() throws Exception {
-    // @formatter:off
-    /*
-     * Test directory layout:
-     * /dir1/
-     *   uuid1
-     * /dir3/
-     *   uuid3
-     */
-    // @formatter:on
-    String[] walDirs = new String[] {"/dir1", "/dir3"};
-    Path serverFile1Path = new Path(DIR_1_PATH, UUID1);
-    FileStatus serverFile1 = makeFileStatus(100, serverFile1Path);
-    mockListStatus(DIR_1_PATH, serverFile1);
-    Path serverFile3Path = new Path(DIR_3_PATH, UUID3);
-    FileStatus serverFile3 = makeFileStatus(300, serverFile3Path);
-    mockListStatus(DIR_3_PATH, serverFile3);
-    replay(volMgr);
-
-    Map<Path,String> fileToServerMap = new java.util.HashMap<Path,String>();
-    Map<String,Path> nameToFileMap = new java.util.HashMap<String,Path>();
-    int count = gcwal.scanServers(walDirs, fileToServerMap, nameToFileMap);
-    /*
-     * Expect only a single server, the non-server entry for upgrade WALs
-     */
-    assertEquals(1, count);
-    // @formatter:off
-    /*
-     * Expected fileToServerMap:
-     * /dir1/uuid1 -> ""
-     * /dir3/uuid3 -> ""
-     */
-    // @formatter:on
-    assertEquals(2, fileToServerMap.size());
-    assertEquals("", fileToServerMap.get(serverFile1Path));
-    assertEquals("", fileToServerMap.get(serverFile3Path));
-    // @formatter:off
-    /*
-     * Expected nameToFileMap:
-     * uuid1 -> /dir1/uuid1
-     * uuid3 -> /dir3/uuid3
-     */
-    // @formatter:on
-    assertEquals(2, nameToFileMap.size());
-    assertEquals(serverFile1Path, nameToFileMap.get(UUID1));
-    assertEquals(serverFile3Path, nameToFileMap.get(UUID3));
-  }
-
-  @Test
-  public void testGetSortedWALogs() throws Exception {
-    String[] recoveryDirs = new String[] {"/dir1", "/dir2", "/dir3"};
-    // @formatter:off
-    /*
-     * Test directory layout:
-     * /dir1/
-     *   uuid1
-     *   file2
-     * /dir2/ missing
-     * /dir3/
-     *   uuid3
-     */
-    // @formatter:on
-    expect(volMgr.exists(DIR_1_PATH)).andReturn(true);
-    expect(volMgr.exists(DIR_2_PATH)).andReturn(false);
-    expect(volMgr.exists(DIR_3_PATH)).andReturn(true);
-    Path path1 = new Path(DIR_1_PATH, UUID1);
-    FileStatus file1 = makeFileStatus(100, path1);
-    FileStatus file2 = makeFileStatus(200, new Path(DIR_1_PATH, "file2"));
-    mockListStatus(DIR_1_PATH, file1, file2);
-    Path path3 = new Path(DIR_3_PATH, UUID3);
-    FileStatus file3 = makeFileStatus(300, path3);
-    mockListStatus(DIR_3_PATH, file3);
-    replay(volMgr);
-
-    Map<String,Path> sortedWalogs = gcwal.getSortedWALogs(recoveryDirs);
-    // @formatter:off
-    /*
-     * Expected map:
-     * uuid1 -> /dir1/uuid1
-     * uuid3 -> /dir3/uuid3
-     */
-    // @formatter:on
-    assertEquals(2, sortedWalogs.size());
-    assertEquals(path1, sortedWalogs.get(UUID1));
-    assertEquals(path3, sortedWalogs.get(UUID3));
-  }
-
-  @Test
-  public void testIsUUID() {
-    assertTrue(GarbageCollectWriteAheadLogs.isUUID(UUID.randomUUID().toString()));
-    assertFalse(GarbageCollectWriteAheadLogs.isUUID("foo"));
-    assertFalse(GarbageCollectWriteAheadLogs.isUUID("0" + UUID.randomUUID().toString()));
-    assertFalse(GarbageCollectWriteAheadLogs.isUUID(null));
-  }
-
-  // It was easier to do this than get the mocking working for me
-  private static class ReplicationGCWAL extends GarbageCollectWriteAheadLogs {
-
-    private List<Entry<Key,Value>> replData;
-
-    ReplicationGCWAL(AccumuloServerContext context, VolumeManager fs, boolean useTrash, List<Entry<Key,Value>> replData) throws IOException {
-      super(context, fs, useTrash);
-      this.replData = replData;
-    }
-
-    @Override
-    protected Iterable<Entry<Key,Value>> getReplicationStatusForFile(Connector conn, String wal) {
-      return this.replData;
-    }
-  }
-
-  @Test
-  public void replicationEntriesAffectGC() throws Exception {
-    String file1 = UUID.randomUUID().toString(), file2 = UUID.randomUUID().toString();
-    Connector conn = createMock(Connector.class);
-
-    // Write a Status record which should prevent file1 from being deleted
-    LinkedList<Entry<Key,Value>> replData = new LinkedList<>();
-    replData.add(Maps.immutableEntry(new Key("/wals/" + file1, StatusSection.NAME.toString(), "1"), StatusUtil.fileCreatedValue(System.currentTimeMillis())));
-
-    ReplicationGCWAL replGC = new ReplicationGCWAL(null, volMgr, false, replData);
-
-    replay(conn);
-
-    // Open (not-closed) file must be retained
-    assertTrue(replGC.neededByReplication(conn, "/wals/" + file1));
-
-    // No replication data, not needed
-    replData.clear();
-    assertFalse(replGC.neededByReplication(conn, "/wals/" + file2));
-
-    // The file is closed but not replicated, must be retained
-    replData.add(Maps.immutableEntry(new Key("/wals/" + file1, StatusSection.NAME.toString(), "1"), StatusUtil.fileClosedValue()));
-    assertTrue(replGC.neededByReplication(conn, "/wals/" + file1));
-
-    // File is closed and fully replicated, can be deleted
-    replData.clear();
-    replData.add(Maps.immutableEntry(new Key("/wals/" + file1, StatusSection.NAME.toString(), "1"),
-        ProtobufUtil.toValue(Status.newBuilder().setInfiniteEnd(true).setBegin(Long.MAX_VALUE).setClosed(true).build())));
-    assertFalse(replGC.neededByReplication(conn, "/wals/" + file1));
-  }
-
-  @Test
-  public void removeReplicationEntries() throws Exception {
-    String file1 = UUID.randomUUID().toString(), file2 = UUID.randomUUID().toString();
-
-    Instance inst = new MockInstance(testName.getMethodName());
-    AccumuloServerContext context = new AccumuloServerContext(new ServerConfigurationFactory(inst));
-
-    GarbageCollectWriteAheadLogs gcWALs = new GarbageCollectWriteAheadLogs(context, volMgr, false);
-
-    long file1CreateTime = System.currentTimeMillis();
-    long file2CreateTime = file1CreateTime + 50;
-    BatchWriter bw = ReplicationTable.getBatchWriter(context.getConnector());
-    Mutation m = new Mutation("/wals/" + file1);
-    StatusSection.add(m, new Text("1"), StatusUtil.fileCreatedValue(file1CreateTime));
-    bw.addMutation(m);
-    m = new Mutation("/wals/" + file2);
-    StatusSection.add(m, new Text("1"), StatusUtil.fileCreatedValue(file2CreateTime));
-    bw.addMutation(m);
-
-    // These WALs are potential candidates for deletion from fs
-    Map<String,Path> nameToFileMap = new HashMap<>();
-    nameToFileMap.put(file1, new Path("/wals/" + file1));
-    nameToFileMap.put(file2, new Path("/wals/" + file2));
-
-    Map<String,Path> sortedWALogs = Collections.emptyMap();
-
-    // Make the GCStatus and GcCycleStats
-    GCStatus status = new GCStatus();
-    GcCycleStats cycleStats = new GcCycleStats();
-    status.currentLog = cycleStats;
-
-    // We should iterate over two entries
-    Assert.assertEquals(2, gcWALs.removeReplicationEntries(nameToFileMap, sortedWALogs, status));
-
-    // We should have noted that two files were still in use
-    Assert.assertEquals(2l, cycleStats.inUse);
-
-    // Both should have been deleted
-    Assert.assertEquals(0, nameToFileMap.size());
-  }
-
-  @Test
-  public void replicationEntriesOnlyInMetaPreventGC() throws Exception {
-    String file1 = UUID.randomUUID().toString(), file2 = UUID.randomUUID().toString();
-
-    Instance inst = new MockInstance(testName.getMethodName());
-    AccumuloServerContext context = new AccumuloServerContext(new ServerConfigurationFactory(inst));
-
-    Connector conn = context.getConnector();
-
-    GarbageCollectWriteAheadLogs gcWALs = new GarbageCollectWriteAheadLogs(context, volMgr, false);
-
-    long file1CreateTime = System.currentTimeMillis();
-    long file2CreateTime = file1CreateTime + 50;
-    // Write some records to the metadata table, we haven't yet written status records to the replication table
-    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    Mutation m = new Mutation(ReplicationSection.getRowPrefix() + "/wals/" + file1);
-    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(file1CreateTime));
-    bw.addMutation(m);
-
-    m = new Mutation(ReplicationSection.getRowPrefix() + "/wals/" + file2);
-    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(file2CreateTime));
-    bw.addMutation(m);
-
-    // These WALs are potential candidates for deletion from fs
-    Map<String,Path> nameToFileMap = new HashMap<>();
-    nameToFileMap.put(file1, new Path("/wals/" + file1));
-    nameToFileMap.put(file2, new Path("/wals/" + file2));
-
-    Map<String,Path> sortedWALogs = Collections.emptyMap();
-
-    // Make the GCStatus and GcCycleStats objects
-    GCStatus status = new GCStatus();
-    GcCycleStats cycleStats = new GcCycleStats();
-    status.currentLog = cycleStats;
-
-    // We should iterate over two entries
-    Assert.assertEquals(2, gcWALs.removeReplicationEntries(nameToFileMap, sortedWALogs, status));
-
-    // We should have noted that two files were still in use
-    Assert.assertEquals(2l, cycleStats.inUse);
-
-    // Both should have been deleted
-    Assert.assertEquals(0, nameToFileMap.size());
-  }
-
-  @Test
-  public void noReplicationTableDoesntLimitMetatdataResults() throws Exception {
-    Instance inst = new MockInstance(testName.getMethodName());
-    AccumuloServerContext context = new AccumuloServerContext(new ServerConfigurationFactory(inst));
-    Connector conn = context.getConnector();
-
-    String wal = "hdfs://localhost:8020/accumulo/wal/tserver+port/123456-1234-1234-12345678";
-    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    Mutation m = new Mutation(ReplicationSection.getRowPrefix() + wal);
-    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
-    bw.addMutation(m);
-    bw.close();
-
-    GarbageCollectWriteAheadLogs gcWALs = new GarbageCollectWriteAheadLogs(context, volMgr, false);
-
-    Iterable<Entry<Key,Value>> data = gcWALs.getReplicationStatusForFile(conn, wal);
-    Entry<Key,Value> entry = Iterables.getOnlyElement(data);
-
-    Assert.assertEquals(ReplicationSection.getRowPrefix() + wal, entry.getKey().getRow().toString());
-  }
-
-  @Test
-  public void fetchesReplicationEntriesFromMetadataAndReplicationTables() throws Exception {
-    Instance inst = new MockInstance(testName.getMethodName());
-    AccumuloServerContext context = new AccumuloServerContext(new ServerConfigurationFactory(inst));
-    Connector conn = context.getConnector();
-
-    long walCreateTime = System.currentTimeMillis();
-    String wal = "hdfs://localhost:8020/accumulo/wal/tserver+port/123456-1234-1234-12345678";
-    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    Mutation m = new Mutation(ReplicationSection.getRowPrefix() + wal);
-    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(walCreateTime));
-    bw.addMutation(m);
-    bw.close();
-
-    bw = ReplicationTable.getBatchWriter(conn);
-    m = new Mutation(wal);
-    StatusSection.add(m, new Text("1"), StatusUtil.fileCreatedValue(walCreateTime));
-    bw.addMutation(m);
-    bw.close();
-
-    GarbageCollectWriteAheadLogs gcWALs = new GarbageCollectWriteAheadLogs(context, volMgr, false);
-
-    Iterable<Entry<Key,Value>> iter = gcWALs.getReplicationStatusForFile(conn, wal);
-    Map<Key,Value> data = new HashMap<>();
-    for (Entry<Key,Value> e : iter) {
-      data.put(e.getKey(), e.getValue());
-    }
-
-    Assert.assertEquals(2, data.size());
-
-    // Should get one element from each table (metadata and replication)
-    for (Key k : data.keySet()) {
-      String row = k.getRow().toString();
-      if (row.startsWith(ReplicationSection.getRowPrefix())) {
-        Assert.assertTrue(row.endsWith(wal));
-      } else {
-        Assert.assertEquals(wal, row);
-      }
-    }
-  }
-
-  @Test
-  public void testTimeToDeleteTrue() throws InterruptedException {
-    HostAndPort address = HostAndPort.fromString("tserver1:9998");
-    long wait = AccumuloConfiguration.getTimeInMillis("1s");
-    gcwal.clearFirstSeenDead();
-    assertFalse("First call should be false and should store the first seen time", gcwal.timeToDelete(address, wait));
-    sleep(wait * 2);
-    assertTrue(gcwal.timeToDelete(address, wait));
-  }
-
-  @Test
-  public void testTimeToDeleteFalse() {
-    HostAndPort address = HostAndPort.fromString("tserver1:9998");
-    long wait = AccumuloConfiguration.getTimeInMillis("1h");
-    long t1, t2;
-    boolean ttd;
-    do {
-      t1 = System.nanoTime();
-      gcwal.clearFirstSeenDead();
-      assertFalse("First call should be false and should store the first seen time", gcwal.timeToDelete(address, wait));
-      ttd = gcwal.timeToDelete(address, wait);
-      t2 = System.nanoTime();
-    } while (TimeUnit.NANOSECONDS.toMillis(t2 - t1) > (wait / 2)); // as long as it took less than half of the configured wait
-
-    assertFalse(ttd);
-  }
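The retry loop in `testTimeToDeleteFalse` above guards against slow machines: the pair of `timeToDelete` calls is only trusted if it completed in under half the configured wait window, otherwise the whole attempt is discarded and retried. The guard in isolation looks roughly like this (a minimal sketch with illustrative names, not part of the Accumulo API):

```java
import java.util.concurrent.TimeUnit;

public class TimingGuard {
    // Runs the check repeatedly until one attempt finishes fast enough
    // to be trusted, then returns the result of that fast attempt.
    static boolean runUntilFast(long waitMillis, java.util.function.BooleanSupplier check) {
        long t1, t2;
        boolean result;
        do {
            t1 = System.nanoTime();
            result = check.getAsBoolean();
            t2 = System.nanoTime();
            // Retry while the attempt took more than half the configured wait,
            // mirroring the loop condition in testTimeToDeleteFalse.
        } while (TimeUnit.NANOSECONDS.toMillis(t2 - t1) > (waitMillis / 2));
        return result;
    }

    public static void main(String[] args) {
        // A trivially fast check always completes within the window.
        System.out.println(runUntilFast(1000, () -> false)); // prints false
    }
}
```

This makes the assertion immune to GC pauses or scheduler hiccups: a slow iteration is simply thrown away rather than producing a flaky failure.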
-
-  @Test
-  public void testTimeToDeleteWithNullAddress() {
-    assertFalse(gcwal.timeToDelete(null, 123l));
-  }
-
-  /**
-   * Wrapper class with some helper methods
-   * <p>
-   * Just a wrapper around a LinkedHashMap that store method name and argument information. Also includes some convenience methods to make usage cleaner.
-   */
-  class MethodCalls {
-
-    private LinkedHashMap<String,List<Object>> mapWrapper;
-
-    public MethodCalls() {
-      mapWrapper = new LinkedHashMap<String,List<Object>>();
-    }
-
-    public void put(String methodName, Object... args) {
-      mapWrapper.put(methodName, Arrays.asList(args));
-    }
-
-    public int size() {
-      return mapWrapper.size();
-    }
-
-    public boolean hasOneEntry() {
-      return size() == 1;
-    }
-
-    public Map.Entry<String,List<Object>> getFirstEntry() {
-      return mapWrapper.entrySet().iterator().next();
-    }
-
-    public String getFirstEntryMethod() {
-      return getFirstEntry().getKey();
-    }
-
-    public List<Object> getFirstEntryArgs() {
-      return getFirstEntry().getValue();
-    }
-
-    public Object getFirstEntryArg(int number) {
-      return getFirstEntryArgs().get(number);
-    }
-  }
-
-  /**
-   * Partial mock of the GarbageCollectWriteAheadLogs for testing the removeFile method
-   * <p>
-   * There is a map named methodCalls that can be used to assert parameters on methods called inside the removeFile method
-   */
-  class GCWALPartialMock extends GarbageCollectWriteAheadLogs {
-
-    private boolean holdsLockBool = false;
-
-    public GCWALPartialMock(AccumuloServerContext ctx, VolumeManager vm, boolean useTrash, boolean holdLock) throws IOException {
-      super(ctx, vm, useTrash);
-      this.holdsLockBool = holdLock;
-    }
-
-    public MethodCalls methodCalls = new MethodCalls();
-
-    @Override
-    boolean holdsLock(HostAndPort addr) {
-      return holdsLockBool;
-    }
-
-    @Override
-    void removeWALfromDownTserver(HostAndPort address, AccumuloConfiguration conf, Entry<String,ArrayList<Path>> entry, final GCStatus status) {
-      methodCalls.put("removeWALFromDownTserver", address, conf, entry, status);
-    }
-
-    @Override
-    void askTserverToRemoveWAL(HostAndPort address, AccumuloConfiguration conf, Entry<String,ArrayList<Path>> entry, final GCStatus status) {
-      methodCalls.put("askTserverToRemoveWAL", address, conf, entry, status);
-    }
-
-    @Override
-    void removeOldStyleWAL(Entry<String,ArrayList<Path>> entry, final GCStatus status) {
-      methodCalls.put("removeOldStyleWAL", entry, status);
-    }
-
-    @Override
-    void removeSortedWAL(Path swalog) {
-      methodCalls.put("removeSortedWAL", swalog);
-    }
-  }
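The `MethodCalls`/`GCWALPartialMock` pair being deleted here is a hand-rolled recording stub: overridden methods log their name and arguments instead of doing real work, and the test asserts on the log. The pattern reduces to the following (a sketch with hypothetical names; note that, like the original, a map keyed by method name only retains the latest call per method):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;

public class RecordingStub {
    // Ordered record of method name -> argument list, as in MethodCalls.
    final LinkedHashMap<String, List<Object>> calls = new LinkedHashMap<>();

    void record(String method, Object... args) {
        calls.put(method, Arrays.asList(args));
    }

    // A stand-in for removeFiles: collaborator calls are recorded, not executed.
    void removeFile(String server, String path) {
        if (server.isEmpty()) {
            record("removeOldStyleWAL", path);
        } else {
            record("askTserverToRemoveWAL", server, path);
        }
    }

    public static void main(String[] args) {
        RecordingStub stub = new RecordingStub();
        stub.removeFile("", "/wals/f1");
        // Exactly one call, routed to the old-style branch.
        System.out.println(stub.calls.size());                           // prints 1
        System.out.println(stub.calls.containsKey("removeOldStyleWAL")); // prints true
    }
}
```

The replacement tests below drop this scaffolding in favor of EasyMock, which provides the same call recording and verification without the bespoke wrapper.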
-
-  private GCWALPartialMock getGCWALForRemoveFileTest(GCStatus s, final boolean locked) throws IOException {
-    AccumuloServerContext ctx = new AccumuloServerContext(new ServerConfigurationFactory(new MockInstance("accumulo")));
-    return new GCWALPartialMock(ctx, VolumeManagerImpl.get(), false, locked);
-  }
-
-  private Map<String,Path> getEmptyMap() {
-    return new HashMap<String,Path>();
-  }
-
-  private Map<String,ArrayList<Path>> getServerToFileMap1(String key, Path singlePath) {
-    Map<String,ArrayList<Path>> serverToFileMap = new HashMap<String,ArrayList<Path>>();
-    serverToFileMap.put(key, new ArrayList<Path>(Arrays.asList(singlePath)));
-    return serverToFileMap;
-  }
-
-  @Test
-  public void testRemoveFilesWithOldStyle() throws IOException {
-    GCStatus status = new GCStatus();
-    GarbageCollectWriteAheadLogs realGCWAL = getGCWALForRemoveFileTest(status, true);
-    Path p1 = new Path("hdfs://localhost:9000/accumulo/wal/tserver1+9997/" + UUID.randomUUID().toString());
-    Map<String,ArrayList<Path>> serverToFileMap = getServerToFileMap1("", p1);
-
-    realGCWAL.removeFiles(getEmptyMap(), serverToFileMap, getEmptyMap(), status);
-
-    MethodCalls calls = ((GCWALPartialMock) realGCWAL).methodCalls;
-    assertEquals("Only one method should have been called", 1, calls.size());
-    assertEquals("Method should be removeOldStyleWAL", "removeOldStyleWAL", calls.getFirstEntryMethod());
-    Entry<String,ArrayList<Path>> firstServerToFileMap = serverToFileMap.entrySet().iterator().next();
-    assertEquals("First param should be empty", firstServerToFileMap, calls.getFirstEntryArg(0));
-    assertEquals("Second param should be the status", status, calls.getFirstEntryArg(1));
-  }
-
-  @Test
-  public void testRemoveFilesWithDeadTservers() throws IOException {
-    GCStatus status = new GCStatus();
-    GarbageCollectWriteAheadLogs realGCWAL = getGCWALForRemoveFileTest(status, false);
-    String server = "tserver1+9997";
-    Path p1 = new Path("hdfs://localhost:9000/accumulo/wal/" + server + "/" + UUID.randomUUID().toString());
-    Map<String,ArrayList<Path>> serverToFileMap = getServerToFileMap1(server, p1);
-
-    realGCWAL.removeFiles(getEmptyMap(), serverToFileMap, getEmptyMap(), status);
-
-    MethodCalls calls = ((GCWALPartialMock) realGCWAL).methodCalls;
-    assertEquals("Only one method should have been called", 1, calls.size());
-    assertEquals("Method should be removeWALfromDownTserver", "removeWALFromDownTserver", calls.getFirstEntryMethod());
-    assertEquals("First param should be address", HostAndPort.fromString(server.replaceAll("[+]", ":")), calls.getFirstEntryArg(0));
-    assertTrue("Second param should be an AccumuloConfiguration", calls.getFirstEntryArg(1) instanceof AccumuloConfiguration);
-    Entry<String,ArrayList<Path>> firstServerToFileMap = serverToFileMap.entrySet().iterator().next();
-    assertEquals("Third param should be the entry", firstServerToFileMap, calls.getFirstEntryArg(2));
-    assertEquals("Forth param should be the status", status, calls.getFirstEntryArg(3));
-  }
-
-  @Test
-  public void testRemoveFilesWithLiveTservers() throws IOException {
-    GCStatus status = new GCStatus();
-    GarbageCollectWriteAheadLogs realGCWAL = getGCWALForRemoveFileTest(status, true);
-    String server = "tserver1+9997";
-    Path p1 = new Path("hdfs://localhost:9000/accumulo/wal/" + server + "/" + UUID.randomUUID().toString());
-    Map<String,ArrayList<Path>> serverToFileMap = getServerToFileMap1(server, p1);
-
-    realGCWAL.removeFiles(getEmptyMap(), serverToFileMap, getEmptyMap(), status);
-
-    MethodCalls calls = ((GCWALPartialMock) realGCWAL).methodCalls;
-    assertEquals("Only one method should have been called", 1, calls.size());
-    assertEquals("Method should be askTserverToRemoveWAL", "askTserverToRemoveWAL", calls.getFirstEntryMethod());
-    assertEquals("First param should be address", HostAndPort.fromString(server.replaceAll("[+]", ":")), calls.getFirstEntryArg(0));
-    assertTrue("Second param should be an AccumuloConfiguration", calls.getFirstEntryArg(1) instanceof AccumuloConfiguration);
-    Entry<String,ArrayList<Path>> firstServerToFileMap = serverToFileMap.entrySet().iterator().next();
-    assertEquals("Third param should be the entry", firstServerToFileMap, calls.getFirstEntryArg(2));
-    assertEquals("Forth param should be the status", status, calls.getFirstEntryArg(3));
-  }
-
-  @Test
-  public void testRemoveFilesRemovesSortedWALs() throws IOException {
-    GCStatus status = new GCStatus();
-    GarbageCollectWriteAheadLogs realGCWAL = getGCWALForRemoveFileTest(status, true);
-    Map<String,ArrayList<Path>> serverToFileMap = new HashMap<String,ArrayList<Path>>();
-    Map<String,Path> sortedWALogs = new HashMap<String,Path>();
-    Path p1 = new Path("hdfs://localhost:9000/accumulo/wal/tserver1+9997/" + UUID.randomUUID().toString());
-    sortedWALogs.put("junk", p1); // TODO: see if this key is actually used here, maybe can be removed
-
-    realGCWAL.removeFiles(getEmptyMap(), serverToFileMap, sortedWALogs, status);
-    MethodCalls calls = ((GCWALPartialMock) realGCWAL).methodCalls;
-    assertEquals("Only one method should have been called", 1, calls.size());
-    assertEquals("Method should be removeSortedWAL", "removeSortedWAL", calls.getFirstEntryMethod());
-    assertEquals("First param should be the Path", p1, calls.getFirstEntryArg(0));
-
-  }
-
-  static String GCWAL_DEAD_DIR = "gcwal-collect-deadtserver";
-  static String GCWAL_DEAD_TSERVER = "tserver1";
-  static String GCWAL_DEAD_TSERVER_PORT = "9995";
-  static String GCWAL_DEAD_TSERVER_COLLECT_FILE = UUID.randomUUID().toString();
-
-  class GCWALDeadTserverCollectMock extends GarbageCollectWriteAheadLogs {
-
-    public GCWALDeadTserverCollectMock(AccumuloServerContext ctx, VolumeManager vm, boolean useTrash) throws IOException {
-      super(ctx, vm, useTrash);
-    }
-
-    @Override
-    boolean holdsLock(HostAndPort addr) {
-      // tries use zookeeper
-      return false;
-    }
-
-    @Override
-    Map<String,Path> getSortedWALogs() {
-      return new HashMap<String,Path>();
-    }
-
-    @Override
-    int scanServers(Map<Path,String> fileToServerMap, Map<String,Path> nameToFileMap) throws Exception {
-      String sep = File.separator;
-      Path p = new Path(System.getProperty("user.dir") + sep + "target" + sep + GCWAL_DEAD_DIR + sep + GCWAL_DEAD_TSERVER + "+" + GCWAL_DEAD_TSERVER_PORT + sep
-          + GCWAL_DEAD_TSERVER_COLLECT_FILE);
-      fileToServerMap.put(p, GCWAL_DEAD_TSERVER + ":" + GCWAL_DEAD_TSERVER_PORT);
-      nameToFileMap.put(GCWAL_DEAD_TSERVER_COLLECT_FILE, p);
-      return 1;
-    }
-
-    @Override
-    int removeMetadataEntries(Map<String,Path> nameToFileMap, Map<String,Path> sortedWALogs, GCStatus status) throws IOException, KeeperException,
-        InterruptedException {
-      return 0;
-    }
-
-    long getGCWALDeadServerWaitTime(AccumuloConfiguration conf) {
-      // tries to use zookeeper
-      return 1000l;
-    }
-  }
-
-  @Test
-  public void testCollectWithDeadTserver() throws IOException, InterruptedException {
-    Instance i = new MockInstance();
-    AccumuloServerContext ctx = new AccumuloServerContext(new ServerConfigurationFactory(i));
-    File walDir = new File(System.getProperty("user.dir") + File.separator + "target" + File.separator + GCWAL_DEAD_DIR);
-    File walFileDir = new File(walDir + File.separator + GCWAL_DEAD_TSERVER + "+" + GCWAL_DEAD_TSERVER_PORT);
-    File walFile = new File(walFileDir + File.separator + GCWAL_DEAD_TSERVER_COLLECT_FILE);
-    if (!walFileDir.exists()) {
-      assertTrue("Directory was made", walFileDir.mkdirs());
-      new FileOutputStream(walFile).close();
-    }
-
+  private final TServerInstance server1 = new TServerInstance("localhost:1234[SESSION]");
+  private final TServerInstance server2 = new TServerInstance("localhost:1234[OTHERSESS]");
+  private final UUID id = UUID.randomUUID();
+  private final Map<TServerInstance,List<UUID>> markers = Collections.singletonMap(server1, Collections.singletonList(id));
+  private final Map<TServerInstance,List<UUID>> markers2 = Collections.singletonMap(server2, Collections.singletonList(id));
+  private final Path path = new Path("hdfs://localhost:9000/accumulo/wal/localhost+1234/" + id);
+  private final KeyExtent extent = new KeyExtent(new Text("1<"), new Text(new byte[] {0}));
+  private final Collection<Collection<String>> walogs = Collections.emptyList();
+  private final TabletLocationState tabletAssignedToServer1;
+  private final TabletLocationState tabletAssignedToServer2;
+  {
     try {
-      VolumeManager vm = VolumeManagerImpl.getLocal(walDir.toString());
-      GarbageCollectWriteAheadLogs gcwal2 = new GCWALDeadTserverCollectMock(ctx, vm, false);
-      GCStatus status = new GCStatus(new GcCycleStats(), new GcCycleStats(), new GcCycleStats(), new GcCycleStats());
-
-      gcwal2.collect(status);
-
-      assertTrue("File should not be deleted", walFile.exists());
-      assertEquals("Should have one candidate", 1, status.lastLog.getCandidates());
-      assertEquals("Should not have deleted that file", 0, status.lastLog.getDeleted());
-
-      sleep(2000);
-      gcwal2.collect(status);
-
-      assertFalse("File should be gone", walFile.exists());
-      assertEquals("Should have one candidate", 1, status.lastLog.getCandidates());
-      assertEquals("Should have deleted that file", 1, status.lastLog.getDeleted());
-
-    } finally {
-      if (walDir.exists()) {
-        FileUtils.deleteDirectory(walDir);
-      }
+      tabletAssignedToServer1 = new TabletLocationState(extent, (TServerInstance) null, server1, (TServerInstance) null, null, walogs, false);
+      tabletAssignedToServer2 = new TabletLocationState(extent, (TServerInstance) null, server2, (TServerInstance) null, null, walogs, false);
+    } catch (Exception ex) {
+      throw new RuntimeException(ex);
     }
   }
+  private final Iterable<TabletLocationState> tabletOnServer1List = Collections.singletonList(tabletAssignedToServer1);
+  private final Iterable<TabletLocationState> tabletOnServer2List = Collections.singletonList(tabletAssignedToServer2);
+  private final List<Entry<Key,Value>> emptyList = Collections.emptyList();
+  private final Iterator<Entry<Key,Value>> emptyKV = emptyList.iterator();
+
+  @Test
+  public void testRemoveUnusedLog() throws Exception {
+    AccumuloServerContext context = EasyMock.createMock(AccumuloServerContext.class);
+    VolumeManager fs = EasyMock.createMock(VolumeManager.class);
+    WalStateManager marker = EasyMock.createMock(WalStateManager.class);
+    LiveTServerSet tserverSet = EasyMock.createMock(LiveTServerSet.class);
+
+    GCStatus status = new GCStatus(null, null, null, new GcCycleStats());
+
+    EasyMock.expect(tserverSet.getCurrentServers()).andReturn(Collections.singleton(server1));
+
+    EasyMock.expect(marker.getAllMarkers()).andReturn(markers).once();
+    EasyMock.expect(marker.state(server1, id)).andReturn(new Pair<>(WalState.UNREFERENCED, path));
+    EasyMock.expect(fs.deleteRecursively(path)).andReturn(true).once();
+    marker.removeWalMarker(server1, id);
+    EasyMock.expectLastCall().once();
+    EasyMock.replay(context, fs, marker, tserverSet);
+    GarbageCollectWriteAheadLogs gc = new GarbageCollectWriteAheadLogs(context, fs, false, tserverSet, marker, tabletOnServer1List) {
+      @Override
+      protected int removeReplicationEntries(Map<UUID,TServerInstance> candidates) throws IOException, KeeperException, InterruptedException {
+        return 0;
+      }
+    };
+    gc.collect(status);
+    EasyMock.verify(context, fs, marker, tserverSet);
+  }
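The new tests follow EasyMock's record/replay/verify lifecycle: expectations are declared first, `replay()` switches the mocks into playback mode, the code under test runs, and `verify()` fails if any recorded expectation went unmet. Stripped of the library, the mechanism is essentially counting expected versus actual invocations, as in this standalone sketch (illustrative names only, not the EasyMock implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class VerifyingStub {
    private final Map<String, Integer> expected = new HashMap<>();
    private final Map<String, Integer> actual = new HashMap<>();
    private boolean replaying = false;

    // Record phase: declare that a method must be called exactly once more.
    void expectOnce(String method) { expected.merge(method, 1, Integer::sum); }

    // Switch from recording expectations to playing them back.
    void replay() { replaying = true; }

    // Playback phase: the code under test routes its collaborator calls here.
    void invoked(String method) {
        if (!replaying) throw new IllegalStateException("call before replay(): " + method);
        actual.merge(method, 1, Integer::sum);
    }

    // Mirrors verify(): every expectation met, with the expected cardinality.
    boolean verify() { return expected.equals(actual); }

    public static void main(String[] args) {
        VerifyingStub fs = new VerifyingStub();
        fs.expectOnce("deleteRecursively");
        fs.expectOnce("removeWalMarker");
        fs.replay();
        fs.invoked("deleteRecursively");
        fs.invoked("removeWalMarker");
        System.out.println(fs.verify()); // prints true
    }
}
```

This is why `testRemoveUnusedLog` can assert deletion without touching a filesystem: `fs.deleteRecursively(path)` and `marker.removeWalMarker(server1, id)` are expectations, and `EasyMock.verify` at the end is the assertion that they actually happened.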
+
+  @Test
+  public void testKeepClosedLog() throws Exception {
+    AccumuloServerContext context = EasyMock.createMock(AccumuloServerContext.class);
+    VolumeManager fs = EasyMock.createMock(VolumeManager.class);
+    WalStateManager marker = EasyMock.createMock(WalStateManager.class);
+    LiveTServerSet tserverSet = EasyMock.createMock(LiveTServerSet.class);
+
+    GCStatus status = new GCStatus(null, null, null, new GcCycleStats());
+
+    EasyMock.expect(tserverSet.getCurrentServers()).andReturn(Collections.singleton(server1));
+    EasyMock.expect(marker.getAllMarkers()).andReturn(markers).once();
+    EasyMock.expect(marker.state(server1, id)).andReturn(new Pair<>(WalState.CLOSED, path));
+    EasyMock.replay(context, marker, tserverSet, fs);
+    GarbageCollectWriteAheadLogs gc = new GarbageCollectWriteAheadLogs(context, fs, false, tserverSet, marker, tabletOnServer1List) {
+      @Override
+      protected int removeReplicationEntries(Map<UUID,TServerInstance> candidates) throws IOException, KeeperException, InterruptedException {
+        return 0;
+      }
+    };
+    gc.collect(status);
+    EasyMock.verify(context, marker, tserverSet, fs);
+  }
+
+  @Test
+  public void deleteUnreferencedLogOnDeadServer() throws Exception {
+    AccumuloServerContext context = EasyMock.createMock(AccumuloServerContext.class);
+    VolumeManager fs = EasyMock.createMock(VolumeManager.class);
+    WalStateManager marker = EasyMock.createMock(WalStateManager.class);
+    LiveTServerSet tserverSet = EasyMock.createMock(LiveTServerSet.class);
+    Connector conn = EasyMock.createMock(Connector.class);
+    Scanner mscanner = EasyMock.createMock(Scanner.class);
+    Scanner rscanner = EasyMock.createMock(Scanner.class);
+
+    GCStatus status = new GCStatus(null, null, null, new GcCycleStats());
+    EasyMock.expect(tserverSet.getCurrentServers()).andReturn(Collections.singleton(server1));
+    EasyMock.expect(marker.getAllMarkers()).andReturn(markers2).once();
+    EasyMock.expect(marker.state(server2, id)).andReturn(new Pair<>(WalState.OPEN, path));
+    EasyMock.expect(context.getConnector()).andReturn(conn);
+
+    EasyMock.expect(conn.createScanner(ReplicationTable.NAME, Authorizations.EMPTY)).andReturn(rscanner);
+    rscanner.fetchColumnFamily(ReplicationSchema.StatusSection.NAME);
+    EasyMock.expectLastCall().once();
+    EasyMock.expect(rscanner.iterator()).andReturn(emptyKV);
+
+    EasyMock.expect(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY)).andReturn(mscanner);
+    mscanner.fetchColumnFamily(MetadataSchema.ReplicationSection.COLF);
+    EasyMock.expectLastCall().once();
+    mscanner.setRange(MetadataSchema.ReplicationSection.getRange());
+    EasyMock.expectLastCall().once();
+    EasyMock.expect(mscanner.iterator()).andReturn(emptyKV);
+    EasyMock.expect(fs.deleteRecursively(path)).andReturn(true).once();
+    marker.removeWalMarker(server2, id);
+    EasyMock.expectLastCall().once();
+    marker.forget(server2);
+    EasyMock.expectLastCall().once();
+    EasyMock.replay(context, fs, marker, tserverSet, conn, rscanner, mscanner);
+    GarbageCollectWriteAheadLogs gc = new GarbageCollectWriteAheadLogs(context, fs, false, tserverSet, marker, tabletOnServer1List);
+    gc.collect(status);
+    EasyMock.verify(context, fs, marker, tserverSet, conn, rscanner, mscanner);
+  }
+
+  @Test
+  public void ignoreReferencedLogOnDeadServer() throws Exception {
+    AccumuloServerContext context = EasyMock.createMock(AccumuloServerContext.class);
+    VolumeManager fs = EasyMock.createMock(VolumeManager.class);
+    WalStateManager marker = EasyMock.createMock(WalStateManager.class);
+    LiveTServerSet tserverSet = EasyMock.createMock(LiveTServerSet.class);
+    Connector conn = EasyMock.createMock(Connector.class);
+    Scanner mscanner = EasyMock.createMock(Scanner.class);
+    Scanner rscanner = EasyMock.createMock(Scanner.class);
+
+    GCStatus status = new GCStatus(null, null, null, new GcCycleStats());
+    EasyMock.expect(tserverSet.getCurrentServers()).andReturn(Collections.singleton(server1));
+    EasyMock.expect(marker.getAllMarkers()).andReturn(markers2).once();
+    EasyMock.expect(marker.state(server2, id)).andReturn(new Pair<>(WalState.OPEN, path));
+    EasyMock.expect(context.getConnector()).andReturn(conn);
+
+    EasyMock.expect(conn.createScanner(ReplicationTable.NAME, Authorizations.EMPTY)).andReturn(rscanner);
+    rscanner.fetchColumnFamily(ReplicationSchema.StatusSection.NAME);
+    EasyMock.expectLastCall().once();
+    EasyMock.expect(rscanner.iterator()).andReturn(emptyKV);
+
+    EasyMock.expect(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY)).andReturn(mscanner);
+    mscanner.fetchColumnFamily(MetadataSchema.ReplicationSection.COLF);
+    EasyMock.expectLastCall().once();
+    mscanner.setRange(MetadataSchema.ReplicationSection.getRange());
+    EasyMock.expectLastCall().once();
+    EasyMock.expect(mscanner.iterator()).andReturn(emptyKV);
+    EasyMock.replay(context, fs, marker, tserverSet, conn, rscanner, mscanner);
+    GarbageCollectWriteAheadLogs gc = new GarbageCollectWriteAheadLogs(context, fs, false, tserverSet, marker, tabletOnServer2List);
+    gc.collect(status);
+    EasyMock.verify(context, fs, marker, tserverSet, conn, rscanner, mscanner);
+  }
+
+  @Test
+  public void replicationDelaysFileCollection() throws Exception {
+    AccumuloServerContext context = EasyMock.createMock(AccumuloServerContext.class);
+    VolumeManager fs = EasyMock.createMock(VolumeManager.class);
+    WalStateManager marker = EasyMock.createMock(WalStateManager.class);
+    LiveTServerSet tserverSet = EasyMock.createMock(LiveTServerSet.class);
+    Connector conn = EasyMock.createMock(Connector.class);
+    Scanner mscanner = EasyMock.createMock(Scanner.class);
+    Scanner rscanner = EasyMock.createMock(Scanner.class);
+    String row = MetadataSchema.ReplicationSection.getRowPrefix() + path.toString();
+    String colf = MetadataSchema.ReplicationSection.COLF.toString();
+    String colq = "1";
+    Map<Key,Value> replicationWork = Collections.singletonMap(new Key(row, colf, colq), new Value(new byte[0]));
+
+    GCStatus status = new GCStatus(null, null, null, new GcCycleStats());
+
+    EasyMock.expect(tserverSet.getCurrentServers()).andReturn(Collections.singleton(server1));
+    EasyMock.expect(marker.getAllMarkers()).andReturn(markers).once();
+    EasyMock.expect(marker.state(server1, id)).andReturn(new Pair<>(WalState.UNREFERENCED, path));
+    EasyMock.expect(context.getConnector()).andReturn(conn);
+
+    EasyMock.expect(conn.createScanner(ReplicationTable.NAME, Authorizations.EMPTY)).andReturn(rscanner);
+    rscanner.fetchColumnFamily(ReplicationSchema.StatusSection.NAME);
+    EasyMock.expectLastCall().once();
+    EasyMock.expect(rscanner.iterator()).andReturn(emptyKV);
+
+    EasyMock.expect(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY)).andReturn(mscanner);
+    mscanner.fetchColumnFamily(MetadataSchema.ReplicationSection.COLF);
+    EasyMock.expectLastCall().once();
+    mscanner.setRange(MetadataSchema.ReplicationSection.getRange());
+    EasyMock.expectLastCall().once();
+    EasyMock.expect(mscanner.iterator()).andReturn(replicationWork.entrySet().iterator());
+    EasyMock.replay(context, fs, marker, tserverSet, conn, rscanner, mscanner);
+    GarbageCollectWriteAheadLogs gc = new GarbageCollectWriteAheadLogs(context, fs, false, tserverSet, marker, tabletOnServer1List);
+    gc.collect(status);
+    EasyMock.verify(context, fs, marker, tserverSet, conn, rscanner, mscanner);
+  }
 }
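Several of the tests above stub a single method by instantiating an anonymous subclass of the class under test: overriding `removeReplicationEntries` to return 0 keeps `collect()` away from the replication tables while exercising everything else for real. The pattern, demonstrated on a hypothetical class (not the Accumulo one):

```java
public class PartialStubDemo {
    // A hypothetical collector whose one expensive step we neutralize in tests.
    static class Collector {
        int collect() {
            // The "real" work plus a collaborator call we don't want in tests.
            return 10 + removeReplicationEntries();
        }

        protected int removeReplicationEntries() {
            throw new UnsupportedOperationException("talks to external services in production");
        }
    }

    public static void main(String[] args) {
        // Override just the expensive method; the rest of collect() runs as written.
        Collector c = new Collector() {
            @Override
            protected int removeReplicationEntries() {
                return 0;
            }
        };
        System.out.println(c.collect()); // prints 10
    }
}
```

Compared with the deleted `GCWALPartialMock` approach, this keeps the stubbing local to each test and avoids carrying a dedicated mock subclass in the test file.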
diff --git a/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectionTest.java b/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectionTest.java
index 1548fa1..d61fb9e 100644
--- a/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectionTest.java
+++ b/server/gc/src/test/java/org/apache/accumulo/gc/GarbageCollectionTest.java
@@ -46,14 +46,14 @@
  */
 public class GarbageCollectionTest {
   static class TestGCE implements GarbageCollectionEnvironment {
-    TreeSet<String> candidates = new TreeSet<String>();
-    ArrayList<String> blips = new ArrayList<String>();
-    Map<Key,Value> references = new TreeMap<Key,Value>();
-    HashSet<String> tableIds = new HashSet<String>();
+    TreeSet<String> candidates = new TreeSet<>();
+    ArrayList<String> blips = new ArrayList<>();
+    Map<Key,Value> references = new TreeMap<>();
+    HashSet<String> tableIds = new HashSet<>();
 
-    ArrayList<String> deletes = new ArrayList<String>();
-    ArrayList<String> tablesDirsToDelete = new ArrayList<String>();
-    TreeMap<String,Status> filesToReplicate = new TreeMap<String,Status>();
+    ArrayList<String> deletes = new ArrayList<>();
+    ArrayList<String> tablesDirsToDelete = new ArrayList<>();
+    TreeMap<String,Status> filesToReplicate = new TreeMap<>();
 
     @Override
     public boolean getCandidates(String continuePoint, List<String> ret) {
@@ -92,7 +92,7 @@
     }
 
     public Key newFileReferenceKey(String tableId, String endRow, String file) {
-      String row = new KeyExtent(new Text(tableId), endRow == null ? null : new Text(endRow), null).getMetadataEntry().toString();
+      String row = new KeyExtent(tableId, endRow == null ? null : new Text(endRow), null).getMetadataEntry().toString();
       String cf = MetadataSchema.TabletsSection.DataFileColumnFamily.NAME.toString();
       String cq = file;
       Key key = new Key(row, cf, cq);
@@ -110,7 +110,7 @@
     }
 
     Key newDirReferenceKey(String tableId, String endRow) {
-      String row = new KeyExtent(new Text(tableId), endRow == null ? null : new Text(endRow), null).getMetadataEntry().toString();
+      String row = new KeyExtent(tableId, endRow == null ? null : new Text(endRow), null).getMetadataEntry().toString();
       String cf = MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.getColumnFamily().toString();
       String cq = MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.getColumnQualifier().toString();
       Key key = new Key(row, cf, cq);
@@ -208,7 +208,7 @@
     gca.collect(gce);
     assertRemoved(gce);
 
-    List<String[]> refsToRemove = new ArrayList<String[]>();
+    List<String[]> refsToRemove = new ArrayList<>();
     refsToRemove.add(new String[] {"4", "/t0/F000.rf"});
     refsToRemove.add(new String[] {"5", "../4/t0/F000.rf"});
     refsToRemove.add(new String[] {"6", "hdfs://foo.com:6000/accumulo/tables/4/t0/F000.rf"});
@@ -545,7 +545,7 @@
 
     gca.collect(gce);
 
-    HashSet<String> tids = new HashSet<String>();
+    HashSet<String> tids = new HashSet<>();
     tids.add("5");
     tids.add("6");
 
diff --git a/server/gc/src/test/java/org/apache/accumulo/gc/replication/CloseWriteAheadLogReferencesTest.java b/server/gc/src/test/java/org/apache/accumulo/gc/replication/CloseWriteAheadLogReferencesTest.java
deleted file mode 100644
index 3115de1..0000000
--- a/server/gc/src/test/java/org/apache/accumulo/gc/replication/CloseWriteAheadLogReferencesTest.java
+++ /dev/null
@@ -1,476 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.gc.replication;
-
-import static org.easymock.EasyMock.createMock;
-import static org.easymock.EasyMock.expect;
-import static org.easymock.EasyMock.expectLastCall;
-import static org.easymock.EasyMock.replay;
-import static org.easymock.EasyMock.verify;
-
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map.Entry;
-import java.util.Set;
-import java.util.UUID;
-
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.ConfigurationCopy;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.conf.SiteConfiguration;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.data.impl.KeyExtent;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
-import org.apache.accumulo.core.protobuf.ProtobufUtil;
-import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
-import org.apache.accumulo.core.replication.ReplicationTable;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.tabletserver.log.LogEntry;
-import org.apache.accumulo.core.trace.thrift.TInfo;
-import org.apache.accumulo.server.AccumuloServerContext;
-import org.apache.accumulo.server.conf.ServerConfigurationFactory;
-import org.apache.accumulo.server.replication.StatusUtil;
-import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.hadoop.io.Text;
-import org.easymock.EasyMock;
-import org.easymock.IAnswer;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.TestName;
-
-import com.google.common.collect.Iterables;
-import com.google.common.collect.Maps;
-import com.google.common.collect.Sets;
-import com.google.common.net.HostAndPort;
-
-public class CloseWriteAheadLogReferencesTest {
-
-  private CloseWriteAheadLogReferences refs;
-  private Instance inst;
-
-  @Rule
-  public TestName testName = new TestName();
-
-  @Before
-  public void setup() {
-    inst = createMock(Instance.class);
-    SiteConfiguration siteConfig = EasyMock.createMock(SiteConfiguration.class);
-    expect(inst.getInstanceID()).andReturn(testName.getMethodName()).anyTimes();
-    expect(inst.getZooKeepers()).andReturn("localhost").anyTimes();
-    expect(inst.getZooKeepersSessionTimeOut()).andReturn(30000).anyTimes();
-    final AccumuloConfiguration systemConf = new ConfigurationCopy(new HashMap<String,String>());
-    ServerConfigurationFactory factory = createMock(ServerConfigurationFactory.class);
-    expect(factory.getConfiguration()).andReturn(systemConf).anyTimes();
-    expect(factory.getInstance()).andReturn(inst).anyTimes();
-    expect(factory.getSiteConfiguration()).andReturn(siteConfig).anyTimes();
-
-    // Just make the SiteConfiguration delegate to our AccumuloConfiguration
-    // Presently, we only need get(Property) and iterator().
-    EasyMock.expect(siteConfig.get(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<String>() {
-      @Override
-      public String answer() {
-        Object[] args = EasyMock.getCurrentArguments();
-        return systemConf.get((Property) args[0]);
-      }
-    }).anyTimes();
-    EasyMock.expect(siteConfig.getBoolean(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<Boolean>() {
-      @Override
-      public Boolean answer() {
-        Object[] args = EasyMock.getCurrentArguments();
-        return systemConf.getBoolean((Property) args[0]);
-      }
-    }).anyTimes();
-
-    EasyMock.expect(siteConfig.iterator()).andAnswer(new IAnswer<Iterator<Entry<String,String>>>() {
-      @Override
-      public Iterator<Entry<String,String>> answer() {
-        return systemConf.iterator();
-      }
-    }).anyTimes();
-
-    replay(inst, factory, siteConfig);
-    refs = new CloseWriteAheadLogReferences(new AccumuloServerContext(factory));
-  }
-
-  @Test
-  public void findOneWalFromMetadata() throws Exception {
-    Connector conn = createMock(Connector.class);
-    BatchScanner bs = createMock(BatchScanner.class);
-
-    // Fake out some data
-    final ArrayList<Entry<Key,Value>> data = new ArrayList<>();
-    LogEntry logEntry = new LogEntry();
-    logEntry.extent = new KeyExtent(new Text("1"), new Text("b"), new Text("a"));
-    logEntry.filename = "hdfs://localhost:8020/accumulo/wal/tserver+port/" + UUID.randomUUID();
-    logEntry.server = "tserver1";
-    logEntry.tabletId = 1;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    // Get a batchscanner, scan the tablets section, fetch only the logs
-    expect(conn.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 4)).andReturn(bs);
-    bs.setRanges(Collections.singleton(TabletsSection.getRange()));
-    expectLastCall().once();
-    bs.fetchColumnFamily(LogColumnFamily.NAME);
-    expectLastCall().once();
-    expect(bs.iterator()).andAnswer(new IAnswer<Iterator<Entry<Key,Value>>>() {
-
-      @Override
-      public Iterator<Entry<Key,Value>> answer() throws Throwable {
-        return data.iterator();
-      }
-
-    });
-    // Close the bs
-    bs.close();
-    expectLastCall().once();
-
-    replay(conn, bs);
-
-    // Validate
-    Set<String> wals = refs.getReferencedWals(conn);
-    Assert.assertEquals(Collections.singleton(logEntry.filename), wals);
-
-    verify(conn, bs);
-  }
-
-  @Test
-  public void findManyWalFromSingleMetadata() throws Exception {
-    Connector conn = createMock(Connector.class);
-    BatchScanner bs = createMock(BatchScanner.class);
-
-    // Fake out some data
-    final ArrayList<Entry<Key,Value>> data = new ArrayList<>();
-    LogEntry logEntry = new LogEntry();
-    logEntry.extent = new KeyExtent(new Text("1"), new Text("b"), new Text("a"));
-    logEntry.filename = "hdfs://localhost:8020/accumulo/wal/tserver+port/" + UUID.randomUUID();
-    logEntry.server = "tserver1";
-    logEntry.tabletId = 1;
-    // Multiple DFSLoggers
-    logEntry.logSet = Sets.newHashSet(logEntry.filename, "hdfs://localhost:8020/accumulo/wal/tserver+port/" + UUID.randomUUID());
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    // Get a batchscanner, scan the tablets section, fetch only the logs
-    expect(conn.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 4)).andReturn(bs);
-    bs.setRanges(Collections.singleton(TabletsSection.getRange()));
-    expectLastCall().once();
-    bs.fetchColumnFamily(LogColumnFamily.NAME);
-    expectLastCall().once();
-    expect(bs.iterator()).andAnswer(new IAnswer<Iterator<Entry<Key,Value>>>() {
-
-      @Override
-      public Iterator<Entry<Key,Value>> answer() throws Throwable {
-        return data.iterator();
-      }
-
-    });
-    // Close the bs
-    bs.close();
-    expectLastCall().once();
-
-    replay(conn, bs);
-
-    // Validate
-    Set<String> wals = refs.getReferencedWals(conn);
-    Assert.assertEquals(logEntry.logSet, wals);
-
-    verify(conn, bs);
-  }
-
-  @Test
-  public void findManyRefsToSingleWalFromMetadata() throws Exception {
-    Connector conn = createMock(Connector.class);
-    BatchScanner bs = createMock(BatchScanner.class);
-
-    String uuid = UUID.randomUUID().toString();
-
-    // Fake out some data
-    final ArrayList<Entry<Key,Value>> data = new ArrayList<>();
-    LogEntry logEntry = new LogEntry();
-    logEntry.extent = new KeyExtent(new Text("1"), new Text("b"), new Text("a"));
-    logEntry.filename = "hdfs://localhost:8020/accumulo/wal/tserver+port/" + uuid;
-    logEntry.server = "tserver1";
-    logEntry.tabletId = 1;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("1"), new Text("c"), new Text("b"));
-    logEntry.server = "tserver1";
-    logEntry.tabletId = 2;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("1"), null, new Text("c"));
-    logEntry.server = "tserver1";
-    logEntry.tabletId = 3;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    // Get a batchscanner, scan the tablets section, fetch only the logs
-    expect(conn.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 4)).andReturn(bs);
-    bs.setRanges(Collections.singleton(TabletsSection.getRange()));
-    expectLastCall().once();
-    bs.fetchColumnFamily(LogColumnFamily.NAME);
-    expectLastCall().once();
-    expect(bs.iterator()).andAnswer(new IAnswer<Iterator<Entry<Key,Value>>>() {
-
-      @Override
-      public Iterator<Entry<Key,Value>> answer() throws Throwable {
-        return data.iterator();
-      }
-
-    });
-    // Close the bs
-    bs.close();
-    expectLastCall().once();
-
-    replay(conn, bs);
-
-    // Validate
-    Set<String> wals = refs.getReferencedWals(conn);
-    Assert.assertEquals(Collections.singleton(logEntry.filename), wals);
-
-    verify(conn, bs);
-  }
-
-  @Test
-  public void findRefsToManyWalsFromMetadata() throws Exception {
-    Connector conn = createMock(Connector.class);
-    BatchScanner bs = createMock(BatchScanner.class);
-
-    String file1 = "hdfs://localhost:8020/accumulo/wal/tserver1+port/" + UUID.randomUUID(), file2 = "hdfs://localhost:8020/accumulo/wal/tserver2+port/"
-        + UUID.randomUUID(), file3 = "hdfs://localhost:8020/accumulo/wal/tserver3+port/" + UUID.randomUUID();
-
-    // Fake out some data
-    final ArrayList<Entry<Key,Value>> data = new ArrayList<>();
-    LogEntry logEntry = new LogEntry();
-    logEntry.extent = new KeyExtent(new Text("1"), new Text("b"), new Text("a"));
-    logEntry.filename = file1;
-    logEntry.server = "tserver1";
-    logEntry.tabletId = 1;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("5"), null, null);
-    logEntry.tabletId = 2;
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("3"), new Text("b"), new Text("a"));
-    logEntry.filename = file2;
-    logEntry.server = "tserver2";
-    logEntry.tabletId = 3;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("3"), new Text("c"), new Text("b"));
-    logEntry.tabletId = 4;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("4"), new Text("5"), new Text("0"));
-    logEntry.filename = file3;
-    logEntry.server = "tserver3";
-    logEntry.tabletId = 5;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("4"), new Text("8"), new Text("5"));
-    logEntry.server = "tserver3";
-    logEntry.tabletId = 7;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    logEntry.extent = new KeyExtent(new Text("4"), null, new Text("8"));
-    logEntry.server = "tserver3";
-    logEntry.tabletId = 15;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
-    data.add(Maps.immutableEntry(new Key(logEntry.getRow(), logEntry.getColumnFamily(), logEntry.getColumnQualifier()), new Value(logEntry.getValue())));
-
-    // Get a batchscanner, scan the tablets section, fetch only the logs
-    expect(conn.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 4)).andReturn(bs);
-    bs.setRanges(Collections.singleton(TabletsSection.getRange()));
-    expectLastCall().once();
-    bs.fetchColumnFamily(LogColumnFamily.NAME);
-    expectLastCall().once();
-    expect(bs.iterator()).andAnswer(new IAnswer<Iterator<Entry<Key,Value>>>() {
-
-      @Override
-      public Iterator<Entry<Key,Value>> answer() throws Throwable {
-        return data.iterator();
-      }
-
-    });
-    // Close the bs
-    bs.close();
-    expectLastCall().once();
-
-    replay(conn, bs);
-
-    // Validate
-    Set<String> wals = refs.getReferencedWals(conn);
-    Assert.assertEquals(Sets.newHashSet(file1, file2, file3), wals);
-
-    verify(conn, bs);
-  }
-
-  @Test
-  public void unusedWalsAreClosed() throws Exception {
-    Set<String> wals = Collections.emptySet();
-    Instance inst = new MockInstance(testName.getMethodName());
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
-
-    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    Mutation m = new Mutation(ReplicationSection.getRowPrefix() + "file:/accumulo/wal/tserver+port/12345");
-    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
-    bw.addMutation(m);
-    bw.close();
-
-    refs.updateReplicationEntries(conn, wals);
-
-    Scanner s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
-    Status status = Status.parseFrom(entry.getValue().get());
-    Assert.assertTrue(status.getClosed());
-  }
-
-  @Test
-  public void usedWalsAreNotClosed() throws Exception {
-    String file = "file:/accumulo/wal/tserver+port/12345";
-    Set<String> wals = Collections.singleton(file);
-    Instance inst = new MockInstance(testName.getMethodName());
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
-
-    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    Mutation m = new Mutation(ReplicationSection.getRowPrefix() + file);
-    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
-    bw.addMutation(m);
-    bw.close();
-
-    refs.updateReplicationEntries(conn, wals);
-
-    Scanner s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
-    Status status = Status.parseFrom(entry.getValue().get());
-    Assert.assertFalse(status.getClosed());
-  }
-
-  @Test
-  public void partiallyReplicatedReferencedWalsAreNotClosed() throws Exception {
-    String file = "file:/accumulo/wal/tserver+port/12345";
-    Set<String> wals = Collections.singleton(file);
-    Instance inst = new MockInstance(testName.getMethodName());
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
-
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    Mutation m = new Mutation(file);
-    StatusSection.add(m, new Text("1"), ProtobufUtil.toValue(StatusUtil.ingestedUntil(1000)));
-    bw.addMutation(m);
-    bw.close();
-
-    refs.updateReplicationEntries(conn, wals);
-
-    Scanner s = ReplicationTable.getScanner(conn);
-    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
-    Status status = Status.parseFrom(entry.getValue().get());
-    Assert.assertFalse(status.getClosed());
-  }
-
-  @Test
-  public void getActiveWals() throws Exception {
-    CloseWriteAheadLogReferences closeWals = EasyMock.createMockBuilder(CloseWriteAheadLogReferences.class).addMockedMethod("getActiveTservers")
-        .addMockedMethod("getActiveWalsForServer").createMock();
-    TInfo tinfo = EasyMock.createMock(TInfo.class);
-
-    List<String> tservers = Arrays.asList("localhost:12345", "localhost:12346");
-    EasyMock.expect(closeWals.getActiveTservers(tinfo)).andReturn(tservers);
-    int numWals = 0;
-    for (String tserver : tservers) {
-      EasyMock.expect(closeWals.getActiveWalsForServer(tinfo, HostAndPort.fromString(tserver))).andReturn(Arrays.asList("/wal" + numWals));
-      numWals++;
-    }
-
-    EasyMock.replay(closeWals);
-
-    Set<String> wals = closeWals.getActiveWals(tinfo);
-
-    EasyMock.verify(closeWals);
-
-    Set<String> expectedWals = new HashSet<String>();
-    for (int i = 0; i < numWals; i++) {
-      expectedWals.add("/wal" + i);
-    }
-
-    Assert.assertEquals(expectedWals, wals);
-  }
-
-  @Test
-  public void offlineMaster() throws Exception {
-    CloseWriteAheadLogReferences closeWals = EasyMock.createMockBuilder(CloseWriteAheadLogReferences.class).addMockedMethod("getActiveTservers")
-        .addMockedMethod("getActiveWalsForServer").createMock();
-    TInfo tinfo = EasyMock.createMock(TInfo.class);
-
-    EasyMock.expect(closeWals.getActiveTservers(tinfo)).andReturn(null);
-
-    EasyMock.replay(closeWals);
-
-    Set<String> wals = closeWals.getActiveWals(tinfo);
-
-    EasyMock.verify(closeWals);
-
-    Assert.assertNull("Expected to get null for active WALs", wals);
-  }
-
-  @Test
-  public void offlineTserver() throws Exception {
-    CloseWriteAheadLogReferences closeWals = EasyMock.createMockBuilder(CloseWriteAheadLogReferences.class).addMockedMethod("getActiveTservers")
-        .addMockedMethod("getActiveWalsForServer").createMock();
-    TInfo tinfo = EasyMock.createMock(TInfo.class);
-
-    List<String> tservers = Arrays.asList("localhost:12345", "localhost:12346");
-    EasyMock.expect(closeWals.getActiveTservers(tinfo)).andReturn(tservers);
-    EasyMock.expect(closeWals.getActiveWalsForServer(tinfo, HostAndPort.fromString("localhost:12345"))).andReturn(Arrays.asList("/wal" + 0));
-    EasyMock.expect(closeWals.getActiveWalsForServer(tinfo, HostAndPort.fromString("localhost:12346"))).andReturn(null);
-
-    EasyMock.replay(closeWals);
-
-    Set<String> wals = closeWals.getActiveWals(tinfo);
-
-    EasyMock.verify(closeWals);
-
-    Assert.assertNull("Expected to get null for active WALs", wals);
-  }
-}
diff --git a/server/master/pom.xml b/server/master/pom.xml
index 3ab71ac..6af8344 100644
--- a/server/master/pom.xml
+++ b/server/master/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-master</artifactId>
diff --git a/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java b/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
index a1ce1d4..efea076 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/FateServiceHandler.java
@@ -46,6 +46,7 @@
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
 import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.iterators.IteratorUtil;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.master.thrift.FateOperation;
 import org.apache.accumulo.core.master.thrift.FateService;
 import org.apache.accumulo.core.security.thrift.TCredentials;
@@ -105,7 +106,7 @@
         if (!master.security.canCreateNamespace(c, namespace))
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new CreateNamespace(c.getPrincipal(), namespace, options)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new CreateNamespace(c.getPrincipal(), namespace, options)), autoCleanup);
         break;
       }
       case NAMESPACE_RENAME: {
@@ -117,7 +118,7 @@
         if (!master.security.canRenameNamespace(c, namespaceId, oldName, newName))
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new RenameNamespace(namespaceId, oldName, newName)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new RenameNamespace(namespaceId, oldName, newName)), autoCleanup);
         break;
       }
       case NAMESPACE_DELETE: {
@@ -128,7 +129,7 @@
         if (!master.security.canDeleteNamespace(c, namespaceId))
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new DeleteNamespace(namespaceId)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new DeleteNamespace(namespaceId)), autoCleanup);
         break;
       }
       case TABLE_CREATE: {
@@ -147,7 +148,7 @@
         if (!master.security.canCreateTable(c, tableName, namespaceId))
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new CreateTable(c.getPrincipal(), tableName, timeType, options, namespaceId)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new CreateTable(c.getPrincipal(), tableName, timeType, options, namespaceId)), autoCleanup);
 
         break;
       }
@@ -157,7 +158,7 @@
         String newTableName = validateTableNameArgument(arguments.get(1), tableOp, new Validator<String>() {
 
           @Override
-          public boolean isValid(String argument) {
+          public boolean apply(String argument) {
             // verify they are in the same namespace
             String oldNamespace = Tables.qualify(oldTableName).getFirst();
             return oldNamespace.equals(Tables.qualify(argument).getFirst());
@@ -185,7 +186,7 @@
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
         try {
-          master.fate.seedTransaction(opid, new TraceRepo<Master>(new RenameTable(tableId, oldTableName, newTableName)), autoCleanup);
+          master.fate.seedTransaction(opid, new TraceRepo<>(new RenameTable(tableId, oldTableName, newTableName)), autoCleanup);
         } catch (NamespaceNotFoundException e) {
           throw new ThriftTableOperationException(null, oldTableName, tableOp, TableOperationExceptionType.NAMESPACE_NOTFOUND, "");
         }
@@ -215,8 +216,8 @@
         if (!canCloneTable)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        Map<String,String> propertiesToSet = new HashMap<String,String>();
-        Set<String> propertiesToExclude = new HashSet<String>();
+        Map<String,String> propertiesToSet = new HashMap<>();
+        Set<String> propertiesToExclude = new HashSet<>();
 
         for (Entry<String,String> entry : options.entrySet()) {
           if (entry.getKey().startsWith(TableOperationsImpl.CLONE_EXCLUDE_PREFIX)) {
@@ -232,7 +233,7 @@
           propertiesToSet.put(entry.getKey(), entry.getValue());
         }
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new CloneTable(c.getPrincipal(), srcTableId, tableName, propertiesToSet, propertiesToExclude)),
+        master.fate.seedTransaction(opid, new TraceRepo<>(new CloneTable(c.getPrincipal(), srcTableId, tableName, propertiesToSet, propertiesToExclude)),
             autoCleanup);
 
         break;
@@ -254,7 +255,7 @@
 
         if (!canDeleteTable)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new DeleteTable(tableId)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new DeleteTable(tableId)), autoCleanup);
         break;
       }
       case TABLE_ONLINE: {
@@ -273,7 +274,7 @@
         if (!canOnlineOfflineTable)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new ChangeTableState(tableId, tableOp)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new ChangeTableState(tableId, tableOp)), autoCleanup);
         break;
       }
       case TABLE_OFFLINE: {
@@ -292,7 +293,7 @@
         if (!canOnlineOfflineTable)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new ChangeTableState(tableId, tableOp)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new ChangeTableState(tableId, tableOp)), autoCleanup);
         break;
       }
       case TABLE_MERGE: {
@@ -316,7 +317,7 @@
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
         Master.log.debug("Creating merge op: " + tableId + " " + startRow + " " + endRow);
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new TableRangeOp(MergeInfo.Operation.MERGE, tableId, startRow, endRow)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new TableRangeOp(MergeInfo.Operation.MERGE, tableId, startRow, endRow)), autoCleanup);
         break;
       }
       case TABLE_DELETE_RANGE: {
@@ -339,7 +340,7 @@
         if (!canDeleteRange)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new TableRangeOp(MergeInfo.Operation.DELETE, tableId, startRow, endRow)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new TableRangeOp(MergeInfo.Operation.DELETE, tableId, startRow, endRow)), autoCleanup);
         break;
       }
       case TABLE_BULK_IMPORT: {
@@ -363,7 +364,8 @@
         if (!canBulkImport)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new BulkImport(tableId, dir, failDir, setTime)), autoCleanup);
+        master.updateBulkImportStatus(dir, BulkImportState.INITIAL);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new BulkImport(tableId, dir, failDir, setTime)), autoCleanup);
         break;
       }
       case TABLE_COMPACT: {
@@ -386,7 +388,7 @@
         if (!canCompact)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new CompactRange(tableId, startRow, endRow, iterators, compactionStrategy)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new CompactRange(tableId, startRow, endRow, iterators, compactionStrategy)), autoCleanup);
         break;
       }
       case TABLE_CANCEL_COMPACT: {
@@ -405,7 +407,7 @@
         if (!canCancelCompact)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new CancelCompactions(tableId)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new CancelCompactions(tableId)), autoCleanup);
         break;
       }
       case TABLE_IMPORT: {
@@ -430,7 +432,7 @@
         if (!canImport)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new ImportTable(c.getPrincipal(), tableName, exportDir, namespaceId)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new ImportTable(c.getPrincipal(), tableName, exportDir, namespaceId)), autoCleanup);
         break;
       }
       case TABLE_EXPORT: {
@@ -452,7 +454,7 @@
         if (!canExport)
           throw new ThriftSecurityException(c.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-        master.fate.seedTransaction(opid, new TraceRepo<Master>(new ExportTable(tableName, tableId, exportDir)), autoCleanup);
+        master.fate.seedTransaction(opid, new TraceRepo<>(new ExportTable(tableName, tableId, exportDir)), autoCleanup);
         break;
       }
       default:
@@ -529,7 +531,8 @@
       return VALID_ID.and(userValidator).validate(tableId);
     } catch (IllegalArgumentException e) {
       String why = e.getMessage();
-      log.warn(why);
+      // Information provided by a client should generate a user-level exception, not a system-level warning.
+      log.debug(why);
       throw new ThriftTableOperationException(tableId, null, op, TableOperationExceptionType.INVALID_NAME, why);
     }
   }
@@ -552,7 +555,8 @@
       return validator.validate(arg);
     } catch (IllegalArgumentException e) {
       String why = e.getMessage();
-      log.warn(why);
+      // Information provided by a client should generate a user-level exception, not a system-level warning.
+      log.debug(why);
       throw new ThriftTableOperationException(null, String.valueOf(arg), op, TableOperationExceptionType.INVALID_NAME, why);
     }
   }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/Master.java b/server/master/src/main/java/org/apache/accumulo/master/Master.java
index 5a2a346..9633a9d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/Master.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/Master.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.master;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.IOException;
@@ -32,6 +33,9 @@
 import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
@@ -54,6 +58,7 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.master.state.tables.TableState;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.master.thrift.MasterClientService.Iface;
 import org.apache.accumulo.core.master.thrift.MasterClientService.Processor;
 import org.apache.accumulo.core.master.thrift.MasterGoalState;
@@ -69,11 +74,11 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.NamespacePermission;
 import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.core.tabletserver.thrift.TUnloadTabletGoal;
 import org.apache.accumulo.core.trace.DistributedTrace;
 import org.apache.accumulo.core.trace.thrift.TInfo;
 import org.apache.accumulo.core.util.Daemon;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.AgeOffStore;
 import org.apache.accumulo.fate.Fate;
@@ -97,6 +102,8 @@
 import org.apache.accumulo.server.fs.VolumeManager.FileType;
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
 import org.apache.accumulo.server.init.Initialize;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalMarkerException;
 import org.apache.accumulo.server.master.LiveTServerSet;
 import org.apache.accumulo.server.master.LiveTServerSet.TServerConnection;
 import org.apache.accumulo.server.master.balancer.DefaultLoadBalancer;
@@ -133,6 +140,7 @@
 import org.apache.accumulo.server.util.DefaultMap;
 import org.apache.accumulo.server.util.Halt;
 import org.apache.accumulo.server.util.MetadataTableUtil;
+import org.apache.accumulo.server.util.ServerBulkImportStatus;
 import org.apache.accumulo.server.util.TableInfoUtil;
 import org.apache.accumulo.server.util.time.SimpleTimer;
 import org.apache.accumulo.server.zookeeper.ZooLock;
@@ -142,7 +150,6 @@
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.DataInputBuffer;
 import org.apache.hadoop.io.DataOutputBuffer;
-import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
 import org.apache.thrift.server.TServer;
 import org.apache.thrift.transport.TTransportException;
@@ -167,8 +174,6 @@
   final static Logger log = LoggerFactory.getLogger(Master.class);
 
   final static int ONE_SECOND = 1000;
-  final private static Text METADATA_TABLE_ID = new Text(MetadataTable.ID);
-  final private static Text ROOT_TABLE_ID = new Text(RootTable.ID);
   final static long TIME_TO_WAIT_BETWEEN_SCANS = 60 * ONE_SECOND;
   final private static long TIME_BETWEEN_MIGRATION_CLEANUPS = 5 * 60 * ONE_SECOND;
   final static long WAIT_BETWEEN_ERRORS = ONE_SECOND;
@@ -182,7 +187,7 @@
   final private String hostname;
   final private Object balancedNotifier = new Object();
   final LiveTServerSet tserverSet;
-  final private List<TabletGroupWatcher> watchers = new ArrayList<TabletGroupWatcher>();
+  final private List<TabletGroupWatcher> watchers = new ArrayList<>();
   final SecurityOperation security;
   final Map<TServerInstance,AtomicInteger> badServers = Collections.synchronizedMap(new DefaultMap<TServerInstance,AtomicInteger>(new AtomicInteger()));
   final Set<TServerInstance> serversToShutdown = Collections.synchronizedSet(new HashSet<TServerInstance>());
@@ -192,6 +197,7 @@
   private ReplicationDriver replicationWorkDriver;
   private WorkDriver replicationWorkAssigner;
   RecoveryManager recoveryManager = null;
+  private final MasterTime timeKeeper;
 
   // Delegation Token classes
   private final boolean delegationTokensAvailable;
@@ -207,6 +213,7 @@
   Fate<Master> fate;
 
   volatile SortedMap<TServerInstance,TabletServerStatus> tserverStatus = Collections.unmodifiableSortedMap(new TreeMap<TServerInstance,TabletServerStatus>());
+  final ServerBulkImportStatus bulkImportStatus = new ServerBulkImportStatus();
 
   @Override
   public synchronized MasterState getMasterState() {
@@ -375,8 +382,8 @@
         String namespaces = ZooUtil.getRoot(getInstance()) + Constants.ZNAMESPACES;
         zoo.putPersistentData(namespaces, new byte[0], NodeExistsPolicy.SKIP);
         for (Pair<String,String> namespace : Iterables.concat(
-            Collections.singleton(new Pair<String,String>(Namespaces.ACCUMULO_NAMESPACE, Namespaces.ACCUMULO_NAMESPACE_ID)),
-            Collections.singleton(new Pair<String,String>(Namespaces.DEFAULT_NAMESPACE, Namespaces.DEFAULT_NAMESPACE_ID)))) {
+            Collections.singleton(new Pair<>(Namespaces.ACCUMULO_NAMESPACE, Namespaces.ACCUMULO_NAMESPACE_ID)),
+            Collections.singleton(new Pair<>(Namespaces.DEFAULT_NAMESPACE, Namespaces.DEFAULT_NAMESPACE_ID)))) {
           String ns = namespace.getFirst();
           String id = namespace.getSecond();
           log.debug("Upgrade creating namespace \"" + ns + "\" (ID: " + id + ")");
@@ -423,6 +430,13 @@
           perm.grantNamespacePermission(user, Namespaces.ACCUMULO_NAMESPACE_ID, NamespacePermission.READ);
         }
         perm.grantNamespacePermission("root", Namespaces.ACCUMULO_NAMESPACE_ID, NamespacePermission.ALTER_TABLE);
+
+        // create the ZooKeeper node that tracks the root tablet's current write-ahead logs
+        zoo.putPersistentData(ZooUtil.getRoot(getInstance()) + RootTable.ZROOT_TABLET_CURRENT_LOGS, new byte[0], NodeExistsPolicy.SKIP);
+
+        // create tablet server wal logs node in ZK
+        zoo.putPersistentData(ZooUtil.getRoot(getInstance()) + WalStateManager.ZWALS, new byte[0], NodeExistsPolicy.SKIP);
+
         haveUpgradedZooKeeper = true;
       } catch (Exception ex) {
         // ACCUMULO-3651 Changed level to error and added FATAL to message for slf4j compatibility
@@ -498,7 +512,7 @@
     }
   }
 
-  private int assignedOrHosted(Text tableId) {
+  private int assignedOrHosted(String tableId) {
     int result = 0;
     for (TabletGroupWatcher watcher : watchers) {
       TableCounts count = watcher.getStats(tableId);
@@ -518,14 +532,14 @@
   }
 
   private int nonMetaDataTabletsAssignedOrHosted() {
-    return totalAssignedOrHosted() - assignedOrHosted(new Text(MetadataTable.ID)) - assignedOrHosted(new Text(RootTable.ID));
+    return totalAssignedOrHosted() - assignedOrHosted(MetadataTable.ID) - assignedOrHosted(RootTable.ID);
   }
 
   private int notHosted() {
     int result = 0;
     for (TabletGroupWatcher watcher : watchers) {
       for (TableCounts counts : watcher.getStats().values()) {
-        result += counts.assigned() + counts.assignedToDeadServers();
+        result += counts.assigned() + counts.assignedToDeadServers() + counts.suspended();
       }
     }
     return result;
@@ -539,12 +553,12 @@
         // Count offline tablets for online tables
         for (TabletGroupWatcher watcher : watchers) {
           TableManager manager = TableManager.getInstance();
-          for (Entry<Text,TableCounts> entry : watcher.getStats().entrySet()) {
-            Text tableId = entry.getKey();
+          for (Entry<String,TableCounts> entry : watcher.getStats().entrySet()) {
+            String tableId = entry.getKey();
             TableCounts counts = entry.getValue();
-            TableState tableState = manager.getTableState(tableId.toString());
+            TableState tableState = manager.getTableState(tableId);
             if (tableState != null && tableState.equals(TableState.ONLINE)) {
-              result += counts.unassigned() + counts.assignedToDeadServers() + counts.assigned();
+              result += counts.unassigned() + counts.assignedToDeadServers() + counts.assigned() + counts.suspended();
             }
           }
         }
@@ -552,13 +566,15 @@
       case SAFE_MODE:
         // Count offline tablets for the metadata table
         for (TabletGroupWatcher watcher : watchers) {
-          result += watcher.getStats(METADATA_TABLE_ID).unassigned();
+          TableCounts counts = watcher.getStats(MetadataTable.ID);
+          result += counts.unassigned() + counts.suspended();
         }
         break;
       case UNLOAD_METADATA_TABLETS:
       case UNLOAD_ROOT_TABLET:
         for (TabletGroupWatcher watcher : watchers) {
-          result += watcher.getStats(METADATA_TABLE_ID).unassigned();
+          TableCounts counts = watcher.getStats(MetadataTable.ID);
+          result += counts.unassigned() + counts.suspended();
         }
         break;
       default:
@@ -583,6 +599,8 @@
 
     log.info("Version " + Constants.VERSION);
     log.info("Instance " + getInstance().getInstanceID());
+    timeKeeper = new MasterTime(this);
+
     ThriftTransportPool.getInstance().setIdleTime(aconf.getTimeInMillis(Property.GENERAL_RPC_TIMEOUT));
     tserverSet = new LiveTServerSet(this, this);
     this.tabletBalancer = aconf.instantiateClassProperty(Property.MASTER_TABLET_BALANCER, TabletBalancer.class, new DefaultLoadBalancer());
@@ -618,16 +636,17 @@
       log.info("SASL is not enabled, delegation tokens will not be available");
       delegationTokensAvailable = false;
     }
+
   }
 
   public TServerConnection getConnection(TServerInstance server) {
     return tserverSet.getConnection(server);
   }
 
-  public MergeInfo getMergeInfo(Text tableId) {
+  public MergeInfo getMergeInfo(String tableId) {
     synchronized (mergeLock) {
       try {
-        String path = ZooUtil.getRoot(getInstance().getInstanceID()) + Constants.ZTABLES + "/" + tableId.toString() + "/merge";
+        String path = ZooUtil.getRoot(getInstance().getInstanceID()) + Constants.ZTABLES + "/" + tableId + "/merge";
         if (!ZooReaderWriter.getInstance().exists(path))
           return new MergeInfo();
         byte[] data = ZooReaderWriter.getInstance().getData(path, new Stat());
@@ -648,7 +667,7 @@
 
   public void setMergeState(MergeInfo info, MergeState state) throws IOException, KeeperException, InterruptedException {
     synchronized (mergeLock) {
-      String path = ZooUtil.getRoot(getInstance().getInstanceID()) + Constants.ZTABLES + "/" + info.getExtent().getTableId().toString() + "/merge";
+      String path = ZooUtil.getRoot(getInstance().getInstanceID()) + Constants.ZTABLES + "/" + info.getExtent().getTableId() + "/merge";
       info.setState(state);
       if (state.equals(MergeState.NONE)) {
         ZooReaderWriter.getInstance().recursiveDelete(path, NodeMissingPolicy.SKIP);
@@ -667,9 +686,9 @@
     nextEvent.event("Merge state of %s set to %s", info.getExtent(), state);
   }
 
-  public void clearMergeState(Text tableId) throws IOException, KeeperException, InterruptedException {
+  public void clearMergeState(String tableId) throws IOException, KeeperException, InterruptedException {
     synchronized (mergeLock) {
-      String path = ZooUtil.getRoot(getInstance().getInstanceID()) + Constants.ZTABLES + "/" + tableId.toString() + "/merge";
+      String path = ZooUtil.getRoot(getInstance().getInstanceID()) + Constants.ZTABLES + "/" + tableId + "/merge";
       ZooReaderWriter.getInstance().recursiveDelete(path, NodeMissingPolicy.SKIP);
       mergeLock.notifyAll();
     }
@@ -692,7 +711,7 @@
         return MasterGoalState.valueOf(new String(data));
       } catch (Exception e) {
         log.error("Problem getting real goal state from zookeeper: " + e);
-        UtilWaitThread.sleep(1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
   }
 
@@ -710,7 +729,7 @@
       Iterator<KeyExtent> iterator = migrations.keySet().iterator();
       while (iterator.hasNext()) {
         KeyExtent extent = iterator.next();
-        if (extent.getTableId().toString().equals(tableId)) {
+        if (extent.getTableId().equals(tableId)) {
           iterator.remove();
         }
       }
@@ -718,7 +737,18 @@
   }
 
   static enum TabletGoalState {
-    HOSTED, UNASSIGNED, DELETED
+    HOSTED(TUnloadTabletGoal.UNKNOWN), UNASSIGNED(TUnloadTabletGoal.UNASSIGNED), DELETED(TUnloadTabletGoal.DELETED), SUSPENDED(TUnloadTabletGoal.SUSPENDED);
+
+    private final TUnloadTabletGoal unloadGoal;
+
+    TabletGoalState(TUnloadTabletGoal unloadGoal) {
+      this.unloadGoal = unloadGoal;
+    }
+
+    /** The purpose of unloading this tablet. */
+    public TUnloadTabletGoal howUnload() {
+      return unloadGoal;
+    }
   };
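The hunk above replaces the bare `TabletGoalState` constants with per-constant fields, so each goal state carries the thrift unload goal to report. A minimal standalone sketch of that enum-with-field pattern (types stubbed here for illustration; `UnloadGoal` stands in for the real `TUnloadTabletGoal` thrift enum):

```java
// UnloadGoal stands in for the generated thrift enum TUnloadTabletGoal.
enum UnloadGoal { UNKNOWN, UNASSIGNED, DELETED, SUSPENDED }

// Each goal state is constructed with the unload goal it maps to,
// mirroring the TabletGoalState refactor in the hunk above.
enum GoalState {
  HOSTED(UnloadGoal.UNKNOWN),
  UNASSIGNED(UnloadGoal.UNASSIGNED),
  DELETED(UnloadGoal.DELETED),
  SUSPENDED(UnloadGoal.SUSPENDED);

  private final UnloadGoal unloadGoal;

  GoalState(UnloadGoal unloadGoal) {
    this.unloadGoal = unloadGoal;
  }

  // The purpose of unloading a tablet in this state.
  UnloadGoal howUnload() {
    return unloadGoal;
  }
}

public class GoalStateDemo {
  public static void main(String[] args) {
    System.out.println(GoalState.SUSPENDED.howUnload()); // SUSPENDED
  }
}
```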
 
   TabletGoalState getSystemGoalState(TabletLocationState tls) {
@@ -745,7 +775,7 @@
   }
 
   TabletGoalState getTableGoalState(KeyExtent extent) {
-    TableState tableState = TableManager.getInstance().getTableState(extent.getTableId().toString());
+    TableState tableState = TableManager.getInstance().getTableState(extent.getTableId());
     if (tableState == null)
       return TabletGoalState.DELETED;
     switch (tableState) {
@@ -765,7 +795,7 @@
     TabletGoalState state = getSystemGoalState(tls);
     if (state == TabletGoalState.HOSTED) {
       if (tls.current != null && serversToShutdown.contains(tls.current)) {
-        return TabletGoalState.UNASSIGNED;
+        return TabletGoalState.SUSPENDED;
       }
       // Handle merge transitions
       if (mergeInfo.getExtent() != null) {
@@ -822,7 +852,7 @@
             log.error("Error cleaning up migrations", ex);
           }
         }
-        UtilWaitThread.sleep(TIME_BETWEEN_MIGRATION_CLEANUPS);
+        sleepUninterruptibly(TIME_BETWEEN_MIGRATION_CLEANUPS, TimeUnit.MILLISECONDS);
       }
     }
 
@@ -833,7 +863,7 @@
     private void cleanupNonexistentMigrations(final Connector connector) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
       Scanner scanner = connector.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
       TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(scanner);
-      Set<KeyExtent> found = new HashSet<KeyExtent>();
+      Set<KeyExtent> found = new HashSet<>();
       for (Entry<Key,Value> entry : scanner) {
         KeyExtent extent = new KeyExtent(entry.getKey().getRow(), entry.getValue());
         if (migrations.containsKey(extent)) {
@@ -914,19 +944,19 @@
                 }
                   break;
                 case UNLOAD_METADATA_TABLETS: {
-                  int count = assignedOrHosted(METADATA_TABLE_ID);
+                  int count = assignedOrHosted(MetadataTable.ID);
                   log.debug(String.format("There are %d metadata tablets assigned or hosted", count));
                   if (count == 0 && goodStats())
                     setMasterState(MasterState.UNLOAD_ROOT_TABLET);
                 }
                   break;
                 case UNLOAD_ROOT_TABLET: {
-                  int count = assignedOrHosted(METADATA_TABLE_ID);
+                  int count = assignedOrHosted(MetadataTable.ID);
                   if (count > 0 && goodStats()) {
                     log.debug(String.format("%d metadata tablets online", count));
                     setMasterState(MasterState.UNLOAD_ROOT_TABLET);
                   }
-                  int root_count = assignedOrHosted(ROOT_TABLE_ID);
+                  int root_count = assignedOrHosted(RootTable.ID);
                   if (root_count > 0 && goodStats())
                     log.debug("The root tablet is still assigned or hosted");
                   if (count + root_count == 0 && goodStats()) {
@@ -960,13 +990,14 @@
           eventListener.waitForEvents(wait);
         } catch (Throwable t) {
           log.error("Error balancing tablets, will wait for " + WAIT_BETWEEN_ERRORS / ONE_SECOND + " (seconds) and then retry", t);
-          UtilWaitThread.sleep(WAIT_BETWEEN_ERRORS);
+          sleepUninterruptibly(WAIT_BETWEEN_ERRORS, TimeUnit.MILLISECONDS);
         }
       }
     }
 
     private long updateStatus() throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-      tserverStatus = Collections.synchronizedSortedMap(gatherTableInformation());
+      Set<TServerInstance> currentServers = tserverSet.getCurrentServers();
+      tserverStatus = Collections.synchronizedSortedMap(gatherTableInformation(currentServers));
       checkForHeldServer(tserverStatus);
 
       if (!badServers.isEmpty()) {
@@ -978,6 +1009,12 @@
       } else if (!serversToShutdown.isEmpty()) {
         log.debug("not balancing while shutting down servers " + serversToShutdown);
       } else {
+        for (TabletGroupWatcher tgw : watchers) {
+          if (!tgw.isSameTserversAsLastScan(currentServers)) {
+            log.debug("not balancing just yet, as collection of live tservers is in flux");
+            return DEFAULT_WAIT_FOR_WATCHER;
+          }
+        }
         return balanceTablets();
       }
       return DEFAULT_WAIT_FOR_WATCHER;
@@ -1011,7 +1048,7 @@
     }
 
     private long balanceTablets() {
-      List<TabletMigration> migrationsOut = new ArrayList<TabletMigration>();
+      List<TabletMigration> migrationsOut = new ArrayList<>();
       long wait = tabletBalancer.balance(Collections.unmodifiableSortedMap(tserverStatus), migrationsSnapshot(), migrationsOut);
 
       for (TabletMigration m : TabletBalancer.checkMigrationSanity(tserverStatus.keySet(), migrationsOut)) {
@@ -1034,42 +1071,55 @@
 
   }
 
-  private SortedMap<TServerInstance,TabletServerStatus> gatherTableInformation() {
+  private SortedMap<TServerInstance,TabletServerStatus> gatherTableInformation(Set<TServerInstance> currentServers) {
     long start = System.currentTimeMillis();
-    SortedMap<TServerInstance,TabletServerStatus> result = new TreeMap<TServerInstance,TabletServerStatus>();
-    Set<TServerInstance> currentServers = tserverSet.getCurrentServers();
-    for (TServerInstance server : currentServers) {
-      try {
-        Thread t = Thread.currentThread();
-        String oldName = t.getName();
-        try {
-          t.setName("Getting status from " + server);
-          TServerConnection connection = tserverSet.getConnection(server);
-          if (connection == null)
-            throw new IOException("No connection to " + server);
-          TabletServerStatus status = connection.getTableMap(false);
-          result.put(server, status);
-        } finally {
-          t.setName(oldName);
-        }
-      } catch (Exception ex) {
-        log.error("unable to get tablet server status " + server + " " + ex.toString());
-        log.debug("unable to get tablet server status " + server, ex);
-        if (badServers.get(server).incrementAndGet() > MAX_BAD_STATUS_COUNT) {
-          log.warn("attempting to stop " + server);
+    int threads = Math.max(getConfiguration().getCount(Property.MASTER_STATUS_THREAD_POOL_SIZE), 1);
+    ExecutorService tp = Executors.newFixedThreadPool(threads);
+    final SortedMap<TServerInstance,TabletServerStatus> result = new TreeMap<>();
+    for (TServerInstance serverInstance : currentServers) {
+      final TServerInstance server = serverInstance;
+      tp.submit(new Runnable() {
+        @Override
+        public void run() {
           try {
-            TServerConnection connection = tserverSet.getConnection(server);
-            if (connection != null) {
-              connection.halt(masterLock);
+            Thread t = Thread.currentThread();
+            String oldName = t.getName();
+            try {
+              t.setName("Getting status from " + server);
+              TServerConnection connection = tserverSet.getConnection(server);
+              if (connection == null)
+                throw new IOException("No connection to " + server);
+              TabletServerStatus status = connection.getTableMap(false);
+              result.put(server, status);
+            } finally {
+              t.setName(oldName);
             }
-          } catch (TTransportException e) {
-            // ignore: it's probably down
-          } catch (Exception e) {
-            log.info("error talking to troublesome tablet server ", e);
+          } catch (Exception ex) {
+            log.error("unable to get tablet server status " + server + " " + ex.toString());
+            log.debug("unable to get tablet server status " + server, ex);
+            if (badServers.get(server).incrementAndGet() > MAX_BAD_STATUS_COUNT) {
+              log.warn("attempting to stop " + server);
+              try {
+                TServerConnection connection = tserverSet.getConnection(server);
+                if (connection != null) {
+                  connection.halt(masterLock);
+                }
+              } catch (TTransportException e) {
+                // ignore: it's probably down
+              } catch (Exception e) {
+                log.info("error talking to troublesome tablet server ", e);
+              }
+              badServers.remove(server);
+            }
           }
-          badServers.remove(server);
         }
-      }
+      });
+    }
+    tp.shutdown();
+    try {
+      tp.awaitTermination(getConfiguration().getTimeInMillis(Property.TSERV_CLIENT_TIMEOUT) * 2, TimeUnit.MILLISECONDS);
+    } catch (InterruptedException e) {
+      log.debug("Interrupted while fetching status");
     }
     synchronized (badServers) {
       badServers.keySet().retainAll(currentServers);
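The `gatherTableInformation` rewrite above moves status collection from a serial loop to a fixed thread pool, bounded by `shutdown()` plus `awaitTermination()`. A self-contained sketch of that fan-out pattern, under simplified assumptions (server names are placeholders, and a `ConcurrentSkipListMap` is used so the concurrent puts are thread-safe):

```java
import java.util.SortedMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StatusGatherDemo {
  // Submit one status-fetch task per server, then bound the total wait.
  static SortedMap<String,String> gather(String[] servers) throws InterruptedException {
    ConcurrentSkipListMap<String,String> result = new ConcurrentSkipListMap<>();
    ExecutorService tp = Executors.newFixedThreadPool(4);
    for (String server : servers) {
      // Stand-in for connection.getTableMap(false) in the patch above.
      tp.submit(() -> result.put(server, "OK"));
    }
    tp.shutdown();                            // accept no new tasks; queued ones still run
    tp.awaitTermination(5, TimeUnit.SECONDS); // upper bound on the whole gather
    return result;
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(gather(new String[] {"tserver1", "tserver2", "tserver3"}).keySet());
  }
}
```

The shutdown-then-await pair is what keeps a single slow or dead tablet server from stalling the master's status loop indefinitely.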
@@ -1111,9 +1161,30 @@
       }
     });
 
-    watchers.add(new TabletGroupWatcher(this, new MetaDataStateStore(this, this), null));
-    watchers.add(new TabletGroupWatcher(this, new RootTabletStateStore(this, this), watchers.get(0)));
-    watchers.add(new TabletGroupWatcher(this, new ZooTabletStateStore(new ZooStore(zroot)), watchers.get(1)));
+    watchers.add(new TabletGroupWatcher(this, new MetaDataStateStore(this, this), null) {
+      @Override
+      boolean canSuspendTablets() {
+        // Always allow user data tablets to enter suspended state.
+        return true;
+      }
+    });
+
+    watchers.add(new TabletGroupWatcher(this, new RootTabletStateStore(this, this), watchers.get(0)) {
+      @Override
+      boolean canSuspendTablets() {
+        // Allow metadata tablets to enter suspended state only if so configured. Generally we'll want metadata tablets to
+        // be immediately reassigned, even if there's a global table.suspension.duration setting.
+        return getConfiguration().getBoolean(Property.MASTER_METADATA_SUSPENDABLE);
+      }
+    });
+
+    watchers.add(new TabletGroupWatcher(this, new ZooTabletStateStore(new ZooStore(zroot)), watchers.get(1)) {
+      @Override
+      boolean canSuspendTablets() {
+        // Never allow root tablet to enter suspended state.
+        return false;
+      }
+    });
     for (TabletGroupWatcher watcher : watchers) {
       watcher.start();
     }
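The three watchers above are created as anonymous subclasses, each overriding a single `canSuspendTablets()` hook to express a per-store suspension policy. A simplified sketch of that pattern (class names here are illustrative, not Accumulo's real types):

```java
// Base type with one policy hook, mirroring TabletGroupWatcher's
// canSuspendTablets() in the hunk above.
abstract class Watcher {
  abstract boolean canSuspendTablets();
}

public class WatcherDemo {
  static Watcher userTabletWatcher() {
    return new Watcher() {
      @Override
      boolean canSuspendTablets() {
        return true;   // user tablets may always enter the suspended state
      }
    };
  }

  static Watcher rootTabletWatcher() {
    return new Watcher() {
      @Override
      boolean canSuspendTablets() {
        return false;  // the root tablet must always be reassigned immediately
      }
    };
  }

  public static void main(String[] args) {
    System.out.println(userTabletWatcher().canSuspendTablets());  // true
    System.out.println(rootTabletWatcher().canSuspendTablets());  // false
  }
}
```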
@@ -1122,12 +1193,12 @@
     waitForMetadataUpgrade.await();
 
     try {
-      final AgeOffStore<Master> store = new AgeOffStore<Master>(new org.apache.accumulo.fate.ZooStore<Master>(ZooUtil.getRoot(getInstance()) + Constants.ZFATE,
+      final AgeOffStore<Master> store = new AgeOffStore<>(new org.apache.accumulo.fate.ZooStore<Master>(ZooUtil.getRoot(getInstance()) + Constants.ZFATE,
           ZooReaderWriter.getInstance()), 1000 * 60 * 60 * 8);
 
       int threads = getConfiguration().getCount(Property.MASTER_FATE_THREADPOOL_SIZE);
 
-      fate = new Fate<Master>(this, store);
+      fate = new Fate<>(this, store);
       fate.startTransactionRunners(threads);
 
       SimpleTimer.getInstance(getConfiguration()).schedule(new Runnable() {
@@ -1158,7 +1229,7 @@
           log.info("Waiting for AuthenticationTokenKeyManager to be initialized");
           logged = true;
         }
-        UtilWaitThread.sleep(200);
+        sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
       }
       // And log when we are initialized
       log.info("AuthenticationTokenSecretManager is initialized");
@@ -1169,9 +1240,9 @@
     final Processor<Iface> processor;
     if (ThriftServerType.SASL == getThriftServerType()) {
       Iface tcredsProxy = TCredentialsUpdatingWrapper.service(rpcProxy, clientHandler.getClass(), getConfiguration());
-      processor = new Processor<Iface>(tcredsProxy);
+      processor = new Processor<>(tcredsProxy);
     } else {
-      processor = new Processor<Iface>(rpcProxy);
+      processor = new Processor<>(rpcProxy);
     }
     ServerAddress sa = TServerUtils.startServer(this, hostname, Property.MASTER_CLIENTPORT, processor, "Master", "Master Client Service Handler", null,
         Property.MASTER_MINTHREADS, Property.MASTER_THREADCHECK, Property.GENERAL_MAX_MESSAGE_SIZE);
@@ -1181,7 +1252,7 @@
     masterLock.replaceLockData(address.getBytes());
 
     while (!clientService.isServing()) {
-      UtilWaitThread.sleep(100);
+      sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     }
 
     // Start the daemon to scan the replication table and make units of work
@@ -1199,8 +1270,8 @@
 
     // Start the replication coordinator which assigns tservers to service replication requests
     MasterReplicationCoordinator impl = new MasterReplicationCoordinator(this);
-    ReplicationCoordinator.Processor<ReplicationCoordinator.Iface> replicationCoordinatorProcessor = new ReplicationCoordinator.Processor<ReplicationCoordinator.Iface>(
-        RpcWrapper.service(impl, new ReplicationCoordinator.Processor<ReplicationCoordinator.Iface>(impl)));
+    ReplicationCoordinator.Processor<ReplicationCoordinator.Iface> replicationCoordinatorProcessor = new ReplicationCoordinator.Processor<>(RpcWrapper.service(
+        impl, new ReplicationCoordinator.Processor<ReplicationCoordinator.Iface>(impl)));
     ServerAddress replAddress = TServerUtils.startServer(this, hostname, Property.MASTER_REPLICATION_COORDINATOR_PORT, replicationCoordinatorProcessor,
         "Master Replication Coordinator", "Replication Coordinator", null, Property.MASTER_REPLICATION_COORDINATOR_MINTHREADS,
         Property.MASTER_REPLICATION_COORDINATOR_THREADCHECK, Property.GENERAL_MAX_MESSAGE_SIZE);
@@ -1221,11 +1292,14 @@
     }
 
     while (clientService.isServing()) {
-      UtilWaitThread.sleep(500);
+      sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
     }
     log.info("Shutting down fate.");
     fate.shutdown();
 
+    log.info("Shutting down timekeeping.");
+    timeKeeper.shutdown();
+
     final long deadline = System.currentTimeMillis() + MAX_CLEANUP_WAIT_TIME;
     statusThread.join(remaining(deadline));
     replicationWorkAssigner.join(remaining(deadline));
@@ -1317,7 +1391,7 @@
   private void getMasterLock(final String zMasterLoc) throws KeeperException, InterruptedException {
     log.info("trying to get master lock");
 
-    final String masterClientAddress = hostname + ":" + getConfiguration().getPort(Property.MASTER_CLIENTPORT);
+    final String masterClientAddress = hostname + ":" + getConfiguration().getPort(Property.MASTER_CLIENTPORT)[0];
 
     while (true) {
 
@@ -1337,7 +1411,7 @@
 
       masterLock.tryToCancelAsyncLockOrUnlock();
 
-      UtilWaitThread.sleep(TIME_TO_WAIT_BETWEEN_LOCK_CHECKS);
+      sleepUninterruptibly(TIME_TO_WAIT_BETWEEN_LOCK_CHECKS, TimeUnit.MILLISECONDS);
     }
 
     setMasterState(MasterState.HAVE_LOCK);
@@ -1382,7 +1456,7 @@
         obit.post(dead.hostPort(), cause);
     }
 
-    Set<TServerInstance> unexpected = new HashSet<TServerInstance>(deleted);
+    Set<TServerInstance> unexpected = new HashSet<>(deleted);
     unexpected.removeAll(this.serversToShutdown);
     if (unexpected.size() > 0) {
       if (stillMaster() && !getMasterGoalState().equals(MasterGoalState.CLEAN_STOP)) {
@@ -1447,7 +1521,7 @@
 
   @Override
   public Set<String> onlineTables() {
-    Set<String> result = new HashSet<String>();
+    Set<String> result = new HashSet<>();
     if (getMasterState() != MasterState.NORMAL) {
       if (getMasterState() != MasterState.UNLOAD_METADATA_TABLETS)
         result.add(MetadataTable.ID);
@@ -1474,9 +1548,9 @@
 
   @Override
   public Collection<MergeInfo> merges() {
-    List<MergeInfo> result = new ArrayList<MergeInfo>();
+    List<MergeInfo> result = new ArrayList<>();
     for (String tableId : Tables.getIdToNameMap(getInstance()).keySet()) {
-      result.add(getMergeInfo(new Text(tableId)));
+      result.add(getMergeInfo(tableId));
     }
     return result;
   }
@@ -1530,8 +1604,8 @@
   public MasterMonitorInfo getMasterMonitorInfo() {
     final MasterMonitorInfo result = new MasterMonitorInfo();
 
-    result.tServerInfo = new ArrayList<TabletServerStatus>();
-    result.tableMap = new DefaultMap<String,TableInfo>(new TableInfo());
+    result.tServerInfo = new ArrayList<>();
+    result.tableMap = new DefaultMap<>(new TableInfo());
     for (Entry<TServerInstance,TabletServerStatus> serverEntry : tserverStatus.entrySet()) {
       final TabletServerStatus status = serverEntry.getValue();
       result.tServerInfo.add(status);
@@ -1539,7 +1613,7 @@
         TableInfoUtil.add(result.tableMap.get(entry.getKey()), entry.getValue());
       }
     }
-    result.badTServers = new HashMap<String,Byte>();
+    result.badTServers = new HashMap<>();
     synchronized (badServers) {
       for (TServerInstance bad : badServers.keySet()) {
         result.badTServers.put(bad.hostPort(), TabletServerState.UNRESPONSIVE.getId());
@@ -1548,13 +1622,14 @@
     result.state = getMasterState();
     result.goalState = getMasterGoalState();
     result.unassignedTablets = displayUnassigned();
-    result.serversShuttingDown = new HashSet<String>();
+    result.serversShuttingDown = new HashSet<>();
     synchronized (serversToShutdown) {
       for (TServerInstance server : serversToShutdown)
         result.serversShuttingDown.add(server.hostPort());
     }
     DeadServerList obit = new DeadServerList(ZooUtil.getRoot(getInstance()) + Constants.ZDEADTSERVERS);
     result.deadTabletServers = obit.getList();
+    result.bulkImports = bulkImportStatus.getBulkLoadStatus();
     return result;
   }
 
@@ -1567,7 +1642,7 @@
 
   @Override
   public Set<KeyExtent> migrationsSnapshot() {
-    Set<KeyExtent> migrationKeys = new HashSet<KeyExtent>();
+    Set<KeyExtent> migrationKeys = new HashSet<>();
     synchronized (migrations) {
       migrationKeys.addAll(migrations.keySet());
     }
@@ -1577,7 +1652,32 @@
   @Override
   public Set<TServerInstance> shutdownServers() {
     synchronized (serversToShutdown) {
-      return new HashSet<TServerInstance>(serversToShutdown);
+      return new HashSet<>(serversToShutdown);
     }
   }
+
+  public void markDeadServerLogsAsClosed(Map<TServerInstance,List<Path>> logsForDeadServers) throws WalMarkerException {
+    WalStateManager mgr = new WalStateManager(this.inst, ZooReaderWriter.getInstance());
+    for (Entry<TServerInstance,List<Path>> server : logsForDeadServers.entrySet()) {
+      for (Path path : server.getValue()) {
+        mgr.closeWal(server.getKey(), path);
+      }
+    }
+  }
+
+  public void updateBulkImportStatus(String directory, BulkImportState state) {
+    bulkImportStatus.updateBulkImportStatus(Collections.singletonList(directory), state);
+  }
+
+  public void removeBulkImportStatus(String directory) {
+    bulkImportStatus.removeBulkImportStatus(Collections.singletonList(directory));
+  }
+
+  /**
+   * Returns how long (in milliseconds) a master has been overseeing this cluster. This clock is approximately monotonic, and approximately consistent
+   * between different masters and between different runs of the same master.
+   */
+  public Long getSteadyTime() {
+    return timeKeeper.getTime();
+  }
 }
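Many hunks in this patch replace `UtilWaitThread.sleep(ms)` with Guava's `sleepUninterruptibly(duration, unit)` (statically imported in MasterClientServiceHandler below). A stdlib-only sketch of the same semantics: keep sleeping through interrupts for the remaining time, then restore the thread's interrupt flag so callers can still observe it.

```java
import java.util.concurrent.TimeUnit;

public class SleepDemo {
  // Sleep for the full duration even if interrupted, re-asserting the
  // interrupt status afterwards (the contract of Guava's method).
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    long remainingNanos = unit.toNanos(duration);
    long end = System.nanoTime() + remainingNanos;
    try {
      while (true) {
        try {
          // TimeUnit.sleep is a no-op for non-positive timeouts.
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true;                       // remember, retry for the remainder
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();         // restore the flag for callers
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    System.out.println(System.nanoTime() - start >= 50_000_000L); // true
  }
}
```

Swallowing the interrupt without restoring the flag would hide shutdown requests from the master's outer loops, which is why the `finally` block matters.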
diff --git a/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java b/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java
index cff8b51..bbbdbed 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/MasterClientServiceHandler.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.master;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Collections;
@@ -24,6 +26,7 @@
 import java.util.List;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -71,7 +74,6 @@
 import org.apache.accumulo.core.security.thrift.TDelegationTokenConfig;
 import org.apache.accumulo.core.trace.thrift.TInfo;
 import org.apache.accumulo.core.util.ByteBufferUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter.Mutator;
 import org.apache.accumulo.master.tableOps.TraceRepo;
@@ -99,13 +101,13 @@
 
 import com.google.protobuf.InvalidProtocolBufferException;
 
-class MasterClientServiceHandler extends FateServiceHandler implements MasterClientService.Iface {
+public class MasterClientServiceHandler extends FateServiceHandler implements MasterClientService.Iface {
 
   private static final Logger log = Master.log;
   private static final Logger drainLog = LoggerFactory.getLogger("org.apache.accumulo.master.MasterDrainImpl");
   private Instance instance;
 
-  MasterClientServiceHandler(Master master) {
+  protected MasterClientServiceHandler(Master master) {
     super(master);
     this.instance = master.getInstance();
   }
@@ -146,7 +148,7 @@
     if (endRow != null && startRow != null && ByteBufferUtil.toText(startRow).compareTo(ByteBufferUtil.toText(endRow)) >= 0)
       throw new ThriftTableOperationException(tableId, null, TableOperation.FLUSH, TableOperationExceptionType.BAD_RANGE, "start row must be less than end row");
 
-    Set<TServerInstance> serversToFlush = new HashSet<TServerInstance>(master.tserverSet.getCurrentServers());
+    Set<TServerInstance> serversToFlush = new HashSet<>(master.tserverSet.getCurrentServers());
 
     for (long l = 0; l < maxLoops; l++) {
 
@@ -163,7 +165,7 @@
       if (l == maxLoops - 1)
         break;
 
-      UtilWaitThread.sleep(50);
+      sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
 
       serversToFlush.clear();
 
@@ -175,7 +177,8 @@
           scanner.setRange(MetadataSchema.TabletsSection.getRange());
         } else {
           scanner = new IsolatedScanner(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY));
-          scanner.setRange(new KeyExtent(new Text(tableId), null, ByteBufferUtil.toText(startRow)).toMetadataRange());
+          Range range = new KeyExtent(tableId, null, ByteBufferUtil.toText(startRow)).toMetadataRange();
+          scanner.setRange(range.clip(MetadataSchema.TabletsSection.getRange()));
         }
         TabletsSection.ServerColumnFamily.FLUSH_COLUMN.fetch(scanner);
         TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.fetch(scanner);
@@ -300,7 +303,7 @@
 
     log.debug("Seeding FATE op to shutdown " + tabletServer + " with tid " + tid);
 
-    master.fate.seedTransaction(tid, new TraceRepo<Master>(new ShutdownTServer(doomed, force)), false);
+    master.fate.seedTransaction(tid, new TraceRepo<>(new ShutdownTServer(doomed, force)), false);
     master.fate.waitForCompletion(tid);
     master.fate.delete(tid);
 
@@ -377,6 +380,9 @@
     try {
       SystemPropUtil.setSystemProperty(property, value);
       updatePlugins(property);
+    } catch (IllegalArgumentException iae) {
+      // throw the exception here so it is not caught and converted to a generic TException
+      throw iae;
     } catch (Exception e) {
       Master.log.error("Problem setting config property in zookeeper", e);
       throw new TException(e.getMessage());
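The added catch block rethrows `IllegalArgumentException` before the generic `catch (Exception e)` can wrap it in a `TException`, so callers see the precise validation failure. A minimal sketch of the pattern (names like `setProperty` and `ConfigException` are stand-ins, not Accumulo's API):

```java
public class RethrowSketch {
  static class ConfigException extends Exception {
    ConfigException(String msg) { super(msg); }
  }

  // Hypothetical stand-in for a validating setter such as SystemPropUtil.setSystemProperty.
  static void setProperty(String key, String value) {
    if (!key.startsWith("table.")) {
      throw new IllegalArgumentException("unknown property: " + key);
    }
  }

  static void handleSetProperty(String key, String value) throws ConfigException {
    try {
      setProperty(key, value);
    } catch (IllegalArgumentException iae) {
      // Rethrow as-is so it is not caught below and converted
      // to a generic wrapped exception.
      throw iae;
    } catch (Exception e) {
      throw new ConfigException(e.getMessage());
    }
  }

  public static void main(String[] args) {
    try {
      handleSetProperty("bogus.prop", "x");
    } catch (IllegalArgumentException iae) {
      System.out.println("IllegalArgumentException: " + iae.getMessage());
    } catch (ConfigException ce) {
      System.out.println("ConfigException: " + ce.getMessage());
    }
  }
}
```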
@@ -463,7 +469,7 @@
   @Override
   public List<String> getActiveTservers(TInfo tinfo, TCredentials credentials) throws TException {
     Set<TServerInstance> tserverInstances = master.onlineTabletServers();
-    List<String> servers = new ArrayList<String>();
+    List<String> servers = new ArrayList<>();
     for (TServerInstance tserverInstance : tserverInstances) {
       servers.add(tserverInstance.getLocation().toString());
     }
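Throughout this merge, `UtilWaitThread.sleep` is replaced by Guava's `Uninterruptibles.sleepUninterruptibly`, which sleeps for the full duration even if the thread is interrupted, then restores the interrupt flag for callers. A minimal sketch of that behavior, assuming only the JDK:

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
  /**
   * Sleep for the full duration even if interrupted; remember the
   * interruption and restore the thread's interrupt status on exit.
   */
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          remainingNanos = 0;
        } catch (InterruptedException e) {
          interrupted = true; // note it, but keep sleeping
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // let callers observe it
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    System.out.println(elapsedMs >= 50); // true
  }
}
```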
diff --git a/server/master/src/main/java/org/apache/accumulo/master/MasterTime.java b/server/master/src/main/java/org/apache/accumulo/master/MasterTime.java
new file mode 100644
index 0000000..93e81e7
--- /dev/null
+++ b/server/master/src/main/java/org/apache/accumulo/master/MasterTime.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.master;
+
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static java.util.concurrent.TimeUnit.NANOSECONDS;
+import static java.util.concurrent.TimeUnit.SECONDS;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.Timer;
+import java.util.TimerTask;
+import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.zookeeper.ZooUtil;
+import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Keeps a persistent, roughly monotonic view of how long a master has been overseeing this cluster. */
+public class MasterTime extends TimerTask {
+  private static final Logger log = LoggerFactory.getLogger(MasterTime.class);
+
+  private final String zPath;
+  private final ZooReaderWriter zk;
+  private final Master master;
+  private final Timer timer;
+
+  /** Difference between time stored in ZooKeeper and System.nanoTime() when we last read from ZooKeeper. */
+  private long skewAmount;
+
+  public MasterTime(Master master) throws IOException {
+    this.zPath = ZooUtil.getRoot(master.getInstance()) + Constants.ZMASTER_TICK;
+    this.zk = ZooReaderWriter.getInstance();
+    this.master = master;
+
+    try {
+      zk.putPersistentData(zPath, "0".getBytes(StandardCharsets.UTF_8), NodeExistsPolicy.SKIP);
+      skewAmount = Long.parseLong(new String(zk.getData(zPath, null), StandardCharsets.UTF_8)) - System.nanoTime();
+    } catch (Exception ex) {
+      throw new IOException("Error updating master time", ex);
+    }
+
+    this.timer = new Timer();
+    timer.schedule(this, 0, MILLISECONDS.convert(10, SECONDS));
+  }
+
+  /**
+   * How long has this cluster had a Master?
+   *
+   * @return Approximate total duration this cluster has had a Master, in milliseconds.
+   */
+  public synchronized long getTime() {
+    return MILLISECONDS.convert(System.nanoTime() + skewAmount, NANOSECONDS);
+  }
+
+  /** Shut down the time keeping. */
+  public void shutdown() {
+    timer.cancel();
+  }
+
+  @Override
+  public void run() {
+    switch (master.getMasterState()) {
+    // If we don't have the lock, periodically re-read the value in ZooKeeper, in case there's another master we're
+    // shadowing for.
+      case INITIAL:
+      case STOP:
+        try {
+          long zkTime = Long.parseLong(new String(zk.getData(zPath, null), StandardCharsets.UTF_8));
+          synchronized (this) {
+            skewAmount = zkTime - System.nanoTime();
+          }
+        } catch (Exception ex) {
+          if (log.isDebugEnabled()) {
+            log.debug("Failed to retrieve master tick time", ex);
+          }
+        }
+        break;
+      // If we do have the lock, periodically write our clock to ZooKeeper.
+      case HAVE_LOCK:
+      case SAFE_MODE:
+      case NORMAL:
+      case UNLOAD_METADATA_TABLETS:
+      case UNLOAD_ROOT_TABLET:
+        try {
+          zk.putPersistentData(zPath, Long.toString(System.nanoTime() + skewAmount).getBytes(StandardCharsets.UTF_8), NodeExistsPolicy.OVERWRITE);
+        } catch (Exception ex) {
+          if (log.isDebugEnabled()) {
+            log.debug("Failed to update master tick time", ex);
+          }
+        }
+    }
+  }
+}
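The skew bookkeeping in `MasterTime` can be modeled in isolation: the persisted value is `System.nanoTime() + skewAmount`, and on startup the skew is recovered by subtracting the current `nanoTime`, so the cluster-age clock keeps counting across master restarts. In this toy model an `AtomicLong` stands in for the ZooKeeper node (an assumption for the sketch; the real class reads and writes `zPath` via `ZooReaderWriter`):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class MasterTimeSketch {
  private final AtomicLong zkNode; // stand-in for the ZooKeeper tick node
  private long skewAmount;

  MasterTimeSketch(AtomicLong zkNode) {
    this.zkNode = zkNode;
    // Recover: stored value minus the current nanoTime gives the skew.
    this.skewAmount = zkNode.get() - System.nanoTime();
  }

  /** Approximate total time a master has overseen the cluster, in ms. */
  synchronized long getTime() {
    return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() + skewAmount);
  }

  /** Periodic tick while holding the master lock: persist the reading. */
  synchronized void persist() {
    zkNode.set(System.nanoTime() + skewAmount);
  }

  public static void main(String[] args) {
    // Pretend a previous master persisted two hours of cluster age.
    AtomicLong node = new AtomicLong(TimeUnit.HOURS.toNanos(2));
    MasterTimeSketch mt = new MasterTimeSketch(node);
    System.out.println(mt.getTime() >= TimeUnit.HOURS.toMillis(2)); // true: clock only moves forward
    mt.persist();
    System.out.println(node.get() >= TimeUnit.HOURS.toNanos(2)); // true
  }
}
```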
diff --git a/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java b/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
index 4c47953..76fda21 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/TabletGroupWatcher.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.master;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.lang.Math.min;
 
 import java.io.IOException;
@@ -28,8 +29,10 @@
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.SortedMap;
+import java.util.SortedSet;
 import java.util.TreeMap;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -41,6 +44,7 @@
 import org.apache.accumulo.core.client.RowIterator;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.PartialKey;
@@ -60,14 +64,16 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.tabletserver.thrift.NotServingTabletException;
 import org.apache.accumulo.core.util.Daemon;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.master.Master.TabletGoalState;
 import org.apache.accumulo.master.state.MergeStats;
 import org.apache.accumulo.master.state.TableCounts;
 import org.apache.accumulo.master.state.TableStats;
 import org.apache.accumulo.server.ServerConstants;
+import org.apache.accumulo.server.conf.TableConfiguration;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.fs.VolumeManager.FileType;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalMarkerException;
 import org.apache.accumulo.server.master.LiveTServerSet.TServerConnection;
 import org.apache.accumulo.server.master.state.Assignment;
 import org.apache.accumulo.server.master.state.ClosableIterator;
@@ -82,14 +88,16 @@
 import org.apache.accumulo.server.tables.TableManager;
 import org.apache.accumulo.server.tablets.TabletTime;
 import org.apache.accumulo.server.util.MetadataTableUtil;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
 
 import com.google.common.base.Optional;
+import com.google.common.collect.ImmutableSortedSet;
 import com.google.common.collect.Iterators;
 
-class TabletGroupWatcher extends Daemon {
+abstract class TabletGroupWatcher extends Daemon {
   // Constants used to make sure assignment logging isn't excessive in quantity or size
   private static final String ASSIGNMENT_BUFFER_SEPARATOR = ", ";
   private static final int ASSINGMENT_BUFFER_MAX_LENGTH = 4096;
@@ -97,9 +105,11 @@
   private final Master master;
   final TabletStateStore store;
   final TabletGroupWatcher dependentWatcher;
+
   private MasterState masterState;
 
   final TableStats stats = new TableStats();
+  private SortedSet<TServerInstance> lastScanServers = ImmutableSortedSet.of();
 
   TabletGroupWatcher(Master master, TabletStateStore store, TabletGroupWatcher dependentWatcher) {
     this.master = master;
@@ -107,7 +117,10 @@
     this.dependentWatcher = dependentWatcher;
   }
 
-  Map<Text,TableCounts> getStats() {
+  /** Should this {@code TabletGroupWatcher} suspend tablets? */
+  abstract boolean canSuspendTablets();
+
+  Map<String,TableCounts> getStats() {
     return stats.getLast();
   }
 
@@ -116,28 +129,34 @@
     return masterState;
   }
 
-  TableCounts getStats(Text tableId) {
+  TableCounts getStats(String tableId) {
     return stats.getLast(tableId);
   }
 
+  /** True if the collection of live tservers specified in 'candidates' hasn't changed since the last time an assignment scan was started. */
+  public synchronized boolean isSameTserversAsLastScan(Set<TServerInstance> candidates) {
+    return candidates.equals(lastScanServers);
+  }
+
   @Override
   public void run() {
-
     Thread.currentThread().setName("Watching " + store.name());
     int[] oldCounts = new int[TabletState.values().length];
     EventCoordinator.Listener eventListener = this.master.nextEvent.getListener();
 
+    WalStateManager wals = new WalStateManager(master.getInstance(), ZooReaderWriter.getInstance());
+
     while (this.master.stillMaster()) {
       // slow things down a little, otherwise we spam the logs when there are many wake-up events
-      UtilWaitThread.sleep(100);
+      sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       masterState = master.getMasterState();
 
       int totalUnloaded = 0;
       int unloaded = 0;
       ClosableIterator<TabletLocationState> iter = null;
       try {
-        Map<Text,MergeStats> mergeStatsCache = new HashMap<Text,MergeStats>();
-        Map<Text,MergeStats> currentMerges = new HashMap<Text,MergeStats>();
+        Map<String,MergeStats> mergeStatsCache = new HashMap<>();
+        Map<String,MergeStats> currentMerges = new HashMap<>();
         for (MergeInfo merge : master.merges()) {
           if (merge.getExtent() != null) {
             currentMerges.put(merge.getExtent().getTableId(), new MergeStats(merge));
@@ -145,24 +164,29 @@
         }
 
         // Get the current status for the current list of tservers
-        SortedMap<TServerInstance,TabletServerStatus> currentTServers = new TreeMap<TServerInstance,TabletServerStatus>();
+        SortedMap<TServerInstance,TabletServerStatus> currentTServers = new TreeMap<>();
         for (TServerInstance entry : this.master.tserverSet.getCurrentServers()) {
           currentTServers.put(entry, this.master.tserverStatus.get(entry));
         }
 
         if (currentTServers.size() == 0) {
           eventListener.waitForEvents(Master.TIME_TO_WAIT_BETWEEN_SCANS);
+          synchronized (this) {
+            lastScanServers = ImmutableSortedSet.of();
+          }
           continue;
         }
 
         // Don't move tablets to servers that are shutting down
-        SortedMap<TServerInstance,TabletServerStatus> destinations = new TreeMap<TServerInstance,TabletServerStatus>(currentTServers);
+        SortedMap<TServerInstance,TabletServerStatus> destinations = new TreeMap<>(currentTServers);
         destinations.keySet().removeAll(this.master.serversToShutdown);
 
         List<Assignment> assignments = new ArrayList<Assignment>();
         List<Assignment> assigned = new ArrayList<Assignment>();
-        List<TabletLocationState> assignedToDeadServers = new ArrayList<TabletLocationState>();
-        Map<KeyExtent,TServerInstance> unassigned = new HashMap<KeyExtent,TServerInstance>();
+        List<TabletLocationState> assignedToDeadServers = new ArrayList<>();
+        List<TabletLocationState> suspendedToGoneServers = new ArrayList<>();
+        Map<KeyExtent,TServerInstance> unassigned = new HashMap<>();
+        Map<TServerInstance,List<Path>> logsForDeadServers = new TreeMap<>();
 
         MasterState masterState = master.getMasterState();
         int[] counts = new int[TabletState.values().length];
@@ -175,8 +199,9 @@
           if (tls == null) {
             continue;
           }
+          Master.log.debug(store.name() + " location State: " + tls);
           // ignore entries for tables that do not exist in zookeeper
-          if (TableManager.getInstance().getTableState(tls.extent.getTableId().toString()) == null)
+          if (TableManager.getInstance().getTableState(tls.extent.getTableId()) == null)
             continue;
 
           if (Master.log.isTraceEnabled())
@@ -184,15 +209,18 @@
 
           // Don't overwhelm the tablet servers with work
           if (unassigned.size() + unloaded > Master.MAX_TSERVER_WORK_CHUNK * currentTServers.size()) {
-            flushChanges(destinations, assignments, assigned, assignedToDeadServers, unassigned);
+            flushChanges(destinations, assignments, assigned, assignedToDeadServers, logsForDeadServers, suspendedToGoneServers, unassigned);
             assignments.clear();
             assigned.clear();
             assignedToDeadServers.clear();
+            suspendedToGoneServers.clear();
             unassigned.clear();
             unloaded = 0;
             eventListener.waitForEvents(Master.TIME_TO_WAIT_BETWEEN_SCANS);
           }
-          Text tableId = tls.extent.getTableId();
+          String tableId = tls.extent.getTableId();
+          TableConfiguration tableConf = this.master.getConfigurationFactory().getTableConfiguration(tableId);
+
           MergeStats mergeStats = mergeStatsCache.get(tableId);
           if (mergeStats == null) {
             mergeStats = currentMerges.get(tableId);
@@ -204,8 +232,9 @@
           TabletGoalState goal = this.master.getGoalState(tls, mergeStats.getMergeInfo());
           TServerInstance server = tls.getServer();
           TabletState state = tls.getState(currentTServers.keySet());
-          if (Master.log.isTraceEnabled())
-            Master.log.trace("Goal state " + goal + " current " + state);
+          if (Master.log.isTraceEnabled()) {
+            Master.log.trace("Goal state " + goal + " current " + state + " for " + tls.extent);
+          }
           stats.update(tableId, state);
           mergeStats.update(tls.extent, state, tls.chopped, !tls.walogs.isEmpty());
           sendChopRequest(mergeStats.getMergeInfo(), state, tls);
@@ -217,7 +246,7 @@
           }
 
           // if we are shutting down all the tabletservers, we have to do it in order
-          if (goal == TabletGoalState.UNASSIGNED && state == TabletState.HOSTED) {
+          if (goal == TabletGoalState.SUSPENDED && state == TabletState.HOSTED) {
             if (this.master.serversToShutdown.equals(currentTServers.keySet())) {
               if (dependentWatcher != null && dependentWatcher.assignedOrHosted() > 0) {
                 goal = TabletGoalState.HOSTED;
@@ -239,7 +268,33 @@
                 assignedToDeadServers.add(tls);
                 if (server.equals(this.master.migrations.get(tls.extent)))
                   this.master.migrations.remove(tls.extent);
-                // log.info("Current servers " + currentTServers.keySet());
+                TServerInstance tserver = tls.futureOrCurrent();
+                if (!logsForDeadServers.containsKey(tserver)) {
+                  logsForDeadServers.put(tserver, wals.getWalsInUse(tserver));
+                }
+                break;
+              case SUSPENDED:
+                if (master.getSteadyTime() - tls.suspend.suspensionTime < tableConf.getTimeInMillis(Property.TABLE_SUSPEND_DURATION)) {
+                  // Tablet is suspended. See if its tablet server is back.
+                  TServerInstance returnInstance = null;
+                  Iterator<TServerInstance> find = destinations.tailMap(new TServerInstance(tls.suspend.server, " ")).keySet().iterator();
+                  if (find.hasNext()) {
+                    TServerInstance found = find.next();
+                    if (found.getLocation().equals(tls.suspend.server)) {
+                      returnInstance = found;
+                    }
+                  }
+
+                  // Old tablet server is back. Return this tablet to its previous owner.
+                  if (returnInstance != null) {
+                    assignments.add(new Assignment(tls.extent, returnInstance));
+                  } else {
+                    // leave suspended, don't ask for a new assignment.
+                  }
+                } else {
+                  // Treat as unassigned, ask for a new assignment.
+                  unassigned.put(tls.extent, server);
+                }
                 break;
               case UNASSIGNED:
                 // maybe it's a finishing migration
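The `SUSPENDED` case above seeds `destinations.tailMap(new TServerInstance(tls.suspend.server, " "))` to check whether the suspending server's host:port is back, possibly under a new session id: seeking to the lowest possible session and inspecting the first hit finds any live instance at that address. A sketch of the lookup over a `TreeMap` keyed by `"host:port[session]"` strings (the key format here is illustrative):

```java
import java.util.Iterator;
import java.util.TreeMap;

public class TailMapLookupSketch {
  static String findLiveInstance(TreeMap<String,Object> destinations, String hostPort) {
    // Seek to the smallest key at or after "host:port[<lowest session>]".
    Iterator<String> find = destinations.tailMap(hostPort + "[ ").keySet().iterator();
    if (find.hasNext()) {
      String found = find.next();
      if (found.startsWith(hostPort + "[")) {
        return found; // same host:port is back, possibly with a new session
      }
    }
    return null; // the old server is still gone
  }

  public static void main(String[] args) {
    TreeMap<String,Object> destinations = new TreeMap<>();
    destinations.put("host1:9997[5f]", new Object());
    destinations.put("host2:9997[a0]", new Object());
    System.out.println(findLiveInstance(destinations, "host1:9997")); // host1:9997[5f]
    System.out.println(findLiveInstance(destinations, "host3:9997")); // null
  }
}
```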
@@ -264,21 +319,27 @@
             }
           } else {
             switch (state) {
+              case SUSPENDED:
+                // Request a move to UNASSIGNED, so as to allow balancing to continue.
+                suspendedToGoneServers.add(tls);
+                // Fall through to unassigned to cancel migrations.
               case UNASSIGNED:
                 TServerInstance dest = this.master.migrations.get(tls.extent);
-                TableState tableState = TableManager.getInstance().getTableState(tls.extent.getTableId().toString());
+                TableState tableState = TableManager.getInstance().getTableState(tls.extent.getTableId());
                 if (dest != null && tableState == TableState.OFFLINE) {
                   this.master.migrations.remove(tls.extent);
                 }
                 break;
               case ASSIGNED_TO_DEAD_SERVER:
                 assignedToDeadServers.add(tls);
-                // log.info("Current servers " + currentTServers.keySet());
+                if (!logsForDeadServers.containsKey(tls.futureOrCurrent())) {
+                  logsForDeadServers.put(tls.futureOrCurrent(), wals.getWalsInUse(tls.futureOrCurrent()));
+                }
                 break;
               case HOSTED:
                 TServerConnection conn = this.master.tserverSet.getConnection(server);
                 if (conn != null) {
-                  conn.unloadTablet(this.master.masterLock, tls.extent, goal != TabletGoalState.DELETED);
+                  conn.unloadTablet(this.master.masterLock, tls.extent, goal.howUnload(), master.getSteadyTime());
                   unloaded++;
                   totalUnloaded++;
                 } else {
@@ -292,7 +353,7 @@
           counts[state.ordinal()]++;
         }
 
-        flushChanges(destinations, assignments, assigned, assignedToDeadServers, unassigned);
+        flushChanges(destinations, assignments, assigned, assignedToDeadServers, logsForDeadServers, suspendedToGoneServers, unassigned);
 
         // provide stats after flushing changes to avoid race conditions w/ delete table
         stats.end(masterState);
@@ -312,14 +373,21 @@
 
         updateMergeState(mergeStatsCache);
 
-        Master.log.debug(String.format("[%s] sleeping for %.2f seconds", store.name(), Master.TIME_TO_WAIT_BETWEEN_SCANS / 1000.));
-        eventListener.waitForEvents(Master.TIME_TO_WAIT_BETWEEN_SCANS);
+        synchronized (this) {
+          lastScanServers = ImmutableSortedSet.copyOf(currentTServers.keySet());
+        }
+        if (this.master.tserverSet.getCurrentServers().equals(currentTServers.keySet())) {
+          Master.log.debug(String.format("[%s] sleeping for %.2f seconds", store.name(), Master.TIME_TO_WAIT_BETWEEN_SCANS / 1000.));
+          eventListener.waitForEvents(Master.TIME_TO_WAIT_BETWEEN_SCANS);
+        } else {
+          Master.log.info("Detected change in current tserver set, re-running state machine.");
+        }
       } catch (Exception ex) {
         Master.log.error("Error processing table state for store " + store.name(), ex);
         if (ex.getCause() != null && ex.getCause() instanceof BadLocationStateException) {
           repairMetadata(((BadLocationStateException) ex.getCause()).getEncodedEndRow());
         } else {
-          UtilWaitThread.sleep(Master.WAIT_BETWEEN_ERRORS);
+          sleepUninterruptibly(Master.WAIT_BETWEEN_ERRORS, TimeUnit.MILLISECONDS);
         }
       } finally {
         if (iter != null) {
@@ -338,8 +406,8 @@
     // ACCUMULO-2261 if a dying tserver writes a location before its lock information propagates, it may cause duplicate assignment.
     // Attempt to find the dead server entry and remove it.
     try {
-      Map<Key,Value> future = new HashMap<Key,Value>();
-      Map<Key,Value> assigned = new HashMap<Key,Value>();
+      Map<Key,Value> future = new HashMap<>();
+      Map<Key,Value> assigned = new HashMap<>();
       KeyExtent extent = new KeyExtent(row, new Value(new byte[] {0}));
       String table = MetadataTable.NAME;
       if (extent.isMeta())
@@ -460,7 +528,7 @@
     }
   }
 
-  private void updateMergeState(Map<Text,MergeStats> mergeStatsCache) {
+  private void updateMergeState(Map<String,MergeStats> mergeStatsCache) {
     for (MergeStats stats : mergeStatsCache.values()) {
       try {
         MergeState update = stats.nextMergeState(this.master.getConnector(), this.master);
@@ -518,7 +586,7 @@
       TabletsSection.ServerColumnFamily.TIME_COLUMN.fetch(scanner);
       scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
       scanner.fetchColumnFamily(TabletsSection.CurrentLocationColumnFamily.NAME);
-      Set<FileRef> datafiles = new TreeSet<FileRef>();
+      Set<FileRef> datafiles = new TreeSet<>();
       for (Entry<Key,Value> entry : scanner) {
         Key key = entry.getKey();
         if (key.compareColumnFamily(DataFileColumnFamily.NAME) == 0) {
@@ -569,7 +637,7 @@
       } else {
         // Recreate the default tablet to hold the end of the table
         Master.log.debug("Recreating the last tablet to point to " + extent.getPrevEndRow());
-        String tdir = master.getFileSystem().choose(Optional.of(extent.getTableId().toString()), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR
+        String tdir = master.getFileSystem().choose(Optional.of(extent.getTableId()), ServerConstants.getBaseUris()) + Constants.HDFS_TABLES_DIR
             + Path.SEPARATOR + extent.getTableId() + Constants.DEFAULT_TABLET_LOCATION;
         MetadataTableUtil.addTablet(new KeyExtent(extent.getTableId(), null, extent.getPrevEndRow()), tdir, master, timeType, this.master.masterLock);
       }
@@ -621,7 +689,7 @@
         } else if (TabletsSection.ServerColumnFamily.TIME_COLUMN.hasColumns(key)) {
           maxLogicalTime = TabletTime.maxMetadataTime(maxLogicalTime, value.toString());
         } else if (TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.hasColumns(key)) {
-          bw.addMutation(MetadataTableUtil.createDeleteMutation(range.getTableId().toString(), entry.getValue().toString()));
+          bw.addMutation(MetadataTableUtil.createDeleteMutation(range.getTableId(), entry.getValue().toString()));
         }
       }
 
@@ -721,7 +789,7 @@
       }
       Entry<Key,Value> entry = iterator.next();
       KeyExtent highTablet = new KeyExtent(entry.getKey().getRow(), KeyExtent.decodePrevEndRow(entry.getValue()));
-      if (highTablet.getTableId() != range.getTableId()) {
+      if (!highTablet.getTableId().equals(range.getTableId())) {
         throw new AccumuloException("No last tablet for merge " + range + " " + highTablet);
       }
       return highTablet;
@@ -731,16 +799,29 @@
   }
 
   private void flushChanges(SortedMap<TServerInstance,TabletServerStatus> currentTServers, List<Assignment> assignments, List<Assignment> assigned,
-      List<TabletLocationState> assignedToDeadServers, Map<KeyExtent,TServerInstance> unassigned) throws DistributedStoreException, TException {
+      List<TabletLocationState> assignedToDeadServers, Map<TServerInstance,List<Path>> logsForDeadServers, List<TabletLocationState> suspendedToGoneServers,
+      Map<KeyExtent,TServerInstance> unassigned) throws DistributedStoreException, TException, WalMarkerException {
+    boolean tabletsSuspendable = canSuspendTablets();
     if (!assignedToDeadServers.isEmpty()) {
       int maxServersToShow = min(assignedToDeadServers.size(), 100);
       Master.log.debug(assignedToDeadServers.size() + " assigned to dead servers: " + assignedToDeadServers.subList(0, maxServersToShow) + "...");
-      store.unassign(assignedToDeadServers);
-      this.master.nextEvent.event("Marked %d tablets as unassigned because they don't have current servers", assignedToDeadServers.size());
+      Master.log.debug("logs for dead servers: " + logsForDeadServers);
+      if (tabletsSuspendable) {
+        store.suspend(assignedToDeadServers, logsForDeadServers, master.getSteadyTime());
+      } else {
+        store.unassign(assignedToDeadServers, logsForDeadServers);
+      }
+      this.master.markDeadServerLogsAsClosed(logsForDeadServers);
+      this.master.nextEvent.event("Marked %d tablets as suspended or unassigned because they don't have current servers", assignedToDeadServers.size());
+    }
+    if (!suspendedToGoneServers.isEmpty()) {
+      int maxServersToShow = min(suspendedToGoneServers.size(), 100);
+      Master.log.debug(suspendedToGoneServers.size() + " suspended to gone servers: " + suspendedToGoneServers.subList(0, maxServersToShow) + "...");
+      store.unsuspend(suspendedToGoneServers);
     }
 
     if (!currentTServers.isEmpty()) {
-      Map<KeyExtent,TServerInstance> assignedOut = new HashMap<KeyExtent,TServerInstance>();
+      Map<KeyExtent,TServerInstance> assignedOut = new HashMap<>();
       final StringBuilder builder = new StringBuilder(64);
       this.master.tabletBalancer.getAssignments(Collections.unmodifiableSortedMap(currentTServers), Collections.unmodifiableMap(unassigned), assignedOut);
       for (Entry<KeyExtent,TServerInstance> assignment : assignedOut.entrySet()) {
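The suspension rule this file introduces can be summarized as a small decision function: a suspended tablet waits for its old server for up to the table's suspend duration (`TABLE_SUSPEND_DURATION`), returning to it if it reappears, and is otherwise treated as unassigned once the window expires. A sketch under those assumptions, with `steadyTime` standing in for `master.getSteadyTime()`:

```java
public class SuspendDecisionSketch {
  enum Action { RETURN_TO_OLD_SERVER, STAY_SUSPENDED, REASSIGN }

  static Action decide(long steadyTime, long suspensionTime, long suspendDurationMs, boolean oldServerBack) {
    if (steadyTime - suspensionTime < suspendDurationMs) {
      // Still within the grace period: wait for the old server.
      return oldServerBack ? Action.RETURN_TO_OLD_SERVER : Action.STAY_SUSPENDED;
    }
    // Grace period expired: ask for a new assignment.
    return Action.REASSIGN;
  }

  public static void main(String[] args) {
    long fiveMinMs = 5 * 60 * 1000;
    System.out.println(decide(60_000, 0, fiveMinMs, true));   // RETURN_TO_OLD_SERVER
    System.out.println(decide(60_000, 0, fiveMinMs, false));  // STAY_SUSPENDED
    System.out.println(decide(600_000, 0, fiveMinMs, false)); // REASSIGN
  }
}
```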
diff --git a/server/master/src/main/java/org/apache/accumulo/master/recovery/RecoveryManager.java b/server/master/src/main/java/org/apache/accumulo/master/recovery/RecoveryManager.java
index c1c1713..bd49a7d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/recovery/RecoveryManager.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/recovery/RecoveryManager.java
@@ -55,9 +55,9 @@
 
   private static final Logger log = LoggerFactory.getLogger(RecoveryManager.class);
 
-  private Map<String,Long> recoveryDelay = new HashMap<String,Long>();
-  private Set<String> closeTasksQueued = new HashSet<String>();
-  private Set<String> sortsQueued = new HashSet<String>();
+  private Map<String,Long> recoveryDelay = new HashMap<>();
+  private Set<String> closeTasksQueued = new HashSet<>();
+  private Set<String> sortsQueued = new HashSet<>();
   private ScheduledExecutorService executor;
   private Master master;
   private ZooCache zooCache;
diff --git a/server/master/src/main/java/org/apache/accumulo/master/replication/DistributedWorkQueueWorkAssigner.java b/server/master/src/main/java/org/apache/accumulo/master/replication/DistributedWorkQueueWorkAssigner.java
index 4e3a079..a10c90d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/DistributedWorkQueueWorkAssigner.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/DistributedWorkQueueWorkAssigner.java
@@ -19,6 +19,7 @@
 import java.util.Collection;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
@@ -34,7 +35,6 @@
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.replication.ReplicationTableOfflineException;
 import org.apache.accumulo.core.replication.ReplicationTarget;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
 import org.apache.accumulo.server.replication.StatusUtil;
@@ -47,6 +47,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import com.google.protobuf.InvalidProtocolBufferException;
 
 /**
@@ -178,7 +179,7 @@
         workScanner = ReplicationTable.getScanner(conn);
       } catch (ReplicationTableOfflineException e) {
         log.warn("Replication table is offline. Will retry...");
-        UtilWaitThread.sleep(5000);
+        sleepUninterruptibly(5, TimeUnit.SECONDS);
         return;
       }
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/replication/FinishedWorkUpdater.java b/server/master/src/main/java/org/apache/accumulo/master/replication/FinishedWorkUpdater.java
index c40743e..e394b57 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/FinishedWorkUpdater.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/FinishedWorkUpdater.java
@@ -145,10 +145,10 @@
           Value serializedUpdatedStatus = ProtobufUtil.toValue(updatedStatus);
 
           // Pull the sourceTableId into a Text
-          buffer.set(entry.getKey());
+          String srcTableId = entry.getKey();
 
           // Make the mutation
-          StatusSection.add(replMutation, buffer, serializedUpdatedStatus);
+          StatusSection.add(replMutation, srcTableId, serializedUpdatedStatus);
 
           log.debug("Updating replication status entry for {} with {}", serializedRow.getKey().getRow(), ProtobufUtil.toString(updatedStatus));
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/replication/StatusMaker.java b/server/master/src/main/java/org/apache/accumulo/master/replication/StatusMaker.java
index 1ebe65f..4a0ed52 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/StatusMaker.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/StatusMaker.java
@@ -90,7 +90,7 @@
       s.fetchColumnFamily(ReplicationSection.COLF);
       s.setRange(ReplicationSection.getRange());
 
-      Text file = new Text(), tableId = new Text();
+      Text file = new Text();
       for (Entry<Key,Value> entry : s) {
         // Get a writer to the replication table
         if (null == replicationWriter) {
@@ -106,7 +106,7 @@
         }
         // Extract the useful bits from the status key
         MetadataSchema.ReplicationSection.getFile(entry.getKey(), file);
-        MetadataSchema.ReplicationSection.getTableId(entry.getKey(), tableId);
+        String tableId = MetadataSchema.ReplicationSection.getTableId(entry.getKey());
 
         Status status;
         try {
@@ -158,10 +158,10 @@
   /**
    * Create a status record in the replication table
    */
-  protected boolean addStatusRecord(Text file, Text tableId, Value v) {
+  protected boolean addStatusRecord(Text file, String tableId, Value v) {
     try {
       Mutation m = new Mutation(file);
-      m.put(StatusSection.NAME, tableId, v);
+      m.put(StatusSection.NAME, new Text(tableId), v);
 
       try {
         replicationWriter.addMutation(m);
@@ -194,7 +194,7 @@
    * @param value
    *          Serialized version of the Status msg
    */
-  protected boolean addOrderRecord(Text file, Text tableId, Status stat, Value value) {
+  protected boolean addOrderRecord(Text file, String tableId, Status stat, Value value) {
     try {
       if (!stat.hasCreatedTime()) {
         log.error("Status record ({}) for {} in table {} was written to metadata table which lacked createdTime", ProtobufUtil.toString(stat), file, tableId);
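The `StatusMaker` changes follow the same migration: `ReplicationSection.getTableId` now returns a `String` instead of filling a caller-supplied `Text` out-parameter. A small illustrative sketch of the two API shapes (the key format here is a stand-in, not the real metadata schema):

```java
public class OutParamVsReturn {
    // old style: caller allocates a buffer, callee overwrites it in place
    static void getTableIdInto(String key, StringBuilder out) {
        out.setLength(0);
        out.append(key.split(";")[1]);
    }

    // new style: callee returns a fresh immutable value that cannot be
    // forgotten, shared, or left stale between loop iterations
    static String getTableId(String key) {
        return key.split(";")[1];
    }

    public static void main(String[] args) {
        System.out.println(getTableId("~repl;2a")); // 2a
    }
}
```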
diff --git a/server/master/src/main/java/org/apache/accumulo/master/replication/UnorderedWorkAssigner.java b/server/master/src/main/java/org/apache/accumulo/master/replication/UnorderedWorkAssigner.java
index 9a28dd4..ab5b041 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/UnorderedWorkAssigner.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/UnorderedWorkAssigner.java
@@ -19,12 +19,12 @@
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.replication.ReplicationConstants;
 import org.apache.accumulo.core.replication.ReplicationTarget;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
 import org.apache.hadoop.fs.Path;
@@ -32,6 +32,8 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  * Read work records from the replication table, create work entries for other nodes to complete.
  * <p>
@@ -84,7 +86,7 @@
       } catch (KeeperException e) {
         if (KeeperException.Code.NONODE.equals(e.code())) {
           log.warn("Could not find ZK root for replication work queue, will retry", e);
-          UtilWaitThread.sleep(500);
+          sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
           continue;
         }
 
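Several files in this merge replace `UtilWaitThread.sleep` with Guava's `Uninterruptibles.sleepUninterruptibly`. A minimal sketch of the semantics that method provides (modeled on Guava's documented behavior; simplified here): the full duration is always slept, and a received interrupt is remembered and restored on the thread rather than swallowed.

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
    public static void sleepUninterruptibly(long duration, TimeUnit unit) {
        boolean interrupted = false;
        try {
            long remainingNanos = unit.toNanos(duration);
            long end = System.nanoTime() + remainingNanos;
            while (true) {
                try {
                    TimeUnit.NANOSECONDS.sleep(remainingNanos);
                    return;
                } catch (InterruptedException e) {
                    interrupted = true; // remember the interrupt, keep sleeping
                    remainingNanos = end - System.nanoTime();
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt(); // restore the flag
            }
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();              // pending interrupt
        sleepUninterruptibly(10, TimeUnit.MILLISECONDS); // still sleeps fully
        System.out.println(Thread.interrupted());        // true: flag restored
    }
}
```

This matters for retry loops like the one above: an interrupt cannot silently cut the backoff short, yet the interruption status survives for callers that do want to observe it.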
diff --git a/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java b/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java
index 3558d2d..f0c368a 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/WorkDriver.java
@@ -16,18 +16,21 @@
  */
 package org.apache.accumulo.master.replication;
 
+import java.util.concurrent.TimeUnit;
+
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.util.Daemon;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.replication.WorkAssigner;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  * Driver for a {@link WorkAssigner}
  */
@@ -103,7 +106,7 @@
 
       long sleepTime = conf.getTimeInMillis(Property.REPLICATION_WORK_ASSIGNMENT_SLEEP);
       log.debug("Sleeping {} ms before next work assignment", sleepTime);
-      UtilWaitThread.sleep(sleepTime);
+      sleepUninterruptibly(sleepTime, TimeUnit.MILLISECONDS);
 
       // After each loop, make sure that the WorkAssigner implementation didn't change
       configureWorkAssigner();
diff --git a/server/master/src/main/java/org/apache/accumulo/master/replication/WorkMaker.java b/server/master/src/main/java/org/apache/accumulo/master/replication/WorkMaker.java
index 0333c5d..6c5645d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/WorkMaker.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/WorkMaker.java
@@ -89,11 +89,11 @@
 
       TableConfiguration tableConf;
 
-      Text file = new Text(), tableId = new Text();
+      Text file = new Text();
       for (Entry<Key,Value> entry : s) {
         // Extract the useful bits from the status key
         ReplicationSchema.StatusSection.getFile(entry.getKey(), file);
-        ReplicationSchema.StatusSection.getTableId(entry.getKey(), tableId);
+        String tableId = ReplicationSchema.StatusSection.getTableId(entry.getKey());
         log.info("Processing replication status record for " + file + " on table " + tableId);
 
         Status status;
@@ -107,11 +107,12 @@
         // Don't create the record if we have nothing to do.
         // TODO put this into a filter on serverside
         if (!shouldCreateWork(status)) {
+          log.debug("Not creating work: " + status.toString());
           continue;
         }
 
         // Get the table configuration for the table specified by the status record
-        tableConf = context.getServerConfigurationFactory().getTableConfiguration(tableId.toString());
+        tableConf = context.getServerConfigurationFactory().getTableConfiguration(tableId);
 
         // getTableConfiguration(String) returns null if the table no longer exists
         if (null == tableConf) {
@@ -128,7 +129,7 @@
         if (!replicationTargets.isEmpty()) {
           Span workSpan = Trace.start("createWorkMutations");
           try {
-            addWorkRecord(file, entry.getValue(), replicationTargets, tableId.toString());
+            addWorkRecord(file, entry.getValue(), replicationTargets, tableId);
           } finally {
             workSpan.stop();
           }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/state/MergeStats.java b/server/master/src/main/java/org/apache/accumulo/master/state/MergeStats.java
index a3c7e46..4cb858c 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/state/MergeStats.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/state/MergeStats.java
@@ -30,6 +30,7 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.server.cli.ClientOpts;
@@ -98,7 +99,7 @@
     this.total++;
     if (state.equals(TabletState.HOSTED))
       this.hosted++;
-    if (state.equals(TabletState.UNASSIGNED))
+    if (state.equals(TabletState.UNASSIGNED) || state.equals(TabletState.SUSPENDED))
       this.unassigned++;
   }
 
@@ -184,10 +185,10 @@
     if (start == null) {
       start = new Text();
     }
-    Text tableId = extent.getTableId();
+    String tableId = extent.getTableId();
     Text first = KeyExtent.getMetadataEntry(tableId, start);
     Range range = new Range(first, false, null, true);
-    scanner.setRange(range);
+    scanner.setRange(range.clip(MetadataSchema.TabletsSection.getRange()));
     KeyExtent prevExtent = null;
 
     log.debug("Scanning range " + range);
@@ -216,7 +217,7 @@
           return false;
         }
 
-        if (tls.getState(master.onlineTabletServers()) != TabletState.UNASSIGNED) {
+        if (tls.getState(master.onlineTabletServers()) != TabletState.UNASSIGNED && tls.getState(master.onlineTabletServers()) != TabletState.SUSPENDED) {
           log.debug("failing consistency: assigned or hosted " + tls);
           return false;
         }
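The `MergeStats` hunks account for the `SUSPENDED` tablet state introduced in 1.8: a suspended tablet has no live server, so for merge consistency it must be counted alongside `UNASSIGNED`. A hedged, self-contained sketch of that accounting (names mirror, but are not, the Accumulo classes):

```java
public class MergeStatsSketch {
    enum TabletState { HOSTED, UNASSIGNED, SUSPENDED, ASSIGNED }

    int total, hosted, unassigned;

    void update(TabletState state) {
        total++;
        if (state == TabletState.HOSTED)
            hosted++;
        // a SUSPENDED tablet has no live tablet server, so a merge must
        // treat it exactly like an unassigned one
        if (state == TabletState.UNASSIGNED || state == TabletState.SUSPENDED)
            unassigned++;
    }

    public static void main(String[] args) {
        MergeStatsSketch stats = new MergeStatsSketch();
        stats.update(TabletState.HOSTED);
        stats.update(TabletState.SUSPENDED);
        stats.update(TabletState.UNASSIGNED);
        System.out.println(stats.unassigned); // 2
    }
}
```

The companion check later in the hunk is the mirror image: consistency fails only if a tablet is in neither of those two states.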
diff --git a/server/master/src/main/java/org/apache/accumulo/master/state/TableCounts.java b/server/master/src/main/java/org/apache/accumulo/master/state/TableCounts.java
index 73395ea..dd44bc6 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/state/TableCounts.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/state/TableCounts.java
@@ -36,4 +36,8 @@
   public int hosted() {
     return counts[TabletState.HOSTED.ordinal()];
   }
+
+  public int suspended() {
+    return counts[TabletState.SUSPENDED.ordinal()];
+  }
 }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/state/TableStats.java b/server/master/src/main/java/org/apache/accumulo/master/state/TableStats.java
index 8d87896..931df12 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/state/TableStats.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/state/TableStats.java
@@ -21,21 +21,20 @@
 
 import org.apache.accumulo.core.master.thrift.MasterState;
 import org.apache.accumulo.server.master.state.TabletState;
-import org.apache.hadoop.io.Text;
 
 public class TableStats {
-  private Map<Text,TableCounts> last = new HashMap<Text,TableCounts>();
-  private Map<Text,TableCounts> next;
+  private Map<String,TableCounts> last = new HashMap<>();
+  private Map<String,TableCounts> next;
   private long startScan = 0;
   private long endScan = 0;
   private MasterState state;
 
   public synchronized void begin() {
-    next = new HashMap<Text,TableCounts>();
+    next = new HashMap<>();
     startScan = System.currentTimeMillis();
   }
 
-  public synchronized void update(Text tableId, TabletState state) {
+  public synchronized void update(String tableId, TabletState state) {
     TableCounts counts = next.get(tableId);
     if (counts == null) {
       counts = new TableCounts();
@@ -51,7 +50,7 @@
     this.state = state;
   }
 
-  public synchronized Map<Text,TableCounts> getLast() {
+  public synchronized Map<String,TableCounts> getLast() {
     return last;
   }
 
@@ -59,7 +58,7 @@
     return state;
   }
 
-  public synchronized TableCounts getLast(Text tableId) {
+  public synchronized TableCounts getLast(String tableId) {
     TableCounts result = last.get(tableId);
     if (result == null)
       return new TableCounts();
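`TableStats` drops `hadoop.io.Text` keys for plain `String` keys and adopts the diamond operator. An illustrative sketch of the resulting shape, including the "missing table reports empty counts rather than null" behavior of `getLast(tableId)` (the counter here is a simplified stand-in for `TableCounts`):

```java
import java.util.HashMap;
import java.util.Map;

public class TableStatsSketch {
    // String keys give value semantics with no Hadoop dependency; the
    // diamond operator avoids repeating the type arguments
    private final Map<String, Integer> counts = new HashMap<>();

    public synchronized void update(String tableId) {
        counts.merge(tableId, 1, Integer::sum);
    }

    public synchronized int getLast(String tableId) {
        // unknown tables report zero, never null
        return counts.getOrDefault(tableId, 0);
    }

    public static void main(String[] args) {
        TableStatsSketch stats = new TableStatsSketch();
        stats.update("2a");
        stats.update("2a");
        System.out.println(stats.getLast("2a")); // 2
        System.out.println(stats.getLast("zz")); // 0
    }
}
```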
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/BulkImport.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/BulkImport.java
index ad20473..622690c 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/BulkImport.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/BulkImport.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.master.tableOps;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.util.ArrayList;
@@ -25,15 +27,15 @@
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.core.master.state.tables.TableState;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.util.SimpleThreadPool;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.ServerConstants;
@@ -99,7 +101,7 @@
         reserve2 = Utils.reserveHdfsDirectory(errorDir, tid);
       return reserve2;
     } else {
-      throw new ThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.OFFLINE, null);
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.OFFLINE, null);
     }
   }
 
@@ -120,17 +122,17 @@
       // ignored
     }
     if (errorStatus == null)
-      throw new ThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY, errorDir
-          + " does not exist");
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY,
+          errorDir + " does not exist");
     if (!errorStatus.isDirectory())
-      throw new ThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY, errorDir
-          + " is not a directory");
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY,
+          errorDir + " is not a directory");
     if (fs.listStatus(errorPath).length != 0)
-      throw new ThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY, errorDir
-          + " is not empty");
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY,
+          errorDir + " is not empty");
 
     ZooArbitrator.start(Constants.BULK_ARBITRATOR_TYPE, tid);
-
+    master.updateBulkImportStatus(sourceDir, BulkImportState.MOVING);
     // move the files into the directory
     try {
       String bulkDir = prepareBulkImport(master, fs, sourceDir, tableId);
@@ -138,8 +140,8 @@
       return new LoadFiles(tableId, sourceDir, bulkDir, errorDir, setTime);
     } catch (IOException ex) {
       log.error("error preparing the bulk import directory", ex);
-      throw new ThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_INPUT_DIRECTORY, sourceDir + ": "
-          + ex);
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_INPUT_DIRECTORY,
+          sourceDir + ": " + ex);
     }
   }
 
@@ -170,7 +172,7 @@
         return newBulkDir;
       log.warn("Failed to create " + newBulkDir + " for unknown reason");
 
-      UtilWaitThread.sleep(3000);
+      sleepUninterruptibly(3, TimeUnit.SECONDS);
     }
   }
 
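`BulkImport` — like `ClonePermissions`, `CloneTable`, `CompactRange`, and `CompactionDriver` below — swaps `ThriftTableOperationException` for `AcceptableThriftTableOperationException`. The apparent intent is a subtype that marks a failure as expected so the FATE runner can skip severe logging for routine conditions (table offline, bad error directory) while still propagating the error. A hedged sketch of that marker-interface pattern; the class names are illustrative stand-ins, not the Accumulo implementations:

```java
public class AcceptableFailureSketch {
    interface AcceptableException {} // marker: expected failure, don't log as severe

    static class OperationException extends Exception {
        OperationException(String msg) { super(msg); }
    }

    static class AcceptableOperationException extends OperationException
            implements AcceptableException {
        AcceptableOperationException(String msg) { super(msg); }
    }

    static String handle(OperationException e) {
        // the runner inspects the marker interface, not the message
        return (e instanceof AcceptableException)
            ? "FAILED (expected): " + e.getMessage()
            : "FAILED (unexpected, logging stack trace): " + e.getMessage();
    }

    public static void main(String[] args) {
        System.out.println(handle(new AcceptableOperationException("table is offline")));
        System.out.println(handle(new OperationException("disk error")));
    }
}
```

The payoff is operational: predictable user-triggered failures stop filling the master log with stack traces, while genuinely unexpected exceptions keep theirs.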
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUp.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUp.java
index f221775..6685eaf 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUp.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUp.java
@@ -52,7 +52,6 @@
 import org.apache.accumulo.server.util.MetadataTableUtil;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -90,7 +89,7 @@
     }
 
     boolean done = true;
-    Range tableRange = new KeyExtent(new Text(tableId), null, null).toMetadataRange();
+    Range tableRange = new KeyExtent(tableId, null, null).toMetadataRange();
     Scanner scanner = master.getConnector().createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     MetaDataTableScanner.configureScanner(scanner, master);
     scanner.setRange(tableRange);
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUpBulkImport.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUpBulkImport.java
index 5ca325f..fef327d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUpBulkImport.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CleanUpBulkImport.java
@@ -18,6 +18,7 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.util.MetadataTableUtil;
@@ -46,6 +47,7 @@
 
   @Override
   public Repo<Master> call(long tid, Master master) throws Exception {
+    master.updateBulkImportStatus(source, BulkImportState.CLEANUP);
     log.debug("removing the bulk processing flag file in " + bulk);
     Path bulkDir = new Path(bulk);
     MetadataTableUtil.removeBulkLoadInProgressFlag(master, "/" + bulkDir.getParent().getName() + "/" + bulkDir.getName());
@@ -59,6 +61,7 @@
     Utils.getReadLock(tableId, tid).unlock();
     log.debug("completing bulk import transaction " + tid);
     ZooArbitrator.cleanup(Constants.BULK_ARBITRATOR_TYPE, tid);
+    master.removeBulkImportStatus(source);
     return null;
   }
 }
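The `updateBulkImportStatus`/`removeBulkImportStatus` calls threaded through `BulkImport`, `CopyFailed`, and `CleanUpBulkImport` let the master report which phase each bulk-import source directory is in. A sketch of that lifecycle; the enum constants mirror the diff (`MOVING`, `COPY_FILES`, `CLEANUP`), but the map-backed storage is an assumption for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class BulkStatusSketch {
    enum BulkImportState { MOVING, COPY_FILES, CLEANUP }

    // source directory -> current phase; concurrent because FATE steps for
    // different transactions run on different threads
    final ConcurrentMap<String, BulkImportState> bulkImports = new ConcurrentHashMap<>();

    void updateBulkImportStatus(String directory, BulkImportState state) {
        bulkImports.put(directory, state);
    }

    void removeBulkImportStatus(String directory) {
        bulkImports.remove(directory); // transaction finished: drop the entry
    }

    public static void main(String[] args) {
        BulkStatusSketch master = new BulkStatusSketch();
        master.updateBulkImportStatus("/bulk/import_1", BulkImportState.MOVING);
        master.updateBulkImportStatus("/bulk/import_1", BulkImportState.CLEANUP);
        master.removeBulkImportStatus("/bulk/import_1");
        System.out.println(master.bulkImports.isEmpty()); // true
    }
}
```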
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ClonePermissions.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ClonePermissions.java
index cbcc708..fa58550 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ClonePermissions.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ClonePermissions.java
@@ -17,10 +17,10 @@
 package org.apache.accumulo.master.tableOps;
 
 import org.apache.accumulo.core.client.NamespaceNotFoundException;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
@@ -61,7 +61,7 @@
     try {
       return new CloneZookeeper(cloneInfo);
     } catch (NamespaceNotFoundException e) {
-      throw new ThriftTableOperationException(null, cloneInfo.tableName, TableOperation.CLONE, TableOperationExceptionType.NAMESPACE_NOTFOUND,
+      throw new AcceptableThriftTableOperationException(null, cloneInfo.tableName, TableOperation.CLONE, TableOperationExceptionType.NAMESPACE_NOTFOUND,
           "Namespace for target table not found");
     }
   }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneTable.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneTable.java
index 84529a6..bae4f26 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneTable.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CloneTable.java
@@ -20,10 +20,10 @@
 import java.util.Set;
 
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.client.HdfsZooInstance;
@@ -34,7 +34,7 @@
   private CloneInfo cloneInfo;
 
   public CloneTable(String user, String srcTableId, String tableName, Map<String,String> propertiesToSet, Set<String> propertiesToExclude)
-      throws ThriftTableOperationException {
+      throws AcceptableThriftTableOperationException {
     cloneInfo = new CloneInfo();
     cloneInfo.user = user;
     cloneInfo.srcTableId = srcTableId;
@@ -49,7 +49,8 @@
         // just throw the exception if the illegal argument was thrown by the argument checker and not due to table non-existence
         throw e;
       }
-      throw new ThriftTableOperationException(cloneInfo.srcTableId, "", TableOperation.CLONE, TableOperationExceptionType.NOTFOUND, "Table does not exist");
+      throw new AcceptableThriftTableOperationException(cloneInfo.srcTableId, "", TableOperation.CLONE, TableOperationExceptionType.NOTFOUND,
+          "Table does not exist");
     }
   }
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactRange.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactRange.java
index 7a9c5d6..d7d5b14 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactRange.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactRange.java
@@ -24,11 +24,11 @@
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.admin.CompactionStrategyConfig;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.CompactionStrategyConfigUtil;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter.Mutator;
@@ -52,7 +52,7 @@
   private byte[] config;
 
   public CompactRange(String tableId, byte[] startRow, byte[] endRow, List<IteratorSetting> iterators, CompactionStrategyConfig compactionStrategy)
-      throws ThriftTableOperationException {
+      throws AcceptableThriftTableOperationException {
 
     requireNonNull(tableId, "Invalid argument: null tableId");
     requireNonNull(iterators, "Invalid argument: null iterator list");
@@ -69,7 +69,7 @@
     }
 
     if (this.startRow != null && this.endRow != null && new Text(startRow).compareTo(new Text(endRow)) >= 0)
-      throw new ThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.BAD_RANGE,
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.BAD_RANGE,
           "start row must be less than end row");
   }
 
@@ -104,7 +104,7 @@
             log.debug("txidString : " + txidString);
             log.debug("tokens[" + i + "] : " + tokens[i]);
 
-            throw new ThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.OTHER,
+            throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.OTHER,
                 "Another compaction with iterators and/or a compaction strategy is running");
           }
 
@@ -124,7 +124,7 @@
 
       return new CompactionDriver(Long.parseLong(new String(cid, UTF_8).split(",")[0]), tableId, startRow, endRow);
     } catch (NoNodeException nne) {
-      throw new ThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.NOTFOUND, null);
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.NOTFOUND, null);
     }
 
   }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactionDriver.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactionDriver.java
index 0db93c1..4653472 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactionDriver.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CompactionDriver.java
@@ -26,10 +26,10 @@
 import org.apache.accumulo.core.client.IsolatedScanner;
 import org.apache.accumulo.core.client.RowIterator;
 import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
@@ -77,10 +77,10 @@
 
     if (Long.parseLong(new String(zoo.getData(zCancelID, null))) >= compactId) {
       // compaction was canceled
-      throw new ThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.OTHER, "Compaction canceled");
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.OTHER, "Compaction canceled");
     }
 
-    MapCounter<TServerInstance> serversToFlush = new MapCounter<TServerInstance>();
+    MapCounter<TServerInstance> serversToFlush = new MapCounter<>();
     Connector conn = master.getConnector();
 
     Scanner scanner;
@@ -90,7 +90,7 @@
       scanner.setRange(MetadataSchema.TabletsSection.getRange());
     } else {
       scanner = new IsolatedScanner(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY));
-      Range range = new KeyExtent(new Text(tableId), null, startRow == null ? null : new Text(startRow)).toMetadataRange();
+      Range range = new KeyExtent(tableId, null, startRow == null ? null : new Text(startRow)).toMetadataRange();
       scanner.setRange(range);
     }
 
@@ -140,10 +140,10 @@
     Instance instance = master.getInstance();
     Tables.clearCache(instance);
     if (tabletCount == 0 && !Tables.exists(instance, tableId))
-      throw new ThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.NOTFOUND, null);
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.NOTFOUND, null);
 
     if (serversToFlush.size() == 0 && Tables.getTableState(instance, tableId) == TableState.OFFLINE)
-      throw new ThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.OFFLINE, null);
+      throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.COMPACT, TableOperationExceptionType.OFFLINE, null);
 
     if (tabletsToWaitFor == 0)
       return 0;
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CopyFailed.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CopyFailed.java
index 5f5b298..2f71907 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/CopyFailed.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/CopyFailed.java
@@ -32,6 +32,7 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.security.Authorizations;
@@ -43,7 +44,6 @@
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -67,7 +67,7 @@
 
   @Override
   public long isReady(long tid, Master master) throws Exception {
-    Set<TServerInstance> finished = new HashSet<TServerInstance>();
+    Set<TServerInstance> finished = new HashSet<>();
     Set<TServerInstance> running = master.onlineTabletServers();
     for (TServerInstance server : running) {
       try {
@@ -86,14 +86,14 @@
   @Override
   public Repo<Master> call(long tid, Master master) throws Exception {
     // This needs to execute after the arbiter is stopped
-
+    master.updateBulkImportStatus(source, BulkImportState.COPY_FILES);
     VolumeManager fs = master.getFileSystem();
 
     if (!fs.exists(new Path(error, BulkImport.FAILURES_TXT)))
       return new CleanUpBulkImport(tableId, source, bulk, error);
 
-    HashMap<FileRef,String> failures = new HashMap<FileRef,String>();
-    HashMap<FileRef,String> loadedFailures = new HashMap<FileRef,String>();
+    HashMap<FileRef,String> failures = new HashMap<>();
+    HashMap<FileRef,String> loadedFailures = new HashMap<>();
 
     try (BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(new Path(error, BulkImport.FAILURES_TXT)), UTF_8))) {
       String line = null;
@@ -111,16 +111,17 @@
 
     // determine which failed files were loaded
     Connector conn = master.getConnector();
-    Scanner mscanner = new IsolatedScanner(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY));
-    mscanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
-    mscanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
+    try (Scanner mscanner = new IsolatedScanner(conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY))) {
+      mscanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
+      mscanner.fetchColumnFamily(TabletsSection.BulkFileColumnFamily.NAME);
 
-    for (Entry<Key,Value> entry : mscanner) {
-      if (Long.parseLong(entry.getValue().toString()) == tid) {
-        FileRef loadedFile = new FileRef(fs, entry.getKey());
-        String absPath = failures.remove(loadedFile);
-        if (absPath != null) {
-          loadedFailures.put(loadedFile, absPath);
+      for (Entry<Key,Value> entry : mscanner) {
+        if (Long.parseLong(entry.getValue().toString()) == tid) {
+          FileRef loadedFile = new FileRef(fs, entry.getKey());
+          String absPath = failures.remove(loadedFile);
+          if (absPath != null) {
+            loadedFailures.put(loadedFile, absPath);
+          }
         }
       }
     }
@@ -137,7 +138,7 @@
       DistributedWorkQueue bifCopyQueue = new DistributedWorkQueue(Constants.ZROOT + "/" + master.getInstance().getInstanceID() + Constants.ZBULK_FAILED_COPYQ,
           master.getConfiguration());
 
-      HashSet<String> workIds = new HashSet<String>();
+      HashSet<String> workIds = new HashSet<>();
 
       for (String failure : loadedFailures.values()) {
         Path orig = new Path(failure);
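Besides the Text-to-String key change, the `CopyFailed` hunk wraps the metadata scanner in try-with-resources so `close()` runs even if iteration throws. A self-contained sketch of that guarantee, using an `AutoCloseable` stand-in rather than the Accumulo `Scanner` interface:

```java
public class ScannerClose {
    static class FakeScanner implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
        int scan() { throw new IllegalStateException("scan failed"); }
    }

    public static FakeScanner scanAndFail() {
        FakeScanner scanner = new FakeScanner();
        try (FakeScanner s = scanner) {
            s.scan(); // throws mid-iteration
        } catch (IllegalStateException e) {
            // the failure still propagates, but close() has already run
        }
        return scanner;
    }

    public static void main(String[] args) {
        System.out.println(scanAndFail().closed); // true
    }
}
```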
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/DeleteTable.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/DeleteTable.java
index a1158f4..8ee385c 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/DeleteTable.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/DeleteTable.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.master.tableOps;
 
+import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.master.state.tables.TableState;
@@ -35,16 +36,31 @@
 
   @Override
   public long isReady(long tid, Master environment) throws Exception {
-    String namespaceId = Tables.getNamespaceId(environment.getInstance(), tableId);
-    return Utils.reserveNamespace(namespaceId, tid, false, false, TableOperation.DELETE) + Utils.reserveTable(tableId, tid, true, true, TableOperation.DELETE);
+    try {
+      String namespaceId = Tables.getNamespaceId(environment.getInstance(), tableId);
+      return Utils.reserveNamespace(namespaceId, tid, false, false, TableOperation.DELETE)
+          + Utils.reserveTable(tableId, tid, true, true, TableOperation.DELETE);
+    } catch (IllegalArgumentException ex) {
+      if (ex.getCause() != null && ex.getCause() instanceof TableNotFoundException) {
+        return 0;
+      }
+      throw ex;
+    }
   }
 
   @Override
   public Repo<Master> call(long tid, Master environment) throws Exception {
-    String namespaceId = Tables.getNamespaceId(environment.getInstance(), tableId);
-    TableManager.getInstance().transitionTableState(tableId, TableState.DELETING);
-    environment.getEventCoordinator().event("deleting table %s ", tableId);
-    return new CleanUp(tableId, namespaceId);
+    try {
+      String namespaceId = Tables.getNamespaceId(environment.getInstance(), tableId);
+      TableManager.getInstance().transitionTableState(tableId, TableState.DELETING);
+      environment.getEventCoordinator().event("deleting table %s ", tableId);
+      return new CleanUp(tableId, namespaceId);
+    } catch (IllegalArgumentException ex) {
+      if (ex.getCause() != null && ex.getCause() instanceof TableNotFoundException) {
+        return null;
+      }
+      throw ex;
+    }
   }
 
   @Override
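The DeleteTable change above makes the FATE step tolerant of a table that has already vanished: since FATE repos are retried on failure, a step that throws forever on "table not found" would wedge the operation. A minimal sketch of that pattern follows; `lookupNamespaceId` and the exception wrapping are illustrative stand-ins, not the actual Accumulo API (which wraps a `TableNotFoundException` inside an `IllegalArgumentException`).

```java
import java.util.NoSuchElementException;

class MissingTableTolerantStep {
    // Simulates a lookup that wraps its not-found condition in an
    // IllegalArgumentException, analogous to Tables.getNamespaceId.
    static String lookupNamespaceId(String tableId) {
        if (tableId == null) {
            throw new IllegalArgumentException(new NoSuchElementException("table not found"));
        }
        return "ns-for-" + tableId;
    }

    // Mirrors isReady() in the diff: treat "already gone" as ready
    // (return 0) instead of failing; rethrow anything unexpected.
    static long isReady(String tableId) {
        try {
            lookupNamespaceId(tableId);
            return 100; // stand-in for the reservation delay
        } catch (IllegalArgumentException ex) {
            if (ex.getCause() instanceof NoSuchElementException) {
                return 0;
            }
            throw ex;
        }
    }

    public static void main(String[] args) {
        System.out.println(isReady("t1")); // 100
        System.out.println(isReady(null)); // 0
    }
}
```

The same guard appears in `call()`, which returns `null` (operation complete) when the table is already gone, keeping the delete idempotent under retry.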
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportPopulateZookeeper.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportPopulateZookeeper.java
index 71e9124..e76dd09 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportPopulateZookeeper.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportPopulateZookeeper.java
@@ -22,12 +22,12 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.client.impl.TableOperationsImpl;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.master.Master;
@@ -60,7 +60,7 @@
       FileSystem ns = fs.getVolumeByPath(path).getFileSystem();
       return TableOperationsImpl.getExportedProps(ns, path);
     } catch (IOException ioe) {
-      throw new ThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
           "Error reading table props from " + path + " " + ioe.getMessage());
     }
   }
@@ -87,7 +87,7 @@
 
     for (Entry<String,String> entry : getExportedProps(env.getFileSystem()).entrySet())
       if (!TablePropUtil.setTableProperty(tableInfo.tableId, entry.getKey(), entry.getValue())) {
-        throw new ThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+        throw new AcceptableThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
             "Invalid table property " + entry.getKey());
       }
 
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportTable.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportTable.java
index dc33303..b9b5327 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportTable.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/ImportTable.java
@@ -26,9 +26,9 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.ServerConstants;
@@ -77,7 +77,7 @@
     }
   }
 
-  public void checkVersions(Master env) throws ThriftTableOperationException {
+  public void checkVersions(Master env) throws AcceptableThriftTableOperationException {
     Path path = new Path(tableInfo.exportDir, Constants.EXPORT_FILE);
     Integer exportVersion = null;
     Integer dataVersion = null;
@@ -101,17 +101,17 @@
       }
     } catch (IOException ioe) {
       log.warn("{}", ioe.getMessage(), ioe);
-      throw new ThriftTableOperationException(null, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(null, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
           "Failed to read export metadata " + ioe.getMessage());
     }
 
     if (exportVersion == null || exportVersion > ExportTable.VERSION)
-      throw new ThriftTableOperationException(null, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(null, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
           "Incompatible export version " + exportVersion);
 
     if (dataVersion == null || dataVersion > ServerConstants.DATA_VERSION)
-      throw new ThriftTableOperationException(null, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER, "Incompatible data version "
-          + exportVersion);
+      throw new AcceptableThriftTableOperationException(null, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+          "Incompatible data version " + dataVersion);
   }
 
   @Override
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/LoadFiles.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/LoadFiles.java
index 14fb18f..ff285fa 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/LoadFiles.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/LoadFiles.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.master.tableOps;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.BufferedWriter;
@@ -31,17 +32,18 @@
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Future;
 import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.thrift.ClientService;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.rpc.ThriftUtil;
 import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.core.util.SimpleThreadPool;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.fs.VolumeManager;
@@ -95,10 +97,11 @@
 
   @Override
   public Repo<Master> call(final long tid, final Master master) throws Exception {
+    master.updateBulkImportStatus(source, BulkImportState.LOADING);
     ExecutorService executor = getThreadPool(master);
     final AccumuloConfiguration conf = master.getConfiguration();
     VolumeManager fs = master.getFileSystem();
-    List<FileStatus> files = new ArrayList<FileStatus>();
+    List<FileStatus> files = new ArrayList<>();
     for (FileStatus entry : fs.listStatus(new Path(bulk))) {
       files.add(entry);
     }
@@ -109,7 +112,7 @@
       // Maybe this is a re-try... clear the flag and try again
       fs.delete(writable);
       if (!fs.createNewFile(writable))
-        throw new ThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY,
+        throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.BULK_IMPORT, TableOperationExceptionType.BULK_BAD_ERROR_DIRECTORY,
             "Unable to write to " + this.errorDir);
     }
     fs.delete(writable);
@@ -120,13 +123,13 @@
 
     final int RETRIES = Math.max(1, conf.getCount(Property.MASTER_BULK_RETRIES));
     for (int attempt = 0; attempt < RETRIES && filesToLoad.size() > 0; attempt++) {
-      List<Future<List<String>>> results = new ArrayList<Future<List<String>>>();
+      List<Future<List<String>>> results = new ArrayList<>();
 
       if (master.onlineTabletServers().size() == 0)
        log.warn("There are no tablet servers to process bulk import, waiting (tid = " + tid + ")");
 
       while (master.onlineTabletServers().size() == 0) {
-        UtilWaitThread.sleep(500);
+        sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
       }
 
       // Use the threadpool to assign files one-at-a-time to the server
@@ -137,7 +140,7 @@
         results.add(executor.submit(new Callable<List<String>>() {
           @Override
           public List<String> call() {
-            List<String> failures = new ArrayList<String>();
+            List<String> failures = new ArrayList<>();
             ClientService.Client client = null;
             HostAndPort server = null;
             try {
@@ -165,13 +168,13 @@
           }
         }));
       }
-      Set<String> failures = new HashSet<String>();
+      Set<String> failures = new HashSet<>();
       for (Future<List<String>> f : results)
         failures.addAll(f.get());
       filesToLoad.removeAll(loaded);
       if (filesToLoad.size() > 0) {
         log.debug("tid " + tid + " attempt " + (attempt + 1) + " " + sampleList(filesToLoad, 10) + " failed");
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       }
     }
 
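The LoadFiles hunks above swap `UtilWaitThread.sleep` for Guava's `sleepUninterruptibly`, which sleeps the full duration even if the thread is interrupted, then restores the interrupt flag so callers still observe it. A minimal re-implementation of that behavior (not Guava's exact code) makes the semantics concrete:

```java
import java.util.concurrent.TimeUnit;

class SleepUninterruptibly {
    // Sleeps the full duration, swallowing interrupts but remembering
    // them, and re-asserts the thread's interrupt status on exit.
    static void sleepUninterruptibly(long duration, TimeUnit unit) {
        boolean interrupted = false;
        try {
            long remainingNanos = unit.toNanos(duration);
            long end = System.nanoTime() + remainingNanos;
            while (true) {
                try {
                    TimeUnit.NANOSECONDS.sleep(remainingNanos);
                    return;
                } catch (InterruptedException e) {
                    interrupted = true; // remember, keep sleeping
                    remainingNanos = end - System.nanoTime();
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt(); // restore flag
            }
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // pending interrupt
        sleepUninterruptibly(10, TimeUnit.MILLISECONDS); // sleeps anyway
        System.out.println("interrupt preserved: " + Thread.interrupted());
    }
}
```

This matters in the retry loops here: a stray interrupt no longer shortens the backoff, yet the interrupt is not silently lost.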
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/MapImportFileNames.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/MapImportFileNames.java
index 4a43c68..06bd86a 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/MapImportFileNames.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/MapImportFileNames.java
@@ -23,9 +23,9 @@
 import java.io.OutputStreamWriter;
 
 import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
@@ -95,7 +95,7 @@
       return new PopulateMetadataTable(tableInfo);
     } catch (IOException ioe) {
       log.warn("{}", ioe.getMessage(), ioe);
-      throw new ThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
           "Error writing mapping file " + path + " " + ioe.getMessage());
     } finally {
       if (mappingsWriter != null)
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/MoveExportedFiles.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/MoveExportedFiles.java
index c1bcc49..d6eb7be 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/MoveExportedFiles.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/MoveExportedFiles.java
@@ -19,9 +19,9 @@
 import java.io.IOException;
 import java.util.Map;
 
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.fs.VolumeManager;
@@ -50,7 +50,7 @@
 
       for (String oldFileName : fileNameMappings.keySet()) {
         if (!fs.exists(new Path(tableInfo.exportDir, oldFileName))) {
-          throw new ThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+          throw new AcceptableThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
              "File referenced by exported table does not exist " + oldFileName);
         }
       }
@@ -67,7 +67,7 @@
       return new FinishImportTable(tableInfo);
     } catch (IOException ioe) {
       log.warn("{}", ioe.getMessage(), ioe);
-      throw new ThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
           "Error renaming files " + ioe.getMessage());
     }
   }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadata.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadata.java
index 45a370d..564f6d0 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadata.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadata.java
@@ -20,7 +20,6 @@
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.util.MetadataTableUtil;
-import org.apache.hadoop.io.Text;
 
 class PopulateMetadata extends MasterRepo {
 
@@ -39,7 +38,7 @@
 
   @Override
   public Repo<Master> call(long tid, Master environment) throws Exception {
-    KeyExtent extent = new KeyExtent(new Text(tableInfo.tableId), null, null);
+    KeyExtent extent = new KeyExtent(tableInfo.tableId, null, null);
     MetadataTableUtil.addTablet(extent, tableInfo.dir, environment, tableInfo.timeType, environment.getMasterLock());
 
     return new FinishCreateTable(tableInfo);
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java
index e35f01a..81e28b6 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/PopulateMetadataTable.java
@@ -31,9 +31,9 @@
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
@@ -69,7 +69,7 @@
     BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(new Path(tableInfo.importDir, "mappings.txt")), UTF_8));
 
     try {
-      Map<String,String> map = new HashMap<String,String>();
+      Map<String,String> map = new HashMap<>();
 
       String line = null;
       while ((line = in.readLine()) != null) {
@@ -125,7 +125,7 @@
             val.readFields(in);
 
             Text endRow = new KeyExtent(key.getRow(), (Text) null).getEndRow();
-            Text metadataRow = new KeyExtent(new Text(tableInfo.tableId), endRow, null).getMetadataEntry();
+            Text metadataRow = new KeyExtent(tableInfo.tableId, endRow, null).getMetadataEntry();
 
             Text cq;
 
@@ -134,8 +134,8 @@
               String newName = fileNameMappings.get(oldName);
 
               if (newName == null) {
-                throw new ThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
-                    "File " + oldName + " does not exist in import dir");
+                throw new AcceptableThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT,
+                    TableOperationExceptionType.OTHER, "File " + oldName + " does not exist in import dir");
               }
 
               cq = new Text(bulkDir + "/" + newName);
@@ -183,7 +183,7 @@
       return new MoveExportedFiles(tableInfo);
     } catch (IOException ioe) {
       log.warn("{}", ioe.getMessage(), ioe);
-      throw new ThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(tableInfo.tableId, tableInfo.tableName, TableOperation.IMPORT, TableOperationExceptionType.OTHER,
           "Error reading " + path + " " + ioe.getMessage());
     } finally {
       if (zis != null) {
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameNamespace.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameNamespace.java
index 1f09db0..5373b94 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameNamespace.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameNamespace.java
@@ -18,10 +18,10 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
@@ -68,7 +68,7 @@
           if (currentName.equals(newName))
             return null; // assume in this case the operation is running again, so we are done
           if (!currentName.equals(oldName)) {
-            throw new ThriftTableOperationException(null, oldName, TableOperation.RENAME, TableOperationExceptionType.NAMESPACE_NOTFOUND,
+            throw new AcceptableThriftTableOperationException(null, oldName, TableOperation.RENAME, TableOperationExceptionType.NAMESPACE_NOTFOUND,
                 "Name changed while processing");
           }
           return newName.getBytes();
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameTable.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameTable.java
index 053749f..f85d411 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameTable.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/RenameTable.java
@@ -21,11 +21,11 @@
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.NamespaceNotFoundException;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.Repo;
@@ -63,7 +63,7 @@
 
     // ensure no attempt is made to rename across namespaces
     if (newTableName.contains(".") && !namespaceId.equals(Namespaces.getNamespaceId(instance, qualifiedNewTableName.getFirst())))
-      throw new ThriftTableOperationException(tableId, oldTableName, TableOperation.RENAME, TableOperationExceptionType.INVALID_NAME,
+      throw new AcceptableThriftTableOperationException(tableId, oldTableName, TableOperation.RENAME, TableOperationExceptionType.INVALID_NAME,
           "Namespace in new table name does not match the old table name");
 
     IZooReaderWriter zoo = ZooReaderWriter.getInstance();
@@ -84,7 +84,7 @@
           if (currentName.equals(newName))
             return null; // assume in this case the operation is running again, so we are done
           if (!currentName.equals(oldName)) {
-            throw new ThriftTableOperationException(null, oldTableName, TableOperation.RENAME, TableOperationExceptionType.NOTFOUND,
+            throw new AcceptableThriftTableOperationException(null, oldTableName, TableOperation.RENAME, TableOperationExceptionType.NOTFOUND,
                 "Name changed while processing");
           }
           return newName.getBytes(UTF_8);
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOp.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOp.java
index 879470b..e2e7018 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOp.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOp.java
@@ -16,10 +16,10 @@
  */
 package org.apache.accumulo.master.tableOps;
 
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.util.TextUtil;
@@ -48,7 +48,7 @@
     return Utils.reserveNamespace(namespaceId, tid, false, true, TableOperation.MERGE) + Utils.reserveTable(tableId, tid, true, true, TableOperation.MERGE);
   }
 
-  public TableRangeOp(MergeInfo.Operation op, String tableId, Text startRow, Text endRow) throws ThriftTableOperationException {
+  public TableRangeOp(MergeInfo.Operation op, String tableId, Text startRow, Text endRow) throws AcceptableThriftTableOperationException {
 
     this.tableId = tableId;
     this.startRow = TextUtil.getBytes(startRow);
@@ -65,19 +65,18 @@
 
     Text start = startRow.length == 0 ? null : new Text(startRow);
     Text end = endRow.length == 0 ? null : new Text(endRow);
-    Text tableIdText = new Text(tableId);
 
     if (start != null && end != null)
       if (start.compareTo(end) >= 0)
-        throw new ThriftTableOperationException(tableId, null, TableOperation.MERGE, TableOperationExceptionType.BAD_RANGE,
+        throw new AcceptableThriftTableOperationException(tableId, null, TableOperation.MERGE, TableOperationExceptionType.BAD_RANGE,
             "start row must be less than end row");
 
     env.mustBeOnline(tableId);
 
-    MergeInfo info = env.getMergeInfo(tableIdText);
+    MergeInfo info = env.getMergeInfo(tableId);
 
     if (info.getState() == MergeState.NONE) {
-      KeyExtent range = new KeyExtent(tableIdText, end, start);
+      KeyExtent range = new KeyExtent(tableId, end, start);
       env.setMergeState(new MergeInfo(range, op), MergeState.STARTED);
     }
 
@@ -88,11 +87,10 @@
   public void undo(long tid, Master env) throws Exception {
     String namespaceId = Tables.getNamespaceId(env.getInstance(), tableId);
     // Not sure this is a good thing to do. The Master state engine should be the one to remove it.
-    Text tableIdText = new Text(tableId);
-    MergeInfo mergeInfo = env.getMergeInfo(tableIdText);
+    MergeInfo mergeInfo = env.getMergeInfo(tableId);
     if (mergeInfo.getState() != MergeState.NONE)
       log.info("removing merge information " + mergeInfo);
-    env.clearMergeState(tableIdText);
+    env.clearMergeState(tableId);
     Utils.unreserveNamespace(namespaceId, tid, false);
     Utils.unreserveTable(tableId, tid, true);
   }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOpWait.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOpWait.java
index 668c790..a7c82b1 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOpWait.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TableRangeOpWait.java
@@ -21,7 +21,6 @@
 import org.apache.accumulo.master.Master;
 import org.apache.accumulo.server.master.state.MergeInfo;
 import org.apache.accumulo.server.master.state.MergeState;
-import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -50,8 +49,7 @@
 
   @Override
   public long isReady(long tid, Master env) throws Exception {
-    Text tableIdText = new Text(tableId);
-    if (!env.getMergeInfo(tableIdText).getState().equals(MergeState.NONE)) {
+    if (!env.getMergeInfo(tableId).getState().equals(MergeState.NONE)) {
       return 50;
     }
     return 0;
@@ -60,10 +58,9 @@
   @Override
   public Repo<Master> call(long tid, Master master) throws Exception {
     String namespaceId = Tables.getNamespaceId(master.getInstance(), tableId);
-    Text tableIdText = new Text(tableId);
-    MergeInfo mergeInfo = master.getMergeInfo(tableIdText);
+    MergeInfo mergeInfo = master.getMergeInfo(tableId);
     log.info("removing merge information " + mergeInfo);
-    master.clearMergeState(tableIdText);
+    master.clearMergeState(tableId);
     Utils.unreserveNamespace(namespaceId, tid, false);
     Utils.unreserveTable(tableId, tid, true);
     return null;
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TraceRepo.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TraceRepo.java
index 9388b7b..4bbb12b 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/TraceRepo.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/TraceRepo.java
@@ -57,7 +57,7 @@
       Repo<T> result = repo.call(tid, environment);
       if (result == null)
         return null;
-      return new TraceRepo<T>(result);
+      return new TraceRepo<>(result);
     } finally {
       span.stop();
     }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java
index 0fb9138..2baf7ac 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/Utils.java
@@ -24,11 +24,11 @@
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.DistributedReadWriteLock;
@@ -46,15 +46,16 @@
   private static final byte[] ZERO_BYTE = new byte[] {'0'};
   private static final Logger log = LoggerFactory.getLogger(Utils.class);
 
-  static void checkTableDoesNotExist(Instance instance, String tableName, String tableId, TableOperation operation) throws ThriftTableOperationException {
+  static void checkTableDoesNotExist(Instance instance, String tableName, String tableId, TableOperation operation)
+      throws AcceptableThriftTableOperationException {
 
     String id = Tables.getNameToIdMap(instance).get(tableName);
 
     if (id != null && !id.equals(tableId))
-      throw new ThriftTableOperationException(null, tableName, operation, TableOperationExceptionType.EXISTS, null);
+      throw new AcceptableThriftTableOperationException(null, tableName, operation, TableOperationExceptionType.EXISTS, null);
   }
 
-  static String getNextTableId(String tableName, Instance instance) throws ThriftTableOperationException {
+  static String getNextTableId(String tableName, Instance instance) throws AcceptableThriftTableOperationException {
 
     String tableId = null;
     try {
@@ -71,7 +72,7 @@
       return new String(nid, UTF_8);
     } catch (Exception e1) {
       log.error("Failed to assign tableId to " + tableName, e1);
-      throw new ThriftTableOperationException(tableId, tableName, TableOperation.CREATE, TableOperationExceptionType.OTHER, e1.getMessage());
+      throw new AcceptableThriftTableOperationException(tableId, tableName, TableOperation.CREATE, TableOperationExceptionType.OTHER, e1.getMessage());
     }
   }
 
@@ -84,7 +85,7 @@
         Instance instance = HdfsZooInstance.getInstance();
         IZooReaderWriter zk = ZooReaderWriter.getInstance();
         if (!zk.exists(ZooUtil.getRoot(instance) + Constants.ZTABLES + "/" + tableId))
-          throw new ThriftTableOperationException(tableId, "", op, TableOperationExceptionType.NOTFOUND, "Table does not exist");
+          throw new AcceptableThriftTableOperationException(tableId, "", op, TableOperationExceptionType.NOTFOUND, "Table does not exist");
       }
       log.info("table " + tableId + " (" + Long.toHexString(tid) + ") locked for " + (writeLock ? "write" : "read") + " operation: " + op);
       return 0;
@@ -108,7 +109,7 @@
         Instance instance = HdfsZooInstance.getInstance();
         IZooReaderWriter zk = ZooReaderWriter.getInstance();
         if (!zk.exists(ZooUtil.getRoot(instance) + Constants.ZNAMESPACES + "/" + namespaceId))
-          throw new ThriftTableOperationException(namespaceId, "", op, TableOperationExceptionType.NAMESPACE_NOTFOUND, "Namespace does not exist");
+          throw new AcceptableThriftTableOperationException(namespaceId, "", op, TableOperationExceptionType.NAMESPACE_NOTFOUND, "Namespace does not exist");
       }
       log.info("namespace " + namespaceId + " (" + Long.toHexString(id) + ") locked for " + (writeLock ? "write" : "read") + " operation: " + op);
       return 0;
@@ -154,11 +155,11 @@
   }
 
   static void checkNamespaceDoesNotExist(Instance instance, String namespace, String namespaceId, TableOperation operation)
-      throws ThriftTableOperationException {
+      throws AcceptableThriftTableOperationException {
 
     String n = Namespaces.getNameToIdMap(instance).get(namespace);
 
     if (n != null && !n.equals(namespaceId))
-      throw new ThriftTableOperationException(null, namespace, operation, TableOperationExceptionType.NAMESPACE_EXISTS, null);
+      throw new AcceptableThriftTableOperationException(null, namespace, operation, TableOperationExceptionType.NAMESPACE_EXISTS, null);
   }
 }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/WriteExportFiles.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/WriteExportFiles.java
index a492957..d0cbe36 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/WriteExportFiles.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/WriteExportFiles.java
@@ -35,10 +35,10 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.impl.AcceptableThriftTableOperationException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
-import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.conf.Property;
@@ -59,7 +59,6 @@
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 
 class WriteExportFiles extends MasterRepo {
 
@@ -74,7 +73,7 @@
     if (Tables.getTableState(conn.getInstance(), tableInfo.tableID) != TableState.OFFLINE) {
       Tables.clearCache(conn.getInstance());
       if (Tables.getTableState(conn.getInstance(), tableInfo.tableID) != TableState.OFFLINE) {
-        throw new ThriftTableOperationException(tableInfo.tableID, tableInfo.tableName, TableOperation.EXPORT, TableOperationExceptionType.OTHER,
+        throw new AcceptableThriftTableOperationException(tableInfo.tableID, tableInfo.tableName, TableOperation.EXPORT, TableOperationExceptionType.OTHER,
             "Table is not offline");
       }
     }
@@ -93,7 +92,7 @@
     checkOffline(conn);
 
     Scanner metaScanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    metaScanner.setRange(new KeyExtent(new Text(tableInfo.tableID), null, null).toMetadataRange());
+    metaScanner.setRange(new KeyExtent(tableInfo.tableID, null, null).toMetadataRange());
 
     // scan for locations
     metaScanner.fetchColumnFamily(TabletsSection.CurrentLocationColumnFamily.NAME);
@@ -109,7 +108,7 @@
     metaScanner.fetchColumnFamily(LogColumnFamily.NAME);
 
     if (metaScanner.iterator().hasNext()) {
-      throw new ThriftTableOperationException(tableInfo.tableID, tableInfo.tableName, TableOperation.EXPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(tableInfo.tableID, tableInfo.tableName, TableOperation.EXPORT, TableOperationExceptionType.OTHER,
           "Write ahead logs found for table");
     }
 
@@ -121,7 +120,7 @@
     try {
       exportTable(master.getFileSystem(), master, tableInfo.tableName, tableInfo.tableID, tableInfo.exportDir);
     } catch (IOException ioe) {
-      throw new ThriftTableOperationException(tableInfo.tableID, tableInfo.tableName, TableOperation.EXPORT, TableOperationExceptionType.OTHER,
+      throw new AcceptableThriftTableOperationException(tableInfo.tableID, tableInfo.tableName, TableOperation.EXPORT, TableOperationExceptionType.OTHER,
           "Failed to create export files " + ioe.getMessage());
     }
     Utils.unreserveNamespace(tableInfo.namespaceID, tid, false);
@@ -203,13 +202,13 @@
       DataOutputStream dataOut) throws IOException, TableNotFoundException, AccumuloException, AccumuloSecurityException {
     zipOut.putNextEntry(new ZipEntry(Constants.EXPORT_METADATA_FILE));
 
-    Map<String,String> uniqueFiles = new HashMap<String,String>();
+    Map<String,String> uniqueFiles = new HashMap<>();
 
     Scanner metaScanner = context.getConnector().createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     metaScanner.fetchColumnFamily(DataFileColumnFamily.NAME);
     TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(metaScanner);
     TabletsSection.ServerColumnFamily.TIME_COLUMN.fetch(metaScanner);
-    metaScanner.setRange(new KeyExtent(new Text(tableID), null, null).toMetadataRange());
+    metaScanner.setRange(new KeyExtent(tableID, null, null).toMetadataRange());
 
     for (Entry<Key,Value> entry : metaScanner) {
       entry.getKey().write(dataOut);
diff --git a/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java b/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
index 953d630..a1dd303 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/util/FateAdmin.java
@@ -44,7 +44,7 @@
 
   static class TxOpts {
     @Parameter(description = "<txid>...", required = true)
-    List<String> txids = new ArrayList<String>();
+    List<String> txids = new ArrayList<>();
   }
 
   @Parameters(commandDescription = "Stop an existing FATE by transaction id")
@@ -60,7 +60,7 @@
     Help opts = new Help();
     JCommander jc = new JCommander(opts);
     jc.setProgramName(FateAdmin.class.getName());
-    LinkedHashMap<String,TxOpts> txOpts = new LinkedHashMap<String,TxOpts>(2);
+    LinkedHashMap<String,TxOpts> txOpts = new LinkedHashMap<>(2);
     txOpts.put("fail", new FailOpts());
     txOpts.put("delete", new DeleteOpts());
     for (Entry<String,TxOpts> entry : txOpts.entrySet()) {
@@ -76,13 +76,13 @@
     System.err
         .printf("This tool has been deprecated%nFATE administration now available within 'accumulo shell'%n$ fate fail <txid>... | delete <txid>... | print [<txid>...]%n%n");
 
-    AdminUtil<Master> admin = new AdminUtil<Master>();
+    AdminUtil<Master> admin = new AdminUtil<>();
 
     Instance instance = HdfsZooInstance.getInstance();
     String path = ZooUtil.getRoot(instance) + Constants.ZFATE;
     String masterPath = ZooUtil.getRoot(instance) + Constants.ZMASTER_LOCK;
     IZooReaderWriter zk = ZooReaderWriter.getInstance();
-    ZooStore<Master> zs = new ZooStore<Master>(path, zk);
+    ZooStore<Master> zs = new ZooStore<>(path, zk);
 
     if (jc.getParsedCommand().equals("fail")) {
       for (String txid : txOpts.get(jc.getParsedCommand()).txids) {
@@ -98,7 +98,7 @@
         admin.deleteLocks(zs, zk, ZooUtil.getRoot(instance) + Constants.ZTABLE_LOCKS, txid);
       }
     } else if (jc.getParsedCommand().equals("print")) {
-      admin.print(new ReadOnlyStore<Master>(zs), zk, ZooUtil.getRoot(instance) + Constants.ZTABLE_LOCKS);
+      admin.print(new ReadOnlyStore<>(zs), zk, ZooUtil.getRoot(instance) + Constants.ZTABLE_LOCKS);
     }
   }
 }
diff --git a/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java b/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java
index a770b47..2a26fb0 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/util/TableValidators.java
@@ -35,7 +35,7 @@
 
   public static final Validator<String> VALID_NAME = new Validator<String>() {
     @Override
-    public boolean isValid(String tableName) {
+    public boolean apply(String tableName) {
       return tableName != null && tableName.matches(VALID_NAME_REGEX);
     }
 
@@ -49,7 +49,7 @@
 
   public static final Validator<String> VALID_ID = new Validator<String>() {
     @Override
-    public boolean isValid(String tableId) {
+    public boolean apply(String tableId) {
       return tableId != null
           && (RootTable.ID.equals(tableId) || MetadataTable.ID.equals(tableId) || ReplicationTable.ID.equals(tableId) || tableId.matches(VALID_ID_REGEX));
     }
@@ -67,7 +67,7 @@
     private List<String> metadataTables = Arrays.asList(RootTable.NAME, MetadataTable.NAME);
 
     @Override
-    public boolean isValid(String tableName) {
+    public boolean apply(String tableName) {
       return !metadataTables.contains(tableName);
     }
 
@@ -80,7 +80,7 @@
   public static final Validator<String> NOT_SYSTEM = new Validator<String>() {
 
     @Override
-    public boolean isValid(String tableName) {
+    public boolean apply(String tableName) {
       return !Namespaces.ACCUMULO_NAMESPACE.equals(qualify(tableName).getFirst());
     }
 
@@ -93,7 +93,7 @@
   public static final Validator<String> NOT_ROOT = new Validator<String>() {
 
     @Override
-    public boolean isValid(String tableName) {
+    public boolean apply(String tableName) {
       return !RootTable.NAME.equals(tableName);
     }
 
@@ -106,7 +106,7 @@
   public static final Validator<String> NOT_ROOT_ID = new Validator<String>() {
 
     @Override
-    public boolean isValid(String tableId) {
+    public boolean apply(String tableId) {
       return !RootTable.ID.equals(tableId);
     }
 
diff --git a/server/master/src/test/java/org/apache/accumulo/master/DefaultMapTest.java b/server/master/src/test/java/org/apache/accumulo/master/DefaultMapTest.java
index c0e9a4a..8d139ac 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/DefaultMapTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/DefaultMapTest.java
@@ -26,7 +26,7 @@
 
   @Test
   public void testDefaultMap() {
-    DefaultMap<String,String> map = new DefaultMap<String,String>("");
+    DefaultMap<String,String> map = new DefaultMap<>("");
     map.put("key", "value");
     String empty = map.get("otherKey");
     assertEquals(map.get("key"), "value");
diff --git a/server/master/src/test/java/org/apache/accumulo/master/replication/SequentialWorkAssignerTest.java b/server/master/src/test/java/org/apache/accumulo/master/replication/SequentialWorkAssignerTest.java
index e05a17e..45fe959 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/replication/SequentialWorkAssignerTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/replication/SequentialWorkAssignerTest.java
@@ -18,287 +18,38 @@
 
 import static org.easymock.EasyMock.createMock;
 import static org.easymock.EasyMock.expect;
-import static org.easymock.EasyMock.expectLastCall;
 import static org.easymock.EasyMock.replay;
 import static org.easymock.EasyMock.verify;
 
-import java.util.HashMap;
 import java.util.Map;
 import java.util.TreeMap;
 
-import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.protobuf.ProtobufUtil;
 import org.apache.accumulo.core.replication.ReplicationConstants;
-import org.apache.accumulo.core.replication.ReplicationSchema.OrderSection;
-import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
-import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.replication.ReplicationTarget;
-import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
-import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
 import org.apache.accumulo.server.zookeeper.ZooCache;
-import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Before;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TestName;
 
 public class SequentialWorkAssignerTest {
 
-  @Rule
-  public TestName test = new TestName();
-
-  private AccumuloConfiguration conf;
   private Connector conn;
   private SequentialWorkAssigner assigner;
 
   @Before
-  public void init() {
-    conf = createMock(AccumuloConfiguration.class);
+  public void init() throws Exception {
+    AccumuloConfiguration conf = createMock(AccumuloConfiguration.class);
     conn = createMock(Connector.class);
     assigner = new SequentialWorkAssigner(conf, conn);
   }
 
   @Test
-  public void createWorkForFilesInCorrectOrder() throws Exception {
-    ReplicationTarget target = new ReplicationTarget("cluster1", "table1", "1");
-    Text serializedTarget = target.toText();
-
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    // Set the connector
-    assigner.setConnector(conn);
-
-    // grant ourselves write to the replication table
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-
-    // Create two mutations, both of which need replication work done
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    // We want the name of file2 to sort before file1
-    String filename1 = "z_file1", filename2 = "a_file1";
-    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
-
-    // File1 was closed before file2, however
-    Status stat1 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
-    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
-
-    Mutation m = new Mutation(file1);
-    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = new Mutation(file2);
-    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
-    OrderSection.add(m, new Text(target.getSourceTableId()), ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
-    OrderSection.add(m, new Text(target.getSourceTableId()), ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    bw.close();
-
-    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
-    Map<String,Map<String,String>> queuedWork = new HashMap<>();
-    assigner.setQueuedWork(queuedWork);
-    assigner.setWorkQueue(workQueue);
-    assigner.setMaxQueueSize(Integer.MAX_VALUE);
-
-    // Make sure we expect the invocations in the correct order (accumulo is sorted)
-    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target), file1);
-    expectLastCall().once();
-
-    // file2 is *not* queued because file1 must be replicated first
-
-    replay(workQueue);
-
-    assigner.createWork();
-
-    verify(workQueue);
-
-    Assert.assertEquals(1, queuedWork.size());
-    Assert.assertTrue(queuedWork.containsKey("cluster1"));
-    Map<String,String> cluster1Work = queuedWork.get("cluster1");
-    Assert.assertEquals(1, cluster1Work.size());
-    Assert.assertTrue(cluster1Work.containsKey(target.getSourceTableId()));
-    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target), cluster1Work.get(target.getSourceTableId()));
-  }
-
-  @Test
-  public void workAcrossTablesHappensConcurrently() throws Exception {
-    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1");
-    Text serializedTarget1 = target1.toText();
-
-    ReplicationTarget target2 = new ReplicationTarget("cluster1", "table2", "2");
-    Text serializedTarget2 = target2.toText();
-
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    // Set the connector
-    assigner.setConnector(conn);
-
-    // grant ourselves write to the replication table
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-
-    // Create two mutations, both of which need replication work done
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    // We want the name of file2 to sort before file1
-    String filename1 = "z_file1", filename2 = "a_file1";
-    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
-
-    // File1 was closed before file2, however
-    Status stat1 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
-    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
-
-    Mutation m = new Mutation(file1);
-    WorkSection.add(m, serializedTarget1, ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = new Mutation(file2);
-    WorkSection.add(m, serializedTarget2, ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
-    OrderSection.add(m, new Text(target1.getSourceTableId()), ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
-    OrderSection.add(m, new Text(target2.getSourceTableId()), ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    bw.close();
-
-    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
-    Map<String,Map<String,String>> queuedWork = new HashMap<>();
-    assigner.setQueuedWork(queuedWork);
-    assigner.setWorkQueue(workQueue);
-    assigner.setMaxQueueSize(Integer.MAX_VALUE);
-
-    // Make sure we expect the invocations in the correct order (accumulo is sorted)
-    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), file1);
-    expectLastCall().once();
-
-    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), file2);
-    expectLastCall().once();
-
-    // file2 is *not* queued because file1 must be replicated first
-
-    replay(workQueue);
-
-    assigner.createWork();
-
-    verify(workQueue);
-
-    Assert.assertEquals(1, queuedWork.size());
-    Assert.assertTrue(queuedWork.containsKey("cluster1"));
-
-    Map<String,String> cluster1Work = queuedWork.get("cluster1");
-    Assert.assertEquals(2, cluster1Work.size());
-    Assert.assertTrue(cluster1Work.containsKey(target1.getSourceTableId()));
-    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), cluster1Work.get(target1.getSourceTableId()));
-
-    Assert.assertTrue(cluster1Work.containsKey(target2.getSourceTableId()));
-    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), cluster1Work.get(target2.getSourceTableId()));
-  }
-
-  @Test
-  public void workAcrossPeersHappensConcurrently() throws Exception {
-    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1");
-    Text serializedTarget1 = target1.toText();
-
-    ReplicationTarget target2 = new ReplicationTarget("cluster2", "table1", "1");
-    Text serializedTarget2 = target2.toText();
-
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    // Set the connector
-    assigner.setConnector(conn);
-
-    // grant ourselves write to the replication table
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-
-    // Create two mutations, both of which need replication work done
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    // We want the name of file2 to sort before file1
-    String filename1 = "z_file1", filename2 = "a_file1";
-    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
-
-    // File1 was closed before file2, however
-    Status stat1 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
-    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
-
-    Mutation m = new Mutation(file1);
-    WorkSection.add(m, serializedTarget1, ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = new Mutation(file2);
-    WorkSection.add(m, serializedTarget2, ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
-    OrderSection.add(m, new Text(target1.getSourceTableId()), ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
-    OrderSection.add(m, new Text(target2.getSourceTableId()), ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    bw.close();
-
-    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
-    Map<String,Map<String,String>> queuedWork = new HashMap<>();
-    assigner.setQueuedWork(queuedWork);
-    assigner.setWorkQueue(workQueue);
-    assigner.setMaxQueueSize(Integer.MAX_VALUE);
-
-    // Make sure we expect the invocations in the correct order (accumulo is sorted)
-    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), file1);
-    expectLastCall().once();
-
-    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), file2);
-    expectLastCall().once();
-
-    // file2 is *not* queued because file1 must be replicated first
-
-    replay(workQueue);
-
-    assigner.createWork();
-
-    verify(workQueue);
-
-    Assert.assertEquals(2, queuedWork.size());
-    Assert.assertTrue(queuedWork.containsKey("cluster1"));
-
-    Map<String,String> cluster1Work = queuedWork.get("cluster1");
-    Assert.assertEquals(1, cluster1Work.size());
-    Assert.assertTrue(cluster1Work.containsKey(target1.getSourceTableId()));
-    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), cluster1Work.get(target1.getSourceTableId()));
-
-    Map<String,String> cluster2Work = queuedWork.get("cluster2");
-    Assert.assertEquals(1, cluster2Work.size());
-    Assert.assertTrue(cluster2Work.containsKey(target2.getSourceTableId()));
-    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), cluster2Work.get(target2.getSourceTableId()));
-  }
-
-  @Test
   public void basicZooKeeperCleanup() throws Exception {
     DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
     ZooCache zooCache = createMock(ZooCache.class);
@@ -339,79 +90,4 @@
     Assert.assertEquals(1, cluster1Work.size());
     Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey("file2", new ReplicationTarget("cluster1", "2", "2")), cluster1Work.get("2"));
   }
-
-  @Test
-  public void reprocessingOfCompletedWorkRemovesWork() throws Exception {
-    ReplicationTarget target = new ReplicationTarget("cluster1", "table1", "1");
-    Text serializedTarget = target.toText();
-
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    // Set the connector
-    assigner.setConnector(conn);
-
-    // grant ourselves write to the replication table
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-
-    // Create two mutations, both of which need replication work done
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    // We want the name of file2 to sort before file1
-    String filename1 = "z_file1", filename2 = "a_file1";
-    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
-
-    // File1 was closed before file2, however
-    Status stat1 = Status.newBuilder().setBegin(100).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
-    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
-
-    Mutation m = new Mutation(file1);
-    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = new Mutation(file2);
-    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
-    OrderSection.add(m, new Text(target.getSourceTableId()), ProtobufUtil.toValue(stat1));
-    bw.addMutation(m);
-
-    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
-    OrderSection.add(m, new Text(target.getSourceTableId()), ProtobufUtil.toValue(stat2));
-    bw.addMutation(m);
-
-    bw.close();
-
-    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
-
-    // Treat filename1 as we have already submitted it for replication
-    Map<String,Map<String,String>> queuedWork = new HashMap<>();
-    Map<String,String> queuedWorkForCluster = new HashMap<>();
-    queuedWorkForCluster.put(target.getSourceTableId(), DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target));
-    queuedWork.put("cluster1", queuedWorkForCluster);
-
-    assigner.setQueuedWork(queuedWork);
-    assigner.setWorkQueue(workQueue);
-    assigner.setMaxQueueSize(Integer.MAX_VALUE);
-
-    // Make sure we expect the invocations in the correct order (accumulo is sorted)
-    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target), file2);
-    expectLastCall().once();
-
-    // file2 is queued because we remove file1 because it's fully replicated
-
-    replay(workQueue);
-
-    assigner.createWork();
-
-    verify(workQueue);
-
-    Assert.assertEquals(1, queuedWork.size());
-    Assert.assertTrue(queuedWork.containsKey("cluster1"));
-    Map<String,String> cluster1Work = queuedWork.get("cluster1");
-    Assert.assertEquals(1, cluster1Work.size());
-    Assert.assertTrue(cluster1Work.containsKey(target.getSourceTableId()));
-    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target), cluster1Work.get(target.getSourceTableId()));
-  }
 }
diff --git a/server/master/src/test/java/org/apache/accumulo/master/replication/UnorderedWorkAssignerTest.java b/server/master/src/test/java/org/apache/accumulo/master/replication/UnorderedWorkAssignerTest.java
index 35df344..a9af68b 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/replication/UnorderedWorkAssignerTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/replication/UnorderedWorkAssignerTest.java
@@ -30,46 +30,27 @@
 import java.util.UUID;
 
 import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.protobuf.ProtobufUtil;
 import org.apache.accumulo.core.replication.ReplicationConstants;
-import org.apache.accumulo.core.replication.ReplicationSchema.OrderSection;
-import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
-import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.replication.ReplicationTarget;
-import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
-import org.apache.accumulo.server.replication.StatusUtil;
-import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
 import org.apache.accumulo.server.zookeeper.ZooCache;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Before;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TestName;
 
 public class UnorderedWorkAssignerTest {
 
-  @Rule
-  public TestName test = new TestName();
-
-  private AccumuloConfiguration conf;
   private Connector conn;
   private UnorderedWorkAssigner assigner;
 
   @Before
-  public void init() {
-    conf = createMock(AccumuloConfiguration.class);
+  public void init() throws Exception {
+    AccumuloConfiguration conf = createMock(AccumuloConfiguration.class);
     conn = createMock(Connector.class);
     assigner = new UnorderedWorkAssigner(conf, conn);
   }
@@ -120,114 +101,6 @@
   }
 
   @Test
-  public void createWorkForFilesNeedingIt() throws Exception {
-    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1"), target2 = new ReplicationTarget("cluster1", "table2", "2");
-    Text serializedTarget1 = target1.toText(), serializedTarget2 = target2.toText();
-    String keyTarget1 = target1.getPeerName() + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target1.getRemoteIdentifier()
-        + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target1.getSourceTableId(), keyTarget2 = target2.getPeerName()
-        + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target2.getRemoteIdentifier() + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR
-        + target2.getSourceTableId();
-
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    // Set the connector
-    assigner.setConnector(conn);
-
-    // grant ourselves write to the replication table
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-
-    Status.Builder builder = Status.newBuilder().setBegin(0).setEnd(0).setInfiniteEnd(true).setClosed(false).setCreatedTime(5l);
-    Status status1 = builder.build();
-    builder.setCreatedTime(10l);
-    Status status2 = builder.build();
-
-    // Create two mutations, both of which need replication work done
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    String filename1 = UUID.randomUUID().toString(), filename2 = UUID.randomUUID().toString();
-    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
-    Mutation m = new Mutation(file1);
-    WorkSection.add(m, serializedTarget1, ProtobufUtil.toValue(status1));
-    bw.addMutation(m);
-    m = OrderSection.createMutation(file1, status1.getCreatedTime());
-    OrderSection.add(m, new Text(target1.getSourceTableId()), ProtobufUtil.toValue(status1));
-    bw.addMutation(m);
-
-    m = new Mutation(file2);
-    WorkSection.add(m, serializedTarget2, ProtobufUtil.toValue(status2));
-    bw.addMutation(m);
-    m = OrderSection.createMutation(file2, status2.getCreatedTime());
-    OrderSection.add(m, new Text(target2.getSourceTableId()), ProtobufUtil.toValue(status2));
-    bw.addMutation(m);
-
-    bw.close();
-
-    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
-    HashSet<String> queuedWork = new HashSet<>();
-    assigner.setQueuedWork(queuedWork);
-    assigner.setWorkQueue(workQueue);
-    assigner.setMaxQueueSize(Integer.MAX_VALUE);
-
-    // Make sure we expect the invocations in the order they were created
-    String key = filename1 + "|" + keyTarget1;
-    workQueue.addWork(key, file1);
-    expectLastCall().once();
-
-    key = filename2 + "|" + keyTarget2;
-    workQueue.addWork(key, file2);
-    expectLastCall().once();
-
-    replay(workQueue);
-
-    assigner.createWork();
-
-    verify(workQueue);
-  }
-
-  @Test
-  public void doNotCreateWorkForFilesNotNeedingIt() throws Exception {
-    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1"), target2 = new ReplicationTarget("cluster1", "table2", "2");
-    Text serializedTarget1 = target1.toText(), serializedTarget2 = target2.toText();
-
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    // Set the connector
-    assigner.setConnector(conn);
-
-    // grant ourselves write to the replication table
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-
-    // Create two mutations, both of which need replication work done
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    String filename1 = UUID.randomUUID().toString(), filename2 = UUID.randomUUID().toString();
-    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
-
-    Mutation m = new Mutation(file1);
-    WorkSection.add(m, serializedTarget1, StatusUtil.fileCreatedValue(5));
-    bw.addMutation(m);
-
-    m = new Mutation(file2);
-    WorkSection.add(m, serializedTarget2, StatusUtil.fileCreatedValue(10));
-    bw.addMutation(m);
-
-    bw.close();
-
-    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
-    HashSet<String> queuedWork = new HashSet<>();
-    assigner.setQueuedWork(queuedWork);
-    assigner.setMaxQueueSize(Integer.MAX_VALUE);
-
-    replay(workQueue);
-
-    assigner.createWork();
-
-    verify(workQueue);
-  }
-
-  @Test
   public void workNotInZooKeeperIsCleanedUp() {
     Set<String> queuedWork = new LinkedHashSet<>(Arrays.asList("wal1", "wal2"));
     assigner.setQueuedWork(queuedWork);
@@ -249,45 +122,4 @@
     Assert.assertTrue("Queued work was not emptied", queuedWork.isEmpty());
   }
 
-  @Test
-  public void workNotReAdded() throws Exception {
-    Set<String> queuedWork = new HashSet<>();
-
-    assigner.setQueuedWork(queuedWork);
-
-    ReplicationTarget target = new ReplicationTarget("cluster1", "table1", "1");
-    String serializedTarget = target.getPeerName() + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target.getRemoteIdentifier()
-        + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target.getSourceTableId();
-
-    queuedWork.add("wal1|" + serializedTarget.toString());
-
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    // Set the connector
-    assigner.setConnector(conn);
-
-    // grant ourselves write to the replication table
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-
-    // Create two mutations, both of which need replication work done
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    String file1 = "/accumulo/wal/tserver+port/wal1";
-    Mutation m = new Mutation(file1);
-    WorkSection.add(m, target.toText(), StatusUtil.openWithUnknownLengthValue());
-    bw.addMutation(m);
-
-    bw.close();
-
-    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
-    assigner.setWorkQueue(workQueue);
-    assigner.setMaxQueueSize(Integer.MAX_VALUE);
-
-    replay(workQueue);
-
-    assigner.createWork();
-
-    verify(workQueue);
-  }
 }
diff --git a/server/master/src/test/java/org/apache/accumulo/master/replication/WorkMakerTest.java b/server/master/src/test/java/org/apache/accumulo/master/replication/WorkMakerTest.java
index 6373def..ec849f4 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/replication/WorkMakerTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/replication/WorkMakerTest.java
@@ -16,185 +16,16 @@
  */
 package org.apache.accumulo.master.replication;
 
-import java.util.HashSet;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Set;
-
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
-import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
-import org.apache.accumulo.core.replication.ReplicationTable;
-import org.apache.accumulo.core.replication.ReplicationTarget;
-import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.server.AccumuloServerContext;
-import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TestName;
-
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.Iterables;
 
 public class WorkMakerTest {
 
-  private MockInstance instance;
-  private Connector conn;
-
-  @Rule
-  public TestName name = new TestName();
-  private AccumuloServerContext context;
-
-  @Before
-  public void createMockAccumulo() throws Exception {
-    instance = new MockInstance();
-    context = new AccumuloServerContext(new ServerConfigurationFactory(instance));
-    conn = context.getConnector();
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
-    conn.tableOperations().deleteRows(ReplicationTable.NAME, null, null);
-  }
-
-  @Test
-  public void singleUnitSingleTarget() throws Exception {
-    String table = name.getMethodName();
-    conn.tableOperations().create(name.getMethodName());
-    String tableId = conn.tableOperations().tableIdMap().get(table);
-    String file = "hdfs://localhost:8020/accumulo/wal/123456-1234-1234-12345678";
-
-    // Create a status record for a file
-    long timeCreated = System.currentTimeMillis();
-    Mutation m = new Mutation(new Path(file).toString());
-    m.put(StatusSection.NAME, new Text(tableId), StatusUtil.fileCreatedValue(timeCreated));
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    bw.addMutation(m);
-    bw.flush();
-
-    // Assert that we have one record in the status section
-    Scanner s = ReplicationTable.getScanner(conn);
-    StatusSection.limit(s);
-    Assert.assertEquals(1, Iterables.size(s));
-
-    WorkMaker workMaker = new WorkMaker(context, conn);
-
-    // Invoke the addWorkRecord method to create a Work record from the Status record earlier
-    ReplicationTarget expected = new ReplicationTarget("remote_cluster_1", "4", tableId);
-    workMaker.setBatchWriter(bw);
-    workMaker.addWorkRecord(new Text(file), StatusUtil.fileCreatedValue(timeCreated), ImmutableMap.of("remote_cluster_1", "4"), tableId);
-
-    // Scan over just the WorkSection
-    s = ReplicationTable.getScanner(conn);
-    WorkSection.limit(s);
-
-    Entry<Key,Value> workEntry = Iterables.getOnlyElement(s);
-    Key workKey = workEntry.getKey();
-    ReplicationTarget actual = ReplicationTarget.from(workKey.getColumnQualifier());
-
-    Assert.assertEquals(file, workKey.getRow().toString());
-    Assert.assertEquals(WorkSection.NAME, workKey.getColumnFamily());
-    Assert.assertEquals(expected, actual);
-    Assert.assertEquals(workEntry.getValue(), StatusUtil.fileCreatedValue(timeCreated));
-  }
-
-  @Test
-  public void singleUnitMultipleTargets() throws Exception {
-    String table = name.getMethodName();
-    conn.tableOperations().create(name.getMethodName());
-
-    String tableId = conn.tableOperations().tableIdMap().get(table);
-
-    String file = "hdfs://localhost:8020/accumulo/wal/123456-1234-1234-12345678";
-
-    Mutation m = new Mutation(new Path(file).toString());
-    m.put(StatusSection.NAME, new Text(tableId), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
-    BatchWriter bw = ReplicationTable.getBatchWriter(context.getConnector());
-    bw.addMutation(m);
-    bw.flush();
-
-    // Assert that we have one record in the status section
-    Scanner s = ReplicationTable.getScanner(conn);
-    StatusSection.limit(s);
-    Assert.assertEquals(1, Iterables.size(s));
-
-    WorkMaker workMaker = new WorkMaker(context, conn);
-
-    Map<String,String> targetClusters = ImmutableMap.of("remote_cluster_1", "4", "remote_cluster_2", "6", "remote_cluster_3", "8");
-    Set<ReplicationTarget> expectedTargets = new HashSet<>();
-    for (Entry<String,String> cluster : targetClusters.entrySet()) {
-      expectedTargets.add(new ReplicationTarget(cluster.getKey(), cluster.getValue(), tableId));
-    }
-    workMaker.setBatchWriter(bw);
-    workMaker.addWorkRecord(new Text(file), StatusUtil.fileCreatedValue(System.currentTimeMillis()), targetClusters, tableId);
-
-    s = ReplicationTable.getScanner(conn);
-    WorkSection.limit(s);
-
-    Set<ReplicationTarget> actualTargets = new HashSet<>();
-    for (Entry<Key,Value> entry : s) {
-      Assert.assertEquals(file, entry.getKey().getRow().toString());
-      Assert.assertEquals(WorkSection.NAME, entry.getKey().getColumnFamily());
-
-      ReplicationTarget target = ReplicationTarget.from(entry.getKey().getColumnQualifier());
-      actualTargets.add(target);
-    }
-
-    for (ReplicationTarget expected : expectedTargets) {
-      Assert.assertTrue("Did not find expected target: " + expected, actualTargets.contains(expected));
-      actualTargets.remove(expected);
-    }
-
-    Assert.assertTrue("Found extra replication work entries: " + actualTargets, actualTargets.isEmpty());
-  }
-
-  @Test
-  public void dontCreateWorkForEntriesWithNothingToReplicate() throws Exception {
-    String table = name.getMethodName();
-    conn.tableOperations().create(name.getMethodName());
-    String tableId = conn.tableOperations().tableIdMap().get(table);
-    String file = "hdfs://localhost:8020/accumulo/wal/123456-1234-1234-12345678";
-
-    Mutation m = new Mutation(new Path(file).toString());
-    m.put(StatusSection.NAME, new Text(tableId), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
-    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
-    bw.addMutation(m);
-    bw.flush();
-
-    // Assert that we have one record in the status section
-    Scanner s = ReplicationTable.getScanner(conn);
-    StatusSection.limit(s);
-    Assert.assertEquals(1, Iterables.size(s));
-
-    WorkMaker workMaker = new WorkMaker(context, conn);
-
-    conn.tableOperations().setProperty(ReplicationTable.NAME, Property.TABLE_REPLICATION_TARGET.getKey() + "remote_cluster_1", "4");
-
-    workMaker.setBatchWriter(bw);
-
-    // If we don't shortcircuit out, we should get an exception because ServerConfiguration.getTableConfiguration
-    // won't work with MockAccumulo
-    workMaker.run();
-
-    s = ReplicationTable.getScanner(conn);
-    WorkSection.limit(s);
-
-    Assert.assertEquals(0, Iterables.size(s));
-  }
-
   @Test
   public void closedStatusRecordsStillMakeWork() throws Exception {
-    WorkMaker workMaker = new WorkMaker(context, conn);
+    WorkMaker workMaker = new WorkMaker(null, null);
 
     Assert.assertFalse(workMaker.shouldCreateWork(StatusUtil.fileCreated(System.currentTimeMillis())));
     Assert.assertTrue(workMaker.shouldCreateWork(StatusUtil.ingestedUntil(1000)));
diff --git a/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java b/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java
index d2cc0cf..6497f96 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java
+++ b/server/master/src/test/java/org/apache/accumulo/master/state/RootTabletStateStoreTest.java
@@ -36,7 +36,6 @@
 import org.apache.accumulo.server.master.state.TabletLocationState;
 import org.apache.accumulo.server.master.state.TabletLocationState.BadLocationStateException;
 import org.apache.accumulo.server.master.state.ZooTabletStateStore;
-import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -49,7 +48,7 @@
       this.name = name;
     }
 
-    List<Node> children = new ArrayList<Node>();
+    List<Node> children = new ArrayList<>();
     String name;
     byte[] value = new byte[] {};
 
@@ -59,7 +58,7 @@
           return node;
       return null;
     }
-  };
+  }
 
   static class FakeZooStore implements DistributedStore {
 
@@ -84,7 +83,7 @@
       Node node = navigate(path);
       if (node == null)
         return Collections.emptyList();
-      List<String> children = new ArrayList<String>(node.children.size());
+      List<String> children = new ArrayList<>(node.children.size());
       for (Node child : node.children)
         children.add(child.name);
       return children;
@@ -144,10 +143,10 @@
     assertArrayEquals(store.get("/a/b"), "ab".getBytes());
     store.put("/a/b/b", "abb".getBytes());
     List<String> children = store.getChildren("/a/b");
-    assertEquals(new HashSet<String>(children), new HashSet<String>(Arrays.asList("b", "c")));
+    assertEquals(new HashSet<>(children), new HashSet<>(Arrays.asList("b", "c")));
     store.remove("/a/b/c");
     children = store.getChildren("/a/b");
-    assertEquals(new HashSet<String>(children), new HashSet<String>(Arrays.asList("b")));
+    assertEquals(new HashSet<>(children), new HashSet<>(Arrays.asList("b")));
   }
 
   @Test
@@ -177,11 +176,11 @@
     assertEquals(count, 1);
     TabletLocationState assigned = null;
     try {
-      assigned = new TabletLocationState(root, server, null, null, null, false);
+      assigned = new TabletLocationState(root, server, null, null, null, null, false);
     } catch (BadLocationStateException e) {
       fail("Unexpected error " + e);
     }
-    tstore.unassign(Collections.singletonList(assigned));
+    tstore.unassign(Collections.singletonList(assigned), null);
     count = 0;
     for (TabletLocationState location : tstore) {
       assertEquals(location.extent, root);
@@ -191,7 +190,7 @@
     }
     assertEquals(count, 1);
 
-    KeyExtent notRoot = new KeyExtent(new Text("0"), null, null);
+    KeyExtent notRoot = new KeyExtent("0", null, null);
     try {
       tstore.setLocations(Collections.singletonList(new Assignment(notRoot, server)));
       Assert.fail("should not get here");
@@ -204,12 +203,12 @@
 
     TabletLocationState broken = null;
     try {
-      broken = new TabletLocationState(notRoot, server, null, null, null, false);
+      broken = new TabletLocationState(notRoot, server, null, null, null, null, false);
     } catch (BadLocationStateException e) {
       fail("Unexpected error " + e);
     }
     try {
-      tstore.unassign(Collections.singletonList(broken));
+      tstore.unassign(Collections.singletonList(broken), null);
       Assert.fail("should not get here");
     } catch (IllegalArgumentException ex) {}
   }
diff --git a/server/monitor/pom.xml b/server/monitor/pom.xml
index 0106ec0..6c56e59 100644
--- a/server/monitor/pom.xml
+++ b/server/monitor/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-monitor</artifactId>
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java
index f0213e7..a292fd9 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/EmbeddedWebServer.java
@@ -129,4 +129,8 @@
   public boolean isUsingSsl() {
     return usingSsl;
   }
+
+  public boolean isRunning() {
+    return server.isRunning();
+  }
 }
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/Monitor.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/Monitor.java
index c3dd773..63fb32c 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/Monitor.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/Monitor.java
@@ -16,9 +16,11 @@
  */
 package org.apache.accumulo.monitor;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -29,6 +31,7 @@
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.TimerTask;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.atomic.AtomicReference;
 
@@ -53,12 +56,12 @@
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.ServerServices;
 import org.apache.accumulo.core.util.ServerServices.Service;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.util.LoggingRunnable;
 import org.apache.accumulo.fate.zookeeper.ZooLock.LockLossReason;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
+import org.apache.accumulo.monitor.servlets.BulkImportServlet;
 import org.apache.accumulo.monitor.servlets.DefaultServlet;
 import org.apache.accumulo.monitor.servlets.GcStatusServlet;
 import org.apache.accumulo.monitor.servlets.JSONServlet;
@@ -170,20 +173,20 @@
   private ZooLock monitorLock;
 
   private static final String DEFAULT_INSTANCE_NAME = "(Unavailable)";
-  public static final AtomicReference<String> cachedInstanceName = new AtomicReference<String>(DEFAULT_INSTANCE_NAME);
+  public static final AtomicReference<String> cachedInstanceName = new AtomicReference<>(DEFAULT_INSTANCE_NAME);
 
   private static class EventCounter {
 
-    Map<String,Pair<Long,Long>> prevSamples = new HashMap<String,Pair<Long,Long>>();
-    Map<String,Pair<Long,Long>> samples = new HashMap<String,Pair<Long,Long>>();
-    Set<String> serversUpdated = new HashSet<String>();
+    Map<String,Pair<Long,Long>> prevSamples = new HashMap<>();
+    Map<String,Pair<Long,Long>> samples = new HashMap<>();
+    Set<String> serversUpdated = new HashSet<>();
 
     void startingUpdates() {
       serversUpdated.clear();
     }
 
     void updateTabletServer(String name, long sampleTime, long numEvents) {
-      Pair<Long,Long> newSample = new Pair<Long,Long>(sampleTime, numEvents);
+      Pair<Long,Long> newSample = new Pair<>(sampleTime, numEvents);
       Pair<Long,Long> lastSample = samples.get(name);
 
       if (lastSample == null || !lastSample.equals(newSample)) {
@@ -291,7 +294,7 @@
           }
         }
         if (mmi == null)
-          UtilWaitThread.sleep(1000);
+          sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
       if (mmi != null) {
         int majorCompactions = 0;
@@ -344,25 +347,25 @@
         Monitor.totalHoldTime = totalHoldTime;
         Monitor.totalLookups = totalLookups;
 
-        ingestRateOverTime.add(new Pair<Long,Double>(currentTime, totalIngestRate));
-        ingestByteRateOverTime.add(new Pair<Long,Double>(currentTime, totalIngestByteRate));
+        ingestRateOverTime.add(new Pair<>(currentTime, totalIngestRate));
+        ingestByteRateOverTime.add(new Pair<>(currentTime, totalIngestByteRate));
 
         double totalLoad = 0.;
         for (TabletServerStatus status : mmi.tServerInfo) {
           if (status != null)
             totalLoad += status.osLoad;
         }
-        loadOverTime.add(new Pair<Long,Double>(currentTime, totalLoad));
+        loadOverTime.add(new Pair<>(currentTime, totalLoad));
 
-        minorCompactionsOverTime.add(new Pair<Long,Integer>(currentTime, minorCompactions));
-        majorCompactionsOverTime.add(new Pair<Long,Integer>(currentTime, majorCompactions));
+        minorCompactionsOverTime.add(new Pair<>(currentTime, minorCompactions));
+        majorCompactionsOverTime.add(new Pair<>(currentTime, majorCompactions));
 
-        lookupsOverTime.add(new Pair<Long,Double>(currentTime, lookupRateTracker.calculateRate()));
+        lookupsOverTime.add(new Pair<>(currentTime, lookupRateTracker.calculateRate()));
 
-        queryRateOverTime.add(new Pair<Long,Integer>(currentTime, (int) totalQueryRate));
-        queryByteRateOverTime.add(new Pair<Long,Double>(currentTime, totalQueryByteRate));
+        queryRateOverTime.add(new Pair<>(currentTime, (int) totalQueryRate));
+        queryByteRateOverTime.add(new Pair<>(currentTime, totalQueryByteRate));
 
-        scanRateOverTime.add(new Pair<Long,Integer>(currentTime, (int) totalScanRate));
+        scanRateOverTime.add(new Pair<>(currentTime, (int) totalScanRate));
 
         calcCacheHitRate(indexCacheHitRateOverTime, currentTime, indexCacheHitTracker, indexCacheRequestTracker);
         calcCacheHitRate(dataCacheHitRateOverTime, currentTime, dataCacheHitTracker, dataCacheRequestTracker);
@@ -387,7 +390,7 @@
   private static void calcCacheHitRate(List<Pair<Long,Double>> hitRate, long currentTime, EventCounter cacheHits, EventCounter cacheReq) {
     long req = cacheReq.calculateCount();
     if (req > 0)
-      hitRate.add(new Pair<Long,Double>(currentTime, cacheHits.calculateCount() / (double) cacheReq.calculateCount()));
+      hitRate.add(new Pair<>(currentTime, cacheHits.calculateCount() / (double) cacheReq.calculateCount()));
     else
       hitRate.add(new Pair<Long,Double>(currentTime, null));
   }
@@ -452,34 +455,39 @@
     }
 
     Monitor.START_TIME = System.currentTimeMillis();
-    int port = config.getConfiguration().getPort(Property.MONITOR_PORT);
-    try {
-      log.debug("Creating monitor on port " + port);
-      server = new EmbeddedWebServer(hostname, port);
-    } catch (Throwable ex) {
-      log.error("Unable to start embedded web server", ex);
-      throw new RuntimeException(ex);
+    int ports[] = config.getConfiguration().getPort(Property.MONITOR_PORT);
+    for (int port : ports) {
+      try {
+        log.debug("Creating monitor on port " + port);
+        server = new EmbeddedWebServer(hostname, port);
+        server.addServlet(DefaultServlet.class, "/");
+        server.addServlet(OperationServlet.class, "/op");
+        server.addServlet(MasterServlet.class, "/master");
+        server.addServlet(TablesServlet.class, "/tables");
+        server.addServlet(TServersServlet.class, "/tservers");
+        server.addServlet(ProblemServlet.class, "/problems");
+        server.addServlet(GcStatusServlet.class, "/gc");
+        server.addServlet(LogServlet.class, "/log");
+        server.addServlet(XMLServlet.class, "/xml");
+        server.addServlet(JSONServlet.class, "/json");
+        server.addServlet(VisServlet.class, "/vis");
+        server.addServlet(ScanServlet.class, "/scans");
+        server.addServlet(BulkImportServlet.class, "/bulkImports");
+        server.addServlet(Summary.class, "/trace/summary");
+        server.addServlet(ListType.class, "/trace/listType");
+        server.addServlet(ShowTrace.class, "/trace/show");
+        server.addServlet(ReplicationServlet.class, "/replication");
+        if (server.isUsingSsl())
+          server.addServlet(ShellServlet.class, "/shell");
+        server.start();
+        break;
+      } catch (Throwable ex) {
+        log.error("Unable to start embedded web server", ex);
+      }
     }
-
-    server.addServlet(DefaultServlet.class, "/");
-    server.addServlet(OperationServlet.class, "/op");
-    server.addServlet(MasterServlet.class, "/master");
-    server.addServlet(TablesServlet.class, "/tables");
-    server.addServlet(TServersServlet.class, "/tservers");
-    server.addServlet(ProblemServlet.class, "/problems");
-    server.addServlet(GcStatusServlet.class, "/gc");
-    server.addServlet(LogServlet.class, "/log");
-    server.addServlet(XMLServlet.class, "/xml");
-    server.addServlet(JSONServlet.class, "/json");
-    server.addServlet(VisServlet.class, "/vis");
-    server.addServlet(ScanServlet.class, "/scans");
-    server.addServlet(Summary.class, "/trace/summary");
-    server.addServlet(ListType.class, "/trace/listType");
-    server.addServlet(ShowTrace.class, "/trace/show");
-    server.addServlet(ReplicationServlet.class, "/replication");
-    if (server.isUsingSsl())
-      server.addServlet(ShellServlet.class, "/shell");
-    server.start();
+    if (!server.isRunning()) {
+      throw new RuntimeException("Unable to start embedded web server on ports: " + Arrays.toString(ports));
+    }
 
     try {
       log.debug("Using " + hostname + " to advertise monitor location in ZooKeeper");
@@ -513,7 +521,7 @@
             log.warn("{}", e.getMessage(), e);
           }
 
-          UtilWaitThread.sleep(333);
+          sleepUninterruptibly(333, TimeUnit.MILLISECONDS);
         }
 
       }
@@ -528,7 +536,7 @@
           } catch (Exception e) {
             log.warn("{}", e.getMessage(), e);
           }
-          UtilWaitThread.sleep(5000);
+          sleepUninterruptibly(5, TimeUnit.SECONDS);
         }
       }
     }), "Scan scanner").start();
@@ -550,11 +558,11 @@
     }
   }
 
-  static final Map<HostAndPort,ScanStats> allScans = new HashMap<HostAndPort,ScanStats>();
+  static final Map<HostAndPort,ScanStats> allScans = new HashMap<>();
 
   public static Map<HostAndPort,ScanStats> getScans() {
     synchronized (allScans) {
-      return new HashMap<HostAndPort,ScanStats>(allScans);
+      return new HashMap<>(allScans);
     }
   }
 
@@ -639,7 +647,7 @@
 
       monitorLock.tryToCancelAsyncLockOrUnlock();
 
-      UtilWaitThread.sleep(getContext().getConfiguration().getTimeInMillis(Property.MONITOR_LOCK_CHECK_INTERVAL));
+      sleepUninterruptibly(getContext().getConfiguration().getTimeInMillis(Property.MONITOR_LOCK_CHECK_INTERVAL), TimeUnit.MILLISECONDS);
     }
 
     log.info("Got Monitor lock.");
@@ -754,37 +762,37 @@
 
   public static List<Pair<Long,Double>> getLoadOverTime() {
     synchronized (loadOverTime) {
-      return new ArrayList<Pair<Long,Double>>(loadOverTime);
+      return new ArrayList<>(loadOverTime);
     }
   }
 
   public static List<Pair<Long,Double>> getIngestRateOverTime() {
     synchronized (ingestRateOverTime) {
-      return new ArrayList<Pair<Long,Double>>(ingestRateOverTime);
+      return new ArrayList<>(ingestRateOverTime);
     }
   }
 
   public static List<Pair<Long,Double>> getIngestByteRateOverTime() {
     synchronized (ingestByteRateOverTime) {
-      return new ArrayList<Pair<Long,Double>>(ingestByteRateOverTime);
+      return new ArrayList<>(ingestByteRateOverTime);
     }
   }
 
   public static List<Pair<Long,Integer>> getMinorCompactionsOverTime() {
     synchronized (minorCompactionsOverTime) {
-      return new ArrayList<Pair<Long,Integer>>(minorCompactionsOverTime);
+      return new ArrayList<>(minorCompactionsOverTime);
     }
   }
 
   public static List<Pair<Long,Integer>> getMajorCompactionsOverTime() {
     synchronized (majorCompactionsOverTime) {
-      return new ArrayList<Pair<Long,Integer>>(majorCompactionsOverTime);
+      return new ArrayList<>(majorCompactionsOverTime);
     }
   }
 
   public static List<Pair<Long,Double>> getLookupsOverTime() {
     synchronized (lookupsOverTime) {
-      return new ArrayList<Pair<Long,Double>>(lookupsOverTime);
+      return new ArrayList<>(lookupsOverTime);
     }
   }
 
@@ -794,31 +802,31 @@
 
   public static List<Pair<Long,Integer>> getQueryRateOverTime() {
     synchronized (queryRateOverTime) {
-      return new ArrayList<Pair<Long,Integer>>(queryRateOverTime);
+      return new ArrayList<>(queryRateOverTime);
     }
   }
 
   public static List<Pair<Long,Integer>> getScanRateOverTime() {
     synchronized (scanRateOverTime) {
-      return new ArrayList<Pair<Long,Integer>>(scanRateOverTime);
+      return new ArrayList<>(scanRateOverTime);
     }
   }
 
   public static List<Pair<Long,Double>> getQueryByteRateOverTime() {
     synchronized (queryByteRateOverTime) {
-      return new ArrayList<Pair<Long,Double>>(queryByteRateOverTime);
+      return new ArrayList<>(queryByteRateOverTime);
     }
   }
 
   public static List<Pair<Long,Double>> getIndexCacheHitRateOverTime() {
     synchronized (indexCacheHitRateOverTime) {
-      return new ArrayList<Pair<Long,Double>>(indexCacheHitRateOverTime);
+      return new ArrayList<>(indexCacheHitRateOverTime);
     }
   }
 
   public static List<Pair<Long,Double>> getDataCacheHitRateOverTime() {
     synchronized (dataCacheHitRateOverTime) {
-      return new ArrayList<Pair<Long,Double>>(dataCacheHitRateOverTime);
+      return new ArrayList<>(dataCacheHitRateOverTime);
     }
   }
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/ZooKeeperStatus.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/ZooKeeperStatus.java
index 2e89344..f2a295d 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/ZooKeeperStatus.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/ZooKeeperStatus.java
@@ -22,17 +22,18 @@
 import java.util.Objects;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.rpc.TTimeoutTransport;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.thrift.transport.TTransport;
 import org.apache.thrift.transport.TTransportException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.net.HostAndPort;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 public class ZooKeeperStatus implements Runnable {
 
@@ -81,7 +82,7 @@
     }
   }
 
-  private static SortedSet<ZooKeeperState> status = new TreeSet<ZooKeeperState>();
+  private static SortedSet<ZooKeeperState> status = new TreeSet<>();
 
   public static Collection<ZooKeeperState> getZooKeeperStatus() {
     return status;
@@ -92,7 +93,7 @@
 
     while (!stop) {
 
-      TreeSet<ZooKeeperState> update = new TreeSet<ZooKeeperState>();
+      TreeSet<ZooKeeperState> update = new TreeSet<>();
 
       String zookeepers[] = SiteConfiguration.getInstance().get(Property.INSTANCE_ZK_HOST).split(",");
       for (String keeper : zookeepers) {
@@ -142,7 +143,7 @@
         }
       }
       status = update;
-      UtilWaitThread.sleep(5 * 1000);
+      sleepUninterruptibly(5, TimeUnit.SECONDS);
     }
   }
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
index fac18cd..fc329b8 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BasicServlet.java
@@ -168,6 +168,7 @@
     sb.append("<a href='/master'>Master&nbsp;Server</a><br />\n");
     sb.append("<a href='/tservers'>Tablet&nbsp;Servers</a><br />\n");
     sb.append("<a href='/scans'>Active&nbsp;Scans</a><br />\n");
+    sb.append("<a href='/bulkImports'>Bulk&nbsp;Imports</a><br />\n");
     sb.append("<a href='/vis'>Server Activity</a><br />\n");
     sb.append("<a href='/gc'>Garbage&nbsp;Collector</a><br />\n");
     sb.append("<a href='/tables'>Tables</a><br />\n");
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BulkImportServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BulkImportServlet.java
new file mode 100644
index 0000000..ad09abe
--- /dev/null
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/BulkImportServlet.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.monitor.servlets;
+
+import java.io.IOException;
+import java.util.List;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.accumulo.core.master.thrift.BulkImportStatus;
+import org.apache.accumulo.core.master.thrift.TabletServerStatus;
+import org.apache.accumulo.monitor.Monitor;
+import org.apache.accumulo.monitor.util.Table;
+import org.apache.accumulo.monitor.util.TableRow;
+import org.apache.accumulo.monitor.util.celltypes.BulkImportStateType;
+import org.apache.accumulo.monitor.util.celltypes.DurationType;
+import org.apache.accumulo.monitor.util.celltypes.PreciseNumberType;
+import org.apache.accumulo.monitor.util.celltypes.TServerLinkType;
+
+public class BulkImportServlet extends BasicServlet {
+
+  private static final long serialVersionUID = 1L;
+
+  @Override
+  protected String getTitle(HttpServletRequest req) {
+    return "Bulk Imports";
+  }
+
+  private static long duration(long start) {
+    return (System.currentTimeMillis() - start) / 1000L;
+  }
+
+  @Override
+  protected void pageBody(HttpServletRequest req, HttpServletResponse response, StringBuilder sb) throws IOException {
+    Table table = new Table("masterBulkImportStatus", "Bulk&nbsp;Import&nbsp;Status");
+    table.addSortableColumn("Directory");
+    table.addSortableColumn("Age", new DurationType(0l, 5 * 60 * 1000l), "The age of the import.");
+    table.addSortableColumn("State", new BulkImportStateType(), "The current state of the bulk import.");
+    for (BulkImportStatus bulk : Monitor.getMmi().bulkImports) {
+      TableRow row = table.prepareRow();
+      row.add(bulk.filename);
+      row.add(duration(bulk.startTime));
+      row.add(bulk.state);
+      table.addRow(row);
+    }
+    table.generate(req, sb);
+
+    table = new Table("bulkImportStatus", "TabletServer&nbsp;Bulk&nbsp;Import&nbsp;Status");
+    table.addSortableColumn("Server", new TServerLinkType(), null);
+    table.addSortableColumn("#", new PreciseNumberType(0, 20, 0, 100), "Number of imports presently running");
+    table.addSortableColumn("Oldest&nbsp;Age", new DurationType(0l, 5 * 60 * 1000l), "The age of the oldest import running on this server.");
+    for (TabletServerStatus tserverInfo : Monitor.getMmi().getTServerInfo()) {
+      TableRow row = table.prepareRow();
+      row.add(tserverInfo);
+      List<BulkImportStatus> stats = tserverInfo.bulkImports;
+      if (stats != null) {
+        row.add(stats.size());
+        long oldest = Long.MAX_VALUE;
+        for (BulkImportStatus bulk : stats) {
+          oldest = Math.min(oldest, bulk.startTime);
+        }
+        if (oldest != Long.MAX_VALUE) {
+          row.add(duration(oldest));
+        } else {
+          row.add(0L);
+        }
+      } else {
+        row.add(0);
+        row.add(0L);
+      }
+      table.addRow(row);
+    }
+    table.generate(req, sb);
+  }
+
+}
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
index 8d130aa..383d7bc 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
@@ -32,21 +32,13 @@
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 
-import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
 import org.apache.accumulo.core.util.Duration;
-import org.apache.accumulo.core.util.NumUtil;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.volume.VolumeConfiguration;
 import org.apache.accumulo.monitor.Monitor;
 import org.apache.accumulo.monitor.ZooKeeperStatus;
 import org.apache.accumulo.monitor.ZooKeeperStatus.ZooKeeperState;
 import org.apache.accumulo.monitor.util.celltypes.NumberType;
-import org.apache.accumulo.server.fs.VolumeManager;
-import org.apache.accumulo.server.fs.VolumeManagerImpl;
-import org.apache.hadoop.fs.ContentSummary;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
 
 public class DefaultServlet extends BasicServlet {
 
@@ -203,7 +195,7 @@
     sb.append("</tr></table>\n");
     sb.append("<br />\n");
 
-    sb.append("<p/><table class=\"noborder\">\n");
+    sb.append("<p><table class=\"noborder\">\n");
 
     sb.append("<tr><td>\n");
     plotData(sb, "Ingest (Entries/s)", Monitor.getIngestRateOverTime(), false);
@@ -240,66 +232,15 @@
 
   private void doAccumuloTable(StringBuilder sb) throws IOException {
     // Accumulo
-    VolumeManager vm = VolumeManagerImpl.get(SiteConfiguration.getInstance());
     MasterMonitorInfo info = Monitor.getMmi();
     sb.append("<table>\n");
     sb.append("<tr><th colspan='2'><a href='/master'>Accumulo Master</a></th></tr>\n");
     if (info == null) {
       sb.append("<tr><td colspan='2'><span class='error'>Master is Down</span></td></tr>\n");
     } else {
-      long totalAcuBytesUsed = 0l;
-      long totalHdfsBytesUsed = 0l;
 
       try {
-        for (String baseDir : VolumeConfiguration.getVolumeUris(SiteConfiguration.getInstance())) {
-          final Path basePath = new Path(baseDir);
-          final FileSystem fs = vm.getVolumeByPath(basePath).getFileSystem();
-
-          try {
-            // Calculate the amount of space used by Accumulo on the FileSystem
-            ContentSummary accumuloSummary = fs.getContentSummary(basePath);
-            long bytesUsedByAcuOnFs = accumuloSummary.getLength();
-            totalAcuBytesUsed += bytesUsedByAcuOnFs;
-
-            // Catch the overflow -- this is big data
-            if (totalAcuBytesUsed < bytesUsedByAcuOnFs) {
-              log.debug("Overflowed long in bytes used by Accumulo for " + baseDir);
-              totalAcuBytesUsed = 0l;
-              break;
-            }
-
-            // Calculate the total amount of space used on the FileSystem
-            ContentSummary volumeSummary = fs.getContentSummary(new Path("/"));
-            long bytesUsedOnVolume = volumeSummary.getLength();
-            totalHdfsBytesUsed += bytesUsedOnVolume;
-
-            // Catch the overflow -- this is big data
-            if (totalHdfsBytesUsed < bytesUsedOnVolume) {
-              log.debug("Overflowed long in bytes used in HDFS for " + baseDir);
-              totalHdfsBytesUsed = 0;
-              break;
-            }
-          } catch (Exception ex) {
-            log.trace("Unable to get disk usage information for " + baseDir, ex);
-          }
-        }
-
-        String diskUsed = "Unknown";
-        String consumed = null;
-        if (totalAcuBytesUsed > 0) {
-          // Convert Accumulo usage to a readable String
-          diskUsed = bytes(totalAcuBytesUsed);
-
-          if (totalHdfsBytesUsed > 0) {
-            // Compute amount of space used by Accumulo as a percentage of total space usage.
-            consumed = String.format("%.2f%%", totalAcuBytesUsed * 100. / totalHdfsBytesUsed);
-          }
-        }
-
         boolean highlight = false;
-        tableRow(sb, (highlight = !highlight), "Disk&nbsp;Used", diskUsed);
-        if (null != consumed)
-          tableRow(sb, (highlight = !highlight), "%&nbsp;of&nbsp;Used&nbsp;DFS", consumed);
         tableRow(sb, (highlight = !highlight), "<a href='/tables'>Tables</a>", NumberType.commas(Monitor.getTotalTables()));
         tableRow(sb, (highlight = !highlight), "<a href='/tservers'>Tablet&nbsp;Servers</a>", NumberType.commas(info.tServerInfo.size(), 1, Long.MAX_VALUE));
         tableRow(sb, (highlight = !highlight), "<a href='/tservers'>Dead&nbsp;Tablet&nbsp;Servers</a>", NumberType.commas(info.deadTabletServers.size(), 0, 0));
@@ -331,10 +272,6 @@
     sb.append("</table>\n");
   }
 
-  private static String bytes(long big) {
-    return NumUtil.bigNumberForSize(big);
-  }
-
   public static void tableRow(StringBuilder sb, boolean highlight, Object... cells) {
     sb.append(highlight ? "<tr class='highlight'>" : "<tr>");
     for (int i = 0; i < cells.length; ++i) {
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/GcStatusServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/GcStatusServlet.java
index 926eedc..74fc21c 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/GcStatusServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/GcStatusServlet.java
@@ -48,7 +48,7 @@
       gcActivity.addSortableColumn("Candidates", new NumberType<Long>(), null);
       gcActivity.addSortableColumn("Deleted", new NumberType<Long>(), null);
       gcActivity.addSortableColumn("In&nbsp;Use", new NumberType<Long>(), null);
-      gcActivity.addSortableColumn("Errors", new NumberType<Long>(0l, 1l), null);
+      gcActivity.addSortableColumn("Errors", new NumberType<>(0l, 1l), null);
       gcActivity.addSortableColumn("Duration", new DurationType(), null);
 
       if (status.last.finished > 0)
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/JSONServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/JSONServlet.java
index dc75537..745fd96 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/JSONServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/JSONServlet.java
@@ -49,7 +49,7 @@
 
   private static Map<String,Object> addServer(String ip, String hostname, double osload, double ingest, double query, double ingestMB, double queryMB,
       int scans, double scansessions, long holdtime) {
-    Map<String,Object> map = new HashMap<String,Object>();
+    Map<String,Object> map = new HashMap<>();
     map.put("ip", ip);
     map.put("hostname", hostname);
     map.put("osload", osload);
@@ -69,8 +69,8 @@
       return;
     }
 
-    Map<String,Object> results = new HashMap<String,Object>();
-    List<Map<String,Object>> servers = new ArrayList<Map<String,Object>>();
+    Map<String,Object> results = new HashMap<>();
+    List<Map<String,Object>> servers = new ArrayList<>();
 
     for (TabletServerStatus status : Monitor.getMmi().tServerInfo) {
       TableInfo summary = TableInfoUtil.summarizeTableStats(status);
@@ -80,14 +80,14 @@
     }
 
     for (Entry<String,Byte> entry : Monitor.getMmi().badTServers.entrySet()) {
-      Map<String,Object> badServer = new HashMap<String,Object>();
+      Map<String,Object> badServer = new HashMap<>();
       badServer.put("ip", entry.getKey());
       badServer.put("bad", true);
       servers.add(badServer);
     }
 
     for (DeadServer dead : Monitor.getMmi().deadTabletServers) {
-      Map<String,Object> deadServer = new HashMap<String,Object>();
+      Map<String,Object> deadServer = new HashMap<>();
       deadServer.put("ip", dead.server);
       deadServer.put("dead", true);
       servers.add(deadServer);
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/MasterServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/MasterServlet.java
index 64b8648..42d771c 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/MasterServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/MasterServlet.java
@@ -122,7 +122,7 @@
       }
 
       int guessHighLoad = ManagementFactory.getOperatingSystemMXBean().getAvailableProcessors();
-      List<String> slaves = new ArrayList<String>();
+      List<String> slaves = new ArrayList<>();
       for (TabletServerStatus up : Monitor.getMmi().tServerInfo) {
         slaves.add(up.name);
       }
@@ -137,8 +137,8 @@
           (int) (slaves.size() * 0.6 + 1.0), slaves.size()), "Number of tablet servers currently available");
       masterStatus.addSortableColumn("#&nbsp;Total<br />Tablet&nbsp;Servers", new PreciseNumberType(), "The total number of tablet servers configured");
       masterStatus.addSortableColumn("Last&nbsp;GC", null, "The last time files were cleaned-up from HDFS.");
-      masterStatus.addSortableColumn("#&nbsp;Tablets", new NumberType<Integer>(0, Integer.MAX_VALUE, 2, Integer.MAX_VALUE), null);
-      masterStatus.addSortableColumn("#&nbsp;Unassigned<br />Tablets", new NumberType<Integer>(0, 0), null);
+      masterStatus.addSortableColumn("#&nbsp;Tablets", new NumberType<>(0, Integer.MAX_VALUE, 2, Integer.MAX_VALUE), null);
+      masterStatus.addSortableColumn("#&nbsp;Unassigned<br />Tablets", new NumberType<>(0, 0), null);
       masterStatus.addSortableColumn("Entries", new NumberType<Long>(), "The total number of key/value pairs in Accumulo");
       masterStatus.addSortableColumn("Ingest", new NumberType<Long>(), "The number of Key/Value pairs inserted, per second. "
           + " Note that deleted records are \"inserted\" and will make the ingest " + "rate increase in the near-term.");
@@ -147,7 +147,7 @@
       masterStatus.addSortableColumn("Entries<br />Returned", new NumberType<Long>(), "The total number of Key/Value pairs returned as a result of scans.");
       masterStatus.addSortableColumn("Hold&nbsp;Time", new DurationType(0l, 0l), "The maximum amount of time that ingest has been held "
           + "across all servers due to a lack of memory to store the records");
-      masterStatus.addSortableColumn("OS&nbsp;Load", new NumberType<Double>(0., guessHighLoad * 1., 0., guessHighLoad * 3.),
+      masterStatus.addSortableColumn("OS&nbsp;Load", new NumberType<>(0., guessHighLoad * 1., 0., guessHighLoad * 3.),
           "The one-minute load average on the computer that runs the monitor web server.");
       TableRow row = masterStatus.prepareRow();
       row.add(masters.size() == 0 ? "<div class='error'>Down</div>" : AddressUtil.parseAddress(masters.get(0), false).getHostText());
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ProblemServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ProblemServlet.java
index e78c26a..29ab6c6 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ProblemServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ProblemServlet.java
@@ -93,7 +93,7 @@
     if (Monitor.getProblemException() != null)
       return;
 
-    ArrayList<ProblemReport> problemReports = new ArrayList<ProblemReport>();
+    ArrayList<ProblemReport> problemReports = new ArrayList<>();
     Iterator<ProblemReport> iter = tableId == null ? ProblemReports.getInstance(Monitor.getContext()).iterator() : ProblemReports.getInstance(
         Monitor.getContext()).iterator(tableId);
     while (iter.hasNext())
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
index bf582c7..abc9ec4 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
@@ -122,7 +122,7 @@
 
     // Up to 2x the number of slots for replication available, WARN
     // More than 2x the number of slots for replication available, ERROR
-    NumberType<Long> filesPendingFormat = new NumberType<Long>(Long.valueOf(0), Long.valueOf(2 * totalWorkQueueSize), Long.valueOf(0),
+    NumberType<Long> filesPendingFormat = new NumberType<>(Long.valueOf(0), Long.valueOf(2 * totalWorkQueueSize), Long.valueOf(0),
         Long.valueOf(4 * totalWorkQueueSize));
 
     String utilization = filesPendingFormat.format(filesPendingOverAllTargets);
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java
index 8268c4f..31bea15 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ShellServlet.java
@@ -16,13 +16,9 @@
  */
 package org.apache.accumulo.monitor.servlets;
 
-import static java.nio.charset.StandardCharsets.UTF_8;
-
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
-import java.io.OutputStreamWriter;
-import java.io.PrintWriter;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.UUID;
@@ -45,7 +41,7 @@
 
   private synchronized Map<String,ShellExecutionThread> userShells() {
     if (userShells == null) {
-      userShells = new HashMap<String,ShellExecutionThread>();
+      userShells = new HashMap<>();
     }
     return userShells;
   }
@@ -266,7 +262,7 @@
       this.readWait = false;
       this.output = new StringBuilderOutputStream();
       ConsoleReader reader = new ConsoleReader(this, output);
-      this.shell = new Shell(reader, new PrintWriter(new OutputStreamWriter(output, UTF_8)));
+      this.shell = new Shell(reader);
       shell.setLogErrorsToConsole();
       if (mock != null) {
         if (shell.config("--fake", "-u", username, "-p", password))
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
index 16b05d4..858192b 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TServersServlet.java
@@ -99,7 +99,7 @@
 
       doDeadTserverList(req, sb);
 
-      ArrayList<TabletServerStatus> tservers = new ArrayList<TabletServerStatus>();
+      ArrayList<TabletServerStatus> tservers = new ArrayList<>();
       if (Monitor.getMmi() != null)
         tservers.addAll(Monitor.getMmi().tServerInfo);
 
@@ -124,7 +124,7 @@
 
     HostAndPort address = HostAndPort.fromString(tserverAddress);
     TabletStats historical = new TabletStats(null, new ActionStats(), new ActionStats(), new ActionStats(), 0, 0, 0, 0);
-    List<TabletStats> tsStats = new ArrayList<TabletStats>();
+    List<TabletStats> tsStats = new ArrayList<>();
     try {
       ClientContext context = Monitor.getContext();
       TabletClientService.Client client = ThriftUtil.getClient(new TabletClientService.Client.Factory(), address, context);
@@ -166,7 +166,7 @@
       ActionStatsUpdator.update(total.majors, info.majors);
 
       KeyExtent extent = new KeyExtent(info.extent);
-      String tableId = extent.getTableId().toString();
+      String tableId = extent.getTableId();
       MessageDigest digester = MessageDigest.getInstance("MD5");
       if (extent.getEndRow() != null && extent.getEndRow().getLength() > 0) {
         digester.update(extent.getEndRow().getBytes(), 0, extent.getEndRow().getLength());
@@ -319,7 +319,7 @@
     }
     final long MINUTES = 3 * 60 * 1000;
     tServerList.addSortableColumn("Server", new TServerLinkType(), null);
-    tServerList.addSortableColumn("Hosted&nbsp;Tablets", new NumberType<Integer>(0, Integer.MAX_VALUE), null);
+    tServerList.addSortableColumn("Hosted&nbsp;Tablets", new NumberType<>(0, Integer.MAX_VALUE), null);
     tServerList.addSortableColumn("Last&nbsp;Contact", new DurationType(0l, (long) Math.min(avgLastContact * 4, MINUTES)), null);
     tServerList.addSortableColumn("Entries", new NumberType<Long>(), "The number of key/value pairs.");
     tServerList.addSortableColumn("Ingest", new NumberType<Long>(), "The number of key/value pairs inserted. (Note that deletes are also 'inserted')");
@@ -336,7 +336,7 @@
             + "Major compactions are the operations where many smaller files are grouped into a larger file, eliminating duplicates and cleaning up deletes.");
     tServerList.addSortableColumn("Index Cache<br />Hit Rate", new PercentageType(), "The recent index cache hit rate.");
     tServerList.addSortableColumn("Data Cache<br />Hit Rate", new PercentageType(), "The recent data cache hit rate.");
-    tServerList.addSortableColumn("OS&nbsp;Load", new NumberType<Double>(0., guessHighLoad * 1., 0., guessHighLoad * 3.),
+    tServerList.addSortableColumn("OS&nbsp;Load", new NumberType<>(0., guessHighLoad * 1., 0., guessHighLoad * 3.),
         "The Unix one minute load average. The average number of processes in the run queue over a one minute interval.");
 
     log.debug("tableId: " + tableId);
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
index cb177ac..c1751d8 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
@@ -84,7 +84,7 @@
     tableList.addSortableColumn("Table&nbsp;Name", new TableLinkType(), null);
     tableList.addSortableColumn("State", new TableStateType(), null);
     tableList.addSortableColumn("#&nbsp;Tablets", new NumberType<Integer>(), "Tables are broken down into ranges of rows called tablets.");
-    tableList.addSortableColumn("#&nbsp;Offline<br />Tablets", new NumberType<Integer>(0, 0), "Tablets unavailable for query or ingest.  "
+    tableList.addSortableColumn("#&nbsp;Offline<br />Tablets", new NumberType<>(0, 0), "Tablets unavailable for query or ingest.  "
         + "May be a transient condition when tablets are moved for balancing.");
     tableList.addSortableColumn("Entries", new NumberType<Long>(), "Key/value pairs over each instance, table or tablet.");
     tableList.addSortableColumn("Entries<br />In&nbsp;Memory", new NumberType<Long>(),
@@ -106,7 +106,7 @@
         "Gathering up many small files and rewriting them as one larger file is called a 'Major Compaction'. "
             + "Major Compactions are performed as a consequence of new files created from Minor Compactions and Bulk Load operations.  "
             + "They reduce the number of files used during queries.");
-    SortedMap<String,TableInfo> tableStats = new TreeMap<String,TableInfo>();
+    SortedMap<String,TableInfo> tableStats = new TreeMap<>();
 
     if (Monitor.getMmi() != null && Monitor.getMmi().tableMap != null)
       for (Entry<String,TableInfo> te : Monitor.getMmi().tableMap.entrySet())
@@ -145,13 +145,13 @@
   private void doTableDetails(HttpServletRequest req, StringBuilder sb, Map<String,String> tidToNameMap, String tableId) {
     String displayName = Tables.getPrintableTableNameFromId(tidToNameMap, tableId);
     Instance instance = Monitor.getContext().getInstance();
-    TreeSet<String> locs = new TreeSet<String>();
+    TreeSet<String> locs = new TreeSet<>();
     if (RootTable.ID.equals(tableId)) {
       locs.add(instance.getRootTabletLocation());
     } else {
       String systemTableName = MetadataTable.ID.equals(tableId) ? RootTable.NAME : MetadataTable.NAME;
-      MetaDataTableScanner scanner = new MetaDataTableScanner(Monitor.getContext(), new Range(KeyExtent.getMetadataEntry(new Text(tableId), new Text()),
-          KeyExtent.getMetadataEntry(new Text(tableId), null)), systemTableName);
+      MetaDataTableScanner scanner = new MetaDataTableScanner(Monitor.getContext(), new Range(KeyExtent.getMetadataEntry(tableId, new Text()),
+          KeyExtent.getMetadataEntry(tableId, null)), systemTableName);
 
       while (scanner.hasNext()) {
         TabletLocationState state = scanner.next();
@@ -168,7 +168,7 @@
 
     log.debug("Locs: " + locs);
 
-    List<TabletServerStatus> tservers = new ArrayList<TabletServerStatus>();
+    List<TabletServerStatus> tservers = new ArrayList<>();
     if (Monitor.getMmi() != null) {
       for (TabletServerStatus tss : Monitor.getMmi().tServerInfo) {
         try {
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/VisServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/VisServlet.java
index eedf598..298dcdf 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/VisServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/VisServlet.java
@@ -150,7 +150,7 @@
         cfg.spacing = 80;
     }
 
-    ArrayList<TabletServerStatus> tservers = new ArrayList<TabletServerStatus>();
+    ArrayList<TabletServerStatus> tservers = new ArrayList<>();
     if (Monitor.getMmi() != null)
       tservers.addAll(Monitor.getMmi().tServerInfo);
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/XMLServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/XMLServlet.java
index 1662069..ea0ee59 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/XMLServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/XMLServlet.java
@@ -63,7 +63,7 @@
       sb.append("</servers>\n");
       return;
     }
-    SortedMap<String,TableInfo> tableStats = new TreeMap<String,TableInfo>(Monitor.getMmi().tableMap);
+    SortedMap<String,TableInfo> tableStats = new TreeMap<>(Monitor.getMmi().tableMap);
 
     for (TabletServerStatus status : Monitor.getMmi().tServerInfo) {
 
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Basic.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Basic.java
index 2143766..af68a98 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Basic.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Basic.java
@@ -141,7 +141,7 @@
       scanner = getScanner(table, principal, at, sb);
     }
 
-    return new AbstractMap.SimpleEntry<Scanner,UserGroupInformation>(scanner, traceUgi);
+    return new AbstractMap.SimpleEntry<>(scanner, traceUgi);
   }
 
   private Scanner getScanner(String table, String principal, AuthenticationToken at, StringBuilder sb) throws AccumuloException, AccumuloSecurityException {
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java
index 0ba13c7..b91d454 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/NullScanner.java
@@ -22,6 +22,7 @@
 
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.IteratorSetting.Column;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
@@ -123,4 +124,36 @@
   public void setReadaheadThreshold(long batches) {
 
   }
+
+  @Override
+  public void setBatchTimeout(long timeout, TimeUnit milliseconds) {
+
+  }
+
+  @Override
+  public long getBatchTimeout(TimeUnit timeUnit) {
+    return 0;
+  }
+
+  @Override
+  public void setSamplerConfiguration(SamplerConfiguration samplerConfig) {}
+
+  @Override
+  public SamplerConfiguration getSamplerConfiguration() {
+    return null;
+  }
+
+  @Override
+  public void clearSamplerConfiguration() {}
+
+  @Override
+  public void setClassLoaderContext(String context) {}
+
+  @Override
+  public void clearClassLoaderContext() {}
+
+  @Override
+  public String getClassLoaderContext() {
+    return null;
+  }
 }
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Summary.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Summary.java
index 5c58375..56da108 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Summary.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/trace/Summary.java
@@ -168,7 +168,7 @@
     }
     Range range = getRangeForTrace(minutes);
     scanner.setRange(range);
-    final Map<String,Stats> summary = new TreeMap<String,Stats>();
+    final Map<String,Stats> summary = new TreeMap<>();
     if (null != pair.getValue()) {
       pair.getValue().doAs(new PrivilegedAction<Void>() {
         @Override
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/Table.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/Table.java
index 522ebb6..e7bc30b 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/Table.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/Table.java
@@ -26,7 +26,7 @@
 import org.apache.accumulo.monitor.util.celltypes.StringType;
 
 public class Table {
-  private String table;
+  private String tableName;
   private String caption;
   private String captionclass;
   private String subcaption;
@@ -39,12 +39,12 @@
   }
 
   public Table(String tableName, String caption, String captionClass) {
-    this.table = tableName;
+    this.tableName = tableName;
     this.caption = caption;
     this.captionclass = captionClass;
     this.subcaption = null;
-    this.columns = new ArrayList<TableColumn<?>>();
-    this.rows = new ArrayList<TableRow>();
+    this.columns = new ArrayList<>();
+    this.rows = new ArrayList<>();
   }
 
   public synchronized void setSubCaption(String subcaption) {
@@ -59,9 +59,9 @@
 
   private synchronized <T> void addColumn(String title, CellType<T> type, String legend, boolean sortable) {
     if (type == null)
-      type = new StringType<T>();
+      type = new StringType<>();
     type.setSortable(sortable);
-    addColumn(new TableColumn<T>(title, type, legend));
+    addColumn(new TableColumn<>(title, type, legend));
   }
 
   public synchronized <T> void addUnsortableColumn(String title, CellType<T> type, String legend) {
@@ -109,8 +109,8 @@
       if (row.size() != columns.size())
         throw new RuntimeException("Each row must have the same number of columns");
 
-    boolean sortAscending = !"false".equals(BasicServlet.getCookieValue(req, "tableSort." + BasicServlet.encode(page) + "." + BasicServlet.encode(table) + "."
-        + "sortAsc"));
+    boolean sortAscending = !"false".equals(BasicServlet.getCookieValue(req, "tableSort." + BasicServlet.encode(page) + "." + BasicServlet.encode(tableName)
+        + "." + "sortAsc"));
 
     int sortCol = -1; // set to first sortable column by default
     int numLegends = 0;
@@ -124,7 +124,7 @@
 
     // only get cookie if there is a possibility that it is sortable
     if (sortCol >= 0) {
-      String sortColStr = BasicServlet.getCookieValue(req, "tableSort." + BasicServlet.encode(page) + "." + BasicServlet.encode(table) + "." + "sortCol");
+      String sortColStr = BasicServlet.getCookieValue(req, "tableSort." + BasicServlet.encode(page) + "." + BasicServlet.encode(tableName) + "." + "sortCol");
       if (sortColStr != null) {
         try {
           int col = Integer.parseInt(sortColStr);
@@ -139,13 +139,13 @@
 
     boolean showLegend = false;
     if (numLegends > 0) {
-      String showStr = BasicServlet.getCookieValue(req, "tableLegend." + BasicServlet.encode(page) + "." + BasicServlet.encode(table) + "." + "show");
+      String showStr = BasicServlet.getCookieValue(req, "tableLegend." + BasicServlet.encode(page) + "." + BasicServlet.encode(tableName) + "." + "show");
       showLegend = showStr != null && Boolean.parseBoolean(showStr);
     }
 
     sb.append("<div>\n");
-    sb.append("<a name='").append(table).append("'>&nbsp;</a>\n");
-    sb.append("<table id='").append(table).append("' class='sortable'>\n");
+    sb.append("<a name='").append(tableName).append("'>&nbsp;</a>\n");
+    sb.append("<table id='").append(tableName).append("' class='sortable'>\n");
     sb.append("<caption");
     if (captionclass != null && !captionclass.isEmpty())
       sb.append(" class='").append(captionclass).append("'");
@@ -157,7 +157,7 @@
 
     String redir = BasicServlet.currentPage(req);
     if (numLegends > 0) {
-      String legendUrl = String.format("/op?action=toggleLegend&redir=%s&page=%s&table=%s&show=%s", redir, page, table, !showLegend);
+      String legendUrl = String.format("/op?action=toggleLegend&redir=%s&page=%s&table=%s&show=%s", redir, page, tableName, !showLegend);
       sb.append("<a href='").append(legendUrl).append("'>").append(showLegend ? "Hide" : "Show").append("&nbsp;Legend</a>\n");
       if (showLegend)
         sb.append("<div class='left show'><dl>\n");
@@ -166,7 +166,7 @@
       TableColumn<?> col = columns.get(i);
       String title = col.getTitle();
       if (rows.size() > 1 && col.getCellType().isSortable()) {
-        String url = String.format("/op?action=sortTable&redir=%s&page=%s&table=%s&%s=%s", redir, page, table, sortCol == i ? "asc" : "col",
+        String url = String.format("/op?action=sortTable&redir=%s&page=%s&table=%s&%s=%s", redir, page, tableName, sortCol == i ? "asc" : "col",
             sortCol == i ? !sortAscending : i);
         String img = "";
         if (sortCol == i)
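Most of the generic-type hunks above are mechanical Java 7 cleanups: the diamond operator lets the compiler infer the type arguments on the right-hand side from the declared target type, so `new ArrayList<TableRow>()` becomes `new ArrayList<>()`. A minimal, self-contained sketch (class and variable names here are illustrative, not from the patch):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondDemo {
  public static void main(String[] args) {
    // Pre-Java-7 style: type arguments repeated on both sides.
    List<String> before = new ArrayList<String>();
    // Diamond operator: the compiler infers <String> from the target type.
    List<String> after = new ArrayList<>();
    // Inference also works for nested type arguments.
    Map<Long, List<Long>> parentChildren = new HashMap<>();

    before.add("a");
    after.add("a");
    parentChildren.put(1L, after);
    System.out.println(before.equals(after)); // prints "true": same element type, same contents
  }
}
```

Both declarations produce an identical `ArrayList<String>` at runtime; the change is purely syntactic, which is why it can be applied across the whole tree without behavioral risk.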
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableRow.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableRow.java
index c1ce1c5..42853f4 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableRow.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/TableRow.java
@@ -25,7 +25,7 @@
 
   TableRow(int size) {
     this.size = size;
-    this.row = new ArrayList<Object>(size);
+    this.row = new ArrayList<>(size);
   }
 
   public boolean add(Object obj) {
@@ -47,7 +47,7 @@
   }
 
   public static <T> Comparator<TableRow> getComparator(int index, Comparator<T> comp) {
-    return new TableRowComparator<T>(index, comp);
+    return new TableRowComparator<>(index, comp);
   }
 
   private static class TableRowComparator<T> implements Comparator<TableRow> {
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/util/celltypes/BulkImportStateType.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/celltypes/BulkImportStateType.java
new file mode 100644
index 0000000..194278e
--- /dev/null
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/util/celltypes/BulkImportStateType.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.monitor.util.celltypes;
+
+import org.apache.accumulo.core.master.thrift.BulkImportState;
+
+public class BulkImportStateType extends CellType<BulkImportState> {
+
+  private static final long serialVersionUID = 1L;
+
+  @Override
+  public String alignment() {
+    return "left";
+  }
+
+  @Override
+  public String format(Object obj) {
+    BulkImportState state = (BulkImportState) obj;
+    return state.name();
+  }
+
+  @Override
+  public int compare(BulkImportState o1, BulkImportState o2) {
+    if (o1 == o2)
+      return 0;
+    if (o1 == null || o2 == null)
+      return o1 == null ? -1 : 1;
+    return o1.compareTo(o2);
+  }
+
+}
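The `compare` method above orders null states before real ones. On Java 8 the same nulls-first contract can be composed from `Comparator.nullsFirst`; a hedged sketch using a plain stand-in enum in place of the Thrift-generated `BulkImportState`:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullsFirstDemo {
  // Illustrative stand-in for the Thrift-generated BulkImportState enum.
  enum State { INITIAL, LOADING, COMPLETE }

  public static void main(String[] args) {
    // nulls sort before any real state; non-null values use enum declaration order.
    Comparator<State> cmp = Comparator.nullsFirst(Comparator.naturalOrder());
    List<State> states = Arrays.asList(State.COMPLETE, null, State.INITIAL);
    states.sort(cmp);
    System.out.println(states); // prints "[null, INITIAL, COMPLETE]"
  }
}
```

The hand-written version in the patch avoids the `Comparator` combinators, which keeps the cell type compatible with older language levels, but the ordering semantics are the same.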
diff --git a/server/monitor/src/test/java/org/apache/accumulo/monitor/ShowTraceLinkTypeTest.java b/server/monitor/src/test/java/org/apache/accumulo/monitor/ShowTraceLinkTypeTest.java
index 786e8e3..1201d92 100644
--- a/server/monitor/src/test/java/org/apache/accumulo/monitor/ShowTraceLinkTypeTest.java
+++ b/server/monitor/src/test/java/org/apache/accumulo/monitor/ShowTraceLinkTypeTest.java
@@ -31,7 +31,7 @@
 
   @Test
   public void testTraceSortingForMonitor() {
-    ArrayList<RemoteSpan> spans = new ArrayList<RemoteSpan>(10), expectedOrdering = new ArrayList<RemoteSpan>(10);
+    ArrayList<RemoteSpan> spans = new ArrayList<>(10), expectedOrdering = new ArrayList<>(10);
 
     // "Random" ordering
     spans.add(rs(55l, 75l, "desc5"));
diff --git a/server/monitor/src/test/java/org/apache/accumulo/monitor/ZooKeeperStatusTest.java b/server/monitor/src/test/java/org/apache/accumulo/monitor/ZooKeeperStatusTest.java
index 7f56931..88a0c5a 100644
--- a/server/monitor/src/test/java/org/apache/accumulo/monitor/ZooKeeperStatusTest.java
+++ b/server/monitor/src/test/java/org/apache/accumulo/monitor/ZooKeeperStatusTest.java
@@ -32,13 +32,13 @@
     List<String> expectedHosts = Arrays.asList("rack1node1", "rack2node1", "rack4node1", "rack4node4");
 
     // Add the states in a not correctly sorted order
-    TreeSet<ZooKeeperState> states = new TreeSet<ZooKeeperState>();
+    TreeSet<ZooKeeperState> states = new TreeSet<>();
     states.add(new ZooKeeperState("rack4node4", "leader", 10));
     states.add(new ZooKeeperState("rack4node1", "follower", 10));
     states.add(new ZooKeeperState("rack1node1", "follower", 10));
     states.add(new ZooKeeperState("rack2node1", "follower", 10));
 
-    List<String> actualHosts = new ArrayList<String>(4);
+    List<String> actualHosts = new ArrayList<>(4);
     for (ZooKeeperState state : states) {
       actualHosts.add(state.keeper);
     }
diff --git a/server/native/pom.xml b/server/native/pom.xml
index b524299..d0e5ee8 100644
--- a/server/native/pom.xml
+++ b/server/native/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-native</artifactId>
@@ -116,7 +116,7 @@
             <goals>
               <goal>exec</goal>
             </goals>
-            <phase>integration-test</phase>
+            <phase>package</phase>
             <configuration>
               <executable>make</executable>
               <workingDirectory>${project.build.directory}/${project.artifactId}-${project.version}/${project.artifactId}-${project.version}</workingDirectory>
diff --git a/server/tracer/pom.xml b/server/tracer/pom.xml
index 9b83f89..cd69d9d 100644
--- a/server/tracer/pom.xml
+++ b/server/tracer/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-tracer</artifactId>
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/AsyncSpanReceiver.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/AsyncSpanReceiver.java
index a35734d..be924ba 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/AsyncSpanReceiver.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/AsyncSpanReceiver.java
@@ -57,7 +57,7 @@
   public static final String QUEUE_SIZE = "tracer.queue.size";
   public static final String SPAN_MIN_MS = "tracer.span.min.ms";
 
-  private final Map<SpanKey,Destination> clients = new HashMap<SpanKey,Destination>();
+  private final Map<SpanKey,Destination> clients = new HashMap<>();
 
   protected String host = null;
   protected String service = null;
@@ -69,7 +69,7 @@
   protected abstract SpanKey getSpanKey(Map<String,String> data);
 
   Timer timer = new Timer("SpanSender", true);
-  protected final AbstractQueue<RemoteSpan> sendQueue = new ConcurrentLinkedQueue<RemoteSpan>();
+  protected final AbstractQueue<RemoteSpan> sendQueue = new ConcurrentLinkedQueue<>();
   protected final AtomicInteger sendQueueSize = new AtomicInteger(0);
   int maxQueueSize = 5000;
   long lastNotificationOfDroppedSpans = 0;
@@ -150,7 +150,7 @@
   public static List<Annotation> convertToAnnotations(List<TimelineAnnotation> annotations) {
     if (annotations == null)
       return null;
-    List<Annotation> result = new ArrayList<Annotation>();
+    List<Annotation> result = new ArrayList<>();
     for (TimelineAnnotation annotation : annotations) {
       result.add(new Annotation(annotation.getTime(), annotation.getMessage()));
     }
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/SpanTree.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/SpanTree.java
index c7682c1..05c3212 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/SpanTree.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/SpanTree.java
@@ -27,8 +27,8 @@
 import org.apache.htrace.Span;
 
 public class SpanTree {
-  final Map<Long,List<Long>> parentChildren = new HashMap<Long,List<Long>>();
-  public final Map<Long,RemoteSpan> nodes = new HashMap<Long,RemoteSpan>();
+  final Map<Long,List<Long>> parentChildren = new HashMap<>();
+  public final Map<Long,RemoteSpan> nodes = new HashMap<>();
 
   public SpanTree() {}
 
@@ -40,7 +40,7 @@
   }
 
   public Set<Long> visit(SpanTreeVisitor visitor) {
-    Set<Long> visited = new HashSet<Long>();
+    Set<Long> visited = new HashSet<>();
     List<Long> root = parentChildren.get(Long.valueOf(Span.ROOT_SPAN_ID));
     if (root == null || root.isEmpty())
       return visited;
@@ -57,7 +57,7 @@
     if (visited.contains(node.spanId))
       return;
     visited.add(node.spanId);
-    List<RemoteSpan> children = new ArrayList<RemoteSpan>();
+    List<RemoteSpan> children = new ArrayList<>();
     List<Long> childrenIds = parentChildren.get(node.spanId);
     if (childrenIds != null) {
       for (Long childId : childrenIds) {
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceDump.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceDump.java
index 0a87469..b5c00ed 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceDump.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceDump.java
@@ -52,7 +52,7 @@
     @Parameter(names = {"-d", "--dump"}, description = "Dump all traces")
     boolean dump = false;
     @Parameter(description = " <trace id> { <trace id> ... }")
-    List<String> traceIds = new ArrayList<String>();
+    List<String> traceIds = new ArrayList<>();
 
     Opts() {
       super("trace");
@@ -74,7 +74,7 @@
   }
 
   public static List<RemoteSpan> sortByStart(Collection<RemoteSpan> spans) {
-    List<RemoteSpan> spanList = new ArrayList<RemoteSpan>(spans);
+    List<RemoteSpan> spanList = new ArrayList<>(spans);
     Collections.sort(spanList, new Comparator<RemoteSpan>() {
       @Override
       public int compare(RemoteSpan o1, RemoteSpan o2) {
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceFormatter.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceFormatter.java
index 48ec8cf..775e6aa 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceFormatter.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceFormatter.java
@@ -20,11 +20,12 @@
 import java.util.Date;
 import java.util.Iterator;
 import java.util.Map.Entry;
-
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.util.format.DateFormatSupplier;
 import org.apache.accumulo.core.util.format.DefaultFormatter;
 import org.apache.accumulo.core.util.format.Formatter;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.tracer.thrift.Annotation;
 import org.apache.accumulo.tracer.thrift.RemoteSpan;
 import org.apache.commons.lang.NotImplementedException;
@@ -38,14 +39,9 @@
  *
  */
 public class TraceFormatter implements Formatter {
-  public static final String DATE_FORMAT = "yyyy/MM/dd HH:mm:ss.SSS";
+  public static final String DATE_FORMAT = DateFormatSupplier.HUMAN_READABLE_FORMAT;
   // ugh... SimpleDataFormat is not thread safe
-  private static final ThreadLocal<SimpleDateFormat> formatter = new ThreadLocal<SimpleDateFormat>() {
-    @Override
-    protected SimpleDateFormat initialValue() {
-      return new SimpleDateFormat(DATE_FORMAT);
-    }
-  };
+  private static final DateFormatSupplier formatter = DateFormatSupplier.createSimpleFormatSupplier(DATE_FORMAT);
 
   public static String formatDate(final Date date) {
     return formatter.get().format(date);
@@ -54,7 +50,7 @@
   private final static Text SPAN_CF = new Text("span");
 
   private Iterator<Entry<Key,Value>> scanner;
-  private boolean printTimeStamps;
+  private FormatterConfig config;
 
   public static RemoteSpan getRemoteSpan(Entry<Key,Value> entry) {
     TMemoryInputTransport transport = new TMemoryInputTransport(entry.getValue().get());
@@ -99,12 +95,12 @@
         }
       }
 
-      if (printTimeStamps) {
+      if (config.willPrintTimestamps()) {
         result.append(String.format(" %-12s:%d%n", "timestamp", next.getKey().getTimestamp()));
       }
       return result.toString();
     }
-    return DefaultFormatter.formatEntry(next, printTimeStamps);
+    return DefaultFormatter.formatEntry(next, config.willPrintTimestamps());
   }
 
   @Override
@@ -113,8 +109,8 @@
   }
 
   @Override
-  public void initialize(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps) {
+  public void initialize(Iterable<Entry<Key,Value>> scanner, FormatterConfig config) {
     this.scanner = scanner.iterator();
-    this.printTimeStamps = printTimestamps;
+    this.config = new FormatterConfig(config);
   }
 }
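`SimpleDateFormat` is mutable and not thread safe, which is why the old code wrapped it in a `ThreadLocal` and the replacement delegates to Accumulo's `DateFormatSupplier` (a `Supplier`-based equivalent). The underlying per-thread-instance pattern, sketched minimally with Java 8's `ThreadLocal.withInitial`:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class PerThreadFormat {
  // One SimpleDateFormat instance per calling thread; safe for concurrent use
  // because no two threads ever share the same (mutable) formatter object.
  private static final ThreadLocal<SimpleDateFormat> FMT =
      ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS"));

  public static String format(Date date) {
    return FMT.get().format(date);
  }

  public static void main(String[] args) {
    // The epoch, rendered in the JVM's default time zone.
    System.out.println(format(new Date(0L)));
  }
}
```

The alternative of sharing one formatter behind `synchronized` also works but serializes all formatting; per-thread instances trade a little memory for lock-free access.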
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java
index 2d5d68d..45b6669 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/TraceServer.java
@@ -22,6 +22,7 @@
 import java.net.InetSocketAddress;
 import java.net.ServerSocket;
 import java.nio.channels.ServerSocketChannel;
+import java.util.Arrays;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
@@ -43,7 +44,6 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.user.AgeOffFilter;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.server.Accumulo;
@@ -78,6 +78,8 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class TraceServer implements Watcher {
 
   final private static Logger log = LoggerFactory.getLogger(TraceServer.class);
@@ -85,7 +87,7 @@
   final private TServer server;
   final private AtomicReference<BatchWriter> writer;
   final private Connector connector;
-  final String table;
+  final String tableName;
   final private static int BATCH_WRITER_MAX_LATENCY = 5;
   final private static long SCHEDULE_PERIOD = 1000;
   final private static long SCHEDULE_DELAY = 1000;
@@ -181,7 +183,7 @@
     log.info("Version " + Constants.VERSION);
     log.info("Instance " + serverConfiguration.getInstance().getInstanceID());
     AccumuloConfiguration conf = serverConfiguration.getConfiguration();
-    table = conf.get(Property.TRACE_TABLE);
+    tableName = conf.get(Property.TRACE_TABLE);
     Connector connector = null;
     while (true) {
       try {
@@ -213,33 +215,45 @@
         }
 
         connector = serverConfiguration.getInstance().getConnector(principal, at);
-        if (!connector.tableOperations().exists(table)) {
-          connector.tableOperations().create(table);
+        if (!connector.tableOperations().exists(tableName)) {
+          connector.tableOperations().create(tableName);
           IteratorSetting setting = new IteratorSetting(10, "ageoff", AgeOffFilter.class.getName());
           AgeOffFilter.setTTL(setting, 7 * 24 * 60 * 60 * 1000l);
-          connector.tableOperations().attachIterator(table, setting);
+          connector.tableOperations().attachIterator(tableName, setting);
         }
-        connector.tableOperations().setProperty(table, Property.TABLE_FORMATTER_CLASS.getKey(), TraceFormatter.class.getName());
+        connector.tableOperations().setProperty(tableName, Property.TABLE_FORMATTER_CLASS.getKey(), TraceFormatter.class.getName());
         break;
       } catch (RuntimeException ex) {
         log.info("Waiting to checking/create the trace table.", ex);
-        UtilWaitThread.sleep(1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
     }
     this.connector = connector;
     // make sure we refer to the final variable from now on.
     connector = null;
 
-    int port = conf.getPort(Property.TRACE_PORT);
-    final ServerSocket sock = ServerSocketChannel.open().socket();
-    sock.setReuseAddress(true);
-    sock.bind(new InetSocketAddress(hostname, port));
+    int[] ports = conf.getPort(Property.TRACE_PORT);
+    ServerSocket sock = null;
+    for (int port : ports) {
+      ServerSocket s = ServerSocketChannel.open().socket();
+      s.setReuseAddress(true);
+      try {
+        s.bind(new InetSocketAddress(hostname, port));
+        sock = s;
+        break;
+      } catch (Exception e) {
+        log.warn("Unable to start trace server on port {}", port);
+      }
+    }
+    if (null == sock) {
+      throw new RuntimeException("Unable to start trace server on configured ports: " + Arrays.toString(ports));
+    }
     final TServerTransport transport = new TServerSocket(sock);
     TThreadPoolServer.Args options = new TThreadPoolServer.Args(transport);
     options.processor(new Processor<Iface>(new Receiver()));
     server = new TThreadPoolServer(options);
     registerInZooKeeper(sock.getInetAddress().getHostAddress() + ":" + sock.getLocalPort(), conf.get(Property.TRACE_ZK_PATH));
-    writer = new AtomicReference<BatchWriter>(this.connector.createBatchWriter(table,
+    writer = new AtomicReference<>(this.connector.createBatchWriter(tableName,
         new BatchWriterConfig().setMaxLatency(BATCH_WRITER_MAX_LATENCY, TimeUnit.SECONDS)));
   }
 
@@ -260,7 +274,7 @@
         writer.flush();
       } else {
         // We don't have a writer. If the table exists, try to make a new writer.
-        if (connector.tableOperations().exists(table)) {
+        if (connector.tableOperations().exists(tableName)) {
           resetWriter();
         }
       }
@@ -279,7 +293,7 @@
   private void resetWriter() {
     BatchWriter writer = null;
     try {
-      writer = connector.createBatchWriter(table, new BatchWriterConfig().setMaxLatency(BATCH_WRITER_MAX_LATENCY, TimeUnit.SECONDS));
+      writer = connector.createBatchWriter(tableName, new BatchWriterConfig().setMaxLatency(BATCH_WRITER_MAX_LATENCY, TimeUnit.SECONDS));
     } catch (Exception ex) {
       log.warn("Unable to create a batch writer, will retry. Set log level to DEBUG to see stacktrace. cause: " + ex);
       log.debug("batch writer creation failed with exception.", ex);
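The TraceServer change above lets the trace port property supply several candidate ports and binds the first one that is free, instead of failing outright on a single busy port. Stripped of the Thrift wiring, the fallback loop looks like the following simplified sketch (it uses a plain `ServerSocket` where the real code goes through `ServerSocketChannel`, and it also closes each failed socket, which the patched code does not):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortFallback {
  // Try each candidate port in order; return a socket bound to the first free one.
  static ServerSocket bindFirstAvailable(String host, int[] ports) throws IOException {
    for (int port : ports) {
      ServerSocket s = new ServerSocket();
      s.setReuseAddress(true);
      try {
        s.bind(new InetSocketAddress(host, port));
        return s;
      } catch (IOException e) {
        s.close(); // this port is taken; fall through to the next candidate
      }
    }
    throw new IOException("No configured port was available");
  }

  public static void main(String[] args) throws IOException {
    // Port 0 asks the OS for any free ephemeral port, so this succeeds locally.
    try (ServerSocket sock = bindFirstAvailable("localhost", new int[] {0})) {
      System.out.println("bound to port " + sock.getLocalPort());
    }
  }
}
```

Throwing only after every candidate has been tried (as the patch does) gives the operator one clear failure message listing all configured ports, rather than a bind error for just the first.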
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/ZooTraceClient.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/ZooTraceClient.java
index aa5a9ee..6954983 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/ZooTraceClient.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/ZooTraceClient.java
@@ -51,7 +51,7 @@
   String path;
   boolean pathExists = false;
   final Random random = new Random();
-  final List<String> hosts = new ArrayList<String>();
+  final List<String> hosts = new ArrayList<>();
   long retryPause = 5000l;
 
   // Visible for testing
@@ -144,7 +144,7 @@
   synchronized private void updateHosts(String path, List<String> children) {
     log.debug("Scanning trace hosts in zookeeper: " + path);
     try {
-      List<String> hosts = new ArrayList<String>();
+      List<String> hosts = new ArrayList<>();
       for (String child : children) {
         byte[] data = zoo.getData(path + "/" + child, null);
         hosts.add(new String(data, UTF_8));
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/Annotation.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/Annotation.java
index af4cfd5..3997c21 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/Annotation.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/Annotation.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class Annotation implements org.apache.thrift.TBase<Annotation, Annotation._Fields>, java.io.Serializable, Cloneable, Comparable<Annotation> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class Annotation implements org.apache.thrift.TBase<Annotation, Annotation._Fields>, java.io.Serializable, Cloneable, Comparable<Annotation> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("Annotation");
 
   private static final org.apache.thrift.protocol.TField TIME_FIELD_DESC = new org.apache.thrift.protocol.TField("time", org.apache.thrift.protocol.TType.I64, (short)1);
@@ -244,7 +247,7 @@
   public Object getFieldValue(_Fields field) {
     switch (field) {
     case TIME:
-      return Long.valueOf(getTime());
+      return getTime();
 
     case MSG:
       return getMsg();
@@ -304,7 +307,19 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_time = true;
+    list.add(present_time);
+    if (present_time)
+      list.add(time);
+
+    boolean present_msg = true && (isSetMsg());
+    list.add(present_msg);
+    if (present_msg)
+      list.add(msg);
+
+    return list.hashCode();
   }
 
   @Override
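The regenerated Thrift code replaces the old `return 0;` body of `hashCode`. A constant hash is legal (it never breaks the `equals` contract) but puts every instance in the same bucket, degrading `HashMap`/`HashSet` lookups from O(1) to O(n). The generated scheme adds a presence flag for each field, then the field value if present, and hashes the resulting list. A toy class showing the same shape (names illustrative, not from the patch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class HashCodeDemo {
  final long time;
  final String msg; // may be null, mirroring an unset Thrift field

  HashCodeDemo(long time, String msg) {
    this.time = time;
    this.msg = msg;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof HashCodeDemo))
      return false;
    HashCodeDemo other = (HashCodeDemo) o;
    return time == other.time && Objects.equals(msg, other.msg);
  }

  @Override
  public int hashCode() {
    // Same shape as the generated code: a presence flag, then the value if present.
    List<Object> list = new ArrayList<>();
    list.add(true); // 'time' is a primitive, so it is always present
    list.add(time);
    boolean presentMsg = msg != null;
    list.add(presentMsg);
    if (presentMsg)
      list.add(msg);
    return list.hashCode();
  }

  public static void main(String[] args) {
    HashCodeDemo a = new HashCodeDemo(1L, "x");
    HashCodeDemo b = new HashCodeDemo(1L, "x");
    // Equal objects must hash equally; distinct field values usually should not.
    System.out.println(a.hashCode() == b.hashCode()); // prints "true"
    System.out.println(a.hashCode() == new HashCodeDemo(2L, "y").hashCode());
  }
}
```

The presence flags matter for nullable fields: they keep an unset field distinguishable from a field set to a value that happens to hash like "absent".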
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/RemoteSpan.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/RemoteSpan.java
index 285aebd..34025ef 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/RemoteSpan.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/RemoteSpan.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class RemoteSpan implements org.apache.thrift.TBase<RemoteSpan, RemoteSpan._Fields>, java.io.Serializable, Cloneable, Comparable<RemoteSpan> {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class RemoteSpan implements org.apache.thrift.TBase<RemoteSpan, RemoteSpan._Fields>, java.io.Serializable, Cloneable, Comparable<RemoteSpan> {
   private static final org.apache.thrift.protocol.TStruct STRUCT_DESC = new org.apache.thrift.protocol.TStruct("RemoteSpan");
 
   private static final org.apache.thrift.protocol.TField SENDER_FIELD_DESC = new org.apache.thrift.protocol.TField("sender", org.apache.thrift.protocol.TType.STRING, (short)1);
@@ -644,19 +647,19 @@
       return getSvc();
 
     case TRACE_ID:
-      return Long.valueOf(getTraceId());
+      return getTraceId();
 
     case SPAN_ID:
-      return Long.valueOf(getSpanId());
+      return getSpanId();
 
     case PARENT_ID:
-      return Long.valueOf(getParentId());
+      return getParentId();
 
     case START:
-      return Long.valueOf(getStart());
+      return getStart();
 
     case STOP:
-      return Long.valueOf(getStop());
+      return getStop();
 
     case DESCRIPTION:
       return getDescription();
@@ -810,7 +813,59 @@
 
   @Override
   public int hashCode() {
-    return 0;
+    List<Object> list = new ArrayList<Object>();
+
+    boolean present_sender = true && (isSetSender());
+    list.add(present_sender);
+    if (present_sender)
+      list.add(sender);
+
+    boolean present_svc = true && (isSetSvc());
+    list.add(present_svc);
+    if (present_svc)
+      list.add(svc);
+
+    boolean present_traceId = true;
+    list.add(present_traceId);
+    if (present_traceId)
+      list.add(traceId);
+
+    boolean present_spanId = true;
+    list.add(present_spanId);
+    if (present_spanId)
+      list.add(spanId);
+
+    boolean present_parentId = true;
+    list.add(present_parentId);
+    if (present_parentId)
+      list.add(parentId);
+
+    boolean present_start = true;
+    list.add(present_start);
+    if (present_start)
+      list.add(start);
+
+    boolean present_stop = true;
+    list.add(present_stop);
+    if (present_stop)
+      list.add(stop);
+
+    boolean present_description = true && (isSetDescription());
+    list.add(present_description);
+    if (present_description)
+      list.add(description);
+
+    boolean present_data = true && (isSetData());
+    list.add(present_data);
+    if (present_data)
+      list.add(data);
+
+    boolean present_annotations = true && (isSetAnnotations());
+    list.add(present_annotations);
+    if (present_annotations)
+      list.add(annotations);
+
+    return list.hashCode();
   }
 
   @Override
@@ -1114,13 +1169,13 @@
               {
                 org.apache.thrift.protocol.TMap _map0 = iprot.readMapBegin();
                 struct.data = new HashMap<String,String>(2*_map0.size);
-                for (int _i1 = 0; _i1 < _map0.size; ++_i1)
+                String _key1;
+                String _val2;
+                for (int _i3 = 0; _i3 < _map0.size; ++_i3)
                 {
-                  String _key2;
-                  String _val3;
-                  _key2 = iprot.readString();
-                  _val3 = iprot.readString();
-                  struct.data.put(_key2, _val3);
+                  _key1 = iprot.readString();
+                  _val2 = iprot.readString();
+                  struct.data.put(_key1, _val2);
                 }
                 iprot.readMapEnd();
               }
@@ -1134,12 +1189,12 @@
               {
                 org.apache.thrift.protocol.TList _list4 = iprot.readListBegin();
                 struct.annotations = new ArrayList<Annotation>(_list4.size);
-                for (int _i5 = 0; _i5 < _list4.size; ++_i5)
+                Annotation _elem5;
+                for (int _i6 = 0; _i6 < _list4.size; ++_i6)
                 {
-                  Annotation _elem6;
-                  _elem6 = new Annotation();
-                  _elem6.read(iprot);
-                  struct.annotations.add(_elem6);
+                  _elem5 = new Annotation();
+                  _elem5.read(iprot);
+                  struct.annotations.add(_elem5);
                 }
                 iprot.readListEnd();
               }
@@ -1352,13 +1407,13 @@
         {
           org.apache.thrift.protocol.TMap _map11 = new org.apache.thrift.protocol.TMap(org.apache.thrift.protocol.TType.STRING, org.apache.thrift.protocol.TType.STRING, iprot.readI32());
           struct.data = new HashMap<String,String>(2*_map11.size);
-          for (int _i12 = 0; _i12 < _map11.size; ++_i12)
+          String _key12;
+          String _val13;
+          for (int _i14 = 0; _i14 < _map11.size; ++_i14)
           {
-            String _key13;
-            String _val14;
-            _key13 = iprot.readString();
-            _val14 = iprot.readString();
-            struct.data.put(_key13, _val14);
+            _key12 = iprot.readString();
+            _val13 = iprot.readString();
+            struct.data.put(_key12, _val13);
           }
         }
         struct.setDataIsSet(true);
@@ -1367,12 +1422,12 @@
         {
           org.apache.thrift.protocol.TList _list15 = new org.apache.thrift.protocol.TList(org.apache.thrift.protocol.TType.STRUCT, iprot.readI32());
           struct.annotations = new ArrayList<Annotation>(_list15.size);
-          for (int _i16 = 0; _i16 < _list15.size; ++_i16)
+          Annotation _elem16;
+          for (int _i17 = 0; _i17 < _list15.size; ++_i17)
           {
-            Annotation _elem17;
-            _elem17 = new Annotation();
-            _elem17.read(iprot);
-            struct.annotations.add(_elem17);
+            _elem16 = new Annotation();
+            _elem16.read(iprot);
+            struct.annotations.add(_elem16);
           }
         }
         struct.setAnnotationsIsSet(true);
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/SpanReceiver.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/SpanReceiver.java
index 6728522..b1e913a 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/SpanReceiver.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/SpanReceiver.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class SpanReceiver {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class SpanReceiver {
 
   public interface Iface {
 
@@ -91,7 +94,7 @@
     {
       span_args args = new span_args();
       args.setSpan(span);
-      sendBase("span", args);
+      sendBaseOneway("span", args);
     }
 
   }
@@ -127,7 +130,7 @@
       }
 
       public void write_args(org.apache.thrift.protocol.TProtocol prot) throws org.apache.thrift.TException {
-        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("span", org.apache.thrift.protocol.TMessageType.CALL, 0));
+        prot.writeMessageBegin(new org.apache.thrift.protocol.TMessage("span", org.apache.thrift.protocol.TMessageType.ONEWAY, 0));
         span_args args = new span_args();
         args.setSpan(span);
         args.write(prot);
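The CALL→ONEWAY switch above changes client semantics: a oneway RPC writes its message and returns without ever reading a reply, so `span()` can no longer block on the server. A toy model of the distinction, with no Thrift dependency (the `ExecutorService` is a hypothetical stand-in for the transport, not the real Thrift API):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OnewaySketch {
  private static final ExecutorService server = Executors.newSingleThreadExecutor();

  // CALL semantics: send the request, then block until the reply arrives
  // (sendBase followed by recvBase in generated clients).
  public static String call(Callable<String> handler) throws Exception {
    return server.submit(handler).get();
  }

  // ONEWAY semantics: hand the message off and return immediately
  // (sendBaseOneway); no response message is ever read.
  public static void oneway(Runnable handler) {
    server.submit(handler);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(call(() -> "reply"));  // blocks, then prints "reply"

    CountDownLatch ran = new CountDownLatch(1);
    oneway(ran::countDown);                   // returns without waiting
    ran.await();                              // the handler still runs eventually
    server.shutdown();
  }
}
```

The practical effect for the tracer is that submitting a span never stalls the traced operation waiting on the receiver.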
@@ -421,7 +424,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_span = true && (isSetSpan());
+      list.add(present_span);
+      if (present_span)
+        list.add(span);
+
+      return list.hashCode();
     }
 
     @Override
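The new `hashCode()` bodies above replace the old `return 0` stub with a hash over (presence flag, field value) pairs, so equal structs hash equally and hash-based collections stop degenerating into one bucket. A minimal sketch of the same pattern for a single optional field (`structHashCode` is an illustrative name, not Thrift API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Thrift 0.9.3 hashCode pattern: hash a list of
// (presence flag, field value) pairs instead of returning 0.
public class HashCodeSketch {
  public static int structHashCode(boolean isSetSpan, Object span) {
    List<Object> list = new ArrayList<>();
    boolean present_span = isSetSpan;  // mirrors `true && (isSetSpan())`
    list.add(present_span);
    if (present_span)
      list.add(span);
    return list.hashCode();
  }

  public static void main(String[] args) {
    // Equal contents now hash equally; set vs. unset fields hash differently.
    System.out.println(structHashCode(true, "span") == structHashCode(true, "span"));  // true
    System.out.println(structHashCode(false, null) == structHashCode(true, "span"));   // false
  }
}
```

Hashing the presence flag itself is what keeps an unset field distinguishable from a field set to a default value.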
diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/TestService.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/TestService.java
index 262e71a..f99c2c8 100644
--- a/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/TestService.java
+++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/thrift/TestService.java
@@ -15,7 +15,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.9.1)
+ * Autogenerated by Thrift Compiler (0.9.3)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -45,10 +45,13 @@
 import java.util.BitSet;
 import java.nio.ByteBuffer;
 import java.util.Arrays;
+import javax.annotation.Generated;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-@SuppressWarnings({"unchecked", "serial", "rawtypes", "unused"}) public class TestService {
+@SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
+@Generated(value = "Autogenerated by Thrift Compiler (0.9.3)")
+public class TestService {
 
   public interface Iface {
 
@@ -522,7 +525,19 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_tinfo = true && (isSetTinfo());
+      list.add(present_tinfo);
+      if (present_tinfo)
+        list.add(tinfo);
+
+      boolean present_message = true && (isSetMessage());
+      list.add(present_message);
+      if (present_message)
+        list.add(message);
+
+      return list.hashCode();
     }
 
     @Override
@@ -878,7 +893,7 @@
     public Object getFieldValue(_Fields field) {
       switch (field) {
       case SUCCESS:
-        return Boolean.valueOf(isSuccess());
+        return isSuccess();
 
       }
       throw new IllegalStateException();
@@ -924,7 +939,14 @@
 
     @Override
     public int hashCode() {
-      return 0;
+      List<Object> list = new ArrayList<Object>();
+
+      boolean present_success = true;
+      list.add(present_success);
+      if (present_success)
+        list.add(success);
+
+      return list.hashCode();
     }
 
     @Override
diff --git a/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java b/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
index 703670f..6dcf7a4 100644
--- a/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
+++ b/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
@@ -79,7 +79,7 @@
   }
 
   static class TestReceiver implements SpanReceiver {
-    public Map<Long,List<SpanStruct>> traces = new HashMap<Long,List<SpanStruct>>();
+    public Map<Long,List<SpanStruct>> traces = new HashMap<>();
 
     public TestReceiver() {}
 
diff --git a/server/tserver/pom.xml b/server/tserver/pom.xml
index 159866b..062ee1a 100644
--- a/server/tserver/pom.xml
+++ b/server/tserver/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <artifactId>accumulo-tserver</artifactId>
diff --git a/server/tserver/src/main/findbugs/exclude-filter.xml b/server/tserver/src/main/findbugs/exclude-filter.xml
index 47dd1f5..a334163 100644
--- a/server/tserver/src/main/findbugs/exclude-filter.xml
+++ b/server/tserver/src/main/findbugs/exclude-filter.xml
@@ -18,7 +18,7 @@
   <Match>
     <!-- locking is confusing, but probably correct -->
     <Class name="org.apache.accumulo.tserver.tablet.Tablet" />
-    <Method name="beginUpdatingLogsUsed" params="org.apache.accumulo.tserver.InMemoryMap,java.util.Collection,boolean" returns="boolean" />
+    <Method name="beginUpdatingLogsUsed" params="org.apache.accumulo.tserver.InMemoryMap,org.apache.accumulo.tserver.log.DfsLogger,boolean" returns="boolean" />
     <Bug code="UL" pattern="UL_UNRELEASED_LOCK" />
   </Match>
   <Match>
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/CompactionQueue.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/CompactionQueue.java
index f87131e..3d820dd 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/CompactionQueue.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/CompactionQueue.java
@@ -33,7 +33,7 @@
  */
 class CompactionQueue extends AbstractQueue<TraceRunnable> implements BlockingQueue<TraceRunnable> {
 
-  private List<TraceRunnable> task = new LinkedList<TraceRunnable>();
+  private List<TraceRunnable> task = new LinkedList<>();
 
   private static final Comparator<TraceRunnable> comparator = new Comparator<TraceRunnable>() {
     @SuppressWarnings("unchecked")
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/ConditionalMutationSet.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/ConditionalMutationSet.java
index 82e5057..e1a8de8 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/ConditionalMutationSet.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/ConditionalMutationSet.java
@@ -54,8 +54,8 @@
   static void defer(Map<KeyExtent,List<ServerConditionalMutation>> updates, Map<KeyExtent,List<ServerConditionalMutation>> deferredMutations, DeferFilter filter) {
     for (Entry<KeyExtent,List<ServerConditionalMutation>> entry : updates.entrySet()) {
       List<ServerConditionalMutation> scml = entry.getValue();
-      List<ServerConditionalMutation> okMutations = new ArrayList<ServerConditionalMutation>(scml.size());
-      List<ServerConditionalMutation> deferred = new ArrayList<ServerConditionalMutation>();
+      List<ServerConditionalMutation> okMutations = new ArrayList<>(scml.size());
+      List<ServerConditionalMutation> deferred = new ArrayList<>();
       filter.defer(scml, okMutations, deferred);
 
       if (deferred.size() > 0) {
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/FileManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/FileManager.java
index 1c4676e..8bc0fed 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/FileManager.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/FileManager.java
@@ -29,6 +29,7 @@
 import java.util.concurrent.Semaphore;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
@@ -43,6 +44,7 @@
 import org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.DataSource;
 import org.apache.accumulo.core.iterators.system.TimeSettingIterator;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.fs.VolumeManager;
@@ -123,7 +125,7 @@
 
       long curTime = System.currentTimeMillis();
 
-      ArrayList<FileSKVIterator> filesToClose = new ArrayList<FileSKVIterator>();
+      ArrayList<FileSKVIterator> filesToClose = new ArrayList<>();
 
       // determine which files to close in a sync block, and then close the
       // files outside of the sync block
@@ -174,8 +176,8 @@
     this.maxOpen = maxOpen;
     this.fs = fs;
 
-    this.openFiles = new HashMap<String,List<OpenReader>>();
-    this.reservedReaders = new HashMap<FileSKVIterator,String>();
+    this.openFiles = new HashMap<>();
+    this.reservedReaders = new HashMap<>();
 
     this.maxIdleTime = context.getConfiguration().getTimeInMillis(Property.TSERV_MAX_IDLE);
     SimpleTimer.getInstance(context.getConfiguration()).schedule(new IdleFileCloser(), maxIdleTime, maxIdleTime / 2);
@@ -194,7 +196,7 @@
 
   private List<FileSKVIterator> takeLRUOpenFiles(int numToTake) {
 
-    ArrayList<OpenReader> openReaders = new ArrayList<OpenReader>();
+    ArrayList<OpenReader> openReaders = new ArrayList<>();
 
     for (Entry<String,List<OpenReader>> entry : openFiles.entrySet()) {
       openReaders.addAll(entry.getValue());
@@ -202,7 +204,7 @@
 
     Collections.sort(openReaders);
 
-    ArrayList<FileSKVIterator> ret = new ArrayList<FileSKVIterator>();
+    ArrayList<FileSKVIterator> ret = new ArrayList<>();
 
     for (int i = 0; i < numToTake && i < openReaders.size(); i++) {
       OpenReader or = openReaders.get(i);
@@ -225,7 +227,7 @@
   private static <T> List<T> getFileList(String file, Map<String,List<T>> files) {
     List<T> ofl = files.get(file);
     if (ofl == null) {
-      ofl = new ArrayList<T>();
+      ofl = new ArrayList<>();
       files.put(file, ofl);
     }
 
@@ -243,7 +245,7 @@
   }
 
   private List<String> takeOpenFiles(Collection<String> files, List<FileSKVIterator> reservedFiles, Map<FileSKVIterator,String> readersReserved) {
-    List<String> filesToOpen = new LinkedList<String>(files);
+    List<String> filesToOpen = new LinkedList<>(files);
     for (Iterator<String> iterator = filesToOpen.iterator(); iterator.hasNext();) {
       String file = iterator.next();
 
@@ -278,8 +280,8 @@
 
     List<String> filesToOpen = null;
     List<FileSKVIterator> filesToClose = Collections.emptyList();
-    List<FileSKVIterator> reservedFiles = new ArrayList<FileSKVIterator>();
-    Map<FileSKVIterator,String> readersReserved = new HashMap<FileSKVIterator,String>();
+    List<FileSKVIterator> reservedFiles = new ArrayList<>();
+    Map<FileSKVIterator,String> readersReserved = new HashMap<>();
 
     if (!tablet.isMeta()) {
       filePermits.acquireUninterruptibly(files.size());
@@ -314,13 +316,13 @@
         Path path = new Path(file);
         FileSystem ns = fs.getVolumeByPath(path).getFileSystem();
         // log.debug("Opening "+file + " path " + path);
-        FileSKVIterator reader = FileOperations.getInstance().openReader(path.toString(), false, ns, ns.getConf(),
-            context.getServerConfigurationFactory().getTableConfiguration(tablet), dataCache, indexCache);
+        FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder().forFile(path.toString(), ns, ns.getConf())
+            .withTableConfiguration(context.getServerConfigurationFactory().getTableConfiguration(tablet)).withBlockCache(dataCache, indexCache).build();
         reservedFiles.add(reader);
         readersReserved.put(reader, file);
       } catch (Exception e) {
 
-        ProblemReports.getInstance(context).report(new ProblemReport(tablet.getTableId().toString(), ProblemType.FILE_READ, file, e));
+        ProblemReports.getInstance(context).report(new ProblemReport(tablet.getTableId(), ProblemType.FILE_READ, file, e));
 
         if (continueOnFailure) {
           // release the permit for the file that failed to open
@@ -399,7 +401,7 @@
     FileDataSource(String file, SortedKeyValueIterator<Key,Value> iter) {
       this.file = file;
       this.iter = iter;
-      this.deepCopies = new ArrayList<FileManager.FileDataSource>();
+      this.deepCopies = new ArrayList<>();
     }
 
     public FileDataSource(IteratorEnvironment env, SortedKeyValueIterator<Key,Value> deepCopy, ArrayList<FileDataSource> deepCopies) {
@@ -458,7 +460,6 @@
       this.iflag = flag;
       ((InterruptibleIterator) this.iter).setInterruptFlag(iflag);
     }
-
   }
 
   public class ScanFileManager {
@@ -469,8 +470,8 @@
     private boolean continueOnFailure;
 
     ScanFileManager(KeyExtent tablet) {
-      tabletReservedReaders = new ArrayList<FileSKVIterator>();
-      dataSources = new ArrayList<FileDataSource>();
+      tabletReservedReaders = new ArrayList<>();
+      dataSources = new ArrayList<>();
       this.tablet = tablet;
 
       continueOnFailure = context.getServerConfigurationFactory().getTableConfiguration(tablet).getBoolean(Property.TABLE_FAILURES_IGNORE);
@@ -481,7 +482,7 @@
     }
 
     private List<FileSKVIterator> openFileRefs(Collection<FileRef> files) throws TooManyFilesException, IOException {
-      List<String> strings = new ArrayList<String>(files.size());
+      List<String> strings = new ArrayList<>(files.size());
       for (FileRef ref : files)
         strings.add(ref.path().toString());
       return openFiles(strings);
@@ -502,22 +503,32 @@
       return newlyReservedReaders;
     }
 
-    public synchronized List<InterruptibleIterator> openFiles(Map<FileRef,DataFileValue> files, boolean detachable) throws IOException {
+    public synchronized List<InterruptibleIterator> openFiles(Map<FileRef,DataFileValue> files, boolean detachable, SamplerConfigurationImpl samplerConfig)
+        throws IOException {
 
       List<FileSKVIterator> newlyReservedReaders = openFileRefs(files.keySet());
 
-      ArrayList<InterruptibleIterator> iters = new ArrayList<InterruptibleIterator>();
+      ArrayList<InterruptibleIterator> iters = new ArrayList<>();
 
       for (FileSKVIterator reader : newlyReservedReaders) {
         String filename = getReservedReadeFilename(reader);
         InterruptibleIterator iter;
+
+        FileSKVIterator source = reader;
+        if (samplerConfig != null) {
+          source = source.getSample(samplerConfig);
+          if (source == null) {
+            throw new SampleNotPresentException();
+          }
+        }
+
         if (detachable) {
-          FileDataSource fds = new FileDataSource(filename, reader);
+          FileDataSource fds = new FileDataSource(filename, source);
           dataSources.add(fds);
           SourceSwitchingIterator ssi = new SourceSwitchingIterator(fds);
-          iter = new ProblemReportingIterator(context, tablet.getTableId().toString(), filename, continueOnFailure, ssi);
+          iter = new ProblemReportingIterator(context, tablet.getTableId(), filename, continueOnFailure, ssi);
         } else {
-          iter = new ProblemReportingIterator(context, tablet.getTableId().toString(), filename, continueOnFailure, reader);
+          iter = new ProblemReportingIterator(context, tablet.getTableId(), filename, continueOnFailure, source);
         }
         DataFileValue value = files.get(new FileRef(filename));
         if (value.isTimeSet()) {
@@ -539,21 +550,21 @@
         fds.unsetIterator();
     }
 
-    public synchronized void reattach() throws IOException {
+    public synchronized void reattach(SamplerConfigurationImpl samplerConfig) throws IOException {
       if (tabletReservedReaders.size() != 0)
         throw new IllegalStateException();
 
-      Collection<String> files = new ArrayList<String>();
+      Collection<String> files = new ArrayList<>();
       for (FileDataSource fds : dataSources)
         files.add(fds.file);
 
       List<FileSKVIterator> newlyReservedReaders = openFiles(files);
-      Map<String,List<FileSKVIterator>> map = new HashMap<String,List<FileSKVIterator>>();
+      Map<String,List<FileSKVIterator>> map = new HashMap<>();
       for (FileSKVIterator reader : newlyReservedReaders) {
         String fileName = getReservedReadeFilename(reader);
         List<FileSKVIterator> list = map.get(fileName);
         if (list == null) {
-          list = new LinkedList<FileSKVIterator>();
+          list = new LinkedList<>();
           map.put(fileName, list);
         }
 
@@ -562,7 +573,14 @@
 
       for (FileDataSource fds : dataSources) {
         FileSKVIterator reader = map.get(fds.file).remove(0);
-        fds.setIterator(reader);
+        FileSKVIterator source = reader;
+        if (samplerConfig != null) {
+          source = source.getSample(samplerConfig);
+          if (source == null) {
+            throw new SampleNotPresentException();
+          }
+        }
+        fds.setIterator(source);
       }
     }
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java
index c2bf890..c1ae9e6 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/InMemoryMap.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.tserver;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collections;
@@ -29,11 +31,16 @@
 import java.util.SortedMap;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.sample.Sampler;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.conf.SiteConfiguration;
 import org.apache.accumulo.core.data.ByteSequence;
@@ -50,17 +57,20 @@
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.SortedMapIterator;
 import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.apache.accumulo.core.iterators.system.EmptyIterator;
 import org.apache.accumulo.core.iterators.system.InterruptibleIterator;
 import org.apache.accumulo.core.iterators.system.LocalityGroupIterator;
 import org.apache.accumulo.core.iterators.system.LocalityGroupIterator.LocalityGroup;
 import org.apache.accumulo.core.iterators.system.SourceSwitchingIterator;
 import org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.DataSource;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.accumulo.core.sample.impl.SamplerFactory;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.LocalityGroupUtil;
 import org.apache.accumulo.core.util.LocalityGroupUtil.LocalityGroupConfigurationError;
 import org.apache.accumulo.core.util.LocalityGroupUtil.Partitioner;
+import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.util.PreAllocatedArray;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.commons.lang.mutable.MutableLong;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
@@ -68,6 +78,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.base.Predicate;
+import com.google.common.collect.Iterables;
+
 public class InMemoryMap {
   private SimpleMap map = null;
 
@@ -79,26 +92,77 @@
 
   private Map<String,Set<ByteSequence>> lggroups;
 
+  private static Pair<SamplerConfigurationImpl,Sampler> getSampler(AccumuloConfiguration config) {
+    try {
+      SamplerConfigurationImpl sampleConfig = SamplerConfigurationImpl.newSamplerConfig(config);
+      if (sampleConfig == null) {
+        return new Pair<>(null, null);
+      }
+
+      return new Pair<>(sampleConfig, SamplerFactory.newSampler(sampleConfig, config));
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
   public static final String TYPE_NATIVE_MAP_WRAPPER = "NativeMapWrapper";
   public static final String TYPE_DEFAULT_MAP = "DefaultMap";
   public static final String TYPE_LOCALITY_GROUP_MAP = "LocalityGroupMap";
   public static final String TYPE_LOCALITY_GROUP_MAP_NATIVE = "LocalityGroupMap with native";
 
-  public InMemoryMap(boolean useNativeMap, String memDumpDir) {
-    this(new HashMap<String,Set<ByteSequence>>(), useNativeMap, memDumpDir);
+  private AtomicReference<Pair<SamplerConfigurationImpl,Sampler>> samplerRef = new AtomicReference<>(null);
+
+  private AccumuloConfiguration config;
+
+  // Defer creating the sampler until the first write. Otherwise an empty sample map configured with no sampler would not flush after a user changes the
+  // sample config.
+  private Sampler getOrCreateSampler() {
+    Pair<SamplerConfigurationImpl,Sampler> pair = samplerRef.get();
+    if (pair == null) {
+      pair = getSampler(config);
+      if (!samplerRef.compareAndSet(null, pair)) {
+        pair = samplerRef.get();
+      }
+    }
+
+    return pair.getSecond();
   }
 
-  public InMemoryMap(Map<String,Set<ByteSequence>> lggroups, boolean useNativeMap, String memDumpDir) {
-    this.memDumpDir = memDumpDir;
-    this.lggroups = lggroups;
+  public InMemoryMap(AccumuloConfiguration config) throws LocalityGroupConfigurationError {
+
+    boolean useNativeMap = config.getBoolean(Property.TSERV_NATIVEMAP_ENABLED);
+
+    this.memDumpDir = config.get(Property.TSERV_MEMDUMP_DIR);
+    this.lggroups = LocalityGroupUtil.getLocalityGroups(config);
+
+    this.config = config;
+
+    SimpleMap allMap;
+    SimpleMap sampleMap;
 
     if (lggroups.size() == 0) {
-      map = newMap(useNativeMap);
+      allMap = newMap(useNativeMap);
+      sampleMap = newMap(useNativeMap);
       mapType = useNativeMap ? TYPE_NATIVE_MAP_WRAPPER : TYPE_DEFAULT_MAP;
     } else {
-      map = new LocalityGroupMap(lggroups, useNativeMap);
+      allMap = new LocalityGroupMap(lggroups, useNativeMap);
+      sampleMap = new LocalityGroupMap(lggroups, useNativeMap);
       mapType = useNativeMap ? TYPE_LOCALITY_GROUP_MAP_NATIVE : TYPE_LOCALITY_GROUP_MAP;
     }
+
+    map = new SampleMap(allMap, sampleMap);
+  }
+
+  private static SimpleMap newMap(boolean useNativeMap) {
+    if (useNativeMap && NativeMap.isLoaded()) {
+      try {
+        return new NativeMapWrapper();
+      } catch (Throwable t) {
+        log.error("Failed to create native map", t);
+      }
+    }
+
+    return new DefaultMap();
   }
 
   /**
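`getOrCreateSampler()` above uses a small race-safe lazy-initialization idiom: build the value on first use, then let `compareAndSet` pick a single winner so every thread observes the same instance. The idiom in isolation (generic, with a hypothetical `Supplier` factory):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// The deferred-initialization idiom from getOrCreateSampler(): the value is
// built lazily, and compareAndSet guarantees that even when two threads race,
// both end up holding the single published instance.
public class LazyRef<T> {
  private final AtomicReference<T> ref = new AtomicReference<>(null);

  public T getOrCreate(Supplier<T> factory) {
    T value = ref.get();
    if (value == null) {
      value = factory.get();              // may run in several threads...
      if (!ref.compareAndSet(null, value)) {
        value = ref.get();                // ...but only one result is kept
      }
    }
    return value;
  }

  public static void main(String[] args) {
    LazyRef<Object> lazy = new LazyRef<>();
    Object first = lazy.getOrCreate(Object::new);
    Object second = lazy.getOrCreate(Object::new);
    System.out.println(first == second);  // true: a single shared instance
  }
}
```

Note the trade-off: the factory can run more than once under a race, which is acceptable when construction is cheap and side-effect free, as building a sampler from configuration presumably is here.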
@@ -114,22 +178,6 @@
     return mapType;
   }
 
-  public InMemoryMap(AccumuloConfiguration config) throws LocalityGroupConfigurationError {
-    this(LocalityGroupUtil.getLocalityGroups(config), config.getBoolean(Property.TSERV_NATIVEMAP_ENABLED), config.get(Property.TSERV_MEMDUMP_DIR));
-  }
-
-  private static SimpleMap newMap(boolean useNativeMap) {
-    if (useNativeMap && NativeMap.isLoaded()) {
-      try {
-        return new NativeMapWrapper();
-      } catch (Throwable t) {
-        log.error("Failed to create native map", t);
-      }
-    }
-
-    return new DefaultMap();
-  }
-
   private interface SimpleMap {
     Value get(Key key);
 
@@ -137,7 +185,7 @@
 
     int size();
 
-    InterruptibleIterator skvIterator();
+    InterruptibleIterator skvIterator(SamplerConfigurationImpl samplerConfig);
 
     void delete();
 
@@ -146,6 +194,95 @@
     void mutate(List<Mutation> mutations, int kvCount);
   }
 
+  private class SampleMap implements SimpleMap {
+
+    private SimpleMap map;
+    private SimpleMap sample;
+
+    public SampleMap(SimpleMap map, SimpleMap sampleMap) {
+      this.map = map;
+      this.sample = sampleMap;
+    }
+
+    @Override
+    public Value get(Key key) {
+      return map.get(key);
+    }
+
+    @Override
+    public Iterator<Entry<Key,Value>> iterator(Key startKey) {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public int size() {
+      return map.size();
+    }
+
+    @Override
+    public InterruptibleIterator skvIterator(SamplerConfigurationImpl samplerConfig) {
+      if (samplerConfig == null)
+        return map.skvIterator(null);
+      else {
+        Pair<SamplerConfigurationImpl,Sampler> samplerAndConf = samplerRef.get();
+        if (samplerAndConf == null) {
+          return EmptyIterator.EMPTY_ITERATOR;
+        } else if (samplerAndConf.getFirst() != null && samplerAndConf.getFirst().equals(samplerConfig)) {
+          return sample.skvIterator(null);
+        } else {
+          throw new SampleNotPresentException();
+        }
+      }
+    }
+
+    @Override
+    public void delete() {
+      map.delete();
+      sample.delete();
+    }
+
+    @Override
+    public long getMemoryUsed() {
+      return map.getMemoryUsed() + sample.getMemoryUsed();
+    }
+
+    @Override
+    public void mutate(List<Mutation> mutations, int kvCount) {
+      map.mutate(mutations, kvCount);
+
+      Sampler sampler = getOrCreateSampler();
+      if (sampler != null) {
+        List<Mutation> sampleMutations = null;
+
+        for (Mutation m : mutations) {
+          List<ColumnUpdate> colUpdates = m.getUpdates();
+          List<ColumnUpdate> sampleColUpdates = null;
+          for (ColumnUpdate cvp : colUpdates) {
+            Key k = new Key(m.getRow(), cvp.getColumnFamily(), cvp.getColumnQualifier(), cvp.getColumnVisibility(), cvp.getTimestamp(), cvp.isDeleted(), false);
+            if (sampler.accept(k)) {
+              if (sampleColUpdates == null) {
+                sampleColUpdates = new ArrayList<>();
+              }
+              sampleColUpdates.add(cvp);
+            }
+          }
+
+          if (sampleColUpdates != null) {
+            if (sampleMutations == null) {
+              sampleMutations = new ArrayList<>();
+            }
+
+            sampleMutations.add(new LocalityGroupUtil.PartitionedMutation(m.getRow(), sampleColUpdates));
+          }
+        }
+
+        if (sampleMutations != null) {
+          sample.mutate(sampleMutations, kvCount);
+        }
+      }
+    }
+  }
+
   private static class LocalityGroupMap implements SimpleMap {
 
     private PreAllocatedArray<Map<ByteSequence,MutableLong>> groupFams;
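`SampleMap.mutate()` above applies every mutation to the full map and additionally copies the sampler-accepted subset into the sample map, allocating the sample lists lazily so batches with no accepted keys cost nothing. The filtering shape reduced to its essentials (`Predicate<String>` is a hypothetical stand-in for `Sampler.accept(Key)`):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

public class SampleFilterSketch {
  // Keep only the updates the sampler accepts, allocating the result list
  // lazily (as mutate() does) so fully-rejected batches allocate nothing.
  public static List<String> sampled(List<String> updates, Predicate<String> sampler) {
    List<String> sample = null;
    for (String update : updates) {
      if (sampler.test(update)) {
        if (sample == null)
          sample = new ArrayList<>();
        sample.add(update);
      }
    }
    return sample == null ? Collections.emptyList() : sample;
  }

  public static void main(String[] args) {
    List<String> updates = List.of("row1", "row2", "row3", "row4");
    // A deterministic toy sampler: accept rows whose last digit is even.
    Predicate<String> sampler = u -> (u.charAt(u.length() - 1) - '0') % 2 == 0;
    System.out.println(sampled(updates, sampler));  // [row2, row4]
  }
}
```

Because the sampler decides per key, one mutation can contribute some column updates to the sample and drop others, which is why the real code rebuilds a `PartitionedMutation` from the accepted updates.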
@@ -160,7 +297,7 @@
       this.groupFams = new PreAllocatedArray<>(groups.size());
       this.maps = new SimpleMap[groups.size() + 1];
       this.partitioned = new PreAllocatedArray<>(groups.size() + 1);
-      this.nonDefaultColumnFamilies = new HashSet<ByteSequence>();
+      this.nonDefaultColumnFamilies = new HashSet<>();
 
       for (int i = 0; i < maps.length; i++) {
         maps[i] = newMap(useNativeMap);
@@ -168,7 +305,7 @@
 
       int count = 0;
       for (Set<ByteSequence> cfset : groups.values()) {
-        HashMap<ByteSequence,MutableLong> map = new HashMap<ByteSequence,MutableLong>();
+        HashMap<ByteSequence,MutableLong> map = new HashMap<>();
         for (ByteSequence bs : cfset)
           map.put(bs, new MutableLong(1));
         this.groupFams.set(count++, map);
@@ -201,13 +338,16 @@
     }
 
     @Override
-    public InterruptibleIterator skvIterator() {
+    public InterruptibleIterator skvIterator(SamplerConfigurationImpl samplerConfig) {
+      if (samplerConfig != null)
+        throw new SampleNotPresentException();
+
       LocalityGroup groups[] = new LocalityGroup[maps.length];
       for (int i = 0; i < groups.length; i++) {
         if (i < groupFams.length)
-          groups[i] = new LocalityGroup(maps[i].skvIterator(), groupFams.get(i), false);
+          groups[i] = new LocalityGroup(maps[i].skvIterator(null), groupFams.get(i), false);
         else
-          groups[i] = new LocalityGroup(maps[i].skvIterator(), null, true);
+          groups[i] = new LocalityGroup(maps[i].skvIterator(null), null, true);
       }
 
       return new LocalityGroupIterator(groups, nonDefaultColumnFamilies);
@@ -254,7 +394,7 @@
   }
 
   private static class DefaultMap implements SimpleMap {
-    private ConcurrentSkipListMap<Key,Value> map = new ConcurrentSkipListMap<Key,Value>(new MemKeyComparator());
+    private ConcurrentSkipListMap<Key,Value> map = new ConcurrentSkipListMap<>(new MemKeyComparator());
     private AtomicLong bytesInMemory = new AtomicLong();
     private AtomicInteger size = new AtomicInteger();
 
@@ -284,7 +424,9 @@
     }
 
     @Override
-    public synchronized InterruptibleIterator skvIterator() {
+    public InterruptibleIterator skvIterator(SamplerConfigurationImpl samplerConfig) {
+      if (samplerConfig != null)
+        throw new SampleNotPresentException();
       if (map == null)
         throw new IllegalStateException();
 
@@ -347,7 +489,9 @@
     }
 
     @Override
-    public InterruptibleIterator skvIterator() {
+    public InterruptibleIterator skvIterator(SamplerConfigurationImpl samplerConfig) {
+      if (samplerConfig != null)
+        throw new SampleNotPresentException();
       return (InterruptibleIterator) nativeMap.skvIterator();
     }
 
@@ -430,16 +574,30 @@
     private MemoryDataSource parent;
     private IteratorEnvironment env;
     private AtomicBoolean iflag;
+    private SamplerConfigurationImpl iteratorSamplerConfig;
 
-    MemoryDataSource() {
-      this(null, false, null, null);
+    private SamplerConfigurationImpl getSamplerConfig() {
+      if (env != null) {
+        if (env.isSamplingEnabled()) {
+          return new SamplerConfigurationImpl(env.getSamplerConfiguration());
+        } else {
+          return null;
+        }
+      } else {
+        return iteratorSamplerConfig;
+      }
     }
 
-    public MemoryDataSource(MemoryDataSource parent, boolean switched, IteratorEnvironment env, AtomicBoolean iflag) {
+    MemoryDataSource(SamplerConfigurationImpl samplerConfig) {
+      this(null, false, null, null, samplerConfig);
+    }
+
+    public MemoryDataSource(MemoryDataSource parent, boolean switched, IteratorEnvironment env, AtomicBoolean iflag, SamplerConfigurationImpl samplerConfig) {
       this.parent = parent;
       this.switched = switched;
       this.env = env;
       this.iflag = iflag;
+      this.iteratorSamplerConfig = samplerConfig;
     }
 
     @Override
@@ -474,9 +632,14 @@
         Configuration conf = CachedConfiguration.getInstance();
         FileSystem fs = FileSystem.getLocal(conf);
 
-        reader = new RFileOperations().openReader(memDumpFile, true, fs, conf, SiteConfiguration.getInstance());
+        reader = new RFileOperations().newReaderBuilder().forFile(memDumpFile, fs, conf).withTableConfiguration(SiteConfiguration.getInstance())
+            .seekToBeginning().build();
         if (iflag != null)
           reader.setInterruptFlag(iflag);
+
+        if (getSamplerConfig() != null) {
+          reader = reader.getSample(getSamplerConfig());
+        }
       }
 
       return reader;
@@ -486,7 +649,7 @@
     public SortedKeyValueIterator<Key,Value> iterator() throws IOException {
       if (iter == null)
         if (!switched) {
-          iter = map.skvIterator();
+          iter = map.skvIterator(getSamplerConfig());
           if (iflag != null)
             iter.setInterruptFlag(iflag);
         } else {
@@ -505,7 +668,7 @@
 
     @Override
     public DataSource getDeepCopyDataSource(IteratorEnvironment env) {
-      return new MemoryDataSource(parent == null ? this : parent, switched, env, iflag);
+      return new MemoryDataSource(parent == null ? this : parent, switched, env, iflag, iteratorSamplerConfig);
     }
 
     @Override
@@ -582,7 +745,7 @@
 
   }
 
-  public synchronized MemoryIterator skvIterator() {
+  public synchronized MemoryIterator skvIterator(SamplerConfigurationImpl iteratorSamplerConfig) {
     if (map == null)
       throw new NullPointerException();
 
@@ -590,8 +753,9 @@
       throw new IllegalStateException("Can not obtain iterator after map deleted");
 
     int mc = kvCount.get();
-    MemoryDataSource mds = new MemoryDataSource();
-    SourceSwitchingIterator ssi = new SourceSwitchingIterator(new MemoryDataSource());
+    MemoryDataSource mds = new MemoryDataSource(iteratorSamplerConfig);
+    // TODO the old code created two separate MemoryDataSources here, which looks like a bug; may need to fix in older branches
+    SourceSwitchingIterator ssi = new SourceSwitchingIterator(mds);
     MemoryIterator mi = new MemoryIterator(new PartialMutationSkippingIterator(ssi, mc));
     mi.setSSI(ssi);
     mi.setMDS(mds);
@@ -604,7 +768,7 @@
     if (nextKVCount.get() - 1 != kvCount.get())
       throw new IllegalStateException("Memory map in unexpected state : nextKVCount = " + nextKVCount.get() + " kvCount = " + kvCount.get());
 
-    return map.skvIterator();
+    return map.skvIterator(null);
   }
 
   private boolean deleted = false;
@@ -621,7 +785,7 @@
     long t1 = System.currentTimeMillis();
 
     while (activeIters.size() > 0 && System.currentTimeMillis() - t1 < waitTime) {
-      UtilWaitThread.sleep(50);
+      sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
     }
 
     if (activeIters.size() > 0) {
@@ -635,11 +799,17 @@
         Configuration newConf = new Configuration(conf);
         newConf.setInt("io.seqfile.compress.blocksize", 100000);
 
-        FileSKVWriter out = new RFileOperations().openWriter(tmpFile, fs, newConf, SiteConfiguration.getInstance());
+        AccumuloConfiguration siteConf = SiteConfiguration.getInstance();
 
-        InterruptibleIterator iter = map.skvIterator();
+        if (getOrCreateSampler() != null) {
+          siteConf = createSampleConfig(siteConf);
+        }
 
-        HashSet<ByteSequence> allfams = new HashSet<ByteSequence>();
+        FileSKVWriter out = new RFileOperations().newWriterBuilder().forFile(tmpFile, fs, newConf).withTableConfiguration(siteConf).build();
+
+        InterruptibleIterator iter = map.skvIterator(null);
+
+        HashSet<ByteSequence> allfams = new HashSet<>();
 
         for (Entry<String,Set<ByteSequence>> entry : lggroups.entrySet()) {
           allfams.addAll(entry.getValue());
@@ -673,7 +843,7 @@
         log.error("Failed to create mem dump file ", ioe);
 
         while (activeIters.size() > 0) {
-          UtilWaitThread.sleep(100);
+          sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
         }
       }
 
@@ -688,14 +858,28 @@
     tmpMap.delete();
   }
 
+  private AccumuloConfiguration createSampleConfig(AccumuloConfiguration siteConf) {
+    ConfigurationCopy confCopy = new ConfigurationCopy(Iterables.filter(siteConf, new Predicate<Entry<String,String>>() {
+      @Override
+      public boolean apply(Entry<String,String> input) {
+        return !input.getKey().startsWith(Property.TABLE_SAMPLER.getKey());
+      }
+    }));
+
+    for (Entry<String,String> entry : samplerRef.get().getFirst().toTablePropertiesMap().entrySet()) {
+      confCopy.set(entry.getKey(), entry.getValue());
+    }
+
+    siteConf = confCopy;
+    return siteConf;
+  }
+
   private void dumpLocalityGroup(FileSKVWriter out, InterruptibleIterator iter) throws IOException {
     while (iter.hasTop() && activeIters.size() > 0) {
       // RFile does not support MemKey, so we move the kv count into the value only for the RFile.
       // There is no need to change the MemKey to a normal key because the kvCount info gets lost when it is written
-      Value newValue = new MemValue(iter.getTopValue(), ((MemKey) iter.getTopKey()).getKVCount());
-      out.append(iter.getTopKey(), newValue);
+      out.append(iter.getTopKey(), MemValue.encode(iter.getTopValue(), ((MemKey) iter.getTopKey()).getKVCount()));
       iter.next();
-
     }
   }
 }
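The `createSampleConfig` method added above rebuilds the site configuration used when dumping the in-memory map to an RFile: it copies every property except existing `table.sampler*` entries, then overlays the properties of the sampler the map was actually built with, so the dump file is written with the map's sampler rather than whatever the table currently configures. A plain-`Map` sketch of that merge (the helper and class names here are illustrative, not the Accumulo API; the `table.sampler` prefix mirrors `Property.TABLE_SAMPLER`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Standalone sketch of createSampleConfig's property merge, using plain Maps
// instead of AccumuloConfiguration/ConfigurationCopy.
public class SampleConfigSketch {

  // Copy siteConf minus any table.sampler* entries, then overlay the
  // properties of the sampler the in-memory map was built with.
  public static Map<String,String> withSampler(Map<String,String> siteConf, Map<String,String> samplerProps) {
    Map<String,String> copy = new LinkedHashMap<>();
    for (Map.Entry<String,String> e : siteConf.entrySet()) {
      if (!e.getKey().startsWith("table.sampler")) {
        copy.put(e.getKey(), e.getValue());
      }
    }
    copy.putAll(samplerProps); // the map's sampler properties win
    return copy;
  }

  public static void main(String[] args) {
    Map<String,String> site = new LinkedHashMap<>();
    site.put("table.sampler", "OldSampler");
    site.put("table.file.replication", "3");
    Map<String,String> merged = withSampler(site, Map.of("table.sampler", "RowSampler"));
    if (!"RowSampler".equals(merged.get("table.sampler")) || !"3".equals(merged.get("table.file.replication")))
      throw new AssertionError("merge failed");
  }
}
```

The real method does the same filtering with Guava's `Iterables.filter` over the live configuration and applies `toTablePropertiesMap()` from the sampler configuration held in `samplerRef`.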
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/MemKeyConversionIterator.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/MemKeyConversionIterator.java
index 00c8be9..71a4cbd 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/MemKeyConversionIterator.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/MemKeyConversionIterator.java
@@ -61,10 +61,10 @@
       currVal = v;
       return;
     }
-    currVal = new Value(v);
-    int mc = MemValue.splitKVCount(currVal);
-    currKey = new MemKey(k, mc);
 
+    MemValue mv = MemValue.decode(v);
+    currVal = mv.value;
+    currKey = new MemKey(k, mv.kvCount);
   }
 
   @Override
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/MemValue.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/MemValue.java
index bc44459..af6f2f1 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/MemValue.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/MemValue.java
@@ -16,69 +16,38 @@
  */
 package org.apache.accumulo.tserver;
 
-import java.io.DataOutput;
-import java.io.IOException;
-
 import org.apache.accumulo.core.data.Value;
 
 /**
  * A value paired with the kv count that is embedded alongside it when the in-memory map is dumped.
  */
-public class MemValue extends Value {
-  int kvCount;
-  boolean merged = false;
+public class MemValue {
 
-  public MemValue() {
-    super();
-    this.kvCount = Integer.MAX_VALUE;
-  }
+  Value value;
+  int kvCount;
 
   public MemValue(Value value, int kv) {
-    super(value);
+    this.value = value;
     this.kvCount = kv;
   }
 
-  // Override
-  @Override
-  public void write(final DataOutput out) throws IOException {
-    if (!merged) {
-      byte[] combinedBytes = new byte[getSize() + 4];
-      System.arraycopy(value, 0, combinedBytes, 4, getSize());
-      combinedBytes[0] = (byte) (kvCount >>> 24);
-      combinedBytes[1] = (byte) (kvCount >>> 16);
-      combinedBytes[2] = (byte) (kvCount >>> 8);
-      combinedBytes[3] = (byte) (kvCount);
-      value = combinedBytes;
-      merged = true;
-    }
-    super.write(out);
+  public static Value encode(Value value, int kv) {
+    byte[] combinedBytes = new byte[value.getSize() + 4];
+    System.arraycopy(value.get(), 0, combinedBytes, 4, value.getSize());
+    combinedBytes[0] = (byte) (kv >>> 24);
+    combinedBytes[1] = (byte) (kv >>> 16);
+    combinedBytes[2] = (byte) (kv >>> 8);
+    combinedBytes[3] = (byte) (kv);
+    return new Value(combinedBytes);
   }
 
-  @Override
-  public void set(final byte[] b) {
-    super.set(b);
-    merged = false;
-  }
-
-  @Override
-  public void copy(byte[] b) {
-    super.copy(b);
-    merged = false;
-  }
-
-  /**
-   * Takes a Value and will take out the embedded kvCount, and then return that value while replacing the Value with the original unembedded version
-   *
-   * @return The kvCount embedded in v.
-   */
-  public static int splitKVCount(Value v) {
-    if (v instanceof MemValue)
-      return ((MemValue) v).kvCount;
-
+  public static MemValue decode(Value v) {
     byte[] originalBytes = new byte[v.getSize() - 4];
     byte[] combined = v.get();
     System.arraycopy(combined, 4, originalBytes, 0, originalBytes.length);
     v.set(originalBytes);
-    return (combined[0] << 24) + ((combined[1] & 0xFF) << 16) + ((combined[2] & 0xFF) << 8) + (combined[3] & 0xFF);
+    int kv = (combined[0] << 24) + ((combined[1] & 0xFF) << 16) + ((combined[2] & 0xFF) << 8) + (combined[3] & 0xFF);
+
+    return new MemValue(new Value(originalBytes), kv);
   }
 }
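The new `MemValue.encode`/`decode` pair replaces the old stateful `Value` subclass with a stateless framing scheme: the kvCount is prepended to the value bytes as a 4-byte big-endian integer, and decoding splits it back out. A standalone sketch of that framing (class and method names here are illustrative, not the Accumulo API):

```java
// Illustrative standalone version of the 4-byte big-endian kvCount framing
// used by MemValue.encode/decode.
public class KvCountFraming {

  // Prepend kv as 4 big-endian bytes in front of the value bytes.
  public static byte[] encode(byte[] value, int kv) {
    byte[] combined = new byte[value.length + 4];
    System.arraycopy(value, 0, combined, 4, value.length);
    combined[0] = (byte) (kv >>> 24);
    combined[1] = (byte) (kv >>> 16);
    combined[2] = (byte) (kv >>> 8);
    combined[3] = (byte) kv;
    return combined;
  }

  // Recover the kvCount from the first 4 bytes. The lower three bytes are
  // masked with 0xFF to undo sign extension; the top byte's sign bits shift out.
  public static int decodeKvCount(byte[] combined) {
    return (combined[0] << 24) + ((combined[1] & 0xFF) << 16) + ((combined[2] & 0xFF) << 8) + (combined[3] & 0xFF);
  }

  public static void main(String[] args) {
    byte[] framed = encode(new byte[] {1, 2, 3}, 42);
    if (framed.length != 7 || decodeKvCount(framed) != 42)
      throw new AssertionError("round trip failed");
  }
}
```

Making encode/decode static and side-effect free removes the old `merged` flag and the risk of double-merging a value that was written twice.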
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java
index a6f7cf1..00f6ba8 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java
@@ -35,6 +35,7 @@
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.ColumnUpdate;
 import org.apache.accumulo.core.data.Key;
@@ -68,7 +69,7 @@
   // Load native library
   static {
     // Check standard directories
-    List<File> directories = new ArrayList<File>(Arrays.asList(new File[] {new File("/usr/lib64"), new File("/usr/lib")}));
+    List<File> directories = new ArrayList<>(Arrays.asList(new File[] {new File("/usr/lib64"), new File("/usr/lib")}));
     // Check in ACCUMULO_HOME location, too
     String envAccumuloHome = System.getenv("ACCUMULO_HOME");
     if (envAccumuloHome != null) {
@@ -105,7 +106,7 @@
   public static void loadNativeLib(List<File> searchPath) {
     if (!isLoaded()) {
       List<String> names = getValidLibraryNames();
-      List<File> tryList = new ArrayList<File>(searchPath.size() * names.size());
+      List<File> tryList = new ArrayList<>(searchPath.size() * names.size());
 
       for (File p : searchPath)
         if (p.exists() && p.isDirectory())
@@ -130,7 +131,7 @@
   }
 
   private static List<String> getValidLibraryNames() {
-    ArrayList<String> names = new ArrayList<String>(3);
+    ArrayList<String> names = new ArrayList<>(3);
 
     String libname = System.mapLibraryName("accumulo");
     names.add(libname);
@@ -198,18 +199,13 @@
   private static synchronized long createNativeMap() {
 
     if (!init) {
-      allocatedNativeMaps = new HashSet<Long>();
+      allocatedNativeMaps = new HashSet<>();
 
       Runnable r = new Runnable() {
         @Override
         public void run() {
           if (allocatedNativeMaps.size() > 0) {
-            // print to system err in case log4j is shutdown...
-            try {
-              log.warn("There are " + allocatedNativeMaps.size() + " allocated native maps");
-            } catch (Throwable t) {
-              log.error("There are " + allocatedNativeMaps.size() + " allocated native maps");
-            }
+            log.info("There are " + allocatedNativeMaps.size() + " allocated native maps");
           }
 
           log.debug(totalAllocations + " native maps were allocated");
@@ -456,7 +452,7 @@
 
       hasNext = nmiNext(nmiPointer, fieldsLens);
 
-      return new SimpleImmutableEntry<Key,Value>(k, v);
+      return new SimpleImmutableEntry<>(k, v);
     }
 
     @Override
@@ -743,6 +739,9 @@
 
     @Override
     public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+      if (env != null && env.isSamplingEnabled()) {
+        throw new SampleNotPresentException();
+      }
       return new NMSKVIter(map, interruptFlag);
     }
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java
index 03707d1..62f321e 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/RowLocks.java
@@ -34,7 +34,7 @@
  */
 class RowLocks {
 
-  private Map<ByteSequence,RowLock> rowLocks = new HashMap<ByteSequence,RowLock>();
+  private Map<ByteSequence,RowLock> rowLocks = new HashMap<>();
 
   static class RowLock {
     ReentrantLock rlock;
@@ -82,7 +82,7 @@
   }
 
   List<RowLock> acquireRowlocks(Map<KeyExtent,List<ServerConditionalMutation>> updates, Map<KeyExtent,List<ServerConditionalMutation>> deferred) {
-    ArrayList<RowLock> locks = new ArrayList<RowLock>();
+    ArrayList<RowLock> locks = new ArrayList<>();
 
     // assume that mutations are in sorted order to avoid deadlock
     synchronized (rowLocks) {
@@ -100,7 +100,7 @@
       for (RowLock rowLock : locks) {
         if (!rowLock.tryLock()) {
           if (rowsNotLocked == null)
-            rowsNotLocked = new HashSet<ByteSequence>();
+            rowsNotLocked = new HashSet<>();
           rowsNotLocked.add(rowLock.rowSeq);
         }
       }
@@ -126,8 +126,8 @@
         }
       });
 
-      ArrayList<RowLock> filteredLocks = new ArrayList<RowLock>();
-      ArrayList<RowLock> locksToReturn = new ArrayList<RowLock>();
+      ArrayList<RowLock> filteredLocks = new ArrayList<>();
+      ArrayList<RowLock> locksToReturn = new ArrayList<>();
       for (RowLock rowLock : locks) {
         if (rowsNotLocked.contains(rowLock.rowSeq)) {
           locksToReturn.add(rowLock);
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletIteratorEnvironment.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletIteratorEnvironment.java
index 6c5b63d..445391e 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletIteratorEnvironment.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletIteratorEnvironment.java
@@ -21,6 +21,8 @@
 import java.util.Collections;
 import java.util.Map;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
@@ -29,6 +31,7 @@
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.system.MultiIterator;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.tserver.FileManager.ScanFileManager;
@@ -40,10 +43,12 @@
   private final IteratorScope scope;
   private final boolean fullMajorCompaction;
   private final AccumuloConfiguration config;
-  private final ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators = new ArrayList<SortedKeyValueIterator<Key,Value>>();
+  private final ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators;
   private Map<FileRef,DataFileValue> files;
 
   private final Authorizations authorizations; // these will only be supplied during scan scope
+  private SamplerConfiguration samplerConfig;
+  private boolean enableSampleForDeepCopy;
 
   public TabletIteratorEnvironment(IteratorScope scope, AccumuloConfiguration config) {
     if (scope == IteratorScope.majc)
@@ -54,10 +59,11 @@
     this.config = config;
     this.fullMajorCompaction = false;
     this.authorizations = Authorizations.EMPTY;
+    this.topLevelIterators = new ArrayList<>();
   }
 
-  public TabletIteratorEnvironment(IteratorScope scope, AccumuloConfiguration config, ScanFileManager trm, Map<FileRef,DataFileValue> files,
-      Authorizations authorizations) {
+  private TabletIteratorEnvironment(IteratorScope scope, AccumuloConfiguration config, ScanFileManager trm, Map<FileRef,DataFileValue> files,
+      Authorizations authorizations, SamplerConfigurationImpl samplerConfig, ArrayList<SortedKeyValueIterator<Key,Value>> topLevelIterators) {
     if (scope == IteratorScope.majc)
       throw new IllegalArgumentException("must set if compaction is full");
 
@@ -67,6 +73,19 @@
     this.fullMajorCompaction = false;
     this.files = files;
     this.authorizations = authorizations;
+    if (samplerConfig != null) {
+      enableSampleForDeepCopy = true;
+      this.samplerConfig = samplerConfig.toSamplerConfiguration();
+    } else {
+      enableSampleForDeepCopy = false;
+    }
+
+    this.topLevelIterators = topLevelIterators;
+  }
+
+  public TabletIteratorEnvironment(IteratorScope scope, AccumuloConfiguration config, ScanFileManager trm, Map<FileRef,DataFileValue> files,
+      Authorizations authorizations, SamplerConfigurationImpl samplerConfig) {
+    this(scope, config, trm, files, authorizations, samplerConfig, new ArrayList<SortedKeyValueIterator<Key,Value>>());
   }
 
   public TabletIteratorEnvironment(IteratorScope scope, boolean fullMajC, AccumuloConfiguration config) {
@@ -78,6 +97,7 @@
     this.config = config;
     this.fullMajorCompaction = fullMajC;
     this.authorizations = Authorizations.EMPTY;
+    this.topLevelIterators = new ArrayList<>();
   }
 
   @Override
@@ -100,7 +120,7 @@
   @Override
   public SortedKeyValueIterator<Key,Value> reserveMapFileReader(String mapFileName) throws IOException {
     FileRef ref = new FileRef(mapFileName, new Path(mapFileName));
-    return trm.openFiles(Collections.singletonMap(ref, files.get(ref)), false).get(0);
+    return trm.openFiles(Collections.singletonMap(ref, files.get(ref)), false, null).get(0);
   }
 
   @Override
@@ -118,8 +138,41 @@
   public SortedKeyValueIterator<Key,Value> getTopLevelIterator(SortedKeyValueIterator<Key,Value> iter) {
     if (topLevelIterators.isEmpty())
       return iter;
-    ArrayList<SortedKeyValueIterator<Key,Value>> allIters = new ArrayList<SortedKeyValueIterator<Key,Value>>(topLevelIterators);
+    ArrayList<SortedKeyValueIterator<Key,Value>> allIters = new ArrayList<>(topLevelIterators);
     allIters.add(iter);
     return new MultiIterator(allIters, false);
   }
+
+  @Override
+  public boolean isSamplingEnabled() {
+    return enableSampleForDeepCopy;
+  }
+
+  @Override
+  public SamplerConfiguration getSamplerConfiguration() {
+    if (samplerConfig == null) {
+      // only create this once so that it stays the same, even if config changes
+      SamplerConfigurationImpl sci = SamplerConfigurationImpl.newSamplerConfig(config);
+      if (sci == null) {
+        return null;
+      }
+      samplerConfig = sci.toSamplerConfiguration();
+    }
+    return samplerConfig;
+  }
+
+  @Override
+  public IteratorEnvironment cloneWithSamplingEnabled() {
+    if (!scope.equals(IteratorScope.scan)) {
+      throw new UnsupportedOperationException();
+    }
+
+    SamplerConfigurationImpl sci = SamplerConfigurationImpl.newSamplerConfig(config);
+    if (sci == null) {
+      throw new SampleNotPresentException();
+    }
+
+    TabletIteratorEnvironment te = new TabletIteratorEnvironment(scope, config, trm, files, authorizations, sci, topLevelIterators);
+    return te;
+  }
 }
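The `cloneWithSamplingEnabled` method added above enforces two preconditions: cloning is only legal in scan scope, and it fails with `SampleNotPresentException` when no sampler is configured for the table. A minimal sketch of that contract (the `Env` class here is hypothetical, standing in for `TabletIteratorEnvironment`; a present `Optional` stands in for a non-null `SamplerConfigurationImpl`):

```java
import java.util.Optional;

// Hypothetical stand-in for TabletIteratorEnvironment, sketching the
// cloneWithSamplingEnabled contract: scan scope only, sampler must exist.
public class SamplingEnvSketch {

  static class SampleNotPresent extends RuntimeException {}

  static class Env {
    final boolean scanScope;
    final Optional<String> samplerConfig; // stand-in for SamplerConfigurationImpl
    final boolean samplingEnabled;

    Env(boolean scanScope, Optional<String> samplerConfig, boolean samplingEnabled) {
      this.scanScope = scanScope;
      this.samplerConfig = samplerConfig;
      this.samplingEnabled = samplingEnabled;
    }

    Env cloneWithSamplingEnabled() {
      if (!scanScope)
        throw new UnsupportedOperationException(); // only scans may enable sampling
      if (!samplerConfig.isPresent())
        throw new SampleNotPresent(); // no sampler configured for this table
      return new Env(scanScope, samplerConfig, true);
    }
  }

  public static void main(String[] args) {
    Env e = new Env(true, Optional.of("RowSampler"), false);
    if (!e.cloneWithSamplingEnabled().samplingEnabled)
      throw new AssertionError("clone should have sampling enabled");
  }
}
```

Deep-copied iterators then check `isSamplingEnabled()` on their environment, which is why `NMSKVIter.deepCopy` in NativeMap.java now throws `SampleNotPresentException` when sampling is requested: native maps have no sample.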
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java
index a7abe05..c4df66d 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java
@@ -16,10 +16,10 @@
  */
 package org.apache.accumulo.tserver;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.apache.accumulo.server.problems.ProblemType.TABLET_LOAD;
 
-import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.net.UnknownHostException;
@@ -30,6 +30,7 @@
 import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
+import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
@@ -45,6 +46,7 @@
 import java.util.TreeSet;
 import java.util.concurrent.BlockingDeque;
 import java.util.concurrent.CancellationException;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.LinkedBlockingDeque;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -59,6 +61,7 @@
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.Durability;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.client.impl.CompressedIterators;
 import org.apache.accumulo.core.client.impl.DurabilityImpl;
 import org.apache.accumulo.core.client.impl.ScannerImpl;
@@ -98,6 +101,7 @@
 import org.apache.accumulo.core.data.thrift.TRange;
 import org.apache.accumulo.core.data.thrift.UpdateErrors;
 import org.apache.accumulo.core.iterators.IterationInterruptedException;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.master.thrift.Compacting;
 import org.apache.accumulo.core.master.thrift.MasterClientService;
 import org.apache.accumulo.core.master.thrift.TableInfo;
@@ -109,6 +113,7 @@
 import org.apache.accumulo.core.replication.ReplicationConstants;
 import org.apache.accumulo.core.replication.thrift.ReplicationServicer;
 import org.apache.accumulo.core.rpc.ThriftUtil;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.thrift.TCredentials;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
@@ -118,6 +123,8 @@
 import org.apache.accumulo.core.tabletserver.thrift.NoSuchScanIDException;
 import org.apache.accumulo.core.tabletserver.thrift.NotServingTabletException;
 import org.apache.accumulo.core.tabletserver.thrift.TDurability;
+import org.apache.accumulo.core.tabletserver.thrift.TSampleNotPresentException;
+import org.apache.accumulo.core.tabletserver.thrift.TSamplerConfiguration;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Iface;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Processor;
@@ -135,7 +142,8 @@
 import org.apache.accumulo.core.util.ServerServices;
 import org.apache.accumulo.core.util.ServerServices.Service;
 import org.apache.accumulo.core.util.SimpleThreadPool;
-import org.apache.accumulo.core.util.UtilWaitThread;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
+import org.apache.accumulo.core.util.ratelimit.SharedRateLimiterFactory;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.util.LoggingRunnable;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
@@ -146,8 +154,8 @@
 import org.apache.accumulo.server.Accumulo;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.GarbageCollectionLogger;
-import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.ServerOpts;
+import org.apache.accumulo.server.TabletLevel;
 import org.apache.accumulo.server.client.ClientServiceHandler;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
@@ -157,8 +165,9 @@
 import org.apache.accumulo.server.fs.VolumeManager;
 import org.apache.accumulo.server.fs.VolumeManager.FileType;
 import org.apache.accumulo.server.fs.VolumeManagerImpl;
-import org.apache.accumulo.server.fs.VolumeUtil;
 import org.apache.accumulo.server.log.SortedLogState;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalMarkerException;
 import org.apache.accumulo.server.master.recovery.RecoveryPath;
 import org.apache.accumulo.server.master.state.Assignment;
 import org.apache.accumulo.server.master.state.DistributedStoreException;
@@ -186,6 +195,7 @@
 import org.apache.accumulo.server.util.Halt;
 import org.apache.accumulo.server.util.MasterMetadataUtil;
 import org.apache.accumulo.server.util.MetadataTableUtil;
+import org.apache.accumulo.server.util.ServerBulkImportStatus;
 import org.apache.accumulo.server.util.time.RelativeTime;
 import org.apache.accumulo.server.util.time.SimpleTimer;
 import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
@@ -222,14 +232,16 @@
 import org.apache.accumulo.tserver.session.Session;
 import org.apache.accumulo.tserver.session.SessionManager;
 import org.apache.accumulo.tserver.session.UpdateSession;
+import org.apache.accumulo.tserver.tablet.BulkImportCacheCleaner;
 import org.apache.accumulo.tserver.tablet.CommitSession;
 import org.apache.accumulo.tserver.tablet.CompactionInfo;
 import org.apache.accumulo.tserver.tablet.CompactionWatcher;
 import org.apache.accumulo.tserver.tablet.Compactor;
+import org.apache.accumulo.tserver.tablet.KVEntry;
 import org.apache.accumulo.tserver.tablet.ScanBatch;
-import org.apache.accumulo.tserver.tablet.SplitInfo;
 import org.apache.accumulo.tserver.tablet.Tablet;
 import org.apache.accumulo.tserver.tablet.TabletClosedException;
+import org.apache.accumulo.tserver.tablet.TabletData;
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.hadoop.fs.FSError;
 import org.apache.hadoop.fs.FileSystem;
@@ -246,8 +258,12 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.net.HostAndPort;
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static java.util.concurrent.TimeUnit.NANOSECONDS;
+import org.apache.accumulo.core.tabletserver.thrift.TUnloadTabletGoal;
 
 public class TabletServer extends AccumuloServerContext implements Runnable {
+
   private static final Logger log = LoggerFactory.getLogger(TabletServer.class);
   private static final long MAX_TIME_TO_WAIT_FOR_SCAN_RESULT_MILLIS = 1000;
   private static final long RECENTLY_SPLIT_MILLIES = 60 * 1000;
@@ -288,7 +304,7 @@
   private final TabletServerResourceManager resourceManager;
   private final SecurityOperation security;
 
-  private final BlockingDeque<MasterMessage> masterMessages = new LinkedBlockingDeque<MasterMessage>();
+  private final BlockingDeque<MasterMessage> masterMessages = new LinkedBlockingDeque<>();
 
   private Thread majorCompactorThread;
 
@@ -314,12 +330,13 @@
   private final ServerConfigurationFactory confFactory;
 
   private final ZooAuthenticationKeyWatcher authKeyWatcher;
+  private final WalStateManager walMarker;
 
-  public TabletServer(ServerConfigurationFactory confFactory, VolumeManager fs) {
+  public TabletServer(ServerConfigurationFactory confFactory, VolumeManager fs) throws IOException {
     super(confFactory);
     this.confFactory = confFactory;
     this.fs = fs;
-    AccumuloConfiguration aconf = getConfiguration();
+    final AccumuloConfiguration aconf = getConfiguration();
     Instance instance = getInstance();
     log.info("Version " + Constants.VERSION);
     log.info("Instance " + instance.getInstanceID());
@@ -371,6 +388,7 @@
         TabletLocator.clearLocators();
       }
     }, jitter(TIME_BETWEEN_LOCATOR_CACHE_CLEARS), jitter(TIME_BETWEEN_LOCATOR_CACHE_CLEARS));
+    walMarker = new WalStateManager(instance, ZooReaderWriter.getInstance());
 
     // Create the secret manager
     setSecretManager(new AuthenticationTokenSecretManager(instance, aconf.getTimeInMillis(Property.GENERAL_DELEGATION_TOKEN_LIFETIME)));
@@ -398,6 +416,8 @@
 
   private final AtomicLong totalQueuedMutationSize = new AtomicLong(0);
   private final ReentrantLock recoveryLock = new ReentrantLock(true);
+  private ThriftClientHandler clientHandler;
+  private final ServerBulkImportStatus bulkImportStatus = new ServerBulkImportStatus();
 
   private class ThriftClientHandler extends ClientServiceHandler implements TabletClientService.Iface {
 
@@ -413,12 +433,12 @@
       if (!security.canPerformSystemActions(credentials))
         throw new ThriftSecurityException(credentials.getPrincipal(), SecurityErrorCode.PERMISSION_DENIED);
 
-      List<TKeyExtent> failures = new ArrayList<TKeyExtent>();
+      List<TKeyExtent> failures = new ArrayList<>();
 
       for (Entry<TKeyExtent,Map<String,MapFileInfo>> entry : files.entrySet()) {
         TKeyExtent tke = entry.getKey();
         Map<String,MapFileInfo> fileMap = entry.getValue();
-        Map<FileRef,MapFileInfo> fileRefMap = new HashMap<FileRef,MapFileInfo>();
+        Map<FileRef,MapFileInfo> fileRefMap = new HashMap<>();
         for (Entry<String,MapFileInfo> mapping : fileMap.entrySet()) {
           Path path = new Path(mapping.getKey());
           FileSystem ns = fs.getVolumeByPath(path).getFileSystem();
@@ -445,7 +465,8 @@
     @Override
     public InitialScan startScan(TInfo tinfo, TCredentials credentials, TKeyExtent textent, TRange range, List<TColumn> columns, int batchSize,
         List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated,
-        long readaheadThreshold) throws NotServingTabletException, ThriftSecurityException, org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException {
+        long readaheadThreshold, TSamplerConfiguration tSamplerConfig, long batchTimeOut, String context) throws NotServingTabletException,
+        ThriftSecurityException, org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException, TSampleNotPresentException {
 
       String tableId = new String(textent.getTable(), UTF_8);
       if (!security.canScan(credentials, tableId, Tables.getNamespaceId(getInstance(), tableId), range, columns, ssiList, ssio, authorizations))
@@ -473,13 +494,15 @@
       if (tablet == null)
         throw new NotServingTabletException(textent);
 
-      Set<Column> columnSet = new HashSet<Column>();
+      Set<Column> columnSet = new HashSet<>();
       for (TColumn tcolumn : columns) {
         columnSet.add(new Column(tcolumn));
       }
-      final ScanSession scanSession = new ScanSession(credentials, extent, columnSet, ssiList, ssio, new Authorizations(authorizations), readaheadThreshold);
+
+      final ScanSession scanSession = new ScanSession(credentials, extent, columnSet, ssiList, ssio, new Authorizations(authorizations), readaheadThreshold,
+          batchTimeOut, context);
       scanSession.scanner = tablet.createScanner(new Range(range), batchSize, scanSession.columnSet, scanSession.auths, ssiList, ssio, isolated,
-          scanSession.interruptFlag);
+          scanSession.interruptFlag, SamplerConfigurationImpl.fromThrift(tSamplerConfig), scanSession.batchTimeOut, scanSession.context);
 
       long sid = sessionManager.createSession(scanSession, true);
 
@@ -498,7 +521,7 @@
 
     @Override
     public ScanResult continueScan(TInfo tinfo, long scanID) throws NoSuchScanIDException, NotServingTabletException,
-        org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException {
+        org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException, TSampleNotPresentException {
       ScanSession scanSession = (ScanSession) sessionManager.reserveSession(scanID);
       if (scanSession == null) {
         throw new NoSuchScanIDException();
@@ -512,7 +535,7 @@
     }
 
     private ScanResult continueScan(TInfo tinfo, long scanID, ScanSession scanSession) throws NoSuchScanIDException, NotServingTabletException,
-        org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException {
+        org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException, TSampleNotPresentException {
 
       if (scanSession.nextBatchTask == null) {
         scanSession.nextBatchTask = new NextBatchTask(TabletServer.this, scanID, scanSession.interruptFlag);
@@ -529,8 +552,16 @@
           throw (NotServingTabletException) e.getCause();
         else if (e.getCause() instanceof TooManyFilesException)
           throw new org.apache.accumulo.core.tabletserver.thrift.TooManyFilesException(scanSession.extent.toThrift());
-        else
+        else if (e.getCause() instanceof SampleNotPresentException)
+          throw new TSampleNotPresentException(scanSession.extent.toThrift());
+        else if (e.getCause() instanceof IOException) {
+          sleepUninterruptibly(MAX_TIME_TO_WAIT_FOR_SCAN_RESULT_MILLIS, TimeUnit.MILLISECONDS);
+          List<KVEntry> empty = Collections.emptyList();
+          bresult = new ScanBatch(empty, true);
+          scanSession.nextBatchTask = null;
+        } else {
           throw new RuntimeException(e);
+        }
       } catch (CancellationException ce) {
         sessionManager.removeSession(scanID);
         Tablet tablet = onlineTablets.get(scanSession.extent);
@@ -574,8 +605,8 @@
       if (ss != null) {
         long t2 = System.currentTimeMillis();
 
-        log.debug(String.format("ScanSess tid %s %s %,d entries in %.2f secs, nbTimes = [%s] ", TServerUtils.clientAddress.get(), ss.extent.getTableId()
-            .toString(), ss.entriesReturned, (t2 - ss.startTime) / 1000.0, ss.nbTimes.toString()));
+        log.debug(String.format("ScanSess tid %s %s %,d entries in %.2f secs, nbTimes = [%s] ", TServerUtils.clientAddress.get(), ss.extent.getTableId(),
+            ss.entriesReturned, (t2 - ss.startTime) / 1000.0, ss.nbTimes.toString()));
         if (scanMetrics.isEnabled()) {
           scanMetrics.add(TabletServerScanMetrics.SCAN, t2 - ss.startTime);
           scanMetrics.add(TabletServerScanMetrics.RESULT_SIZE, ss.entriesReturned);
@@ -585,9 +616,10 @@
 
     @Override
     public InitialMultiScan startMultiScan(TInfo tinfo, TCredentials credentials, Map<TKeyExtent,List<TRange>> tbatch, List<TColumn> tcolumns,
-        List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites) throws ThriftSecurityException {
+        List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites,
+        TSamplerConfiguration tSamplerConfig, long batchTimeOut, String context) throws ThriftSecurityException, TSampleNotPresentException {
       // find all of the tables that need to be scanned
-      final HashSet<String> tables = new HashSet<String>();
+      final HashSet<String> tables = new HashSet<>();
       for (TKeyExtent keyExtent : tbatch.keySet()) {
         tables.add(new String(keyExtent.getTable(), UTF_8));
       }
@@ -607,8 +639,7 @@
         log.error("{} is not authorized", credentials.getPrincipal(), tse);
         throw tse;
       }
-      Map<KeyExtent,List<Range>> batch = Translator.translate(tbatch, new TKeyExtentTranslator(), new Translator.ListTranslator<TRange,Range>(
-          new TRangeTranslator()));
+      Map<KeyExtent,List<Range>> batch = Translator.translate(tbatch, new TKeyExtentTranslator(), new Translator.ListTranslator<>(new TRangeTranslator()));
 
       // This is used to determine which thread pool to use
       KeyExtent threadPoolExtent = batch.keySet().iterator().next();
@@ -616,7 +647,8 @@
       if (waitForWrites)
         writeTracker.waitForWrites(TabletType.type(batch.keySet()));
 
-      final MultiScanSession mss = new MultiScanSession(credentials, threadPoolExtent, batch, ssiList, ssio, new Authorizations(authorizations));
+      final MultiScanSession mss = new MultiScanSession(credentials, threadPoolExtent, batch, ssiList, ssio, new Authorizations(authorizations),
+          SamplerConfigurationImpl.fromThrift(tSamplerConfig), batchTimeOut, context);
 
       mss.numTablets = batch.size();
       for (List<Range> ranges : batch.values()) {
@@ -642,7 +674,7 @@
     }
 
     @Override
-    public MultiScanResult continueMultiScan(TInfo tinfo, long scanID) throws NoSuchScanIDException {
+    public MultiScanResult continueMultiScan(TInfo tinfo, long scanID) throws NoSuchScanIDException, TSampleNotPresentException {
 
       MultiScanSession session = (MultiScanSession) sessionManager.reserveSession(scanID);
 
@@ -657,7 +689,7 @@
       }
     }
 
-    private MultiScanResult continueMultiScan(TInfo tinfo, long scanID, MultiScanSession session) throws NoSuchScanIDException {
+    private MultiScanResult continueMultiScan(TInfo tinfo, long scanID, MultiScanSession session) throws NoSuchScanIDException, TSampleNotPresentException {
 
       if (session.lookupTask == null) {
         session.lookupTask = new LookupTask(TabletServer.this, scanID);
@@ -668,6 +700,14 @@
         MultiScanResult scanResult = session.lookupTask.get(MAX_TIME_TO_WAIT_FOR_SCAN_RESULT_MILLIS, TimeUnit.MILLISECONDS);
         session.lookupTask = null;
         return scanResult;
+      } catch (ExecutionException e) {
+        sessionManager.removeSession(scanID);
+        if (e.getCause() instanceof SampleNotPresentException) {
+          throw new TSampleNotPresentException();
+        } else {
+          log.warn("Failed to get multiscan result", e);
+          throw new RuntimeException(e);
+        }
       } catch (TimeoutException e1) {
         long timeout = TabletServer.this.getConfiguration().getTimeInMillis(Property.TSERV_CLIENT_TIMEOUT);
         sessionManager.removeIfNotAccessed(scanID, timeout);
@@ -720,7 +760,7 @@
         // if user has no permission to write to this table, add it to
         // the failures list
         boolean sameTable = us.currentTablet != null && (us.currentTablet.getExtent().getTableId().equals(keyExtent.getTableId()));
-        String tableId = keyExtent.getTableId().toString();
+        String tableId = keyExtent.getTableId();
         if (sameTable || security.canWrite(us.getCredentials(), tableId, Tables.getNamespaceId(getInstance(), tableId))) {
           long t2 = System.currentTimeMillis();
           us.authTimes.addStat(t2 - t1);
@@ -801,7 +841,7 @@
     private void flush(UpdateSession us) {
 
       int mutationCount = 0;
-      Map<CommitSession,Mutations> sendables = new HashMap<CommitSession,Mutations>();
+      Map<CommitSession,Mutations> sendables = new HashMap<>();
       Throwable error = null;
 
       long pt1 = System.currentTimeMillis();
@@ -1079,7 +1119,7 @@
             results.add(new TCMResult(scm.getID(), TCMStatus.IGNORED));
           iter.remove();
         } else {
-          final List<ServerConditionalMutation> okMutations = new ArrayList<ServerConditionalMutation>(entry.getValue().size());
+          final List<ServerConditionalMutation> okMutations = new ArrayList<>(entry.getValue().size());
           final List<TCMResult> resultsSubList = results.subList(results.size(), results.size());
 
           ConditionChecker checker = checkerContext.newChecker(entry.getValue(), okMutations, resultsSubList);
@@ -1107,7 +1147,7 @@
     private void writeConditionalMutations(Map<KeyExtent,List<ServerConditionalMutation>> updates, ArrayList<TCMResult> results, ConditionalSession sess) {
       Set<Entry<KeyExtent,List<ServerConditionalMutation>>> es = updates.entrySet();
 
-      Map<CommitSession,Mutations> sendables = new HashMap<CommitSession,Mutations>();
+      Map<CommitSession,Mutations> sendables = new HashMap<>();
 
       boolean sessionCanceled = sess.interruptFlag.get();
 
@@ -1201,7 +1241,7 @@
       // sort each list of mutations, this is done to avoid deadlock and doing seeks in order is more efficient and detect duplicate rows.
       ConditionalMutationSet.sortConditionalMutations(updates);
 
-      Map<KeyExtent,List<ServerConditionalMutation>> deferred = new HashMap<KeyExtent,List<ServerConditionalMutation>>();
+      Map<KeyExtent,List<ServerConditionalMutation>> deferred = new HashMap<>();
 
       // can not process two mutations for the same row, because one will not see what the other writes
       ConditionalMutationSet.deferDuplicatesRows(updates, deferred);
@@ -1230,7 +1270,7 @@
 
     @Override
     public TConditionalSession startConditionalUpdate(TInfo tinfo, TCredentials credentials, List<ByteBuffer> authorizations, String tableId,
-        TDurability tdurabilty) throws ThriftSecurityException, TException {
+        TDurability tdurabilty, String classLoaderContext) throws ThriftSecurityException, TException {
 
       Authorizations userauths = null;
       if (!security.canConditionallyUpdate(credentials, tableId, Tables.getNamespaceId(getInstance(), tableId), authorizations))
@@ -1241,7 +1281,8 @@
         if (!userauths.contains(ByteBufferUtil.toBytes(auth)))
           throw new ThriftSecurityException(credentials.getPrincipal(), SecurityErrorCode.BAD_AUTHORIZATIONS);
 
-      ConditionalSession cs = new ConditionalSession(credentials, new Authorizations(authorizations), tableId, DurabilityImpl.fromThrift(tdurabilty));
+      ConditionalSession cs = new ConditionalSession(credentials, new Authorizations(authorizations), tableId, DurabilityImpl.fromThrift(tdurabilty),
+          classLoaderContext);
 
       long sid = sessionManager.createSession(cs, false);
       return new TConditionalSession(sid, lockID, sessionManager.getMaxIdleTime());
@@ -1266,18 +1307,18 @@
         }
       }
 
-      Text tid = new Text(cs.tableId);
+      String tid = cs.tableId;
       long opid = writeTracker.startWrite(TabletType.type(new KeyExtent(tid, null, null)));
 
       try {
-        Map<KeyExtent,List<ServerConditionalMutation>> updates = Translator.translate(mutations, Translators.TKET,
-            new Translator.ListTranslator<TConditionalMutation,ServerConditionalMutation>(ServerConditionalMutation.TCMT));
+        Map<KeyExtent,List<ServerConditionalMutation>> updates = Translator.translate(mutations, Translators.TKET, new Translator.ListTranslator<>(
+            ServerConditionalMutation.TCMT));
 
         for (KeyExtent ke : updates.keySet())
           if (!ke.getTableId().equals(tid))
             throw new IllegalArgumentException("Unexpected table id " + tid + " != " + ke.getTableId());
 
-        ArrayList<TCMResult> results = new ArrayList<TCMResult>();
+        ArrayList<TCMResult> results = new ArrayList<>();
 
         Map<KeyExtent,List<ServerConditionalMutation>> deferred = conditionalUpdate(cs, updates, results, symbols);
 
@@ -1357,10 +1398,10 @@
     public List<TabletStats> getTabletStats(TInfo tinfo, TCredentials credentials, String tableId) throws ThriftSecurityException, TException {
       TreeMap<KeyExtent,Tablet> onlineTabletsCopy;
       synchronized (onlineTablets) {
-        onlineTabletsCopy = new TreeMap<KeyExtent,Tablet>(onlineTablets);
+        onlineTabletsCopy = new TreeMap<>(onlineTablets);
       }
-      List<TabletStats> result = new ArrayList<TabletStats>();
-      Text text = new Text(tableId);
+      List<TabletStats> result = new ArrayList<>();
+      String text = tableId;
       KeyExtent start = new KeyExtent(text, new Text(), null);
       for (Entry<KeyExtent,Tablet> entry : onlineTabletsCopy.tailMap(start).entrySet()) {
         KeyExtent ke = entry.getKey();
@@ -1394,7 +1435,7 @@
       }
 
       if (tabletServerLock == null || !tabletServerLock.wasLockAcquired()) {
-        log.warn("Got " + request + " message from master before lock acquired, ignoring...");
+        log.debug("Got " + request + " message before my lock was acquired, ignoring...");
         throw new RuntimeException("Lock not acquired");
       }
 
@@ -1451,7 +1492,7 @@
             Set<KeyExtent> openingOverlapping = KeyExtent.findOverlapping(extent, openingTablets);
             Set<KeyExtent> onlineOverlapping = KeyExtent.findOverlapping(extent, onlineTablets);
 
-            Set<KeyExtent> all = new HashSet<KeyExtent>();
+            Set<KeyExtent> all = new HashSet<>();
             all.addAll(unopenedOverlapping);
             all.addAll(openingOverlapping);
             all.addAll(onlineOverlapping);
@@ -1487,6 +1528,7 @@
       final AssignmentHandler ah = new AssignmentHandler(extent);
       // final Runnable ah = new LoggingRunnable(log, );
       // Root tablet assignment must take place immediately
+
       if (extent.isRootTablet()) {
         new Daemon("Root Tablet Assignment") {
           @Override
@@ -1510,7 +1552,7 @@
     }
 
     @Override
-    public void unloadTablet(TInfo tinfo, TCredentials credentials, String lock, TKeyExtent textent, boolean save) {
+    public void unloadTablet(TInfo tinfo, TCredentials credentials, String lock, TKeyExtent textent, TUnloadTabletGoal goal, long requestTime) {
       try {
         checkPermission(credentials, lock, "unloadTablet");
       } catch (ThriftSecurityException e) {
@@ -1520,7 +1562,7 @@
 
       KeyExtent extent = new KeyExtent(textent);
 
-      resourceManager.addMigration(extent, new LoggingRunnable(log, new UnloadTabletHandler(extent, save)));
+      resourceManager.addMigration(extent, new LoggingRunnable(log, new UnloadTabletHandler(extent, goal, requestTime)));
     }
 
     @Override
@@ -1532,9 +1574,9 @@
         throw new RuntimeException(e);
       }
 
-      ArrayList<Tablet> tabletsToFlush = new ArrayList<Tablet>();
+      ArrayList<Tablet> tabletsToFlush = new ArrayList<>();
 
-      KeyExtent ke = new KeyExtent(new Text(tableId), ByteBufferUtil.toText(endRow), ByteBufferUtil.toText(startRow));
+      KeyExtent ke = new KeyExtent(tableId, ByteBufferUtil.toText(endRow), ByteBufferUtil.toText(startRow));
 
       synchronized (onlineTablets) {
         for (Tablet tablet : onlineTablets.values())
@@ -1652,9 +1694,9 @@
         throw new RuntimeException(e);
       }
 
-      KeyExtent ke = new KeyExtent(new Text(tableId), ByteBufferUtil.toText(endRow), ByteBufferUtil.toText(startRow));
+      KeyExtent ke = new KeyExtent(tableId, ByteBufferUtil.toText(endRow), ByteBufferUtil.toText(startRow));
 
-      ArrayList<Tablet> tabletsToCompact = new ArrayList<Tablet>();
+      ArrayList<Tablet> tabletsToCompact = new ArrayList<>();
       synchronized (onlineTablets) {
         for (Tablet tablet : onlineTablets.values())
           if (ke.overlaps(tablet.getExtent()))
@@ -1679,70 +1721,6 @@
     }
 
     @Override
-    public void removeLogs(TInfo tinfo, TCredentials credentials, List<String> filenames) throws TException {
-      String myname = getClientAddressString();
-      myname = myname.replace(':', '+');
-      Set<String> loggers = new HashSet<String>();
-      logger.getLogFiles(loggers);
-      Set<String> loggerUUIDs = new HashSet<String>();
-      for (String logger : loggers)
-        loggerUUIDs.add(new Path(logger).getName());
-
-      nextFile: for (String filename : filenames) {
-        String uuid = new Path(filename).getName();
-        // skip any log we're currently using
-        if (loggerUUIDs.contains(uuid))
-          continue nextFile;
-
-        List<Tablet> onlineTabletsCopy = new ArrayList<Tablet>();
-        synchronized (onlineTablets) {
-          onlineTabletsCopy.addAll(onlineTablets.values());
-        }
-        for (Tablet tablet : onlineTabletsCopy) {
-          for (String current : tablet.getCurrentLogFiles()) {
-            if (current.contains(uuid)) {
-              log.info("Attempted to delete " + filename + " from tablet " + tablet.getExtent());
-              continue nextFile;
-            }
-          }
-        }
-
-        try {
-          Path source = new Path(filename);
-          if (TabletServer.this.getConfiguration().getBoolean(Property.TSERV_ARCHIVE_WALOGS)) {
-            Path walogArchive = fs.matchingFileSystem(source, ServerConstants.getWalogArchives());
-            if (walogArchive == null) {
-              throw new IOException(filename + " is not in a volume configured for Accumulo");
-            }
-
-            fs.mkdirs(walogArchive);
-            Path dest = new Path(walogArchive, source.getName());
-            log.info("Archiving walog " + source + " to " + dest);
-            if (!fs.rename(source, dest))
-              log.error("rename is unsuccessful");
-          } else {
-            log.info("Deleting walog " + filename);
-            Path sourcePath = new Path(filename);
-            if (!(!TabletServer.this.getConfiguration().getBoolean(Property.GC_TRASH_IGNORE) && fs.moveToTrash(sourcePath))
-                && !fs.deleteRecursively(sourcePath))
-              log.warn("Failed to delete walog " + source);
-            for (String recovery : ServerConstants.getRecoveryDirs()) {
-              Path recoveryPath = new Path(recovery, source.getName());
-              try {
-                if (fs.moveToTrash(recoveryPath) || fs.deleteRecursively(recoveryPath))
-                  log.info("Deleted any recovery log " + filename);
-              } catch (FileNotFoundException ex) {
-                // ignore
-              }
-            }
-          }
-        } catch (IOException e) {
-          log.warn("Error attempting to delete write-ahead log " + filename + ": " + e);
-        }
-      }
-    }
-
-    @Override
     public List<ActiveCompaction> getActiveCompactions(TInfo tinfo, TCredentials credentials) throws ThriftSecurityException, TException {
       try {
         checkPermission(credentials, null, "getActiveCompactions");
@@ -1752,7 +1730,7 @@
       }
 
       List<CompactionInfo> compactions = Compactor.getRunningCompactions();
-      List<ActiveCompaction> ret = new ArrayList<ActiveCompaction>(compactions.size());
+      List<ActiveCompaction> ret = new ArrayList<>(compactions.size());
 
       for (CompactionInfo compactionInfo : compactions) {
         ret.add(compactionInfo.toThrift());
@@ -1763,14 +1741,23 @@
 
     @Override
     public List<String> getActiveLogs(TInfo tinfo, TCredentials credentials) throws TException {
-      Set<String> logs = new HashSet<String>();
-      logger.getLogFiles(logs);
-      return new ArrayList<String>(logs);
+      String log = logger.getLogFile();
+      // Might be null if there is no active logger
+      if (null == log) {
+        return Collections.emptyList();
+      }
+      return Collections.singletonList(log);
+    }
+
+    @Override
+    public void removeLogs(TInfo tinfo, TCredentials credentials, List<String> filenames) throws TException {
+      log.warn("Garbage collector is attempting to remove logs through the tablet server");
+      log.warn("This is probably because your file Garbage Collector is an older version than your tablet servers.\nRestart your file Garbage Collector.");
     }
   }
 
   private class SplitRunner implements Runnable {
-    private Tablet tablet;
+    private final Tablet tablet;
 
     public SplitRunner(Tablet tablet) {
       this.tablet = tablet;
@@ -1818,9 +1805,9 @@
     public void run() {
       while (!majorCompactorDisabled) {
         try {
-          UtilWaitThread.sleep(getConfiguration().getTimeInMillis(Property.TSERV_MAJC_DELAY));
+          sleepUninterruptibly(getConfiguration().getTimeInMillis(Property.TSERV_MAJC_DELAY), TimeUnit.MILLISECONDS);
 
-          TreeMap<KeyExtent,Tablet> copyOnlineTablets = new TreeMap<KeyExtent,Tablet>();
+          TreeMap<KeyExtent,Tablet> copyOnlineTablets = new TreeMap<>();
 
           synchronized (onlineTablets) {
             copyOnlineTablets.putAll(onlineTablets); // avoid
@@ -1879,7 +1866,7 @@
           }
         } catch (Throwable t) {
           log.error("Unexpected exception in " + Thread.currentThread().getName(), t);
-          UtilWaitThread.sleep(1000);
+          sleepUninterruptibly(1, TimeUnit.SECONDS);
         }
       }
     }
@@ -1888,7 +1875,7 @@
   private void splitTablet(Tablet tablet) {
     try {
 
-      TreeMap<KeyExtent,SplitInfo> tabletInfo = splitTablet(tablet, null);
+      TreeMap<KeyExtent,TabletData> tabletInfo = splitTablet(tablet, null);
       if (tabletInfo == null) {
         // either split or compact not both
         // were not able to split... so see if a major compaction is
@@ -1904,10 +1891,10 @@
     }
   }
 
-  private TreeMap<KeyExtent,SplitInfo> splitTablet(Tablet tablet, byte[] splitPoint) throws IOException {
+  private TreeMap<KeyExtent,TabletData> splitTablet(Tablet tablet, byte[] splitPoint) throws IOException {
     long t1 = System.currentTimeMillis();
 
-    TreeMap<KeyExtent,SplitInfo> tabletInfo = tablet.split(splitPoint);
+    TreeMap<KeyExtent,TabletData> tabletInfo = tablet.split(splitPoint);
     if (tabletInfo == null) {
       return null;
     }
@@ -1918,11 +1905,11 @@
 
     Tablet[] newTablets = new Tablet[2];
 
-    Entry<KeyExtent,SplitInfo> first = tabletInfo.firstEntry();
+    Entry<KeyExtent,TabletData> first = tabletInfo.firstEntry();
     TabletResourceManager newTrm0 = resourceManager.createTabletResourceManager(first.getKey(), getTableConfiguration(first.getKey()));
     newTablets[0] = new Tablet(TabletServer.this, first.getKey(), newTrm0, first.getValue());
 
-    Entry<KeyExtent,SplitInfo> last = tabletInfo.lastEntry();
+    Entry<KeyExtent,TabletData> last = tabletInfo.lastEntry();
     TabletResourceManager newTrm1 = resourceManager.createTabletResourceManager(last.getKey(), getTableConfiguration(last.getKey()));
     newTablets[1] = new Tablet(TabletServer.this, last.getKey(), newTrm1, last.getValue());
 
@@ -1955,11 +1942,13 @@
 
   private class UnloadTabletHandler implements Runnable {
     private final KeyExtent extent;
-    private final boolean saveState;
+    private final TUnloadTabletGoal goalState;
+    private final long requestTimeSkew;
 
-    public UnloadTabletHandler(KeyExtent extent, boolean saveState) {
+    public UnloadTabletHandler(KeyExtent extent, TUnloadTabletGoal goalState, long requestTime) {
       this.extent = extent;
-      this.saveState = saveState;
+      this.goalState = goalState;
+      this.requestTimeSkew = requestTime - MILLISECONDS.convert(System.nanoTime(), NANOSECONDS);
     }
 
     @Override
@@ -1998,7 +1987,7 @@
       }
 
       try {
-        t.close(saveState);
+        t.close(!goalState.equals(TUnloadTabletGoal.DELETED));
       } catch (Throwable e) {
 
         if ((t.isClosing() || t.isClosed()) && e instanceof IllegalStateException) {
@@ -2019,12 +2008,18 @@
         TServerInstance instance = new TServerInstance(clientAddress, getLock().getSessionId());
         TabletLocationState tls = null;
         try {
-          tls = new TabletLocationState(extent, null, instance, null, null, false);
+          tls = new TabletLocationState(extent, null, instance, null, null, null, false);
         } catch (BadLocationStateException e) {
           log.error("Unexpected error ", e);
         }
-        log.debug("Unassigning " + tls);
-        TabletStateStore.unassign(TabletServer.this, tls);
+        if (!goalState.equals(TUnloadTabletGoal.SUSPENDED) || extent.isRootTablet()
+            || (extent.isMeta() && !getConfiguration().getBoolean(Property.MASTER_METADATA_SUSPENDABLE))) {
+          log.debug("Unassigning " + tls);
+          TabletStateStore.unassign(TabletServer.this, tls, null);
+        } else {
+          log.debug("Suspending " + tls);
+          TabletStateStore.suspend(TabletServer.this, tls, null, requestTimeSkew + MILLISECONDS.convert(System.nanoTime(), NANOSECONDS));
+        }
       } catch (DistributedStoreException ex) {
         log.warn("Unable to update storage", ex);
       } catch (KeeperException e) {
@@ -2093,7 +2088,7 @@
 
       // check Metadata table before accepting assignment
       Text locationToOpen = null;
-      SortedMap<Key,Value> tabletsKeyValues = new TreeMap<Key,Value>();
+      SortedMap<Key,Value> tabletsKeyValues = new TreeMap<>();
       try {
         Pair<Text,KeyExtent> pair = verifyTabletInformation(TabletServer.this, extent, TabletServer.this.getTabletSession(), tabletsKeyValues,
             getClientAddressString(), getLock());
@@ -2104,7 +2099,7 @@
               openingTablets.remove(extent);
               openingTablets.notifyAll();
               // it expected that the new extent will overlap the old one... if it does not, it should not be added to unopenedTablets
-              if (!KeyExtent.findOverlapping(extent, new TreeSet<KeyExtent>(Arrays.asList(pair.getSecond()))).contains(pair.getSecond())) {
+              if (!KeyExtent.findOverlapping(extent, new TreeSet<>(Arrays.asList(pair.getSecond()))).contains(pair.getSecond())) {
                 throw new IllegalStateException("Fixed split does not overlap " + extent + " " + pair.getSecond());
               }
               unopenedTablets.add(pair.getSecond());
@@ -2141,11 +2136,14 @@
         acquireRecoveryMemory(extent);
 
         TabletResourceManager trm = resourceManager.createTabletResourceManager(extent, getTableConfiguration(extent));
+        TabletData data;
+        if (extent.isRootTablet()) {
+          data = new TabletData(fs, ZooReaderWriter.getInstance(), getTableConfiguration(extent));
+        } else {
+          data = new TabletData(extent, fs, tabletsKeyValues.entrySet().iterator());
+        }
 
-        // this opens the tablet file and fills in the endKey in the extent
-        locationToOpen = VolumeUtil.switchRootTabletVolume(extent, locationToOpen);
-
-        tablet = new Tablet(TabletServer.this, extent, locationToOpen, trm, tabletsKeyValues);
+        tablet = new Tablet(TabletServer.this, extent, trm, data);
         // If a minor compaction starts after a tablet opens, this indicates a log recovery occurred. This recovered data must be minor compacted.
         // There are three reasons to wait for this minor compaction to finish before placing the tablet in online tablets.
         //
@@ -2176,8 +2174,8 @@
           log.warn("{}", e.getMessage());
         }
 
-        String table = extent.getTableId().toString();
-        ProblemReports.getInstance(TabletServer.this).report(new ProblemReport(table, TABLET_LOAD, extent.getUUID().toString(), getClientAddressString(), e));
+        String tableId = extent.getTableId();
+        ProblemReports.getInstance(TabletServer.this).report(new ProblemReport(tableId, TABLET_LOAD, extent.getUUID().toString(), getClientAddressString(), e));
       } finally {
         releaseRecoveryMemory(extent);
       }
@@ -2228,29 +2226,6 @@
     }
   }
 
-  public void addLoggersToMetadata(List<DfsLogger> logs, KeyExtent extent, int id) {
-    if (!this.onlineTablets.containsKey(extent)) {
-      log.info("Not adding " + logs.size() + " logs for extent " + extent + " as alias " + id + " tablet is offline");
-      // minor compaction due to recovery... don't make updates... if it finishes, there will be no WALs,
-      // if it doesn't, we'll need to do the same recovery with the old files.
-      return;
-    }
-
-    log.info("Adding " + logs.size() + " logs for extent " + extent + " as alias " + id);
-    long now = RelativeTime.currentTimeMillis();
-    List<String> logSet = new ArrayList<String>();
-    for (DfsLogger log : logs)
-      logSet.add(log.getFileName());
-    LogEntry entry = new LogEntry();
-    entry.extent = extent;
-    entry.tabletId = id;
-    entry.timestamp = now;
-    entry.server = logs.get(0).getLogger();
-    entry.filename = logs.get(0).getFileName();
-    entry.logSet = logSet;
-    MetadataTableUtil.addLogEntry(this, entry, getLock());
-  }
-
   private HostAndPort startServer(AccumuloConfiguration conf, String address, Property portHint, TProcessor processor, String threadName)
       throws UnknownHostException {
     Property maxMessageSizeProperty = (conf.get(Property.TSERV_MAX_MESSAGE_SIZE) != null ? Property.TSERV_MAX_MESSAGE_SIZE : Property.GENERAL_MAX_MESSAGE_SIZE);
@@ -2294,14 +2269,14 @@
 
   private HostAndPort startTabletClientService() throws UnknownHostException {
     // start listening for client connection last
-    ThriftClientHandler handler = new ThriftClientHandler();
-    Iface rpcProxy = RpcWrapper.service(handler, new Processor<Iface>(handler));
+    clientHandler = new ThriftClientHandler();
+    Iface rpcProxy = RpcWrapper.service(clientHandler, new Processor<Iface>(clientHandler));
     final Processor<Iface> processor;
     if (ThriftServerType.SASL == getThriftServerType()) {
       Iface tcredProxy = TCredentialsUpdatingWrapper.service(rpcProxy, ThriftClientHandler.class, getConfiguration());
-      processor = new Processor<Iface>(tcredProxy);
+      processor = new Processor<>(tcredProxy);
     } else {
-      processor = new Processor<Iface>(rpcProxy);
+      processor = new Processor<>(rpcProxy);
     }
     HostAndPort address = startServer(getServerConfigurationFactory().getConfiguration(), clientAddress.getHostText(), Property.TSERV_CLIENTPORT, processor,
         "Thrift Client Server");
@@ -2313,7 +2288,7 @@
     final ReplicationServicerHandler handler = new ReplicationServicerHandler(this);
     ReplicationServicer.Iface rpcProxy = RpcWrapper.service(handler, new ReplicationServicer.Processor<ReplicationServicer.Iface>(handler));
     ReplicationServicer.Iface repl = TCredentialsUpdatingWrapper.service(rpcProxy, handler.getClass(), getConfiguration());
-    ReplicationServicer.Processor<ReplicationServicer.Iface> processor = new ReplicationServicer.Processor<ReplicationServicer.Iface>(repl);
+    ReplicationServicer.Processor<ReplicationServicer.Iface> processor = new ReplicationServicer.Processor<>(repl);
     AccumuloConfiguration conf = getServerConfigurationFactory().getConfiguration();
     Property maxMessageSizeProperty = (conf.get(Property.TSERV_MAX_MESSAGE_SIZE) != null ? Property.TSERV_MAX_MESSAGE_SIZE : Property.GENERAL_MAX_MESSAGE_SIZE);
     ServerAddress sp = TServerUtils.startServer(this, clientAddress.getHostText(), Property.REPLICATION_RECEIPT_SERVICE_PORT, processor,
@@ -2390,7 +2365,7 @@
           return;
         }
         log.info("Waiting for tablet server lock");
-        UtilWaitThread.sleep(5000);
+        sleepUninterruptibly(5, TimeUnit.SECONDS);
       }
       String msg = "Too many retries, exiting.";
       log.info(msg);
@@ -2444,6 +2419,12 @@
       throw new RuntimeException("Failed to start the tablet client service", e1);
     }
     announceExistence();
+    try {
+      walMarker.initWalMarker(getTabletSession());
+    } catch (Exception e) {
+      log.error("Unable to create WAL marker node in zookeeper", e);
+      throw new RuntimeException(e);
+    }
 
     ThreadPoolExecutor distWorkQThreadPool = new SimpleThreadPool(getConfiguration().getCount(Property.TSERV_WORKQ_THREADS), "distributed work queue");
 
@@ -2487,6 +2468,9 @@
     };
     SimpleTimer.getInstance(aconf).schedule(replicationWorkThreadPoolResizer, 10000, 30000);
 
+    final long CLEANUP_BULK_LOADED_CACHE_MILLIS = 15 * 60 * 1000;
+    SimpleTimer.getInstance(aconf).schedule(new BulkImportCacheCleaner(this), CLEANUP_BULK_LOADED_CACHE_MILLIS, CLEANUP_BULK_LOADED_CACHE_MILLIS);
+
     HostAndPort masterHost;
     while (!serverStopRequested) {
       // send all of the pending messages
@@ -2533,7 +2517,7 @@
           }
           returnMasterConnection(iface);
 
-          UtilWaitThread.sleep(1000);
+          sleepUninterruptibly(1, TimeUnit.SECONDS);
         }
       } catch (InterruptedException e) {
         log.info("Interrupt Exception received, shutting down");
@@ -2600,7 +2584,7 @@
     }
 
     try {
-      return new Pair<Text,KeyExtent>(new Text(MetadataTableUtil.getRootTabletDir()), null);
+      return new Pair<>(new Text(MetadataTableUtil.getRootTabletDir()), null);
     } catch (IOException e) {
       throw new AccumuloException(e);
     }
@@ -2621,12 +2605,12 @@
         TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN, TabletsSection.TabletColumnFamily.SPLIT_RATIO_COLUMN,
         TabletsSection.TabletColumnFamily.OLD_PREV_ROW_COLUMN, TabletsSection.ServerColumnFamily.TIME_COLUMN});
 
-    ScannerImpl scanner = new ScannerImpl(context, tableToVerify, Authorizations.EMPTY);
-    scanner.setRange(extent.toMetadataRange());
-
-    TreeMap<Key,Value> tkv = new TreeMap<Key,Value>();
-    for (Entry<Key,Value> entry : scanner)
-      tkv.put(entry.getKey(), entry.getValue());
+    TreeMap<Key,Value> tkv = new TreeMap<>();
+    try (ScannerImpl scanner = new ScannerImpl(context, tableToVerify, Authorizations.EMPTY)) {
+      scanner.setRange(extent.toMetadataRange());
+      for (Entry<Key,Value> entry : scanner)
+        tkv.put(entry.getKey(), entry.getValue());
+    }
 
     // only populate map after success
     if (tabletsKeyValues == null) {
@@ -2662,7 +2646,7 @@
       }
 
       if (!fke.equals(extent)) {
-        return new Pair<Text,KeyExtent>(null, fke);
+        return new Pair<>(null, fke);
       }
 
       // reread and reverify metadata entries now that metadata entries were fixed
@@ -2670,7 +2654,7 @@
       return verifyTabletInformation(context, fke, instance, tabletsKeyValues, clientAddress, lock);
     }
 
-    return new Pair<Text,KeyExtent>(new Text(dir.get()), null);
+    return new Pair<>(new Text(dir.get()), null);
   }
 
   static Value checkTabletMetadata(KeyExtent extent, TServerInstance instance, SortedMap<Key,Value> tabletsKeyValues, Text metadataEntry)
@@ -2783,8 +2767,9 @@
     Runnable contextCleaner = new Runnable() {
       @Override
       public void run() {
-        Set<String> contextProperties = getConfiguration().getAllPropertiesWithPrefix(Property.VFS_CONTEXT_CLASSPATH_PROPERTY).keySet();
-        Set<String> configuredContexts = new HashSet<String>();
+        Set<String> contextProperties = getServerConfigurationFactory().getConfiguration().getAllPropertiesWithPrefix(Property.VFS_CONTEXT_CLASSPATH_PROPERTY)
+            .keySet();
+        Set<String> configuredContexts = new HashSet<>();
         for (String prop : contextProperties) {
           configuredContexts.add(prop.substring(Property.VFS_CONTEXT_CLASSPATH_PROPERTY.name().length()));
         }
@@ -2818,7 +2803,7 @@
         ArrayList<Tablet> tablets;
 
         synchronized (onlineTablets) {
-          tablets = new ArrayList<Tablet>(onlineTablets.values());
+          tablets = new ArrayList<>(onlineTablets.values());
         }
 
         for (Tablet tablet : tablets) {
@@ -2835,12 +2820,12 @@
 
     Map<KeyExtent,Tablet> onlineTabletsCopy;
     synchronized (this.onlineTablets) {
-      onlineTabletsCopy = new HashMap<KeyExtent,Tablet>(this.onlineTablets);
+      onlineTabletsCopy = new HashMap<>(this.onlineTablets);
     }
-    Map<String,TableInfo> tables = new HashMap<String,TableInfo>();
+    Map<String,TableInfo> tables = new HashMap<>();
 
     for (Entry<KeyExtent,Tablet> entry : onlineTabletsCopy.entrySet()) {
-      String tableId = entry.getKey().getTableId().toString();
+      String tableId = entry.getKey().getTableId();
       TableInfo table = tables.get(tableId);
       if (table == null) {
         table = new TableInfo();
@@ -2884,7 +2869,7 @@
       table.scans.running += entry.getValue().get(ScanRunState.RUNNING);
     }
 
-    ArrayList<KeyExtent> offlineTabletsCopy = new ArrayList<KeyExtent>();
+    ArrayList<KeyExtent> offlineTabletsCopy = new ArrayList<>();
     synchronized (this.unopenedTablets) {
       synchronized (this.openingTablets) {
         offlineTabletsCopy.addAll(this.unopenedTablets);
@@ -2893,7 +2878,7 @@
     }
 
     for (KeyExtent extent : offlineTabletsCopy) {
-      String tableId = extent.getTableId().toString();
+      String tableId = extent.getTableId();
       TableInfo table = tables.get(tableId);
       if (table == null) {
         table = new TableInfo();
@@ -2915,6 +2900,9 @@
     result.logSorts = logSorter.getLogSorts();
     result.flushs = flushCounter.get();
     result.syncs = syncCounter.get();
+    result.bulkImports = new ArrayList<>();
+    result.bulkImports.addAll(clientHandler.getBulkLoadStatus());
+    result.bulkImports.addAll(bulkImportStatus.getBulkLoadStatus());
     return result;
   }
 
@@ -2967,6 +2955,7 @@
     Durability durability = getMincEventDurability(tablet.getExtent());
     totalMinorCompactions.incrementAndGet();
     logger.minorCompactionFinished(tablet, newDatafile, walogSeq, durability);
+    markUnusedWALs();
   }
 
   public void minorCompactionStarted(CommitSession tablet, int lastUpdateSequence, String newMapfileLocation) throws IOException {
@@ -2976,8 +2965,8 @@
 
   public void recover(VolumeManager fs, KeyExtent extent, TableConfiguration tconf, List<LogEntry> logEntries, Set<String> tabletFiles,
       MutationReceiver mutationReceiver) throws IOException {
-    List<Path> recoveryLogs = new ArrayList<Path>();
-    List<LogEntry> sorted = new ArrayList<LogEntry>(logEntries);
+    List<Path> recoveryLogs = new ArrayList<>();
+    List<LogEntry> sorted = new ArrayList<>(logEntries);
     Collections.sort(sorted, new Comparator<LogEntry>() {
       @Override
       public int compare(LogEntry e1, LogEntry e2) {
@@ -2986,14 +2975,11 @@
     });
     for (LogEntry entry : sorted) {
       Path recovery = null;
-      for (String log : entry.logSet) {
-        Path finished = RecoveryPath.getRecoveryPath(fs, fs.getFullPath(FileType.WAL, log));
-        finished = SortedLogState.getFinishedMarkerPath(finished);
-        TabletServer.log.info("Looking for " + finished);
-        if (fs.exists(finished)) {
-          recovery = finished.getParent();
-          break;
-        }
+      Path finished = RecoveryPath.getRecoveryPath(fs, fs.getFullPath(FileType.WAL, entry.filename));
+      finished = SortedLogState.getFinishedMarkerPath(finished);
+      TabletServer.log.info("Looking for " + finished);
+      if (fs.exists(finished)) {
+        recovery = finished.getParent();
       }
       if (recovery == null)
         throw new IOException("Unable to find recovery files for extent " + extent + " logEntry: " + entry);
@@ -3011,7 +2997,7 @@
   }
 
   public TableConfiguration getTableConfiguration(KeyExtent extent) {
-    return confFactory.getTableConfiguration(extent.getTableId().toString());
+    return confFactory.getTableConfiguration(extent.getTableId());
   }
 
   public DfsLogger.ServerResources getServerConfig() {
@@ -3030,7 +3016,9 @@
   }
 
   public Collection<Tablet> getOnlineTablets() {
-    return Collections.unmodifiableCollection(onlineTablets.values());
+    synchronized (onlineTablets) {
+      return new ArrayList<>(onlineTablets.values());
+    }
   }
 
   public VolumeManager getFileSystem() {
@@ -3056,4 +3044,86 @@
   public SecurityOperation getSecurityOperation() {
     return security;
   }
+
+  // avoid writing redundant WAL markings to the metadata table
+  final ConcurrentHashMap<DfsLogger,EnumSet<TabletLevel>> metadataTableLogs = new ConcurrentHashMap<>();
+  final Object levelLocks[] = new Object[TabletLevel.values().length];
+
+  {
+    for (int i = 0; i < levelLocks.length; i++) {
+      levelLocks[i] = new Object();
+    }
+  }
+
+  // rolled logs whose metadata entries should be removed once no tablet references them
+  Set<DfsLogger> closedLogs = new HashSet<>();
+
+  private void markUnusedWALs() {
+    Set<DfsLogger> candidates;
+    synchronized (closedLogs) {
+      candidates = new HashSet<>(closedLogs);
+    }
+    for (Tablet tablet : getOnlineTablets()) {
+      candidates.removeAll(tablet.getCurrentLogFiles());
+    }
+    try {
+      TServerInstance session = this.getTabletSession();
+      for (DfsLogger candidate : candidates) {
+        log.info("Marking " + candidate.getPath() + " as unreferenced");
+        walMarker.walUnreferenced(session, candidate.getPath());
+      }
+      synchronized (closedLogs) {
+        closedLogs.removeAll(candidates);
+      }
+    } catch (WalMarkerException ex) {
+      log.info(ex.toString(), ex);
+    }
+  }
+
+  public void addNewLogMarker(DfsLogger copy) throws WalMarkerException {
+    log.info("Writing log marker for " + copy.getPath());
+    walMarker.addNewWalMarker(getTabletSession(), copy.getPath());
+  }
+
+  public void walogClosed(DfsLogger currentLog) throws WalMarkerException {
+    metadataTableLogs.remove(currentLog);
+    synchronized (closedLogs) {
+      closedLogs.add(currentLog);
+    }
+    log.info("Marking " + currentLog.getPath() + " as closed");
+    walMarker.closeWal(getTabletSession(), currentLog.getPath());
+  }
+
+  public void updateBulkImportState(List<String> files, BulkImportState state) {
+    bulkImportStatus.updateBulkImportStatus(files, state);
+  }
+
+  public void removeBulkImportState(List<String> files) {
+    bulkImportStatus.removeBulkImportStatus(files);
+  }
+
+  private static final String MAJC_READ_LIMITER_KEY = "tserv_majc_read";
+  private static final String MAJC_WRITE_LIMITER_KEY = "tserv_majc_write";
+  private final SharedRateLimiterFactory.RateProvider rateProvider = new SharedRateLimiterFactory.RateProvider() {
+    @Override
+    public long getDesiredRate() {
+      return getConfiguration().getMemoryInBytes(Property.TSERV_MAJC_THROUGHPUT);
+    }
+  };
+
+  /**
+   * Get the {@link RateLimiter} for reads during major compactions on this tserver. All reads performed during major compactions are throttled to conform to
+   * this RateLimiter.
+   */
+  public final RateLimiter getMajorCompactionReadLimiter() {
+    return SharedRateLimiterFactory.getInstance().create(MAJC_READ_LIMITER_KEY, rateProvider);
+  }
+
+  /**
+   * Get the {@link RateLimiter} for writes during major compactions on this tserver. All writes performed during major compactions are throttled to conform to
+   * this RateLimiter.
+   */
+  public final RateLimiter getMajorCompactionWriteLimiter() {
+    return SharedRateLimiterFactory.getInstance().create(MAJC_WRITE_LIMITER_KEY, rateProvider);
+  }
 }
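The WAL-marker methods added to TabletServer above follow a simple lifecycle: `addNewLogMarker` when a log is created, `walogClosed` when it is rolled, and `markUnusedWALs` (called after each minor compaction finishes) to retire closed logs that no online tablet still references. A minimal sketch of that bookkeeping, using plain strings in place of `DfsLogger` and a list in place of the `walMarker.walUnreferenced()` call (both stand-ins, not the real types):

```java
import java.util.*;

// Sketch of the closed-WAL bookkeeping: a rolled log is "closed", and once no
// online tablet still references it, it is marked unreferenced and dropped
// from the closed set so it is never marked twice.
class WalBookkeeping {
    final Set<String> closedLogs = new HashSet<>();
    final List<String> unreferenced = new ArrayList<>(); // stand-in for walMarker calls

    void walogClosed(String log) {
        closedLogs.add(log);
    }

    // inUse: union of logs still referenced by online tablets
    void markUnusedWals(Set<String> inUse) {
        Set<String> candidates = new HashSet<>(closedLogs);
        candidates.removeAll(inUse);          // still-referenced logs survive
        unreferenced.addAll(candidates);      // mark the rest unreferenced
        closedLogs.removeAll(candidates);
    }

    public static void main(String[] args) {
        WalBookkeeping wb = new WalBookkeeping();
        wb.walogClosed("wal-1");
        wb.walogClosed("wal-2");
        wb.markUnusedWals(new HashSet<>(Collections.singleton("wal-2")));
        System.out.println(wb.unreferenced);  // prints [wal-1]
    }
}
```

The real code additionally synchronizes on `closedLogs`, since logs are closed concurrently with the minor-compaction callback.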
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
index d2790da..97606ea 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServerResourceManager.java
@@ -42,7 +42,6 @@
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.util.Daemon;
 import org.apache.accumulo.core.util.NamingThreadFactory;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.util.LoggingRunnable;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.fs.FileRef;
@@ -64,6 +63,7 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.annotations.VisibleForTesting;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 /**
  * ResourceManager is responsible for managing the resources of all tablets within a tablet server.
@@ -84,7 +84,7 @@
   private final ExecutorService assignMetaDataPool;
   private final ExecutorService readAheadThreadPool;
   private final ExecutorService defaultReadAheadThreadPool;
-  private final Map<String,ExecutorService> threadPools = new TreeMap<String,ExecutorService>();
+  private final Map<String,ExecutorService> threadPools = new TreeMap<>();
 
   private final ConcurrentHashMap<KeyExtent,RunnableStartedAt> activeAssignments;
 
@@ -207,7 +207,7 @@
 
     assignMetaDataPool = createEs(0, 1, 60, "metadata tablet assignment");
 
-    activeAssignments = new ConcurrentHashMap<KeyExtent,RunnableStartedAt>();
+    activeAssignments = new ConcurrentHashMap<>();
 
     readAheadThreadPool = createEs(Property.TSERV_READ_AHEAD_MAXCONCURRENT, "tablet read ahead");
     defaultReadAheadThreadPool = createEs(Property.TSERV_METADATA_READ_AHEAD_MAXCONCURRENT, "metadata tablets read ahead");
@@ -332,7 +332,7 @@
 
     MemoryManagementFramework() {
       tabletReports = Collections.synchronizedMap(new HashMap<KeyExtent,TabletStateImpl>());
-      memUsageReports = new LinkedBlockingQueue<TabletStateImpl>();
+      memUsageReports = new LinkedBlockingQueue<>();
       maxMem = conf.getConfiguration().getMemoryInBytes(Property.TSERV_MAXMEM);
 
       Runnable r1 = new Runnable() {
@@ -408,7 +408,7 @@
         Map<KeyExtent,TabletStateImpl> tabletReportsCopy = null;
         try {
           synchronized (tabletReports) {
-            tabletReportsCopy = new HashMap<KeyExtent,TabletStateImpl>(tabletReports);
+            tabletReportsCopy = new HashMap<>(tabletReports);
           }
           ArrayList<TabletState> tabletStates = new ArrayList<TabletState>(tabletReportsCopy.values());
           mma = memoryManager.getMemoryManagementActions(tabletStates);
@@ -454,7 +454,7 @@
          log.error("Minor compactions for memory management failed", t);
         }
 
-        UtilWaitThread.sleep(250);
+        sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
       }
     }
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/WriteTracker.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/WriteTracker.java
index db02526..c999c09 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/WriteTracker.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/WriteTracker.java
@@ -39,7 +39,7 @@
   private static final Logger log = LoggerFactory.getLogger(WriteTracker.class);
 
   private static final AtomicLong operationCounter = new AtomicLong(1);
-  private final Map<TabletType,TreeSet<Long>> inProgressWrites = new EnumMap<TabletType,TreeSet<Long>>(TabletType.class);
+  private final Map<TabletType,TreeSet<Long>> inProgressWrites = new EnumMap<>(TabletType.class);
 
   WriteTracker() {
     for (TabletType ttype : TabletType.values()) {
@@ -87,7 +87,7 @@
     if (keySet.size() == 0)
       return -1;
 
-    List<KeyExtent> extents = new ArrayList<KeyExtent>(keySet.size());
+    List<KeyExtent> extents = new ArrayList<>(keySet.size());
 
     for (Tablet tablet : keySet)
       extents.add(tablet.getExtent());
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionPlan.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionPlan.java
index 8f98761..845d779 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionPlan.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionPlan.java
@@ -31,8 +31,8 @@
  * create the resulting output file.
  */
 public class CompactionPlan {
-  public final List<FileRef> inputFiles = new ArrayList<FileRef>();
-  public final List<FileRef> deleteFiles = new ArrayList<FileRef>();
+  public final List<FileRef> inputFiles = new ArrayList<>();
+  public final List<FileRef> deleteFiles = new ArrayList<>();
   public WriteParameters writeParameters = null;
 
   @Override
@@ -67,8 +67,8 @@
    *           thrown when validation fails.
    */
   public final void validate(Set<FileRef> allFiles) {
-    Set<FileRef> inputSet = new HashSet<FileRef>(inputFiles);
-    Set<FileRef> deleteSet = new HashSet<FileRef>(deleteFiles);
+    Set<FileRef> inputSet = new HashSet<>(inputFiles);
+    Set<FileRef> deleteSet = new HashSet<>(deleteFiles);
 
     if (!allFiles.containsAll(inputSet)) {
       inputSet.removeAll(allFiles);
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategy.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategy.java
index 1f0dc3a..faf9534 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategy.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategy.java
@@ -60,16 +60,16 @@
   private List<FileRef> findMapFilesToCompact(MajorCompactionRequest request) {
     MajorCompactionReason reason = request.getReason();
     if (reason == MajorCompactionReason.USER) {
-      return new ArrayList<FileRef>(request.getFiles().keySet());
+      return new ArrayList<>(request.getFiles().keySet());
     }
     if (reason == MajorCompactionReason.CHOP) {
       // should not happen, but this is safe
-      return new ArrayList<FileRef>(request.getFiles().keySet());
+      return new ArrayList<>(request.getFiles().keySet());
     }
 
     if (request.getFiles().size() <= 1)
       return null;
-    TreeSet<CompactionFile> candidateFiles = new TreeSet<CompactionFile>(new Comparator<CompactionFile>() {
+    TreeSet<CompactionFile> candidateFiles = new TreeSet<>(new Comparator<CompactionFile>() {
       @Override
       public int compare(CompactionFile o1, CompactionFile o2) {
         if (o1 == o2)
@@ -95,7 +95,7 @@
       totalSize += mfi.size;
     }
 
-    List<FileRef> files = new ArrayList<FileRef>();
+    List<FileRef> files = new ArrayList<>();
 
     while (candidateFiles.size() > 1) {
       CompactionFile max = candidateFiles.last();
@@ -121,7 +121,7 @@
 
     if (files.size() < totalFilesToCompact) {
 
-      TreeMap<FileRef,Long> tfc = new TreeMap<FileRef,Long>();
+      TreeMap<FileRef,Long> tfc = new TreeMap<>();
       for (Entry<FileRef,DataFileValue> entry : request.getFiles().entrySet()) {
         tfc.put(entry.getKey(), entry.getValue().getSize());
       }
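For context on the surrounding `findMapFilesToCompact` logic: the default strategy keeps a size-ordered candidate set and repeatedly drops the largest file while it dwarfs the rest, so that only files of comparable size are rewritten together. A hedged sketch of that ratio test, with `select` and `ratio` as illustrative names (the real code works on `FileRef`/`DataFileValue` maps and the table's major-compaction ratio property):

```java
import java.util.*;

// Sketch of ratio-based file selection: drop the largest file (and shrink the
// running total) until largest * ratio <= total of the remaining set.
class RatioSelect {
    static List<Long> select(List<Long> sizes, double ratio) {
        List<Long> sorted = new ArrayList<>(sizes);
        sorted.sort(Comparator.reverseOrder());           // largest first
        long total = 0;
        for (long s : sorted)
            total += s;
        int i = 0;
        while (sorted.size() - i > 1 && sorted.get(i) * ratio > total) {
            total -= sorted.get(i);                       // drop the dominating file
            i++;
        }
        return sorted.subList(i, sorted.size());
    }

    public static void main(String[] args) {
        // The 100-unit file dominates (100 * 2 > 127), so it is excluded
        System.out.println(select(Arrays.asList(100L, 10L, 9L, 8L), 2.0)); // prints [10, 9, 8]
    }
}
```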
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/MajorCompactionRequest.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/MajorCompactionRequest.java
index c6733f8..08bff26 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/MajorCompactionRequest.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/MajorCompactionRequest.java
@@ -56,15 +56,6 @@
     this.files = mcr.files;
   }
 
-  /**
-   * @return org.apache.accumulo.core.data.KeyExtent
-   * @deprecated since 1.7. Use {@link #getTabletId()} instead.
-   */
-  @Deprecated
-  public org.apache.accumulo.core.data.KeyExtent getExtent() {
-    return new org.apache.accumulo.core.data.KeyExtent(extent.getTableId(), extent.getEndRow(), extent.getPrevEndRow());
-  }
-
   public TabletId getTabletId() {
     return new TabletIdImpl(extent);
   }
@@ -86,7 +77,8 @@
     // @TODO ensure these files are always closed?
     FileOperations fileFactory = FileOperations.getInstance();
     FileSystem ns = volumeManager.getVolumeByPath(ref.path()).getFileSystem();
-    FileSKVIterator openReader = fileFactory.openReader(ref.path().toString(), true, ns, ns.getConf(), tableConfig);
+    FileSKVIterator openReader = fileFactory.newReaderBuilder().forFile(ref.path().toString(), ns, ns.getConf()).withTableConfiguration(tableConfig)
+        .seekToBeginning().build();
     return openReader;
   }
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategy.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategy.java
index 6cc9025..69e3269 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategy.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategy.java
@@ -39,7 +39,7 @@
   }
 
   private MajorCompactionRequest filterFiles(MajorCompactionRequest mcr) {
-    Map<FileRef,DataFileValue> filteredFiles = new HashMap<FileRef,DataFileValue>();
+    Map<FileRef,DataFileValue> filteredFiles = new HashMap<>();
     for (Entry<FileRef,DataFileValue> entry : mcr.getFiles().entrySet()) {
       if (entry.getValue().getSize() <= limit) {
         filteredFiles.put(entry.getKey(), entry.getValue());
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategy.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategy.java
index b97b88b..04915ef 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategy.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategy.java
@@ -26,7 +26,10 @@
 import java.util.regex.Pattern;
 
 import org.apache.accumulo.core.compaction.CompactionSettings;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.file.FileSKVIterator;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.tserver.compaction.CompactionPlan;
 import org.apache.accumulo.tserver.compaction.CompactionStrategy;
@@ -40,6 +43,22 @@
     boolean shouldCompact(Entry<FileRef,DataFileValue> file, MajorCompactionRequest request);
   }
 
+  private static class NoSampleTest implements Test {
+
+    @Override
+    public boolean shouldCompact(Entry<FileRef,DataFileValue> file, MajorCompactionRequest request) {
+      try (FileSKVIterator reader = request.openReader(file.getKey())) {
+        SamplerConfigurationImpl sc = SamplerConfigurationImpl.newSamplerConfig(new ConfigurationCopy(request.getTableProperties()));
+        if (sc == null) {
+          return false;
+        }
+        return reader.getSample(sc) == null;
+      } catch (IOException e) {
+        throw new RuntimeException(e);
+      }
+    }
+  }
+
   private static abstract class FileSizeTest implements Test {
     private final long esize;
 
@@ -83,6 +102,9 @@
     for (Entry<String,String> entry : es) {
 
       switch (CompactionSettings.valueOf(entry.getKey())) {
+        case SF_NO_SAMPLE:
+          tests.add(new NoSampleTest());
+          break;
         case SF_LT_ESIZE_OPT:
           tests.add(new FileSizeTest(entry.getValue()) {
             @Override
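The new `SF_NO_SAMPLE` setting selects files to recompact because they lack sample data for the table's current sampler configuration. The decision `NoSampleTest` makes can be reduced to this sketch (stand-in types and parameters; the real test opens the file and calls `reader.getSample(sc)` under try-with-resources):

```java
// Sketch of the NoSampleTest decision: if the table has no sampler configured,
// never select a file for this reason; otherwise select exactly the files that
// carry no sample matching the current configuration.
class NoSampleCheck {
    static boolean shouldCompact(Object samplerConfig, boolean fileHasMatchingSample) {
        if (samplerConfig == null)
            return false;               // sampling disabled for the table
        return !fileHasMatchingSample;  // recompact files missing the sample
    }

    public static void main(String[] args) {
        System.out.println(shouldCompact(null, false));           // prints false
        System.out.println(shouldCompact("hash-sampler", false)); // prints true
    }
}
```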
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java
index 13e7c4f..9d065c7 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/constraints/ConstraintChecker.java
@@ -47,7 +47,7 @@
   private AtomicLong lastCheck = new AtomicLong(0);
 
   public ConstraintChecker(TableConfiguration conf) {
-    constrains = new ArrayList<Constraint>();
+    constrains = new ArrayList<>();
 
     this.conf = conf;
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java
index a36463d..5280d41 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java
@@ -62,6 +62,8 @@
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSOutputStream;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -72,11 +74,12 @@
  * Wrap a connection to a logger.
  *
  */
-public class DfsLogger {
+public class DfsLogger implements Comparable<DfsLogger> {
   public static final String LOG_FILE_HEADER_V2 = "--- Log File Header (v2) ---";
   public static final String LOG_FILE_HEADER_V3 = "--- Log File Header (v3) ---";
 
   private static final Logger log = LoggerFactory.getLogger(DfsLogger.class);
+  private static final DatanodeInfo[] EMPTY_PIPELINE = new DatanodeInfo[0];
 
   public static class LogClosedException extends IOException {
     private static final long serialVersionUID = 1L;
@@ -131,7 +134,7 @@
     VolumeManager getFileSystem();
   }
 
-  private final LinkedBlockingQueue<DfsLogger.LogWork> workQueue = new LinkedBlockingQueue<DfsLogger.LogWork>();
+  private final LinkedBlockingQueue<DfsLogger.LogWork> workQueue = new LinkedBlockingQueue<>();
 
   private final Object closeLock = new Object();
 
@@ -142,10 +145,11 @@
   private boolean closed = false;
 
   private class LogSyncingTask implements Runnable {
+    private int expectedReplication = 0;
 
     @Override
     public void run() {
-      ArrayList<DfsLogger.LogWork> work = new ArrayList<DfsLogger.LogWork>();
+      ArrayList<DfsLogger.LogWork> work = new ArrayList<>();
       boolean sawClosedMarker = false;
       while (!sawClosedMarker) {
         work.clear();
@@ -176,6 +180,7 @@
           }
         }
 
+        long start = System.currentTimeMillis();
         try {
           if (durabilityMethod != null) {
             durabilityMethod.invoke(logFile);
@@ -186,9 +191,30 @@
             }
           }
         } catch (Exception ex) {
-          log.warn("Exception syncing " + ex);
-          for (DfsLogger.LogWork logWork : work) {
-            logWork.exception = ex;
+          fail(work, ex, "syncing");
+        }
+        long duration = System.currentTimeMillis() - start;
+        if (duration > slowFlushMillis) {
+          String msg = new StringBuilder(128).append("Slow sync cost: ").append(duration).append(" ms, current pipeline: ")
+              .append(Arrays.toString(getPipeLine())).toString();
+          log.info(msg);
+          if (expectedReplication > 0) {
+            int current = expectedReplication;
+            try {
+              current = ((DFSOutputStream) logFile.getWrappedStream()).getCurrentBlockReplication();
+            } catch (IOException e) {
+              fail(work, e, "getting replication level");
+            }
+            if (current < expectedReplication) {
+              fail(work, new IOException("replication of " + current + " is less than " + expectedReplication), "replication check");
+            }
+          }
+        }
+        if (expectedReplication == 0 && logFile.getWrappedStream() instanceof DFSOutputStream) {
+          try {
+            expectedReplication = ((DFSOutputStream) logFile.getWrappedStream()).getCurrentBlockReplication();
+          } catch (IOException e) {
+            fail(work, e, "getting replication level");
           }
         }
 
@@ -199,6 +225,13 @@
             logWork.latch.countDown();
       }
     }
+
+    private void fail(ArrayList<DfsLogger.LogWork> work, Exception ex, String why) {
+      log.warn("Exception " + why + " " + ex);
+      for (DfsLogger.LogWork logWork : work) {
+        logWork.exception = ex;
+      }
+    }
   }
 
   private static class LogWork {
@@ -279,9 +312,15 @@
   private String metaReference;
   private AtomicLong syncCounter;
   private AtomicLong flushCounter;
+  private final long slowFlushMillis;
+
+  private DfsLogger(ServerResources conf) {
+    this.conf = conf;
+    this.slowFlushMillis = conf.getConfiguration().getTimeInMillis(Property.TSERV_SLOW_FLUSH_MILLIS);
+  }
 
   public DfsLogger(ServerResources conf, AtomicLong syncCounter, AtomicLong flushCounter) throws IOException {
-    this.conf = conf;
+    this(conf);
     this.syncCounter = syncCounter;
     this.flushCounter = flushCounter;
   }
@@ -293,7 +332,7 @@
    *          the cq for the "log" entry in +r/!0
    */
   public DfsLogger(ServerResources conf, String filename, String meta) throws IOException {
-    this.conf = conf;
+    this(conf);
     this.logPath = filename;
     metaReference = meta;
   }
@@ -336,7 +375,7 @@
 
           // If it's null, we won't have any parameters whatsoever. First, let's attempt to read
           // parameters
-          Map<String,String> opts = new HashMap<String,String>();
+          Map<String,String> opts = new HashMap<>();
           int count = input.readInt();
           for (int i = 0; i < count; i++) {
             String key = input.readUTF();
@@ -383,8 +422,17 @@
     return new DFSLoggerInputStreams(input, decryptingInput);
   }
 
+  /**
+   * Opens a Write-Ahead Log file and writes the necessary header information and OPEN entry to the file. The file is ready to be used for ingest if this method
+   * returns successfully. If an exception is thrown from this method, it is the caller's responsibility to ensure that {@link #close()} is called to prevent
+   * leaking the file handle and/or syncing thread.
+   *
+   * @param address
+   *          The address of the host using this WAL
+   */
   public synchronized void open(String address) throws IOException {
     String filename = UUID.randomUUID().toString();
+    log.debug("Address is " + address);
     String logger = Joiner.on("+").join(address.split(":"));
 
     log.debug("DfsLogger.open() begin");
@@ -394,6 +442,7 @@
         + Path.SEPARATOR + filename;
 
     metaReference = toString();
+    LoggerOperation op = null;
     try {
       short replication = (short) conf.getConfiguration().getCount(Property.TSERV_WAL_REPLICATION);
       if (replication == 0)
@@ -405,7 +454,6 @@
         logFile = fs.createSyncable(new Path(logPath), 0, replication, blockSize);
       else
         logFile = fs.create(new Path(logPath), true, 0, replication, blockSize);
-
       sync = logFile.getClass().getMethod("hsync");
       flush = logFile.getClass().getMethod("hflush");
 
@@ -443,8 +491,7 @@
       key.event = OPEN;
       key.tserverSession = filename;
       key.filename = filename;
-      write(key, EMPTY);
-      log.debug("Got new write-ahead log: " + this);
+      op = logFileData(Collections.singletonList(new Pair<>(key, EMPTY)), Durability.SYNC);
     } catch (Exception ex) {
       if (logFile != null)
         logFile.close();
@@ -456,6 +503,8 @@
     syncThread = new Daemon(new LoggingRunnable(log, new LogSyncingTask()));
     syncThread.setName("Accumulo WALog thread " + toString());
     syncThread.start();
+    op.await();
+    log.debug("Got new write-ahead log: " + this);
   }
 
   @Override
@@ -477,7 +526,11 @@
   }
 
   public String getFileName() {
-    return logPath.toString();
+    return logPath;
+  }
+
+  public Path getPath() {
+    return new Path(logPath);
   }
 
   public void close() throws IOException {
@@ -575,7 +628,7 @@
 
   public LoggerOperation logManyTablets(List<TabletMutations> mutations) throws IOException {
     Durability durability = Durability.NONE;
-    List<Pair<LogFileKey,LogFileValue>> data = new ArrayList<Pair<LogFileKey,LogFileValue>>();
+    List<Pair<LogFileKey,LogFileValue>> data = new ArrayList<>();
     for (TabletMutations tabletMutations : mutations) {
       LogFileKey key = new LogFileKey();
       key.event = MANY_MUTATIONS;
@@ -583,7 +636,7 @@
       key.tid = tabletMutations.getTid();
       LogFileValue value = new LogFileValue();
       value.mutations = tabletMutations.getMutations();
-      data.add(new Pair<LogFileKey,LogFileValue>(key, value));
+      data.add(new Pair<>(key, value));
       if (tabletMutations.getDurability().ordinal() > durability.ordinal()) {
         durability = tabletMutations.getDurability();
       }
@@ -606,7 +659,7 @@
     key.event = COMPACTION_FINISH;
     key.seq = seq;
     key.tid = tid;
-    return logFileData(Collections.singletonList(new Pair<LogFileKey,LogFileValue>(key, EMPTY)), durability);
+    return logFileData(Collections.singletonList(new Pair<>(key, EMPTY)), durability);
   }
 
   public LoggerOperation minorCompactionStarted(int seq, int tid, String fqfn, Durability durability) throws IOException {
@@ -615,7 +668,7 @@
     key.seq = seq;
     key.tid = tid;
     key.filename = fqfn;
-    return logFileData(Collections.singletonList(new Pair<LogFileKey,LogFileValue>(key, EMPTY)), durability);
+    return logFileData(Collections.singletonList(new Pair<>(key, EMPTY)), durability);
   }
 
   public String getLogger() {
@@ -623,4 +676,30 @@
     return Joiner.on(":").join(parts[parts.length - 2].split("[+]"));
   }
 
+  @Override
+  public int compareTo(DfsLogger o) {
+    return getFileName().compareTo(o.getFileName());
+  }
+
+  /*
+   * The following method was shamelessly lifted from HBASE-11240 (sans reflection). Thanks HBase!
+   */
+
+  /**
+   * This method gets the pipeline for the current walog.
+   *
+   * @return non-null array of DatanodeInfo
+   */
+  DatanodeInfo[] getPipeLine() {
+    if (null != logFile) {
+      OutputStream os = logFile.getWrappedStream();
+      if (os instanceof DFSOutputStream) {
+        return ((DFSOutputStream) os).getPipeline();
+      }
+    }
+
+    // Don't have a pipeline or can't figure it out.
+    return EMPTY_PIPELINE;
+  }
+
 }
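The slow-sync changes to `LogSyncingTask` above time each `hsync`/`hflush`, log syncs that exceed `TSERV_SLOW_FLUSH_MILLIS` along with the current pipeline, and fail outstanding work if the block's replication has dropped below the level first observed. A condensed sketch of that decision logic (the `check` method and its string results are hypothetical; the real code also handles the `IOException` that `getCurrentBlockReplication()` can throw):

```java
// Sketch of the slow-sync/replication check: remember the initial replication,
// flag syncs slower than the threshold, and fail when replication has degraded.
class SlowSyncCheck {
    final long slowFlushMillis;
    int expectedReplication = 0;

    SlowSyncCheck(long slowFlushMillis) {
        this.slowFlushMillis = slowFlushMillis;
    }

    // syncDurationMillis: measured wall time of the hsync/hflush call
    // currentReplication: replication reported by the wrapped DFSOutputStream
    String check(long syncDurationMillis, int currentReplication) {
        if (syncDurationMillis > slowFlushMillis) {
            if (expectedReplication > 0 && currentReplication < expectedReplication)
                return "fail: replication " + currentReplication + " < " + expectedReplication;
            return "slow sync: " + syncDurationMillis + " ms";
        }
        if (expectedReplication == 0)
            expectedReplication = currentReplication;  // remember initial pipeline size
        return "ok";
    }

    public static void main(String[] args) {
        SlowSyncCheck c = new SlowSyncCheck(100);
        System.out.println(c.check(10, 3));   // prints ok (records replication 3)
        System.out.println(c.check(250, 2));  // prints fail: replication 2 < 3
    }
}
```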
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java
index f152f9c..11097ce 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java
@@ -132,7 +132,7 @@
         final long bufferSize = conf.getMemoryInBytes(Property.TSERV_SORT_BUFFER_SIZE);
         Thread.currentThread().setName("Sorting " + name + " for recovery");
         while (true) {
-          final ArrayList<Pair<LogFileKey,LogFileValue>> buffer = new ArrayList<Pair<LogFileKey,LogFileValue>>();
+          final ArrayList<Pair<LogFileKey,LogFileValue>> buffer = new ArrayList<>();
           try {
             long start = input.getPos();
             while (input.getPos() - start < bufferSize) {
@@ -140,7 +140,7 @@
               LogFileValue value = new LogFileValue();
               key.readFields(decryptingInput);
               value.readFields(decryptingInput);
-              buffer.add(new Pair<LogFileKey,LogFileValue>(key, value));
+              buffer.add(new Pair<>(key, value));
             }
             writeBuffer(destPath, buffer, part++);
             buffer.clear();
@@ -236,7 +236,7 @@
   }
 
   public List<RecoveryStatus> getLogSorts() {
-    List<RecoveryStatus> result = new ArrayList<RecoveryStatus>();
+    List<RecoveryStatus> result = new ArrayList<>();
     synchronized (currentWork) {
       for (Entry<String,LogProcessor> entries : currentWork.entrySet()) {
         RecoveryStatus status = new RecoveryStatus();
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/SortedLogRecovery.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/SortedLogRecovery.java
index 37882cd..3d403e0 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/SortedLogRecovery.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/SortedLogRecovery.java
@@ -69,7 +69,7 @@
 
   private enum Status {
     INITIAL, LOOKING_FOR_FINISH, COMPLETE
-  };
+  }
 
   private static class LastStartToFinish {
     long lastStart = -1;
@@ -152,7 +152,7 @@
   int findLastStartToFinish(MultiReader reader, int fileno, KeyExtent extent, Set<String> tabletFiles, LastStartToFinish lastStartToFinish) throws IOException,
       EmptyMapFileException, UnusedException {
 
-    HashSet<String> suffixes = new HashSet<String>();
+    HashSet<String> suffixes = new HashSet<>();
     for (String path : tabletFiles)
       suffixes.add(getPathSuffix(path));
 
@@ -180,7 +180,7 @@
     // find the maximum tablet id... because a tablet may leave a tserver and then come back, in which case it would have a different tablet id
     // for the maximum tablet id, find the minimum sequence #... may be ok to find the max seq, but just want to make the code behave like it used to
     while (reader.next(key, value)) {
-      // LogReader.printEntry(entry);
+      // log.debug("Event " + key.event + " tablet " + key.tablet);
       if (key.event != DEFINE_TABLET)
         break;
       if (key.tablet.equals(extent) || key.tablet.equals(alternative)) {
@@ -209,7 +209,7 @@
         if (lastStartToFinish.compactionStatus == Status.INITIAL)
           lastStartToFinish.compactionStatus = Status.COMPLETE;
         if (key.seq <= lastStartToFinish.lastStart)
-          throw new RuntimeException("Sequence numbers are not increasing for start/stop events.");
+          throw new RuntimeException("Sequence numbers are not increasing for start/stop events: " + key.seq + " vs " + lastStartToFinish.lastStart);
         lastStartToFinish.update(fileno, key.seq);
 
         // Tablet server finished the minor compaction, but didn't remove the entry from the METADATA table.
@@ -218,7 +218,7 @@
           lastStartToFinish.update(-1);
       } else if (key.event == COMPACTION_FINISH) {
         if (key.seq <= lastStartToFinish.lastStart)
-          throw new RuntimeException("Sequence numbers are not increasing for start/stop events.");
+          throw new RuntimeException("Sequence numbers are not increasing for start/stop events: " + key.seq + " vs " + lastStartToFinish.lastStart);
         if (lastStartToFinish.compactionStatus == Status.INITIAL)
           lastStartToFinish.compactionStatus = Status.LOOKING_FOR_FINISH;
         else if (lastStartToFinish.lastFinish > lastStartToFinish.lastStart)
@@ -249,8 +249,6 @@
         break;
       if (key.tid != tid)
         break;
-      // log.info("Replaying " + key);
-      // log.info(value);
       if (key.event == MUTATION) {
         mr.receive(value.mutations.get(0));
       } else if (key.event == MANY_MUTATIONS) {
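The improved exception messages above now report both offending sequence numbers. A minimal sketch of that monotonicity check, reduced to a standalone class (names are illustrative, not Accumulo's `LastStartToFinish`):

```java
// Sketch of the start/finish sequence check in SortedLogRecovery: each
// start or finish event must carry a sequence number strictly greater than
// the last start seen, otherwise recovery aborts, and the message now
// includes both values so the failure can be diagnosed from the log alone.
public class SeqCheck {
  private long lastStart = -1;

  void onEvent(long seq) {
    if (seq <= lastStart) {
      throw new RuntimeException(
          "Sequence numbers are not increasing for start/stop events: " + seq + " vs " + lastStart);
    }
    lastStart = seq;
  }

  public static void main(String[] args) {
    SeqCheck c = new SeqCheck();
    c.onEvent(5);
    c.onEvent(7);
    try {
      c.onEvent(6); // out of order: message names both 6 and 7
    } catch (RuntimeException e) {
      System.out.println(e.getMessage());
    }
  }
}
```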
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/TabletServerLogger.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/TabletServerLogger.java
index bb8ae6f..a4cd6b0 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/TabletServerLogger.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/TabletServerLogger.java
@@ -16,18 +16,23 @@
  */
 package org.apache.accumulo.tserver.log;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
-import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.concurrent.SynchronousQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
 import java.util.concurrent.locks.ReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 
@@ -36,7 +41,8 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.protobuf.ProtobufUtil;
 import org.apache.accumulo.core.replication.ReplicationConfigurationUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
+import org.apache.accumulo.core.util.SimpleThreadPool;
+import org.apache.accumulo.fate.util.LoggingRunnable;
 import org.apache.accumulo.fate.zookeeper.Retry;
 import org.apache.accumulo.fate.zookeeper.RetryFactory;
 import org.apache.accumulo.server.conf.TableConfiguration;
@@ -49,6 +55,7 @@
 import org.apache.accumulo.tserver.TabletMutations;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.log.DfsLogger.LoggerOperation;
+import org.apache.accumulo.tserver.log.DfsLogger.ServerResources;
 import org.apache.accumulo.tserver.tablet.CommitSession;
 import org.apache.hadoop.fs.Path;
 import org.slf4j.Logger;
@@ -71,20 +78,22 @@
 
   private final TabletServer tserver;
 
-  // The current log set: always updated to a new set with every change of loggers
-  private final List<DfsLogger> loggers = new ArrayList<DfsLogger>();
+  // The current logger
+  private DfsLogger currentLog = null;
+  private final SynchronousQueue<Object> nextLog = new SynchronousQueue<>();
+  private ThreadPoolExecutor nextLogMaker;
 
-  // The current generation of logSet.
-  // Because multiple threads can be using a log set at one time, a log
+  // The current generation of logs.
+  // Because multiple threads can be using a log at one time, a log
   // failure is likely to affect multiple threads, who will all attempt to
-  // create a new logSet. This will cause many unnecessary updates to the
+  // create a new log. This will cause many unnecessary updates to the
   // metadata table.
   // We'll use this generational counter to determine if another thread has
-  // already fetched a new logSet.
-  private AtomicInteger logSetId = new AtomicInteger();
+  // already fetched a new log.
+  private final AtomicInteger logId = new AtomicInteger();
 
   // Use a ReadWriteLock to allow multiple threads to use the log set, but obtain a write lock to change them
-  private final ReentrantReadWriteLock logSetLock = new ReentrantReadWriteLock();
+  private final ReentrantReadWriteLock logIdLock = new ReentrantReadWriteLock();
 
   private final AtomicInteger seqGen = new AtomicInteger();
 
@@ -149,69 +158,81 @@
     this.maxAge = maxAge;
   }
 
-  private int initializeLoggers(final List<DfsLogger> copy) throws IOException {
-    final int[] result = {-1};
-    testLockAndRun(logSetLock, new TestCallWithWriteLock() {
+  private DfsLogger initializeLoggers(final AtomicInteger logIdOut) throws IOException {
+    final AtomicReference<DfsLogger> result = new AtomicReference<>();
+    testLockAndRun(logIdLock, new TestCallWithWriteLock() {
       @Override
       boolean test() {
-        copy.clear();
-        copy.addAll(loggers);
-        if (!loggers.isEmpty())
-          result[0] = logSetId.get();
-        return loggers.isEmpty();
+        result.set(currentLog);
+        if (currentLog != null)
+          logIdOut.set(logId.get());
+        return currentLog == null;
       }
 
       @Override
       void withWriteLock() throws IOException {
         try {
-          createLoggers();
-          copy.clear();
-          copy.addAll(loggers);
-          if (copy.size() > 0)
-            result[0] = logSetId.get();
+          createLogger();
+          result.set(currentLog);
+          if (currentLog != null)
+            logIdOut.set(logId.get());
           else
-            result[0] = -1;
+            logIdOut.set(-1);
         } catch (IOException e) {
           log.error("Unable to create loggers", e);
         }
       }
     });
-    return result[0];
+    return result.get();
   }
 
-  public void getLogFiles(Set<String> loggersOut) {
-    logSetLock.readLock().lock();
+  /**
+   * Get the current WAL file
+   *
+   * @return The name of the current log, or null if there is no current log.
+   */
+  public String getLogFile() {
+    logIdLock.readLock().lock();
     try {
-      for (DfsLogger logger : loggers) {
-        loggersOut.add(logger.getFileName());
+      if (null == currentLog) {
+        return null;
       }
+      return currentLog.getFileName();
     } finally {
-      logSetLock.readLock().unlock();
+      logIdLock.readLock().unlock();
     }
   }
 
-  synchronized private void createLoggers() throws IOException {
-    if (!logSetLock.isWriteLockedByCurrentThread()) {
+  synchronized private void createLogger() throws IOException {
+    if (!logIdLock.isWriteLockedByCurrentThread()) {
       throw new IllegalStateException("createLoggers should be called with write lock held!");
     }
 
-    if (loggers.size() != 0) {
-      throw new IllegalStateException("createLoggers should not be called when loggers.size() is " + loggers.size());
+    if (currentLog != null) {
+      throw new IllegalStateException("createLogger should not be called when current log is set");
     }
 
     try {
-      DfsLogger alog = new DfsLogger(tserver.getServerConfig(), syncCounter, flushCounter);
-      alog.open(tserver.getClientAddressString());
-      loggers.add(alog);
-      logSetId.incrementAndGet();
-
-      // When we successfully create a WAL, make sure to reset the Retry.
-      if (null != retry) {
-        retry = null;
+      startLogMaker();
+      Object next = nextLog.take();
+      if (next instanceof Exception) {
+        throw (Exception) next;
       }
+      if (next instanceof DfsLogger) {
+        currentLog = (DfsLogger) next;
+        logId.incrementAndGet();
+        log.info("Using next log " + currentLog.getFileName());
 
-      this.createTime = System.currentTimeMillis();
-      return;
+        // When we successfully create a WAL, make sure to reset the Retry.
+        if (null != retry) {
+          retry = null;
+        }
+
+        this.createTime = System.currentTimeMillis();
+        return;
+      } else {
+        throw new RuntimeException("Error: unexpected type seen: " + next);
+      }
     } catch (Exception t) {
       if (null == retry) {
         retry = retryFactory.create();
@@ -240,22 +261,87 @@
     }
   }
 
+  private synchronized void startLogMaker() {
+    if (nextLogMaker != null) {
+      return;
+    }
+    nextLogMaker = new SimpleThreadPool(1, "WALog creator");
+    nextLogMaker.submit(new LoggingRunnable(log, new Runnable() {
+      @Override
+      public void run() {
+        final ServerResources conf = tserver.getServerConfig();
+        final VolumeManager fs = conf.getFileSystem();
+        while (!nextLogMaker.isShutdown()) {
+          DfsLogger alog = null;
+          try {
+            log.debug("Creating next WAL");
+            alog = new DfsLogger(conf, syncCounter, flushCounter);
+            alog.open(tserver.getClientAddressString());
+            String fileName = alog.getFileName();
+            log.debug("Created next WAL " + fileName);
+            tserver.addNewLogMarker(alog);
+            while (!nextLog.offer(alog, 12, TimeUnit.HOURS)) {
+              log.info("Our WAL was not used for 12 hours: " + fileName);
+            }
+          } catch (Exception t) {
+            log.error("Failed to open WAL", t);
+            if (null != alog) {
+              // It's possible that the sync of the header and OPEN record to the WAL failed
+              // We want to make sure that we clean up the resources/thread inside the DfsLogger
+              // object before trying to create a new one.
+              try {
+                alog.close();
+              } catch (Exception e) {
+                log.error("Failed to close WAL after it failed to open", e);
+              }
+              // Try to avoid leaving a bunch of empty WALs lying around
+              try {
+                Path path = alog.getPath();
+                if (fs.exists(path)) {
+                  fs.delete(path);
+                }
+              } catch (Exception e) {
+                log.warn("Failed to delete a WAL that failed to open", e);
+              }
+            }
+            try {
+              nextLog.offer(t, 12, TimeUnit.HOURS);
+            } catch (InterruptedException ex) {
+              // ignore
+            }
+          }
+        }
+      }
+    }));
+  }
+
+  public void resetLoggers() throws IOException {
+    logIdLock.writeLock().lock();
+    try {
+      close();
+    } finally {
+      logIdLock.writeLock().unlock();
+    }
+  }
+
   synchronized private void close() throws IOException {
-    if (!logSetLock.isWriteLockedByCurrentThread()) {
+    if (!logIdLock.isWriteLockedByCurrentThread()) {
       throw new IllegalStateException("close should be called with write lock held!");
     }
     try {
-      for (DfsLogger logger : loggers) {
+      if (null != currentLog) {
         try {
-          logger.close();
+          currentLog.close();
         } catch (DfsLogger.LogClosedException ex) {
           // ignore
         } catch (Throwable ex) {
-          log.error("Unable to cleanly close log " + logger.getFileName() + ": " + ex, ex);
+          log.error("Unable to cleanly close log " + currentLog.getFileName() + ": " + ex, ex);
+        } finally {
+          this.tserver.walogClosed(currentLog);
         }
+        currentLog = null;
+        logSizeEstimate.set(0);
       }
-      loggers.clear();
-      logSizeEstimate.set(0);
     } catch (Throwable t) {
       throw new IOException(t);
     }
@@ -272,7 +358,7 @@
 
   private int write(final Collection<CommitSession> sessions, boolean mincFinish, Writer writer) throws IOException {
     // Work very hard not to lock this during calls to the outside world
-    int currentLogSet = logSetId.get();
+    int currentLogId = logId.get();
 
     int seq = -1;
     int attempt = 1;
@@ -280,20 +366,20 @@
     while (!success) {
       try {
         // get a reference to the loggers that no other thread can touch
-        ArrayList<DfsLogger> copy = new ArrayList<DfsLogger>();
-        currentLogSet = initializeLoggers(copy);
+        DfsLogger copy = null;
+        AtomicInteger currentId = new AtomicInteger(-1);
+        copy = initializeLoggers(currentId);
+        currentLogId = currentId.get();
 
         // add the logger to the log set for the memory in the tablet,
         // update the metadata table if we've never used this tablet
 
-        if (currentLogSet == logSetId.get()) {
+        if (currentLogId == logId.get()) {
           for (CommitSession commitSession : sessions) {
             if (commitSession.beginUpdatingLogsUsed(copy, mincFinish)) {
               try {
                 // Scribble out a tablet definition and then write to the metadata table
                 defineTablet(commitSession);
-                if (currentLogSet == logSetId.get())
-                  tserver.addLoggersToMetadata(copy, commitSession.getExtent(), commitSession.getLogId());
               } finally {
                 commitSession.finishUpdatingLogsUsed();
               }
@@ -301,37 +387,27 @@
               // Need to release
               KeyExtent extent = commitSession.getExtent();
               if (ReplicationConfigurationUtil.isEnabled(extent, tserver.getTableConfiguration(extent))) {
-                Set<String> logs = new HashSet<String>();
-                for (DfsLogger logger : copy) {
-                  logs.add(logger.getFileName());
-                }
-                Status status = StatusUtil.fileCreated(System.currentTimeMillis());
-                log.debug("Writing " + ProtobufUtil.toString(status) + " to metadata table for " + logs);
+                Status status = StatusUtil.openWithUnknownLength(System.currentTimeMillis());
+                log.debug("Writing " + ProtobufUtil.toString(status) + " to metadata table for " + copy.getFileName());
                 // Got some new WALs, note this in the metadata table
-                ReplicationTableUtil.updateFiles(tserver, commitSession.getExtent(), logs, status);
+                ReplicationTableUtil.updateFiles(tserver, commitSession.getExtent(), copy.getFileName(), status);
               }
             }
           }
         }
 
         // Make sure that the logs haven't changed out from underneath our copy
-        if (currentLogSet == logSetId.get()) {
+        if (currentLogId == logId.get()) {
 
           // write the mutation to the logs
           seq = seqGen.incrementAndGet();
           if (seq < 0)
             throw new RuntimeException("Logger sequence generator wrapped!  Onos!!!11!eleven");
-          ArrayList<LoggerOperation> queuedOperations = new ArrayList<LoggerOperation>(copy.size());
-          for (DfsLogger wal : copy) {
-            queuedOperations.add(writer.write(wal, seq));
-          }
-
-          for (LoggerOperation lop : queuedOperations) {
-            lop.await();
-          }
+          LoggerOperation lop = writer.write(copy, seq);
+          lop.await();
 
           // double-check: did the log set change?
-          success = (currentLogSet == logSetId.get());
+          success = (currentLogId == logId.get());
         }
       } catch (DfsLogger.LogClosedException ex) {
         log.debug("Logs closed while writing, retrying " + attempt);
@@ -339,20 +415,20 @@
         if (attempt != 1) {
           log.error("Unexpected error writing to log, retrying attempt " + attempt, t);
         }
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       } finally {
         attempt++;
       }
       // Some sort of write failure occurred. Grab the write lock and reset the logs.
       // But since multiple threads will attempt it, only attempt the reset when
       // the logs haven't changed.
-      final int finalCurrent = currentLogSet;
+      final int finalCurrent = currentLogId;
       if (!success) {
-        testLockAndRun(logSetLock, new TestCallWithWriteLock() {
+        testLockAndRun(logIdLock, new TestCallWithWriteLock() {
 
           @Override
           boolean test() {
-            return finalCurrent == logSetId.get();
+            return finalCurrent == logId.get();
           }
 
           @Override
@@ -365,7 +441,7 @@
     }
     // if the log gets too big or too old, reset it .. grab the write lock first
     logSizeEstimate.addAndGet(4 * 3); // event, tid, seq overhead
-    testLockAndRun(logSetLock, new TestCallWithWriteLock() {
+    testLockAndRun(logIdLock, new TestCallWithWriteLock() {
       @Override
       boolean test() {
         return (logSizeEstimate.get() > maxSize) || ((System.currentTimeMillis() - createTime) > maxAge);
@@ -414,7 +490,7 @@
 
   public int logManyTablets(Map<CommitSession,Mutations> mutations) throws IOException {
 
-    final Map<CommitSession,Mutations> loggables = new HashMap<CommitSession,Mutations>(mutations);
+    final Map<CommitSession,Mutations> loggables = new HashMap<>(mutations);
     for (Entry<CommitSession,Mutations> entry : mutations.entrySet()) {
       if (entry.getValue().getDurability() == Durability.NONE) {
         loggables.remove(entry.getKey());
@@ -426,7 +502,7 @@
     int seq = write(loggables.keySet(), false, new Writer() {
       @Override
       public LoggerOperation write(DfsLogger logger, int ignored) throws Exception {
-        List<TabletMutations> copy = new ArrayList<TabletMutations>(loggables.size());
+        List<TabletMutations> copy = new ArrayList<>(loggables.size());
         for (Entry<CommitSession,Mutations> entry : loggables.entrySet()) {
           CommitSession cs = entry.getKey();
           Durability durability = entry.getValue().getDurability();
@@ -454,8 +530,7 @@
     int seq = write(commitSession, true, new Writer() {
       @Override
       public LoggerOperation write(DfsLogger logger, int ignored) throws Exception {
-        logger.minorCompactionFinished(walogSeq, commitSession.getLogId(), fullyQualifiedFileName, durability).await();
-        return DfsLogger.NO_WAIT_LOGGER_OP;
+        return logger.minorCompactionFinished(walogSeq, commitSession.getLogId(), fullyQualifiedFileName, durability);
       }
     });
 
@@ -469,8 +544,7 @@
     write(commitSession, false, new Writer() {
       @Override
       public LoggerOperation write(DfsLogger logger, int ignored) throws Exception {
-        logger.minorCompactionStarted(seq, commitSession.getLogId(), fullyQualifiedFileName, durability).await();
-        return DfsLogger.NO_WAIT_LOGGER_OP;
+        return logger.minorCompactionStarted(seq, commitSession.getLogId(), fullyQualifiedFileName, durability);
       }
     });
     return seq;
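The `startLogMaker()` change above moves WAL creation off the critical path: a background thread pre-creates the next log and parks on a `SynchronousQueue` until the thread holding the write lock takes it (either a log or the creation failure is handed over, so exceptions can be rethrown by the consumer). A self-contained sketch of that handoff, with a string standing in for the `DfsLogger`:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class NextLogHandoff {
  static final SynchronousQueue<Object> nextLog = new SynchronousQueue<>();

  // Consumer side: take two pre-created "logs" from the background maker.
  static String demo() throws Exception {
    Thread maker = new Thread(() -> {
      int i = 0;
      while (!Thread.currentThread().isInterrupted()) {
        try {
          Object next = "wal-" + i++; // stand-in for new DfsLogger(...)
          // offer() with a timeout rather than put(): the maker can
          // periodically notice that nobody has consumed its log
          while (!nextLog.offer(next, 1, TimeUnit.SECONDS)) {
            // would log "Our WAL was not used ..." here
          }
        } catch (InterruptedException e) {
          return;
        }
      }
    });
    maker.setDaemon(true);
    maker.start();
    Object first = nextLog.take(); // blocks until a handoff happens
    Object second = nextLog.take();
    maker.interrupt();
    return first + " " + second;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(demo()); // wal-0 wal-1
  }
}
```

`SynchronousQueue` has no capacity, so the maker never gets more than one log ahead, which bounds the number of empty WALs left behind on shutdown.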
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogFileValue.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogFileValue.java
index 87e17b3..f49eb71 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogFileValue.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogFileValue.java
@@ -39,7 +39,7 @@
   @Override
   public void readFields(DataInput in) throws IOException {
     int count = in.readInt();
-    mutations = new ArrayList<Mutation>(count);
+    mutations = new ArrayList<>(count);
     for (int i = 0; i < count; i++) {
       ServerMutation mutation = new ServerMutation();
       mutation.readFields(in);
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java
index 6ceba5a..d5a4db9 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/logger/LogReader.java
@@ -59,7 +59,7 @@
     @Parameter(names = "-p", description = "search for a row that matches the given regex")
     String regexp;
     @Parameter(description = "<logfile> { <logfile> ... }")
-    List<String> files = new ArrayList<String>();
+    List<String> files = new ArrayList<>();
   }
 
   /**
@@ -84,14 +84,14 @@
       row = new Text(opts.row);
     if (opts.extent != null) {
       String sa[] = opts.extent.split(";");
-      ke = new KeyExtent(new Text(sa[0]), new Text(sa[1]), new Text(sa[2]));
+      ke = new KeyExtent(sa[0], new Text(sa[1]), new Text(sa[2]));
     }
     if (opts.regexp != null) {
       Pattern pattern = Pattern.compile(opts.regexp);
       rowMatcher = pattern.matcher("");
     }
 
-    Set<Integer> tabletIds = new HashSet<Integer>();
+    Set<Integer> tabletIds = new HashSet<>();
 
     for (String file : opts.files) {
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java
index 3624e74..5808f2e 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystem.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.tserver.replication;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.util.Objects.requireNonNull;
 
 import java.io.ByteArrayOutputStream;
@@ -31,6 +32,7 @@
 import java.util.Map;
 import java.util.Objects;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.accumulo.core.client.AccumuloException;
@@ -60,7 +62,6 @@
 import org.apache.accumulo.core.trace.ProbabilitySampler;
 import org.apache.accumulo.core.trace.Span;
 import org.apache.accumulo.core.trace.Trace;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.fs.VolumeManager;
@@ -192,7 +193,7 @@
             KerberosToken token;
             try {
               // Do *not* replace the current user
-              token = new KerberosToken(principal, keytab, false);
+              token = new KerberosToken(principal, keytab);
             } catch (IOException e) {
               log.error("Failed to create KerberosToken", e);
               return status;
@@ -300,7 +301,7 @@
           return finalStatus;
         } catch (TTransportException | AccumuloException | AccumuloSecurityException e) {
           log.warn("Could not connect to remote server {}, will retry", peerTserverStr, e);
-          UtilWaitThread.sleep(1000);
+          sleepUninterruptibly(1, TimeUnit.SECONDS);
         }
       }
 
@@ -670,7 +671,7 @@
 
       switch (key.event) {
         case DEFINE_TABLET:
-          if (target.getSourceTableId().equals(key.tablet.getTableId().toString())) {
+          if (target.getSourceTableId().equals(key.tablet.getTableId())) {
             desiredTids.add(key.tid);
           }
           break;
@@ -690,7 +691,7 @@
   protected WalReplication getWalEdits(ReplicationTarget target, DataInputStream wal, Path p, Status status, long sizeLimit, Set<Integer> desiredTids)
       throws IOException {
     WalEdits edits = new WalEdits();
-    edits.edits = new ArrayList<ByteBuffer>();
+    edits.edits = new ArrayList<>();
     long size = 0l;
     long entriesConsumed = 0l;
     long numUpdates = 0l;
@@ -715,7 +716,7 @@
       switch (key.event) {
         case DEFINE_TABLET:
           // For new DEFINE_TABLETs, we also need to record the new tids we see
-          if (target.getSourceTableId().equals(key.tablet.getTableId().toString())) {
+          if (target.getSourceTableId().equals(key.tablet.getTableId())) {
             desiredTids.add(key.tid);
           }
           break;
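The hunks above swap `UtilWaitThread.sleep` for Guava's `sleepUninterruptibly`. A sketch of what that utility provides, following the pattern Guava's `Uninterruptibles` documents (sleep through interrupts, then restore the thread's interrupt status so callers further up still observe it):

```java
import java.util.concurrent.TimeUnit;

public class SleepUtil {
  // Sketch of Guava's Uninterruptibles.sleepUninterruptibly: keep sleeping
  // for the full duration even if interrupted, then re-set the interrupt
  // flag so the interruption is not silently swallowed.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true;
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt();              // pending interrupt
    sleepUninterruptibly(10, TimeUnit.MILLISECONDS); // sleeps the full 10 ms anyway
    System.out.println(Thread.interrupted());        // true: status was restored
  }
}
```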
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/BatchWriterReplicationReplayer.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/BatchWriterReplicationReplayer.java
index 8a80ea3..e5e9e80 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/BatchWriterReplicationReplayer.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/replication/BatchWriterReplicationReplayer.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.tserver.replication;
 
-import java.io.ByteArrayInputStream;
 import java.io.DataInputStream;
 import java.io.IOException;
 import java.nio.ByteBuffer;
@@ -40,6 +39,7 @@
 import org.apache.accumulo.core.replication.thrift.RemoteReplicationException;
 import org.apache.accumulo.core.replication.thrift.WalEdits;
 import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.util.ByteBufferUtil;
 import org.apache.accumulo.server.data.ServerMutation;
 import org.apache.accumulo.tserver.logger.LogFileKey;
 import org.apache.accumulo.tserver.logger.LogFileValue;
@@ -64,7 +64,7 @@
     long mutationsApplied = 0l;
     try {
       for (ByteBuffer edit : data.getEdits()) {
-        DataInputStream dis = new DataInputStream(new ByteArrayInputStream(edit.array()));
+        DataInputStream dis = new DataInputStream(ByteBufferUtil.toByteArrayInputStream(edit));
         try {
           key.readFields(dis);
           // TODO this is brittle because AccumuloReplicaSystem isn't actually calling LogFileValue.write, but we're expecting
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/LookupTask.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/LookupTask.java
index 08597f4..139512f 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/LookupTask.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/LookupTask.java
@@ -25,6 +25,7 @@
 import java.util.Map;
 import java.util.Map.Entry;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.client.impl.Translator;
 import org.apache.accumulo.core.client.impl.Translators;
 import org.apache.accumulo.core.conf.Property;
@@ -77,9 +78,9 @@
 
       long startTime = System.currentTimeMillis();
 
-      List<KVEntry> results = new ArrayList<KVEntry>();
-      Map<KeyExtent,List<Range>> failures = new HashMap<KeyExtent,List<Range>>();
-      List<KeyExtent> fullScans = new ArrayList<KeyExtent>();
+      List<KVEntry> results = new ArrayList<>();
+      Map<KeyExtent,List<Range>> failures = new HashMap<>();
+      List<KeyExtent> fullScans = new ArrayList<>();
       KeyExtent partScan = null;
       Key partNextKey = null;
       boolean partNextKeyInclusive = false;
@@ -111,7 +112,7 @@
             interruptFlag.set(true);
 
           lookupResult = tablet.lookup(entry.getValue(), session.columnSet, session.auths, results, maxResultsSize - bytesAdded, session.ssiList, session.ssio,
-              interruptFlag);
+              interruptFlag, session.samplerConfig, session.batchTimeOut, session.context);
 
           // if the tablet was closed it is possible that the
           // interrupt flag was set.... do not want it set for
@@ -145,10 +146,10 @@
       session.numEntries += results.size();
 
       // convert everything to thrift before adding result
-      List<TKeyValue> retResults = new ArrayList<TKeyValue>();
+      List<TKeyValue> retResults = new ArrayList<>();
       for (KVEntry entry : results)
         retResults.add(new TKeyValue(entry.getKey().toThrift(), ByteBuffer.wrap(entry.getValue().get())));
-      Map<TKeyExtent,List<TRange>> retFailures = Translator.translate(failures, Translators.KET, new Translator.ListTranslator<Range,TRange>(Translators.RT));
+      Map<TKeyExtent,List<TRange>> retFailures = Translator.translate(failures, Translators.KET, new Translator.ListTranslator<>(Translators.RT));
       List<TKeyExtent> retFullScans = Translator.translate(fullScans, Translators.KET);
       TKeyExtent retPartScan = null;
       TKey retPartNextKey = null;
@@ -163,6 +164,8 @@
         log.warn("Iteration interrupted, when scan not cancelled", iie);
         addResult(iie);
       }
+    } catch (SampleNotPresentException e) {
+      addResult(e);
     } catch (Throwable e) {
       log.warn("exception while doing multi-scan ", e);
       addResult(e);
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/NextBatchTask.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/NextBatchTask.java
index 768cc53..110eda3 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/NextBatchTask.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/NextBatchTask.java
@@ -18,7 +18,9 @@
 
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.iterators.IterationInterruptedException;
+import org.apache.accumulo.server.util.Halt;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.TooManyFilesException;
 import org.apache.accumulo.tserver.session.ScanSession;
@@ -83,8 +85,11 @@
         log.warn("Iteration interrupted, when scan not cancelled", iie);
         addResult(iie);
       }
-    } catch (TooManyFilesException tmfe) {
-      addResult(tmfe);
+    } catch (TooManyFilesException | SampleNotPresentException e) {
+      addResult(e);
+    } catch (OutOfMemoryError ome) {
+      Halt.halt("Ran out of memory scanning " + scanSession.extent + " for " + scanSession.client, 1);
+      addResult(ome);
     } catch (Throwable e) {
       log.warn("exception while scanning tablet " + (scanSession == null ? "(unknown)" : scanSession.extent), e);
       addResult(e);
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/ScanTask.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/ScanTask.java
index 6d5adce..cd99ba2 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/ScanTask.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/scan/ScanTask.java
@@ -43,9 +43,9 @@
   ScanTask(TabletServer server) {
     this.server = server;
     interruptFlag = new AtomicBoolean(false);
-    runState = new AtomicReference<ScanRunState>(ScanRunState.QUEUED);
+    runState = new AtomicReference<>(ScanRunState.QUEUED);
     state = new AtomicInteger(INITIAL);
-    resultQueue = new ArrayBlockingQueue<Object>(1);
+    resultQueue = new ArrayBlockingQueue<>(1);
   }
 
   protected void addResult(Object o) {
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ConditionalSession.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ConditionalSession.java
index 138f558..dc62312 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ConditionalSession.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ConditionalSession.java
@@ -28,13 +28,15 @@
   public final String tableId;
   public final AtomicBoolean interruptFlag = new AtomicBoolean();
   public final Durability durability;
+  public final String classLoaderContext;
 
-  public ConditionalSession(TCredentials credentials, Authorizations authorizations, String tableId, Durability durability) {
+  public ConditionalSession(TCredentials credentials, Authorizations authorizations, String tableId, Durability durability, String classLoaderContext) {
     super(credentials);
     this.credentials = credentials;
     this.auths = authorizations;
     this.tableId = tableId;
     this.durability = durability;
+    this.classLoaderContext = classLoaderContext;
   }
 
   @Override
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/MultiScanSession.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/MultiScanSession.java
index 2fd590c..d698a1f 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/MultiScanSession.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/MultiScanSession.java
@@ -20,6 +20,7 @@
 import java.util.List;
 import java.util.Map;
 
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Column;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.impl.KeyExtent;
@@ -31,11 +32,14 @@
 
 public class MultiScanSession extends Session {
   public final KeyExtent threadPoolExtent;
-  public final HashSet<Column> columnSet = new HashSet<Column>();
+  public final HashSet<Column> columnSet = new HashSet<>();
   public final Map<KeyExtent,List<Range>> queries;
   public final List<IterInfo> ssiList;
   public final Map<String,Map<String,String>> ssio;
   public final Authorizations auths;
+  public final SamplerConfiguration samplerConfig;
+  public final long batchTimeOut;
+  public final String context;
 
   // stats
   public int numRanges;
@@ -46,13 +50,16 @@
   public volatile ScanTask<MultiScanResult> lookupTask;
 
   public MultiScanSession(TCredentials credentials, KeyExtent threadPoolExtent, Map<KeyExtent,List<Range>> queries, List<IterInfo> ssiList,
-      Map<String,Map<String,String>> ssio, Authorizations authorizations) {
+      Map<String,Map<String,String>> ssio, Authorizations authorizations, SamplerConfiguration samplerConfig, long batchTimeOut, String context) {
     super(credentials);
     this.queries = queries;
     this.ssiList = ssiList;
     this.ssio = ssio;
     this.auths = authorizations;
     this.threadPoolExtent = threadPoolExtent;
+    this.samplerConfig = samplerConfig;
+    this.batchTimeOut = batchTimeOut;
+    this.context = context;
   }
 
   @Override
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ScanSession.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ScanSession.java
index 36a86ad..06b8a07 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ScanSession.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/ScanSession.java
@@ -44,9 +44,11 @@
   public volatile ScanTask<ScanBatch> nextBatchTask;
   public Scanner scanner;
   public final long readaheadThreshold;
+  public final long batchTimeOut;
+  public final String context;
 
   public ScanSession(TCredentials credentials, KeyExtent extent, Set<Column> columnSet, List<IterInfo> ssiList, Map<String,Map<String,String>> ssio,
-      Authorizations authorizations, long readaheadThreshold) {
+      Authorizations authorizations, long readaheadThreshold, long batchTimeOut, String context) {
     super(credentials);
     this.extent = extent;
     this.columnSet = columnSet;
@@ -54,6 +56,8 @@
     this.ssio = ssio;
     this.auths = authorizations;
     this.readaheadThreshold = readaheadThreshold;
+    this.batchTimeOut = batchTimeOut;
+    this.context = context;
   }
 
   @Override
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java
index 97018df..bf37855 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java
@@ -50,10 +50,10 @@
   private static final Logger log = LoggerFactory.getLogger(SessionManager.class);
 
   private final SecureRandom random = new SecureRandom();
-  private final Map<Long,Session> sessions = new HashMap<Long,Session>();
+  private final Map<Long,Session> sessions = new HashMap<>();
   private final long maxIdle;
   private final long maxUpdateIdle;
-  private final List<Session> idleSessions = new ArrayList<Session>();
+  private final List<Session> idleSessions = new ArrayList<>();
   private final Long expiredSessionMarker = Long.valueOf(-1);
   private final AccumuloConfiguration aconf;
 
@@ -169,7 +169,7 @@
   }
 
   private void sweep(final long maxIdle, final long maxUpdateIdle) {
-    List<Session> sessionsToCleanup = new ArrayList<Session>();
+    List<Session> sessionsToCleanup = new ArrayList<>();
     synchronized (this) {
       Iterator<Session> iter = sessions.values().iterator();
       while (iter.hasNext()) {
@@ -231,8 +231,8 @@
   }
 
   public synchronized Map<String,MapCounter<ScanRunState>> getActiveScansPerTable() {
-    Map<String,MapCounter<ScanRunState>> counts = new HashMap<String,MapCounter<ScanRunState>>();
-    Set<Entry<Long,Session>> copiedIdleSessions = new HashSet<Entry<Long,Session>>();
+    Map<String,MapCounter<ScanRunState>> counts = new HashMap<>();
+    Set<Entry<Long,Session>> copiedIdleSessions = new HashSet<>();
 
     synchronized (idleSessions) {
       /**
@@ -253,11 +253,11 @@
       if (session instanceof ScanSession) {
         ScanSession ss = (ScanSession) session;
         nbt = ss.nextBatchTask;
-        tableID = ss.extent.getTableId().toString();
+        tableID = ss.extent.getTableId();
       } else if (session instanceof MultiScanSession) {
         MultiScanSession mss = (MultiScanSession) session;
         nbt = mss.lookupTask;
-        tableID = mss.threadPoolExtent.getTableId().toString();
+        tableID = mss.threadPoolExtent.getTableId();
       }
 
       if (nbt == null)
@@ -270,7 +270,7 @@
 
       MapCounter<ScanRunState> stateCounts = counts.get(tableID);
       if (stateCounts == null) {
-        stateCounts = new MapCounter<ScanRunState>();
+        stateCounts = new MapCounter<>();
         counts.put(tableID, stateCounts);
       }
 
@@ -282,9 +282,9 @@
 
   public synchronized List<ActiveScan> getActiveScans() {
 
-    final List<ActiveScan> activeScans = new ArrayList<ActiveScan>();
+    final List<ActiveScan> activeScans = new ArrayList<>();
     final long ct = System.currentTimeMillis();
-    final Set<Entry<Long,Session>> copiedIdleSessions = new HashSet<Entry<Long,Session>>();
+    final Set<Entry<Long,Session>> copiedIdleSessions = new HashSet<>();
 
     synchronized (idleSessions) {
       /**
@@ -320,9 +320,8 @@
           }
         }
 
-        ActiveScan activeScan = new ActiveScan(ss.client, ss.getUser(), ss.extent.getTableId().toString(), ct - ss.startTime, ct - ss.lastAccessTime,
-            ScanType.SINGLE, state, ss.extent.toThrift(), Translator.translate(ss.columnSet, Translators.CT), ss.ssiList, ss.ssio,
-            ss.auths.getAuthorizationsBB());
+        ActiveScan activeScan = new ActiveScan(ss.client, ss.getUser(), ss.extent.getTableId(), ct - ss.startTime, ct - ss.lastAccessTime, ScanType.SINGLE,
+            state, ss.extent.toThrift(), Translator.translate(ss.columnSet, Translators.CT), ss.ssiList, ss.ssio, ss.auths.getAuthorizationsBB(), ss.context);
 
         // scanId added by ACCUMULO-2641 is an optional thrift argument and not available in ActiveScan constructor
         activeScan.setScanId(entry.getKey());
@@ -351,9 +350,9 @@
           }
         }
 
-        activeScans.add(new ActiveScan(mss.client, mss.getUser(), mss.threadPoolExtent.getTableId().toString(), ct - mss.startTime, ct - mss.lastAccessTime,
+        activeScans.add(new ActiveScan(mss.client, mss.getUser(), mss.threadPoolExtent.getTableId(), ct - mss.startTime, ct - mss.lastAccessTime,
             ScanType.BATCH, state, mss.threadPoolExtent.toThrift(), Translator.translate(mss.columnSet, Translators.CT), mss.ssiList, mss.ssio, mss.auths
-                .getAuthorizationsBB()));
+                .getAuthorizationsBB(), mss.context));
       }
     }
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/UpdateSession.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/UpdateSession.java
index 4a9b265..c53f560 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/UpdateSession.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/UpdateSession.java
@@ -33,14 +33,14 @@
 
 public class UpdateSession extends Session {
   public final TservConstraintEnv cenv;
-  public final MapCounter<Tablet> successfulCommits = new MapCounter<Tablet>();
-  public final Map<KeyExtent,Long> failures = new HashMap<KeyExtent,Long>();
-  public final HashMap<KeyExtent,SecurityErrorCode> authFailures = new HashMap<KeyExtent,SecurityErrorCode>();
+  public final MapCounter<Tablet> successfulCommits = new MapCounter<>();
+  public final Map<KeyExtent,Long> failures = new HashMap<>();
+  public final HashMap<KeyExtent,SecurityErrorCode> authFailures = new HashMap<>();
   public final Stat prepareTimes = new Stat();
   public final Stat walogTimes = new Stat();
   public final Stat commitTimes = new Stat();
   public final Stat authTimes = new Stat();
-  public final Map<Tablet,List<Mutation>> queuedMutations = new HashMap<Tablet,List<Mutation>>();
+  public final Map<Tablet,List<Mutation>> queuedMutations = new HashMap<>();
   public final Violations violations;
 
   public Tablet currentTablet = null;
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/BulkImportCacheCleaner.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/BulkImportCacheCleaner.java
new file mode 100644
index 0000000..fff2be2
--- /dev/null
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/BulkImportCacheCleaner.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.tserver.tablet;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.accumulo.core.Constants;
+import org.apache.accumulo.server.zookeeper.TransactionWatcher.ZooArbitrator;
+import org.apache.accumulo.tserver.TabletServer;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class BulkImportCacheCleaner implements Runnable {
+
+  private static final Logger log = LoggerFactory.getLogger(BulkImportCacheCleaner.class);
+  private final TabletServer server;
+
+  public BulkImportCacheCleaner(TabletServer server) {
+    this.server = server;
+  }
+
+  @Override
+  public void run() {
+    // gather the list of transactions the tablets have cached
+    final Set<Long> tids = new HashSet<>();
+    for (Tablet tablet : server.getOnlineTablets()) {
+      tids.addAll(tablet.getBulkIngestedFiles().keySet());
+    }
+    try {
+      // get the current transactions from ZooKeeper
+      final Set<Long> allTransactionsAlive = ZooArbitrator.allTransactionsAlive(Constants.BULK_ARBITRATOR_TYPE);
+      // remove any that are still alive
+      tids.removeAll(allTransactionsAlive);
+      // cleanup any memory of these transactions
+      for (Tablet tablet : server.getOnlineTablets()) {
+        tablet.cleanupBulkLoadedFiles(tids);
+      }
+    } catch (KeeperException | InterruptedException e) {
+      // we'll just clean it up again later
+      log.debug("Error reading bulk import live transactions {}", e.toString());
+    }
+  }
+
+}
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CommitSession.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CommitSession.java
index d908f1d..dee705c 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CommitSession.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CommitSession.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.tserver.tablet;
 
-import java.util.ArrayList;
 import java.util.List;
 
 import org.apache.accumulo.core.data.Mutation;
@@ -86,7 +85,7 @@
     return committer;
   }
 
-  public boolean beginUpdatingLogsUsed(ArrayList<DfsLogger> copy, boolean mincFinish) {
+  public boolean beginUpdatingLogsUsed(DfsLogger copy, boolean mincFinish) {
     return committer.beginUpdatingLogsUsed(memTable, copy, mincFinish);
   }
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionInfo.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionInfo.java
index c7ca29d..5c4f7c4 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionInfo.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionInfo.java
@@ -114,14 +114,14 @@
       }
     }
 
-    List<IterInfo> iiList = new ArrayList<IterInfo>();
-    Map<String,Map<String,String>> iterOptions = new HashMap<String,Map<String,String>>();
+    List<IterInfo> iiList = new ArrayList<>();
+    Map<String,Map<String,String>> iterOptions = new HashMap<>();
 
     for (IteratorSetting iterSetting : compactor.getIterators()) {
       iiList.add(new IterInfo(iterSetting.getPriority(), iterSetting.getIteratorClass(), iterSetting.getName()));
       iterOptions.put(iterSetting.getName(), iterSetting.getOptions());
     }
-    List<String> filesToCompact = new ArrayList<String>();
+    List<String> filesToCompact = new ArrayList<>();
     for (FileRef ref : compactor.getFilesToCompact())
       filesToCompact.add(ref.toString());
     return new ActiveCompaction(compactor.extent.toThrift(), System.currentTimeMillis() - compactor.getStartTime(), filesToCompact, compactor.getOutputFile(),
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionWatcher.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionWatcher.java
index 6ca4407..64345c2 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionWatcher.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CompactionWatcher.java
@@ -32,7 +32,7 @@
  *
  */
 public class CompactionWatcher implements Runnable {
-  private final Map<List<Long>,ObservedCompactionInfo> observedCompactions = new HashMap<List<Long>,ObservedCompactionInfo>();
+  private final Map<List<Long>,ObservedCompactionInfo> observedCompactions = new HashMap<>();
   private final AccumuloConfiguration config;
   private static boolean watching = false;
 
@@ -55,7 +55,7 @@
   public void run() {
     List<CompactionInfo> runningCompactions = Compactor.getRunningCompactions();
 
-    Set<List<Long>> newKeys = new HashSet<List<Long>>();
+    Set<List<Long>> newKeys = new HashSet<>();
 
     long time = System.currentTimeMillis();
 
@@ -69,7 +69,7 @@
     }
 
     // look for compactions that finished or made progress and logged a warning
-    HashMap<List<Long>,ObservedCompactionInfo> copy = new HashMap<List<Long>,ObservedCompactionInfo>(observedCompactions);
+    HashMap<List<Long>,ObservedCompactionInfo> copy = new HashMap<>(observedCompactions);
     copy.keySet().removeAll(newKeys);
 
     for (ObservedCompactionInfo oci : copy.values()) {
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Compactor.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Compactor.java
index 4f38655..6c83f19 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Compactor.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Compactor.java
@@ -52,6 +52,7 @@
 import org.apache.accumulo.core.trace.Trace;
 import org.apache.accumulo.core.util.LocalityGroupUtil;
 import org.apache.accumulo.core.util.LocalityGroupUtil.LocalityGroupConfigurationError;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.fs.VolumeManager;
@@ -81,6 +82,10 @@
     boolean isCompactionEnabled();
 
     IteratorScope getIteratorScope();
+
+    RateLimiter getReadLimiter();
+
+    RateLimiter getWriteLimiter();
   }
 
   private final Map<FileRef,DataFileValue> filesToCompact;
@@ -128,7 +133,7 @@
   protected static final Set<Compactor> runningCompactions = Collections.synchronizedSet(new HashSet<Compactor>());
 
   public static List<CompactionInfo> getRunningCompactions() {
-    ArrayList<CompactionInfo> compactions = new ArrayList<CompactionInfo>();
+    ArrayList<CompactionInfo> compactions = new ArrayList<>();
 
     synchronized (runningCompactions) {
       for (Compactor compactor : runningCompactions) {
@@ -192,7 +197,8 @@
     try {
       FileOperations fileFactory = FileOperations.getInstance();
       FileSystem ns = this.fs.getVolumeByPath(outputFilePath).getFileSystem();
-      mfw = fileFactory.openWriter(outputFilePathName, ns, ns.getConf(), acuTableConf);
+      mfw = fileFactory.newWriterBuilder().forFile(outputFilePathName, ns, ns.getConf()).withTableConfiguration(acuTableConf)
+          .withRateLimiter(env.getWriteLimiter()).build();
 
       Map<String,Set<ByteSequence>> lGroups;
       try {
@@ -203,7 +209,7 @@
 
       long t1 = System.currentTimeMillis();
 
-      HashSet<ByteSequence> allColumnFamilies = new HashSet<ByteSequence>();
+      HashSet<ByteSequence> allColumnFamilies = new HashSet<>();
 
       if (mfw.supportsLocalityGroups()) {
         for (Entry<String,Set<ByteSequence>> entry : lGroups.entrySet()) {
@@ -220,21 +226,22 @@
 
       FileSKVWriter mfwTmp = mfw;
       mfw = null; // set this to null so we do not try to close it again in finally if the close fails
-      mfwTmp.close(); // if the close fails it will cause the compaction to fail
-
-      // Verify the file, since hadoop 0.20.2 sometimes lies about the success of close()
       try {
-        FileSKVIterator openReader = fileFactory.openReader(outputFilePathName, false, ns, ns.getConf(), acuTableConf);
-        openReader.close();
+        mfwTmp.close(); // if the close fails it will cause the compaction to fail
       } catch (IOException ex) {
-        log.error("Verification of successful compaction fails!!! " + extent + " " + outputFile, ex);
+        if (!fs.deleteRecursively(outputFile.path())) {
+          if (fs.exists(outputFile.path())) {
+            log.error("Unable to delete " + outputFile);
+          }
+        }
         throw ex;
       }
 
-      log.debug(String.format("Compaction %s %,d read | %,d written | %,6d entries/sec | %6.3f secs", extent, majCStats.getEntriesRead(),
-          majCStats.getEntriesWritten(), (int) (majCStats.getEntriesRead() / ((t2 - t1) / 1000.0)), (t2 - t1) / 1000.0));
+      log.debug(String.format("Compaction %s %,d read | %,d written | %,6d entries/sec | %,6.3f secs | %,12d bytes | %9.3f byte/sec", extent,
+          majCStats.getEntriesRead(), majCStats.getEntriesWritten(), (int) (majCStats.getEntriesRead() / ((t2 - t1) / 1000.0)), (t2 - t1) / 1000.0,
+          mfwTmp.getLength(), mfwTmp.getLength() / ((t2 - t1) / 1000.0)));
 
-      majCStats.setFileSize(fileFactory.getFileSize(outputFilePathName, ns, ns.getConf(), acuTableConf));
+      majCStats.setFileSize(mfwTmp.getLength());
       return majCStats;
     } catch (IOException e) {
       log.error("{}", e.getMessage(), e);
@@ -270,7 +277,7 @@
 
   private List<SortedKeyValueIterator<Key,Value>> openMapDataFiles(String lgName, ArrayList<FileSKVIterator> readers) throws IOException {
 
-    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<SortedKeyValueIterator<Key,Value>>(filesToCompact.size());
+    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<>(filesToCompact.size());
 
     for (FileRef mapFile : filesToCompact.keySet()) {
       try {
@@ -279,11 +286,12 @@
         FileSystem fs = this.fs.getVolumeByPath(mapFile.path()).getFileSystem();
         FileSKVIterator reader;
 
-        reader = fileFactory.openReader(mapFile.path().toString(), false, fs, fs.getConf(), acuTableConf);
+        reader = fileFactory.newReaderBuilder().forFile(mapFile.path().toString(), fs, fs.getConf()).withTableConfiguration(acuTableConf)
+            .withRateLimiter(env.getReadLimiter()).build();
 
         readers.add(reader);
 
-        SortedKeyValueIterator<Key,Value> iter = new ProblemReportingIterator(context, extent.getTableId().toString(), mapFile.path().toString(), false, reader);
+        SortedKeyValueIterator<Key,Value> iter = new ProblemReportingIterator(context, extent.getTableId(), mapFile.path().toString(), false, reader);
 
         if (filesToCompact.get(mapFile).isTimeSet()) {
           iter = new TimeSettingIterator(iter, filesToCompact.get(mapFile).getTime());
@@ -293,7 +301,7 @@
 
       } catch (Throwable e) {
 
-        ProblemReports.getInstance(context).report(new ProblemReport(extent.getTableId().toString(), ProblemType.FILE_READ, mapFile.path().toString(), e));
+        ProblemReports.getInstance(context).report(new ProblemReport(extent.getTableId(), ProblemType.FILE_READ, mapFile.path().toString(), e));
 
         log.warn("Some problem opening map file {} {}", mapFile, e.getMessage(), e);
         // failed to open some map file... close the ones that were opened
@@ -318,7 +326,7 @@
 
   private void compactLocalityGroup(String lgName, Set<ByteSequence> columnFamilies, boolean inclusive, FileSKVWriter mfw, CompactionStats majCStats)
       throws IOException, CompactionCanceledException {
-    ArrayList<FileSKVIterator> readers = new ArrayList<FileSKVIterator>(filesToCompact.size());
+    ArrayList<FileSKVIterator> readers = new ArrayList<>(filesToCompact.size());
     Span span = Trace.start("compact");
     try {
       long entriesCompacted = 0;
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CountingIterator.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CountingIterator.java
index 8716695..be22778 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CountingIterator.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/CountingIterator.java
@@ -47,7 +47,7 @@
   }
 
   public CountingIterator(SortedKeyValueIterator<Key,Value> source, AtomicLong entriesRead) {
-    deepCopies = new ArrayList<CountingIterator>();
+    deepCopies = new ArrayList<>();
     this.setSource(source);
     count = 0;
     this.entriesRead = entriesRead;
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java
index db1b418..b488e13 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/DatafileManager.java
@@ -16,20 +16,21 @@
  */
 package org.apache.accumulo.tserver.tablet;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
-import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
@@ -38,7 +39,6 @@
 import org.apache.accumulo.core.trace.Trace;
 import org.apache.accumulo.core.util.MapCounter;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.fs.FileRef;
@@ -70,13 +70,13 @@
   }
 
   private FileRef mergingMinorCompactionFile = null;
-  private final Set<FileRef> filesToDeleteAfterScan = new HashSet<FileRef>();
-  private final Map<Long,Set<FileRef>> scanFileReservations = new HashMap<Long,Set<FileRef>>();
-  private final MapCounter<FileRef> fileScanReferenceCounts = new MapCounter<FileRef>();
+  private final Set<FileRef> filesToDeleteAfterScan = new HashSet<>();
+  private final Map<Long,Set<FileRef>> scanFileReservations = new HashMap<>();
+  private final MapCounter<FileRef> fileScanReferenceCounts = new MapCounter<>();
   private long nextScanReservationId = 0;
   private boolean reservationsBlocked = false;
 
-  private final Set<FileRef> majorCompactingFiles = new HashSet<FileRef>();
+  private final Set<FileRef> majorCompactingFiles = new HashSet<>();
 
   static void rename(VolumeManager fs, Path src, Path dst) throws IOException {
     if (!fs.rename(src, dst)) {
@@ -95,26 +95,26 @@
         }
       }
 
-      Set<FileRef> absFilePaths = new HashSet<FileRef>(datafileSizes.keySet());
+      Set<FileRef> absFilePaths = new HashSet<>(datafileSizes.keySet());
 
       long rid = nextScanReservationId++;
 
       scanFileReservations.put(rid, absFilePaths);
 
-      Map<FileRef,DataFileValue> ret = new HashMap<FileRef,DataFileValue>();
+      Map<FileRef,DataFileValue> ret = new HashMap<>();
 
       for (FileRef path : absFilePaths) {
         fileScanReferenceCounts.increment(path, 1);
         ret.put(path, datafileSizes.get(path));
       }
 
-      return new Pair<Long,Map<FileRef,DataFileValue>>(rid, ret);
+      return new Pair<>(rid, ret);
     }
   }
 
   void returnFilesForScan(Long reservationId) {
 
-    final Set<FileRef> filesToDelete = new HashSet<FileRef>();
+    final Set<FileRef> filesToDelete = new HashSet<>();
 
     synchronized (tablet) {
       Set<FileRef> absFilePaths = scanFileReservations.remove(reservationId);
@@ -147,7 +147,7 @@
     if (scanFiles.size() == 0)
       return;
 
-    Set<FileRef> filesToDelete = new HashSet<FileRef>();
+    Set<FileRef> filesToDelete = new HashSet<>();
 
     synchronized (tablet) {
       for (FileRef path : scanFiles) {
@@ -166,7 +166,7 @@
 
   private TreeSet<FileRef> waitForScansToFinish(Set<FileRef> pathsToWaitFor, boolean blockNewScans, long maxWaitTime) {
     long startTime = System.currentTimeMillis();
-    TreeSet<FileRef> inUse = new TreeSet<FileRef>();
+    TreeSet<FileRef> inUse = new TreeSet<>();
 
     Span waitForScans = Trace.start("waitForScans");
     try {
@@ -207,10 +207,9 @@
 
   public void importMapFiles(long tid, Map<FileRef,DataFileValue> pathsString, boolean setTime) throws IOException {
 
-    final KeyExtent extent = tablet.getExtent();
     String bulkDir = null;
 
-    Map<FileRef,DataFileValue> paths = new HashMap<FileRef,DataFileValue>();
+    Map<FileRef,DataFileValue> paths = new HashMap<>();
     for (Entry<FileRef,DataFileValue> entry : pathsString.entrySet())
       paths.put(entry.getKey(), entry.getValue());
 
@@ -219,7 +218,7 @@
       boolean inTheRightDirectory = false;
       Path parent = tpath.path().getParent().getParent();
       for (String tablesDir : ServerConstants.getTablesDirs()) {
-        if (parent.equals(new Path(tablesDir, tablet.getExtent().getTableId().toString()))) {
+        if (parent.equals(new Path(tablesDir, tablet.getExtent().getTableId()))) {
           inTheRightDirectory = true;
           break;
         }
@@ -235,23 +234,11 @@
 
     }
 
-    if (tablet.getExtent().isRootTablet()) {
-      throw new IllegalArgumentException("Can not import files to root tablet");
+    if (tablet.getExtent().isMeta()) {
+      throw new IllegalArgumentException("Can not import files to a metadata tablet");
     }
 
     synchronized (bulkFileImportLock) {
-      Connector conn;
-      try {
-        conn = tablet.getTabletServer().getConnector();
-      } catch (Exception ex) {
-        throw new IOException(ex);
-      }
-      // Remove any bulk files we've previously loaded and compacted away
-      List<FileRef> files = MetadataTableUtil.getBulkFilesLoaded(conn, extent, tid);
-
-      for (FileRef file : files)
-        if (paths.keySet().remove(file))
-          log.debug("Ignoring request to re-import a file already imported: " + extent + ": " + file);
 
       if (paths.size() > 0) {
         long bulkTime = Long.MIN_VALUE;
@@ -368,7 +355,7 @@
         break;
       } catch (IOException ioe) {
         log.warn("Tablet " + tablet.getExtent() + " failed to rename " + newDatafile + " after MinC, will retry in 60 secs...", ioe);
-        UtilWaitThread.sleep(60 * 1000);
+        sleepUninterruptibly(1, TimeUnit.MINUTES);
       }
     } while (true);
 
@@ -424,7 +411,9 @@
         if (log.isDebugEnabled()) {
           log.debug("Recording that data has been ingested into " + tablet.getExtent() + " using " + logFileOnly);
         }
-        ReplicationTableUtil.updateFiles(tablet.getTabletServer(), tablet.getExtent(), logFileOnly, StatusUtil.openWithUnknownLength());
+        for (String logFile : logFileOnly) {
+          ReplicationTableUtil.updateFiles(tablet.getTabletServer(), tablet.getExtent(), logFile, StatusUtil.openWithUnknownLength());
+        }
       }
     } finally {
       tablet.finishClearingUnusedLogs();
@@ -439,7 +428,7 @@
         break;
       } catch (IOException e) {
         log.error("Failed to write to write-ahead log " + e.getMessage() + " will retry", e);
-        UtilWaitThread.sleep(1 * 1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
     } while (true);
 
@@ -586,14 +575,14 @@
 
   public SortedMap<FileRef,DataFileValue> getDatafileSizes() {
     synchronized (tablet) {
-      TreeMap<FileRef,DataFileValue> copy = new TreeMap<FileRef,DataFileValue>(datafileSizes);
+      TreeMap<FileRef,DataFileValue> copy = new TreeMap<>(datafileSizes);
       return Collections.unmodifiableSortedMap(copy);
     }
   }
 
   public Set<FileRef> getFiles() {
     synchronized (tablet) {
-      HashSet<FileRef> files = new HashSet<FileRef>(datafileSizes.keySet());
+      HashSet<FileRef> files = new HashSet<>(datafileSizes.keySet());
       return Collections.unmodifiableSet(files);
     }
   }
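The hunks above swap `UtilWaitThread.sleep(...)` for Guava's `sleepUninterruptibly(...)`, which sleeps through interrupts and restores the thread's interrupt status before returning. A minimal self-contained sketch of that behavior (the real code imports `com.google.common.util.concurrent.Uninterruptibles`; this re-implementation is for illustration only):

```java
import java.util.concurrent.TimeUnit;

public class UninterruptibleSleep {
  // Sketch of Guava's Uninterruptibles.sleepUninterruptibly: keep sleeping
  // through interrupts, then restore the caller's interrupt flag on exit.
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remaining = unit.toNanos(duration);
      long end = System.nanoTime() + remaining;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remaining);
          return;
        } catch (InterruptedException e) {
          interrupted = true;                  // remember, finish the sleep
          remaining = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted)
        Thread.currentThread().interrupt();    // restore interrupt status
    }
  }

  // Demonstration helper: sleep and report elapsed wall-clock millis.
  public static long timedSleepMillis(long ms) {
    long t0 = System.nanoTime();
    sleepUninterruptibly(ms, TimeUnit.MILLISECONDS);
    return (System.nanoTime() - t0) / 1_000_000;
  }

  public static void main(String[] args) {
    System.out.println(timedSleepMillis(100) >= 100);
  }
}
```

The practical difference from a plain `Thread.sleep` wrapper is that callers higher up the stack can still observe the interrupt after the retry loop finishes.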
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/MinorCompactor.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/MinorCompactor.java
index 2aa772f..6bd2545 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/MinorCompactor.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/MinorCompactor.java
@@ -16,17 +16,20 @@
  */
 package org.apache.accumulo.tserver.tablet;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 import java.io.IOException;
 import java.util.Collections;
 import java.util.Map;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.master.state.tables.TableState;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
-import org.apache.accumulo.core.util.UtilWaitThread;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
 import org.apache.accumulo.server.conf.TableConfiguration;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.accumulo.server.problems.ProblemReport;
@@ -67,13 +70,23 @@
       public IteratorScope getIteratorScope() {
         return IteratorScope.minc;
       }
+
+      @Override
+      public RateLimiter getReadLimiter() {
+        return null;
+      }
+
+      @Override
+      public RateLimiter getWriteLimiter() {
+        return null;
+      }
     }, Collections.<IteratorSetting> emptyList(), mincReason.ordinal(), tableConfig);
     this.tabletServer = tabletServer;
   }
 
   private boolean isTableDeleting() {
     try {
-      return Tables.getTableState(tabletServer.getInstance(), extent.getTableId().toString()) == TableState.DELETING;
+      return Tables.getTableState(tabletServer.getInstance(), extent.getTableId()) == TableState.DELETING;
     } catch (Exception e) {
       log.warn("Failed to determine if table " + extent.getTableId() + " was deleting ", e);
       return false; // cannot get positive confirmation that it's deleting.
@@ -101,19 +114,19 @@
           // (int)(map.size()/((t2 - t1)/1000.0)), (t2 - t1)/1000.0, estimatedSizeInBytes()));
 
           if (reportedProblem) {
-            ProblemReports.getInstance(tabletServer).deleteProblemReport(getExtent().getTableId().toString(), ProblemType.FILE_WRITE, outputFileName);
+            ProblemReports.getInstance(tabletServer).deleteProblemReport(getExtent().getTableId(), ProblemType.FILE_WRITE, outputFileName);
           }
 
           return ret;
         } catch (IOException e) {
           log.warn("MinC failed ({}) to create {} retrying ...", e.getMessage(), outputFileName);
-          ProblemReports.getInstance(tabletServer).report(new ProblemReport(getExtent().getTableId().toString(), ProblemType.FILE_WRITE, outputFileName, e));
+          ProblemReports.getInstance(tabletServer).report(new ProblemReport(getExtent().getTableId(), ProblemType.FILE_WRITE, outputFileName, e));
           reportedProblem = true;
         } catch (RuntimeException e) {
           // if this is coming from a user iterator, it is possible that the user could change the iterator config and that the
           // minor compaction would succeed
           log.warn("MinC failed ({}) to create {} retrying ...", e.getMessage(), outputFileName, e);
-          ProblemReports.getInstance(tabletServer).report(new ProblemReport(getExtent().getTableId().toString(), ProblemType.FILE_WRITE, outputFileName, e));
+          ProblemReports.getInstance(tabletServer).report(new ProblemReport(getExtent().getTableId(), ProblemType.FILE_WRITE, outputFileName, e));
           reportedProblem = true;
         } catch (CompactionCanceledException e) {
           throw new IllegalStateException(e);
@@ -123,7 +136,7 @@
 
         int sleep = sleepTime + random.nextInt(sleepTime);
         log.debug("MinC failed sleeping " + sleep + " ms before retrying");
-        UtilWaitThread.sleep(sleep);
+        sleepUninterruptibly(sleep, TimeUnit.MILLISECONDS);
         sleepTime = (int) Math.round(Math.min(maxSleepTime, sleepTime * growthFactor));
 
         // clean up
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/RootFiles.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/RootFiles.java
index 9ba8e38..56dbec9 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/RootFiles.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/RootFiles.java
@@ -80,7 +80,7 @@
     /*
      * called in constructor and before major compactions
      */
-    Collection<String> goodFiles = new ArrayList<String>(files.length);
+    Collection<String> goodFiles = new ArrayList<>(files.length);
 
     for (FileStatus file : files) {
 
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanBatch.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanBatch.java
index 888d6f5..7a29427 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanBatch.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanBatch.java
@@ -22,7 +22,7 @@
   private final boolean more;
   private final List<KVEntry> results;
 
-  ScanBatch(List<KVEntry> results, boolean more) {
+  public ScanBatch(List<KVEntry> results, boolean more) {
     this.results = results;
     this.more = more;
   }
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java
index f586e2e..e48d91e 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanDataSource.java
@@ -26,6 +26,7 @@
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Column;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
@@ -35,15 +36,12 @@
 import org.apache.accumulo.core.iterators.IteratorUtil;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
-import org.apache.accumulo.core.iterators.system.ColumnQualifierFilter;
-import org.apache.accumulo.core.iterators.system.DeletingIterator;
 import org.apache.accumulo.core.iterators.system.InterruptibleIterator;
 import org.apache.accumulo.core.iterators.system.MultiIterator;
 import org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.DataSource;
 import org.apache.accumulo.core.iterators.system.StatsIterator;
-import org.apache.accumulo.core.iterators.system.VisibilityFilter;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.server.fs.FileRef;
@@ -52,6 +50,8 @@
 import org.apache.accumulo.tserver.TabletIteratorEnvironment;
 import org.apache.accumulo.tserver.TabletServer;
 
+import com.google.common.collect.Iterables;
+
 class ScanDataSource implements DataSource {
 
   // data source state
@@ -70,10 +70,10 @@
   private static final Set<Column> EMPTY_COLS = Collections.emptySet();
 
   ScanDataSource(Tablet tablet, Authorizations authorizations, byte[] defaultLabels, HashSet<Column> columnSet, List<IterInfo> ssiList,
-      Map<String,Map<String,String>> ssio, AtomicBoolean interruptFlag) {
+      Map<String,Map<String,String>> ssio, AtomicBoolean interruptFlag, SamplerConfiguration samplerConfig, long batchTimeOut, String context) {
     this.tablet = tablet;
     expectedDeletionCount = tablet.getDataSourceDeletions();
-    this.options = new ScanOptions(-1, authorizations, defaultLabels, columnSet, ssiList, ssio, interruptFlag, false);
+    this.options = new ScanOptions(-1, authorizations, defaultLabels, columnSet, ssiList, ssio, interruptFlag, false, samplerConfig, batchTimeOut, context);
     this.interruptFlag = interruptFlag;
     this.loadIters = true;
   }
@@ -89,7 +89,7 @@
   ScanDataSource(Tablet tablet, Authorizations authorizations, byte[] defaultLabels, AtomicBoolean iFlag) {
     this.tablet = tablet;
     expectedDeletionCount = tablet.getDataSourceDeletions();
-    this.options = new ScanOptions(-1, authorizations, defaultLabels, EMPTY_COLS, null, null, iFlag, false);
+    this.options = new ScanOptions(-1, authorizations, defaultLabels, EMPTY_COLS, null, null, iFlag, false, null, -1, null);
     this.interruptFlag = iFlag;
     this.loadIters = false;
   }
@@ -132,6 +132,8 @@
 
     Map<FileRef,DataFileValue> files;
 
+    SamplerConfigurationImpl samplerConfig = options.getSamplerConfigurationImpl();
+
     synchronized (tablet) {
 
       if (memIters != null)
@@ -156,42 +158,40 @@
       // getIterators() throws an exception
       expectedDeletionCount = tablet.getDataSourceDeletions();
 
-      memIters = tablet.getTabletMemory().getIterators();
+      memIters = tablet.getTabletMemory().getIterators(samplerConfig);
       Pair<Long,Map<FileRef,DataFileValue>> reservation = tablet.getDatafileManager().reserveFilesForScan();
       fileReservationId = reservation.getFirst();
       files = reservation.getSecond();
     }
 
-    Collection<InterruptibleIterator> mapfiles = fileManager.openFiles(files, options.isIsolated());
+    Collection<InterruptibleIterator> mapfiles = fileManager.openFiles(files, options.isIsolated(), samplerConfig);
 
-    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<SortedKeyValueIterator<Key,Value>>(mapfiles.size() + memIters.size());
+    for (SortedKeyValueIterator<Key,Value> skvi : Iterables.concat(mapfiles, memIters))
+      ((InterruptibleIterator) skvi).setInterruptFlag(interruptFlag);
+
+    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<>(mapfiles.size() + memIters.size());
 
     iters.addAll(mapfiles);
     iters.addAll(memIters);
 
-    for (SortedKeyValueIterator<Key,Value> skvi : iters)
-      ((InterruptibleIterator) skvi).setInterruptFlag(interruptFlag);
-
     MultiIterator multiIter = new MultiIterator(iters, tablet.getExtent());
 
     TabletIteratorEnvironment iterEnv = new TabletIteratorEnvironment(IteratorScope.scan, tablet.getTableConfiguration(), fileManager, files,
-        options.getAuthorizations());
+        options.getAuthorizations(), samplerConfig);
 
     statsIterator = new StatsIterator(multiIter, TabletServer.seekCount, tablet.getScannedCounter());
 
-    DeletingIterator delIter = new DeletingIterator(statsIterator, false);
-
-    ColumnFamilySkippingIterator cfsi = new ColumnFamilySkippingIterator(delIter);
-
-    ColumnQualifierFilter colFilter = new ColumnQualifierFilter(cfsi, options.getColumnSet());
-
-    VisibilityFilter visFilter = new VisibilityFilter(colFilter, options.getAuthorizations(), options.getDefaultLabels());
+    SortedKeyValueIterator<Key,Value> visFilter = IteratorUtil.setupSystemScanIterators(statsIterator, options.getColumnSet(), options.getAuthorizations(),
+        options.getDefaultLabels());
 
     if (!loadIters) {
       return visFilter;
-    } else {
+    } else if (null == options.getClassLoaderContext()) {
       return iterEnv.getTopLevelIterator(IteratorUtil.loadIterators(IteratorScope.scan, visFilter, tablet.getExtent(), tablet.getTableConfiguration(),
           options.getSsiList(), options.getSsio(), iterEnv));
+    } else {
+      return iterEnv.getTopLevelIterator(IteratorUtil.loadIterators(IteratorScope.scan, visFilter, tablet.getExtent(), tablet.getTableConfiguration(),
+          options.getSsiList(), options.getSsio(), iterEnv, true, options.getClassLoaderContext()));
     }
   }
 
@@ -231,7 +231,7 @@
 
   public void reattachFileManager() throws IOException {
     if (fileManager != null)
-      fileManager.reattach();
+      fileManager.reattach(options.getSamplerConfigurationImpl());
   }
 
   public void detachFileManager() {
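The ScanDataSource hunk above now sets the interrupt flag across both the file iterators and the in-memory iterators in a single pass over `Iterables.concat(mapfiles, memIters)` before copying them into one list. A minimal sketch of the same shape, using `Stream.concat` from the JDK where the real code uses Guava's `Iterables.concat` (the `Iter` type here is a stand-in for `InterruptibleIterator`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class ConcatDemo {
  // Stand-in for InterruptibleIterator: just records the flag.
  public static class Iter {
    public boolean flag;
    public void setInterruptFlag(boolean f) { flag = f; }
  }

  // Flag every iterator from both sources in one pass, then build the
  // combined list, mirroring the ScanDataSource change above.
  public static List<Iter> flagAndCombine(List<Iter> files, List<Iter> mem) {
    Stream.concat(files.stream(), mem.stream())
        .forEach(i -> i.setInterruptFlag(true));
    List<Iter> all = new ArrayList<>(files.size() + mem.size());
    all.addAll(files);
    all.addAll(mem);
    return all;
  }

  public static void main(String[] args) {
    List<Iter> all = flagAndCombine(
        Arrays.asList(new Iter(), new Iter()), Arrays.asList(new Iter()));
    System.out.println(all.size() + " " + all.stream().allMatch(i -> i.flag));
  }
}
```

Concatenating the views avoids flagging the two collections in separate loops, which is why the old per-list `setInterruptFlag` loop after `iters.addAll(...)` could be removed.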
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java
index 93e8eee..dceac08 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/ScanOptions.java
@@ -21,8 +21,10 @@
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.data.Column;
 import org.apache.accumulo.core.data.thrift.IterInfo;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.security.Authorizations;
 
 final class ScanOptions {
@@ -35,9 +37,12 @@
   private final AtomicBoolean interruptFlag;
   private final int num;
   private final boolean isolated;
+  private SamplerConfiguration samplerConfig;
+  private final long batchTimeOut;
+  private String classLoaderContext;
 
   ScanOptions(int num, Authorizations authorizations, byte[] defaultLabels, Set<Column> columnSet, List<IterInfo> ssiList, Map<String,Map<String,String>> ssio,
-      AtomicBoolean interruptFlag, boolean isolated) {
+      AtomicBoolean interruptFlag, boolean isolated, SamplerConfiguration samplerConfig, long batchTimeOut, String classLoaderContext) {
     this.num = num;
     this.authorizations = authorizations;
     this.defaultLabels = defaultLabels;
@@ -46,6 +51,9 @@
     this.ssio = ssio;
     this.interruptFlag = interruptFlag;
     this.isolated = isolated;
+    this.samplerConfig = samplerConfig;
+    this.batchTimeOut = batchTimeOut;
+    this.classLoaderContext = classLoaderContext;
   }
 
   public Authorizations getAuthorizations() {
@@ -79,4 +87,26 @@
   public boolean isIsolated() {
     return isolated;
   }
+
+  public SamplerConfiguration getSamplerConfiguration() {
+    return samplerConfig;
+  }
+
+  public SamplerConfigurationImpl getSamplerConfigurationImpl() {
+    if (samplerConfig == null)
+      return null;
+    return new SamplerConfigurationImpl(samplerConfig);
+  }
+
+  public long getBatchTimeOut() {
+    return batchTimeOut;
+  }
+
+  public String getClassLoaderContext() {
+    return classLoaderContext;
+  }
+
+  public void setClassLoaderContext(String context) {
+    this.classLoaderContext = context;
+  }
 }
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Scanner.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Scanner.java
index c96c75a..15526d7 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Scanner.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Scanner.java
@@ -95,7 +95,7 @@
         iter = new SourceSwitchingIterator(dataSource, false);
       }
 
-      results = tablet.nextBatch(iter, range, options.getNum(), options.getColumnSet());
+      results = tablet.nextBatch(iter, range, options.getNum(), options.getColumnSet(), options.getBatchTimeOut());
 
       if (results.getResults() == null) {
         range = null;
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/SplitInfo.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/SplitInfo.java
deleted file mode 100644
index f8f2183..0000000
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/SplitInfo.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.tserver.tablet;
-
-import java.util.SortedMap;
-
-import org.apache.accumulo.core.metadata.schema.DataFileValue;
-import org.apache.accumulo.server.fs.FileRef;
-import org.apache.accumulo.server.master.state.TServerInstance;
-
-/**
- * operations are disallowed while we split which is ok since splitting is fast
- *
- * a minor compaction should have taken place before calling this so there should be relatively little left to compact
- *
- * we just need to make sure major compactions aren't occurring if we have the major compactor thread decide who needs splitting we can avoid synchronization
- * issues with major compactions
- *
- */
-
-final public class SplitInfo {
-  private final String dir;
-  private final SortedMap<FileRef,DataFileValue> datafiles;
-  private final String time;
-  private final long initFlushID;
-  private final long initCompactID;
-  private final TServerInstance lastLocation;
-
-  SplitInfo(String d, SortedMap<FileRef,DataFileValue> dfv, String time, long initFlushID, long initCompactID, TServerInstance lastLocation) {
-    this.dir = d;
-    this.datafiles = dfv;
-    this.time = time;
-    this.initFlushID = initFlushID;
-    this.initCompactID = initCompactID;
-    this.lastLocation = lastLocation;
-  }
-
-  public String getDir() {
-    return dir;
-  }
-
-  public SortedMap<FileRef,DataFileValue> getDatafiles() {
-    return datafiles;
-  }
-
-  public String getTime() {
-    return time;
-  }
-
-  public long getInitFlushID() {
-    return initFlushID;
-  }
-
-  public long getInitCompactID() {
-    return initCompactID;
-  }
-
-  public TServerInstance getLastLocation() {
-    return lastLocation;
-  }
-
-}
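The deleted `SplitInfo` value object is subsumed by the `TabletData` parameter the new `Tablet` constructor reads from (visible in the Tablet.java hunk below, via `data.getLastLocation()`, `data.getFlushID()`, and so on). A hypothetical minimal sketch of such an immutable holder; the getter names match those the diff calls, but the field types here are illustrative stand-ins, and the real `TabletData` carries more state:

```java
// Hypothetical sketch in the spirit of TabletData, which replaces SplitInfo.
// "lastLocation" is a String stand-in for TServerInstance.
public class TabletDataSketch {
  private final String lastLocation;
  private final long flushID;
  private final long compactID;
  private final long splitTime;
  private final String time;

  public TabletDataSketch(String lastLocation, long flushID, long compactID,
      long splitTime, String time) {
    this.lastLocation = lastLocation;
    this.flushID = flushID;
    this.compactID = compactID;
    this.splitTime = splitTime;
    this.time = time;
  }

  public String getLastLocation() { return lastLocation; }
  public long getFlushID() { return flushID; }
  public long getCompactID() { return compactID; }
  public long getSplitTime() { return splitTime; }
  public String getTime() { return time; }

  public static void main(String[] args) {
    TabletDataSketch d = new TabletDataSketch("ts1:9997", 7, 3, 0, "M0");
    System.out.println(d.getFlushID() + " " + d.getTime());
  }
}
```

Collapsing the split-time and metadata-lookup constructor paths onto one data object is what lets the diff delete the chain of `lookup*` helpers and overloaded constructors in Tablet.java.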
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
index 3231ce9..6637521 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/Tablet.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.tserver.tablet;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static java.util.Objects.requireNonNull;
 
@@ -38,6 +39,10 @@
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ConcurrentSkipListSet;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.atomic.AtomicReference;
@@ -49,6 +54,7 @@
 import org.apache.accumulo.core.client.admin.CompactionStrategyConfig;
 import org.apache.accumulo.core.client.impl.DurabilityImpl;
 import org.apache.accumulo.core.client.impl.Tables;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigurationCopy;
 import org.apache.accumulo.core.conf.ConfigurationObserver;
@@ -70,13 +76,11 @@
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.system.SourceSwitchingIterator;
+import org.apache.accumulo.core.master.thrift.BulkImportState;
 import org.apache.accumulo.core.master.thrift.TabletLoadState;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
 import org.apache.accumulo.core.protobuf.ProtobufUtil;
 import org.apache.accumulo.core.replication.ReplicationConfigurationUtil;
 import org.apache.accumulo.core.security.Authorizations;
@@ -88,8 +92,7 @@
 import org.apache.accumulo.core.trace.Trace;
 import org.apache.accumulo.core.util.LocalityGroupUtil;
 import org.apache.accumulo.core.util.Pair;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.accumulo.core.util.ratelimit.RateLimiter;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.conf.TableConfiguration;
 import org.apache.accumulo.server.fs.FileRef;
@@ -149,6 +152,8 @@
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Optional;
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
 
 /**
  *
@@ -157,7 +162,6 @@
  */
 public class Tablet implements TabletCommitter {
   static private final Logger log = Logger.getLogger(Tablet.class);
-  static private final List<LogEntry> NO_LOG_ENTRIES = Collections.emptyList();
 
   private final TabletServer tabletServer;
   private final KeyExtent extent;
@@ -173,7 +177,7 @@
   private final Object timeLock = new Object();
   private long persistedTime;
 
-  private TServerInstance lastLocation;
+  private TServerInstance lastLocation = null;
   private volatile boolean tableDirChecked = false;
 
   private final AtomicLong dataSourceDeletions = new AtomicLong(0);
@@ -182,7 +186,7 @@
     return dataSourceDeletions.get();
   }
 
-  private final Set<ScanDataSource> activeScans = new HashSet<ScanDataSource>();
+  private final Set<ScanDataSource> activeScans = new HashSet<>();
 
   private static enum CloseState {
     OPEN, CLOSING, CLOSED, COMPLETE
@@ -201,18 +205,18 @@
   }
 
   // stores info about user initiated major compaction that is waiting on a minor compaction to finish
-  private CompactionWaitInfo compactionWaitInfo = new CompactionWaitInfo();
+  private final CompactionWaitInfo compactionWaitInfo = new CompactionWaitInfo();
 
   static enum CompactionState {
     WAITING_TO_START, IN_PROGRESS
-  };
+  }
 
   private volatile CompactionState minorCompactionState = null;
   private volatile CompactionState majorCompactionState = null;
 
   private final Set<MajorCompactionReason> majorCompactionQueued = Collections.synchronizedSet(EnumSet.noneOf(MajorCompactionReason.class));
 
-  private final AtomicReference<ConstraintChecker> constraintChecker = new AtomicReference<ConstraintChecker>();
+  private final AtomicReference<ConstraintChecker> constraintChecker = new AtomicReference<>();
 
   private int writesInProgress = 0;
 
@@ -243,6 +247,8 @@
 
   private final ConfigurationObserver configObserver;
 
+  private final Cache<Long,List<FileRef>> bulkImported = CacheBuilder.newBuilder().build();
+
   private final int logId;
 
   @Override
@@ -251,7 +257,7 @@
   }
 
   public static class LookupResult {
-    public List<Range> unfinishedRanges = new ArrayList<Range>();
+    public List<Range> unfinishedRanges = new ArrayList<>();
     public long bytesAdded = 0;
     public long dataSize = 0;
     public boolean closed = false;
@@ -302,235 +308,54 @@
     this.tableConfiguration = tableConfiguration;
     this.extent = extent;
     this.configObserver = configObserver;
+    this.splitCreationTime = 0;
   }
 
-  public Tablet(TabletServer tabletServer, KeyExtent extent, TabletResourceManager trm, SplitInfo info) throws IOException {
-    this(tabletServer, new Text(info.getDir()), extent, trm, info.getDatafiles(), info.getTime(), info.getInitFlushID(), info.getInitCompactID(), info
-        .getLastLocation());
-    splitCreationTime = System.currentTimeMillis();
-  }
+  public Tablet(final TabletServer tabletServer, final KeyExtent extent, final TabletResourceManager trm, TabletData data) throws IOException {
 
-  private Tablet(TabletServer tabletServer, Text location, KeyExtent extent, TabletResourceManager trm, SortedMap<FileRef,DataFileValue> datafiles,
-      String time, long initFlushID, long initCompactID, TServerInstance lastLocation) throws IOException {
-    this(tabletServer, extent, location, trm, NO_LOG_ENTRIES, datafiles, time, lastLocation, new HashSet<FileRef>(), initFlushID, initCompactID);
-  }
-
-  private static String lookupTime(AccumuloConfiguration conf, KeyExtent extent, SortedMap<Key,Value> tabletsKeyValues) {
-    SortedMap<Key,Value> entries;
-
-    if (extent.isRootTablet()) {
-      return null;
-    } else {
-      entries = new TreeMap<Key,Value>();
-      Text rowName = extent.getMetadataEntry();
-      for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-        if (entry.getKey().compareRow(rowName) == 0 && TabletsSection.ServerColumnFamily.TIME_COLUMN.hasColumns(entry.getKey())) {
-          entries.put(new Key(entry.getKey()), new Value(entry.getValue()));
-        }
-      }
-    }
-
-    if (entries.size() == 1)
-      return entries.values().iterator().next().toString();
-    return null;
-  }
-
-  private static SortedMap<FileRef,DataFileValue> lookupDatafiles(AccumuloServerContext context, VolumeManager fs, KeyExtent extent,
-      SortedMap<Key,Value> tabletsKeyValues) throws IOException {
-
-    TreeMap<FileRef,DataFileValue> datafiles = new TreeMap<FileRef,DataFileValue>();
-
-    if (extent.isRootTablet()) { // the meta0 tablet
-      Path location = new Path(MetadataTableUtil.getRootTabletDir());
-
-      // cleanUpFiles() has special handling for delete. files
-      FileStatus[] files = fs.listStatus(location);
-      Collection<String> goodPaths = RootFiles.cleanupReplacement(fs, files, true);
-      for (String good : goodPaths) {
-        Path path = new Path(good);
-        String filename = path.getName();
-        FileRef ref = new FileRef(location.toString() + "/" + filename, path);
-        DataFileValue dfv = new DataFileValue(0, 0);
-        datafiles.put(ref, dfv);
-      }
-    } else {
-      final Text buffer = new Text();
-      final Text row = extent.getMetadataEntry();
-
-      for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-        Key k = entry.getKey();
-        k.getRow(buffer);
-        // Double-check that we have the expected row
-        if (row.equals(buffer)) {
-          k.getColumnFamily(buffer);
-          // Ignore anything but file:
-          if (TabletsSection.DataFileColumnFamily.NAME.equals(buffer)) {
-            FileRef ref = new FileRef(fs, k);
-            datafiles.put(ref, new DataFileValue(entry.getValue().get()));
-          }
-        }
-      }
-    }
-    return datafiles;
-  }
-
-  private static List<LogEntry> lookupLogEntries(AccumuloServerContext context, KeyExtent ke, SortedMap<Key,Value> tabletsKeyValues) {
-    List<LogEntry> logEntries = new ArrayList<LogEntry>();
-
-    if (ke.isMeta()) {
-      try {
-        logEntries = MetadataTableUtil.getLogEntries(context, ke);
-      } catch (Exception ex) {
-        throw new RuntimeException("Unable to read tablet log entries", ex);
-      }
-    } else {
-      log.debug("Looking at metadata " + tabletsKeyValues);
-      Text row = ke.getMetadataEntry();
-      for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-        Key key = entry.getKey();
-        if (key.getRow().equals(row)) {
-          if (key.getColumnFamily().equals(LogColumnFamily.NAME)) {
-            logEntries.add(LogEntry.fromKeyValue(key, entry.getValue()));
-          }
-        }
-      }
-    }
-
-    log.debug("got " + logEntries + " for logs for " + ke);
-    return logEntries;
-  }
-
-  private static Set<FileRef> lookupScanFiles(KeyExtent extent, SortedMap<Key,Value> tabletsKeyValues, VolumeManager fs) throws IOException {
-    HashSet<FileRef> scanFiles = new HashSet<FileRef>();
-
-    Text row = extent.getMetadataEntry();
-    for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-      Key key = entry.getKey();
-      if (key.getRow().equals(row) && key.getColumnFamily().equals(ScanFileColumnFamily.NAME)) {
-        scanFiles.add(new FileRef(fs, key));
-      }
-    }
-
-    return scanFiles;
-  }
-
-  private static long lookupFlushID(KeyExtent extent, SortedMap<Key,Value> tabletsKeyValues) {
-    Text row = extent.getMetadataEntry();
-    for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-      Key key = entry.getKey();
-      if (key.getRow().equals(row) && TabletsSection.ServerColumnFamily.FLUSH_COLUMN.equals(key.getColumnFamily(), key.getColumnQualifier()))
-        return Long.parseLong(entry.getValue().toString());
-    }
-
-    return -1;
-  }
-
-  private static long lookupCompactID(KeyExtent extent, SortedMap<Key,Value> tabletsKeyValues) {
-    Text row = extent.getMetadataEntry();
-    for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-      Key key = entry.getKey();
-      if (key.getRow().equals(row) && TabletsSection.ServerColumnFamily.COMPACT_COLUMN.equals(key.getColumnFamily(), key.getColumnQualifier()))
-        return Long.parseLong(entry.getValue().toString());
-    }
-
-    return -1;
-  }
-
-  private static TServerInstance lookupLastServer(KeyExtent extent, SortedMap<Key,Value> tabletsKeyValues) {
-    for (Entry<Key,Value> entry : tabletsKeyValues.entrySet()) {
-      if (entry.getKey().getColumnFamily().compareTo(TabletsSection.LastLocationColumnFamily.NAME) == 0) {
-        return new TServerInstance(entry.getValue(), entry.getKey().getColumnQualifier());
-      }
-    }
-    return null;
-  }
-
-  public Tablet(TabletServer tabletServer, KeyExtent extent, Text location, TabletResourceManager trm, SortedMap<Key,Value> tabletsKeyValues)
-      throws IOException {
-    this(tabletServer, extent, location, trm, lookupLogEntries(tabletServer, extent, tabletsKeyValues), lookupDatafiles(tabletServer,
-        tabletServer.getFileSystem(), extent, tabletsKeyValues), lookupTime(tabletServer.getConfiguration(), extent, tabletsKeyValues), lookupLastServer(
-        extent, tabletsKeyValues), lookupScanFiles(extent, tabletsKeyValues, tabletServer.getFileSystem()), lookupFlushID(extent, tabletsKeyValues),
-        lookupCompactID(extent, tabletsKeyValues));
-  }
-
-  /**
-   * yet another constructor - this one allows us to avoid costly lookups into the Metadata table if we already know the files we need - as at split time
-   */
-  private Tablet(final TabletServer tabletServer, final KeyExtent extent, final Text location, final TabletResourceManager trm,
-      final List<LogEntry> rawLogEntries, final SortedMap<FileRef,DataFileValue> rawDatafiles, String time, final TServerInstance lastLocation,
-      Set<FileRef> scanFiles, long initFlushID, long initCompactID) throws IOException {
+    this.tabletServer = tabletServer;
+    this.extent = extent;
+    this.tabletResources = trm;
+    this.lastLocation = data.getLastLocation();
+    this.lastFlushID = data.getFlushID();
+    this.lastCompactID = data.getCompactID();
+    this.splitCreationTime = data.getSplitTime();
+    this.tabletTime = TabletTime.getInstance(data.getTime());
+    this.persistedTime = tabletTime.getTime();
+    this.logId = tabletServer.createLogId(extent);
 
     TableConfiguration tblConf = tabletServer.getTableConfiguration(extent);
     if (null == tblConf) {
       Tables.clearCache(tabletServer.getInstance());
       tblConf = tabletServer.getTableConfiguration(extent);
-      requireNonNull(tblConf, "Could not get table configuration for " + extent.getTableId().toString());
+      requireNonNull(tblConf, "Could not get table configuration for " + extent.getTableId());
     }
 
     this.tableConfiguration = tblConf;
 
-    TabletFiles tabletPaths = VolumeUtil.updateTabletVolumes(tabletServer, tabletServer.getLock(), tabletServer.getFileSystem(), extent, new TabletFiles(
-        location.toString(), rawLogEntries, rawDatafiles), ReplicationConfigurationUtil.isEnabled(extent, this.tableConfiguration));
+    // translate any volume changes
+    VolumeManager fs = tabletServer.getFileSystem();
+    boolean replicationEnabled = ReplicationConfigurationUtil.isEnabled(extent, this.tableConfiguration);
+    TabletFiles tabletPaths = new TabletFiles(data.getDirectory(), data.getLogEntris(), data.getDataFiles());
+    tabletPaths = VolumeUtil.updateTabletVolumes(tabletServer, tabletServer.getLock(), fs, extent, tabletPaths, replicationEnabled);
 
+    // deal with relative path for the directory
     Path locationPath;
-
     if (tabletPaths.dir.contains(":")) {
-      locationPath = new Path(tabletPaths.dir.toString());
+      locationPath = new Path(tabletPaths.dir);
     } else {
-      locationPath = tabletServer.getFileSystem().getFullPath(FileType.TABLE, extent.getTableId().toString() + tabletPaths.dir.toString());
+      locationPath = tabletServer.getFileSystem().getFullPath(FileType.TABLE, extent.getTableId() + tabletPaths.dir);
     }
+    this.location = locationPath;
+    this.tabletDirectory = tabletPaths.dir;
+    for (Entry<Long,List<FileRef>> entry : data.getBulkImported().entrySet()) {
+      this.bulkImported.put(entry.getKey(), new CopyOnWriteArrayList<>(entry.getValue()));
+    }
+    setupDefaultSecurityLabels(extent);
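The relative-path handling above treats a directory containing `:` as an already-absolute URI and otherwise resolves it against the table's directory. A minimal sketch of that convention with plain strings — the base path and helper name here are illustrative only, not the Accumulo API:

```java
public class TabletDirResolve {
  /**
   * A dir containing a scheme separator (":") is already a full URI; otherwise it is
   * resolved relative to the table directory (the base path here is hypothetical).
   */
  public static String resolve(String tablesBase, String tableId, String dir) {
    if (dir.contains(":")) {
      return dir; // e.g. hdfs://namenode/accumulo/tables/2/t-0001
    }
    return tablesBase + "/" + tableId + dir; // e.g. "/t-0001" relative to the table dir
  }
}
```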
 
     final List<LogEntry> logEntries = tabletPaths.logEntries;
     final SortedMap<FileRef,DataFileValue> datafiles = tabletPaths.datafiles;
 
-    this.location = locationPath;
-    this.lastLocation = lastLocation;
-    this.tabletDirectory = tabletPaths.dir;
-
-    this.extent = extent;
-    this.tabletResources = trm;
-
-    this.lastFlushID = initFlushID;
-    this.lastCompactID = initCompactID;
-
-    if (extent.isRootTablet()) {
-      long rtime = Long.MIN_VALUE;
-      for (FileRef ref : datafiles.keySet()) {
-        Path path = ref.path();
-        FileSystem ns = tabletServer.getFileSystem().getVolumeByPath(path).getFileSystem();
-        FileSKVIterator reader = FileOperations.getInstance().openReader(path.toString(), true, ns, ns.getConf(), tabletServer.getTableConfiguration(extent));
-        long maxTime = -1;
-        try {
-
-          while (reader.hasTop()) {
-            maxTime = Math.max(maxTime, reader.getTopKey().getTimestamp());
-            reader.next();
-          }
-
-        } finally {
-          reader.close();
-        }
-
-        if (maxTime > rtime) {
-          time = TabletTime.LOGICAL_TIME_ID + "" + maxTime;
-          rtime = maxTime;
-        }
-      }
-    }
-    if (time == null && datafiles.isEmpty() && extent.equals(RootTable.OLD_EXTENT)) {
-      // recovery... old root tablet has no data, so time doesn't matter:
-      time = TabletTime.LOGICAL_TIME_ID + "" + Long.MIN_VALUE;
-    }
-
-    this.tabletServer = tabletServer;
-    this.logId = tabletServer.createLogId(extent);
-
-    setupDefaultSecurityLabels(extent);
-
-    tabletMemory = new TabletMemory(this);
-    tabletTime = TabletTime.getInstance(time);
-    persistedTime = tabletTime.getTime();
-
     tableConfiguration.addObserver(configObserver = new ConfigurationObserver() {
 
       private void reloadConstraints() {
@@ -572,19 +397,18 @@
     });
 
     tableConfiguration.getNamespaceConfiguration().addObserver(configObserver);
+    tabletMemory = new TabletMemory(this);
 
     // Force a load of any per-table properties
     configObserver.propertiesChanged();
-
     if (!logEntries.isEmpty()) {
       log.info("Starting Write-Ahead Log recovery for " + this.extent);
-      // count[0] = entries used on tablet
-      // count[1] = track max time from walog entries wihtout timestamps
-      final long[] count = new long[2];
+      final AtomicLong entriesUsedOnTablet = new AtomicLong(0);
+      // track max time from walog entries without timestamps
+      final AtomicLong maxTime = new AtomicLong(Long.MIN_VALUE);
       final CommitSession commitSession = getTabletMemory().getCommitSession();
-      count[1] = Long.MIN_VALUE;
       try {
-        Set<String> absPaths = new HashSet<String>();
+        Set<String> absPaths = new HashSet<>();
         for (FileRef ref : datafiles.keySet())
           absPaths.add(ref.path().toString());
 
@@ -597,20 +421,20 @@
               if (!columnUpdate.hasTimestamp()) {
                 // if it is not a user set timestamp, it must have been set
                 // by the system
-                count[1] = Math.max(count[1], columnUpdate.getTimestamp());
+                maxTime.set(Math.max(maxTime.get(), columnUpdate.getTimestamp()));
               }
             }
             getTabletMemory().mutate(commitSession, Collections.singletonList(m));
-            count[0]++;
+            entriesUsedOnTablet.incrementAndGet();
           }
         });
 
-        if (count[1] != Long.MIN_VALUE) {
-          tabletTime.useMaxTimeFromWALog(count[1]);
+        if (maxTime.get() != Long.MIN_VALUE) {
+          tabletTime.useMaxTimeFromWALog(maxTime.get());
         }
         commitSession.updateMaxCommittedTime(tabletTime.getTime());
 
-        if (count[0] == 0) {
+        if (entriesUsedOnTablet.get() == 0) {
           log.debug("No replayed mutations applied, removing unused entries for " + extent);
           MetadataTableUtil.removeUnusedWALEntries(getTabletServer(), extent, logEntries, tabletServer.getLock());
 
@@ -629,8 +453,8 @@
           // the WAL isn't closed (WRT replication Status) and thus we're safe to update its progress.
           Status status = StatusUtil.openWithUnknownLength();
           for (LogEntry logEntry : logEntries) {
-            log.debug("Writing updated status to metadata table for " + logEntry.logSet + " " + ProtobufUtil.toString(status));
-            ReplicationTableUtil.updateFiles(tabletServer, extent, logEntry.logSet, status);
+            log.debug("Writing updated status to metadata table for " + logEntry.filename + " " + ProtobufUtil.toString(status));
+            ReplicationTableUtil.updateFiles(tabletServer, extent, logEntry.filename, status);
           }
         }
 
@@ -642,15 +466,13 @@
         }
       }
       // make some closed references that represent the recovered logs
-      currentLogs = new HashSet<DfsLogger>();
+      currentLogs = new ConcurrentSkipListSet<>();
       for (LogEntry logEntry : logEntries) {
-        for (String log : logEntry.logSet) {
-          currentLogs.add(new DfsLogger(tabletServer.getServerConfig(), log, logEntry.getColumnQualifier().toString()));
-        }
+        currentLogs.add(new DfsLogger(tabletServer.getServerConfig(), logEntry.filename, logEntry.getColumnQualifier().toString()));
       }
 
-      log.info("Write-Ahead Log recovery complete for " + this.extent + " (" + count[0] + " mutations applied, " + getTabletMemory().getNumEntries()
-          + " entries created)");
+      log.info("Write-Ahead Log recovery complete for " + this.extent + " (" + entriesUsedOnTablet.get() + " mutations applied, "
+          + getTabletMemory().getNumEntries() + " entries created)");
     }
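The WAL-recovery hunk above replaces the old two-element `long[] count` with named `AtomicLong`s. The mutation receiver is an anonymous callback, which can only capture effectively-final locals, so mutable counters need a wrapper; the named fields also document what was previously `count[0]`/`count[1]`. A minimal sketch of the pattern (names are stand-ins, not the Accumulo API):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongConsumer;

public class WalCounters {
  /** Replays timestamps through a callback; returns {entriesUsed, maxTime}. */
  public static long[] replay(long[] timestamps) {
    final AtomicLong entriesUsedOnTablet = new AtomicLong(0);
    // track max time from walog entries without timestamps
    final AtomicLong maxTime = new AtomicLong(Long.MIN_VALUE);
    // The callback may only capture effectively-final locals, which is why
    // the patch swaps the long[2] for two named AtomicLongs.
    LongConsumer receive = ts -> {
      maxTime.set(Math.max(maxTime.get(), ts));
      entriesUsedOnTablet.incrementAndGet();
    };
    for (long ts : timestamps) {
      receive.accept(ts);
    }
    return new long[] {entriesUsedOnTablet.get(), maxTime.get()};
  }

  public static void main(String[] args) {
    long[] r = replay(new long[] {5, 42, 17});
    System.out.println(r[0] + " mutations applied, max time " + r[1]);
  }
}
```

As in the patch, replay is single-threaded, so the non-atomic read-then-set on `maxTime` is safe.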
 
     String contextName = tableConfiguration.get(Property.TABLE_CLASSPATH);
@@ -666,7 +488,7 @@
 
     computeNumEntries();
 
-    getDatafileManager().removeFilesAfterScan(scanFiles);
+    getDatafileManager().removeFilesAfterScan(data.getScanFiles());
 
     // look for hints of a failure on the previous tablet server
     if (!logEntries.isEmpty() || needsMajorCompaction(MajorCompactionReason.NORMAL)) {
@@ -707,8 +529,8 @@
     }
   }
 
-  private LookupResult lookup(SortedKeyValueIterator<Key,Value> mmfi, List<Range> ranges, HashSet<Column> columnSet, List<KVEntry> results, long maxResultsSize)
-      throws IOException {
+  private LookupResult lookup(SortedKeyValueIterator<Key,Value> mmfi, List<Range> ranges, HashSet<Column> columnSet, List<KVEntry> results,
+      long maxResultsSize, long batchTimeOut) throws IOException {
 
     LookupResult lookupResult = new LookupResult();
 
@@ -719,9 +541,16 @@
     if (columnSet.size() > 0)
       cfset = LocalityGroupUtil.families(columnSet);
 
+    long returnTime = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(batchTimeOut);
+    if (batchTimeOut <= 0 || batchTimeOut == Long.MAX_VALUE) {
+      batchTimeOut = 0;
+    }
+
     for (Range range : ranges) {
 
-      if (exceededMemoryUsage || tabletClosed) {
+      boolean timesUp = batchTimeOut > 0 && System.nanoTime() > returnTime;
+
+      if (exceededMemoryUsage || tabletClosed || timesUp) {
         lookupResult.unfinishedRanges.add(range);
         continue;
       }
@@ -745,7 +574,9 @@
 
           exceededMemoryUsage = lookupResult.bytesAdded > maxResultsSize;
 
-          if (exceededMemoryUsage) {
+          timesUp = batchTimeOut > 0 && System.nanoTime() > returnTime;
+
+          if (exceededMemoryUsage || timesUp) {
             addUnfinishedRange(lookupResult, range, key, false);
             break;
           }
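Both `timesUp` checks in this method follow the same deadline convention: the deadline is computed from `System.nanoTime()` before the sentinel check, and a `batchTimeOut` of `<= 0` or `Long.MAX_VALUE` disables the timeout entirely. Sketched here with hypothetical helper names:

```java
import java.util.concurrent.TimeUnit;

public class BatchDeadline {
  /** Mirrors the patch's convention: <= 0 or Long.MAX_VALUE means "no timeout". */
  public static boolean enabled(long batchTimeOutMillis) {
    return batchTimeOutMillis > 0 && batchTimeOutMillis != Long.MAX_VALUE;
  }

  /** True once the deadline (in nanoTime units) has passed, if a timeout is set. */
  public static boolean timesUp(long batchTimeOutMillis, long deadlineNanos, long nowNanos) {
    return enabled(batchTimeOutMillis) && nowNanos > deadlineNanos;
  }

  public static void main(String[] args) {
    long batchTimeOut = 50; // milliseconds
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(batchTimeOut);
    // In the scan loop, ranges still unscanned when time is up are added to
    // lookupResult.unfinishedRanges so the client can resume later.
    System.out.println(timesUp(batchTimeOut, deadline, System.nanoTime()));
  }
}
```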
@@ -821,7 +652,8 @@
   }
 
   public LookupResult lookup(List<Range> ranges, HashSet<Column> columns, Authorizations authorizations, List<KVEntry> results, long maxResultSize,
-      List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, AtomicBoolean interruptFlag) throws IOException {
+      List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, AtomicBoolean interruptFlag, SamplerConfiguration samplerConfig, long batchTimeOut,
+      String classLoaderContext) throws IOException {
 
     if (ranges.size() == 0) {
       return new LookupResult();
@@ -839,13 +671,14 @@
       tabletRange.clip(range);
     }
 
-    ScanDataSource dataSource = new ScanDataSource(this, authorizations, this.defaultSecurityLabel, columns, ssiList, ssio, interruptFlag);
+    ScanDataSource dataSource = new ScanDataSource(this, authorizations, this.defaultSecurityLabel, columns, ssiList, ssio, interruptFlag, samplerConfig,
+        batchTimeOut, classLoaderContext);
 
     LookupResult result = null;
 
     try {
       SortedKeyValueIterator<Key,Value> iter = new SourceSwitchingIterator(dataSource);
-      result = lookup(iter, ranges, columns, results, maxResultSize);
+      result = lookup(iter, ranges, columns, results, maxResultSize, batchTimeOut);
       return result;
     } catch (IOException ioe) {
       dataSource.close(true);
@@ -863,11 +696,15 @@
     }
   }
 
-  Batch nextBatch(SortedKeyValueIterator<Key,Value> iter, Range range, int num, Set<Column> columns) throws IOException {
+  Batch nextBatch(SortedKeyValueIterator<Key,Value> iter, Range range, int num, Set<Column> columns, long batchTimeOut) throws IOException {
 
     // log.info("In nextBatch..");
 
-    List<KVEntry> results = new ArrayList<KVEntry>();
+    long stopTime = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(batchTimeOut);
+    if (batchTimeOut == Long.MAX_VALUE || batchTimeOut <= 0) {
+      batchTimeOut = 0;
+    }
+    List<KVEntry> results = new ArrayList<>();
     Key key = null;
 
     Value value;
@@ -896,7 +733,9 @@
       resultSize += kvEntry.estimateMemoryUsed();
       resultBytes += kvEntry.numBytes();
 
-      if (resultSize >= maxResultsSize || results.size() >= num) {
+      boolean timesUp = batchTimeOut > 0 && System.nanoTime() >= stopTime;
+
+      if (resultSize >= maxResultsSize || results.size() >= num || timesUp) {
         continueKey = new Key(key);
         skipContinueKey = true;
         break;
@@ -937,12 +776,14 @@
   }
 
   public Scanner createScanner(Range range, int num, Set<Column> columns, Authorizations authorizations, List<IterInfo> ssiList,
-      Map<String,Map<String,String>> ssio, boolean isolated, AtomicBoolean interruptFlag) {
+      Map<String,Map<String,String>> ssio, boolean isolated, AtomicBoolean interruptFlag, SamplerConfiguration samplerConfig, long batchTimeOut,
+      String classLoaderContext) {
     // do a test to see if this range falls within the tablet, if it does not
     // then clip will throw an exception
     extent.toDataRange().clip(range);
 
-    ScanOptions opts = new ScanOptions(num, authorizations, this.defaultSecurityLabel, columns, ssiList, ssio, interruptFlag, isolated);
+    ScanOptions opts = new ScanOptions(num, authorizations, this.defaultSecurityLabel, columns, ssiList, ssio, interruptFlag, isolated, samplerConfig,
+        batchTimeOut, classLoaderContext);
     return new Scanner(this, range, opts);
   }
 
@@ -954,7 +795,9 @@
 
     long count = 0;
 
+    String oldName = Thread.currentThread().getName();
     try {
+      Thread.currentThread().setName("Minor compacting " + this.extent);
       Span span = Trace.start("write");
       CompactionStats stats;
       try {
@@ -985,6 +828,7 @@
       failed = true;
       throw new RuntimeException(e);
     } finally {
+      Thread.currentThread().setName(oldName);
       try {
         getTabletMemory().finalizeMinC();
       } catch (Throwable t) {
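The thread-rename added to the minor compaction above makes stack dumps and thread lists show which tablet a pooled thread is working on; the `finally` restores the old name so a failure does not leave a stale label behind. The general shape, as a small sketch:

```java
public class ThreadNameScope {
  /** Runs a task under a temporary thread name, always restoring the previous one. */
  public static void runNamed(String name, Runnable task) {
    String oldName = Thread.currentThread().getName();
    try {
      Thread.currentThread().setName(name);
      task.run();
    } finally {
      // Restore in finally so a failed task does not leave a pooled
      // thread carrying a stale "Minor compacting ..." name.
      Thread.currentThread().setName(oldName);
    }
  }

  public static void main(String[] args) {
    runNamed("Minor compacting 1<<", () -> System.out.println(Thread.currentThread().getName()));
  }
}
```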
@@ -1009,7 +853,7 @@
   private synchronized MinorCompactionTask prepareForMinC(long flushId, MinorCompactionReason mincReason) {
     CommitSession oldCommitSession = getTabletMemory().prepareForMinC();
     otherLogs = currentLogs;
-    currentLogs = new HashSet<DfsLogger>();
+    currentLogs = new ConcurrentSkipListSet<>();
 
     FileRef mergeFile = null;
     if (mincReason != MinorCompactionReason.RECOVERY) {
@@ -1206,7 +1050,7 @@
         }
       }
 
-      return new Pair<Long,UserCompactionConfig>(compactID, compactionConfig);
+      return new Pair<>(compactID, compactionConfig);
     } catch (InterruptedException e) {
       throw new RuntimeException(e);
     } catch (NumberFormatException nfe) {
@@ -1259,7 +1103,7 @@
       if (more != null) {
         violations.add(more);
         if (violators == null)
-          violators = new ArrayList<Mutation>();
+          violators = new ArrayList<>();
         violators.add(mutation);
       }
     }
@@ -1268,8 +1112,8 @@
 
     if (!violations.isEmpty()) {
 
-      HashSet<Mutation> violatorsSet = new HashSet<Mutation>(violators);
-      ArrayList<Mutation> nonViolators = new ArrayList<Mutation>();
+      HashSet<Mutation> violatorsSet = new HashSet<>(violators);
+      ArrayList<Mutation> nonViolators = new ArrayList<>();
 
       for (Mutation mutation : mutations) {
         if (!violatorsSet.contains(mutation)) {
@@ -1475,12 +1319,11 @@
         } catch (RuntimeException t) {
           err = t;
           log.error("Consistency check fails, retrying " + t);
-          UtilWaitThread.sleep(500);
+          sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
         }
       }
       if (err != null) {
-        ProblemReports.getInstance(tabletServer)
-            .report(new ProblemReport(extent.getTableId().toString(), ProblemType.TABLET_LOAD, this.extent.toString(), err));
+        ProblemReports.getInstance(tabletServer).report(new ProblemReport(extent.getTableId(), ProblemType.TABLET_LOAD, this.extent.toString(), err));
         log.error("Tablet closed consistency check has failed for " + this.extent + " giving up and closing");
       }
     }
@@ -1612,7 +1455,7 @@
   private boolean sawBigRow = false;
   private long timeOfLastMinCWhenBigFreakinRowWasSeen = 0;
   private long timeOfLastImportWhenBigFreakinRowWasSeen = 0;
-  private long splitCreationTime;
+  private final long splitCreationTime;
 
   private SplitRowSpec findSplitRow(Collection<FileRef> files) {
 
@@ -1735,17 +1578,18 @@
   }
 
   private Map<FileRef,Pair<Key,Key>> getFirstAndLastKeys(SortedMap<FileRef,DataFileValue> allFiles) throws IOException {
-    Map<FileRef,Pair<Key,Key>> result = new HashMap<FileRef,Pair<Key,Key>>();
+    Map<FileRef,Pair<Key,Key>> result = new HashMap<>();
     FileOperations fileFactory = FileOperations.getInstance();
     VolumeManager fs = getTabletServer().getFileSystem();
     for (Entry<FileRef,DataFileValue> entry : allFiles.entrySet()) {
       FileRef file = entry.getKey();
       FileSystem ns = fs.getVolumeByPath(file.path()).getFileSystem();
-      FileSKVIterator openReader = fileFactory.openReader(file.path().toString(), true, ns, ns.getConf(), this.getTableConfiguration());
+      FileSKVIterator openReader = fileFactory.newReaderBuilder().forFile(file.path().toString(), ns, ns.getConf())
+          .withTableConfiguration(this.getTableConfiguration()).seekToBeginning().build();
       try {
         Key first = openReader.getFirstKey();
         Key last = openReader.getLastKey();
-        result.put(file, new Pair<Key,Key>(first, last));
+        result.put(file, new Pair<>(first, last));
       } finally {
         openReader.close();
       }
@@ -1754,7 +1598,7 @@
   }
 
   List<FileRef> findChopFiles(KeyExtent extent, Map<FileRef,Pair<Key,Key>> firstAndLastKeys, Collection<FileRef> allFiles) throws IOException {
-    List<FileRef> result = new ArrayList<FileRef>();
+    List<FileRef> result = new ArrayList<>();
     if (firstAndLastKeys == null) {
       result.addAll(allFiles);
       return result;
@@ -1862,7 +1706,7 @@
         RootFiles.cleanupReplacement(fs, fs.listStatus(this.location), false);
       }
       SortedMap<FileRef,DataFileValue> allFiles = getDatafileManager().getDatafileSizes();
-      List<FileRef> inputFiles = new ArrayList<FileRef>();
+      List<FileRef> inputFiles = new ArrayList<>();
       if (reason == MajorCompactionReason.CHOP) {
         // enforce rules: files with keys outside our range need to be compacted
         inputFiles.addAll(findChopFiles(extent, firstAndLastKeys, allFiles.keySet()));
@@ -1929,7 +1773,7 @@
         }
       }
 
-      List<IteratorSetting> compactionIterators = new ArrayList<IteratorSetting>();
+      List<IteratorSetting> compactionIterators = new ArrayList<>();
       if (compactionId != null) {
         if (reason == MajorCompactionReason.USER) {
           if (getCompactionCancelID() >= compactionId.getFirst()) {
@@ -1965,8 +1809,8 @@
         AccumuloConfiguration tableConf = createTableConfiguration(tableConfiguration, plan);
 
         Span span = Trace.start("compactFiles");
-        try {
 
+        try {
           CompactionEnv cenv = new CompactionEnv() {
             @Override
             public boolean isCompactionEnabled() {
@@ -1977,9 +1821,20 @@
             public IteratorScope getIteratorScope() {
               return IteratorScope.majc;
             }
+
+            @Override
+            public RateLimiter getReadLimiter() {
+              return getTabletServer().getMajorCompactionReadLimiter();
+            }
+
+            @Override
+            public RateLimiter getWriteLimiter() {
+              return getTabletServer().getMajorCompactionWriteLimiter();
+            }
+
           };
 
-          HashMap<FileRef,DataFileValue> copy = new HashMap<FileRef,DataFileValue>(getDatafileManager().getDatafileSizes());
+          HashMap<FileRef,DataFileValue> copy = new HashMap<>(getDatafileManager().getDatafileSizes());
           if (!copy.keySet().containsAll(smallestFiles))
             throw new IllegalStateException("Cannot find data file values for " + smallestFiles);
 
@@ -2047,12 +1902,12 @@
 
     // short-circuit; also handles zero files case
     if (filesToCompact.size() <= maxFilesToCompact) {
-      Set<FileRef> smallestFiles = new HashSet<FileRef>(filesToCompact.keySet());
+      Set<FileRef> smallestFiles = new HashSet<>(filesToCompact.keySet());
       filesToCompact.clear();
       return smallestFiles;
     }
 
-    PriorityQueue<Pair<FileRef,Long>> fileHeap = new PriorityQueue<Pair<FileRef,Long>>(filesToCompact.size(), new Comparator<Pair<FileRef,Long>>() {
+    PriorityQueue<Pair<FileRef,Long>> fileHeap = new PriorityQueue<>(filesToCompact.size(), new Comparator<Pair<FileRef,Long>>() {
       @Override
       public int compare(Pair<FileRef,Long> o1, Pair<FileRef,Long> o2) {
         if (o1.getSecond() == o2.getSecond())
@@ -2065,10 +1920,10 @@
 
     for (Iterator<Entry<FileRef,DataFileValue>> iterator = filesToCompact.entrySet().iterator(); iterator.hasNext();) {
       Entry<FileRef,DataFileValue> entry = iterator.next();
-      fileHeap.add(new Pair<FileRef,Long>(entry.getKey(), entry.getValue().getSize()));
+      fileHeap.add(new Pair<>(entry.getKey(), entry.getValue().getSize()));
     }
 
-    Set<FileRef> smallestFiles = new HashSet<FileRef>();
+    Set<FileRef> smallestFiles = new HashSet<>();
     while (smallestFiles.size() < maxFilesToCompact && fileHeap.size() > 0) {
       Pair<FileRef,Long> pair = fileHeap.remove();
       filesToCompact.remove(pair.getFirst());
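The hunks above only modernize the generics (`new PriorityQueue<>` etc.), but the surrounding selection logic is worth restating: a min-heap ordered by file size yields the `maxFilesToCompact` smallest files, which are removed from the candidate map and compacted first. A self-contained sketch of that selection, with `String` standing in for `FileRef`:

```java
import java.util.AbstractMap;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Map;
import java.util.PriorityQueue;
import java.util.Set;

public class SmallestFiles {
  /** Removes and returns up to max of the smallest entries by size. */
  public static Set<String> smallest(Map<String, Long> filesToCompact, int maxFilesToCompact) {
    // short-circuit; also handles zero files case
    if (filesToCompact.size() <= maxFilesToCompact) {
      Set<String> all = new HashSet<>(filesToCompact.keySet());
      filesToCompact.clear();
      return all;
    }
    // min-heap ordered by size, so remove() always yields the smallest file
    PriorityQueue<Map.Entry<String, Long>> fileHeap =
        new PriorityQueue<>(Comparator.comparingLong(Map.Entry<String, Long>::getValue));
    for (Map.Entry<String, Long> e : filesToCompact.entrySet()) {
      fileHeap.add(new AbstractMap.SimpleEntry<>(e.getKey(), e.getValue()));
    }
    Set<String> smallestFiles = new HashSet<>();
    while (smallestFiles.size() < maxFilesToCompact && !fileHeap.isEmpty()) {
      Map.Entry<String, Long> e = fileHeap.remove();
      filesToCompact.remove(e.getKey());
      smallestFiles.add(e.getKey());
    }
    return smallestFiles;
  }
}
```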
@@ -2202,7 +2057,7 @@
     return majorCompactionQueued.size() > 0;
   }
 
-  public TreeMap<KeyExtent,SplitInfo> split(byte[] sp) throws IOException {
+  public TreeMap<KeyExtent,TabletData> split(byte[] sp) throws IOException {
 
     if (sp != null && extent.getEndRow() != null && extent.getEndRow().equals(new Text(sp))) {
       throw new IllegalArgumentException();
@@ -2237,7 +2092,7 @@
 
     synchronized (this) {
       // java needs tuples ...
-      TreeMap<KeyExtent,SplitInfo> newTablets = new TreeMap<KeyExtent,SplitInfo>();
+      TreeMap<KeyExtent,TabletData> newTablets = new TreeMap<>();
 
       long t1 = System.currentTimeMillis();
       // choose a split point
@@ -2265,12 +2120,12 @@
       KeyExtent low = new KeyExtent(extent.getTableId(), midRow, extent.getPrevEndRow());
       KeyExtent high = new KeyExtent(extent.getTableId(), extent.getEndRow(), midRow);
 
-      String lowDirectory = createTabletDirectory(getTabletServer().getFileSystem(), extent.getTableId().toString(), midRow);
+      String lowDirectory = createTabletDirectory(getTabletServer().getFileSystem(), extent.getTableId(), midRow);
 
       // write new tablet information to MetadataTable
-      SortedMap<FileRef,DataFileValue> lowDatafileSizes = new TreeMap<FileRef,DataFileValue>();
-      SortedMap<FileRef,DataFileValue> highDatafileSizes = new TreeMap<FileRef,DataFileValue>();
-      List<FileRef> highDatafilesToRemove = new ArrayList<FileRef>();
+      SortedMap<FileRef,DataFileValue> lowDatafileSizes = new TreeMap<>();
+      SortedMap<FileRef,DataFileValue> highDatafileSizes = new TreeMap<>();
+      List<FileRef> highDatafilesToRemove = new ArrayList<>();
 
       MetadataTableUtil.splitDatafiles(extent.getTableId(), midRow, splitRatio, firstAndLastRows, getDatafileManager().getDatafileSizes(), lowDatafileSizes,
           highDatafileSizes, highDatafilesToRemove);
@@ -2280,20 +2135,15 @@
 
       String time = tabletTime.getMetadataValue();
 
-      // it is possible that some of the bulk loading flags will be deleted after being read below because the bulk load
-      // finishes.... therefore split could propagate load flags for a finished bulk load... there is a special iterator
-      // on the metadata table to clean up this type of garbage
-      Map<FileRef,Long> bulkLoadedFiles = MetadataTableUtil.getBulkFilesLoaded(getTabletServer(), extent);
-
       MetadataTableUtil.splitTablet(high, extent.getPrevEndRow(), splitRatio, getTabletServer(), getTabletServer().getLock());
-      MasterMetadataUtil.addNewTablet(getTabletServer(), low, lowDirectory, getTabletServer().getTabletSession(), lowDatafileSizes, bulkLoadedFiles, time,
-          lastFlushID, lastCompactID, getTabletServer().getLock());
+      MasterMetadataUtil.addNewTablet(getTabletServer(), low, lowDirectory, getTabletServer().getTabletSession(), lowDatafileSizes, getBulkIngestedFiles(),
+          time, lastFlushID, lastCompactID, getTabletServer().getLock());
       MetadataTableUtil.finishSplit(high, highDatafileSizes, highDatafilesToRemove, getTabletServer(), getTabletServer().getLock());
 
       log.log(TLevel.TABLET_HIST, extent + " split " + low + " " + high);
 
-      newTablets.put(high, new SplitInfo(tabletDirectory, highDatafileSizes, time, lastFlushID, lastCompactID, lastLocation));
-      newTablets.put(low, new SplitInfo(lowDirectory, lowDatafileSizes, time, lastFlushID, lastCompactID, lastLocation));
+      newTablets.put(high, new TabletData(tabletDirectory, highDatafileSizes, time, lastFlushID, lastCompactID, lastLocation, getBulkIngestedFiles()));
+      newTablets.put(low, new TabletData(lowDirectory, lowDatafileSizes, time, lastFlushID, lastCompactID, lastLocation, getBulkIngestedFiles()));
 
       long t2 = System.currentTimeMillis();
 
@@ -2346,10 +2196,12 @@
   }
 
   public void importMapFiles(long tid, Map<FileRef,MapFileInfo> fileMap, boolean setTime) throws IOException {
-    Map<FileRef,DataFileValue> entries = new HashMap<FileRef,DataFileValue>(fileMap.size());
+    Map<FileRef,DataFileValue> entries = new HashMap<>(fileMap.size());
+    List<String> files = new ArrayList<>();
 
     for (Entry<FileRef,MapFileInfo> entry : fileMap.entrySet()) {
       entries.put(entry.getKey(), new DataFileValue(entry.getValue().estimatedSize, 0l));
+      files.add(entry.getKey().path().toString());
     }
 
     // Clients timeout and will think that this operation failed.
@@ -2366,12 +2218,25 @@
         throw new IOException("Timeout waiting " + (lockWait / 1000.) + " seconds to get tablet lock");
       }
 
-      if (writesInProgress < 0)
+      List<FileRef> alreadyImported = bulkImported.getIfPresent(tid);
+      if (alreadyImported != null) {
+        for (FileRef entry : alreadyImported) {
+          if (fileMap.remove(entry) != null) {
+            log.info("Ignoring import of bulk file already imported: " + entry);
+          }
+        }
+      }
+      if (fileMap.isEmpty()) {
+        return;
+      }
+
+      if (writesInProgress < 0) {
         throw new IllegalStateException("writesInProgress < 0 " + writesInProgress);
+      }
 
       writesInProgress++;
     }
-
+    tabletServer.updateBulkImportState(files, BulkImportState.LOADING);
     try {
       getDatafileManager().importMapFiles(tid, entries, setTime);
       lastMapFileImportTime = System.currentTimeMillis();
@@ -2389,25 +2254,34 @@
         writesInProgress--;
         if (writesInProgress == 0)
           this.notifyAll();
+
+        try {
+          bulkImported.get(tid, new Callable<List<FileRef>>() {
+            @Override
+            public List<FileRef> call() throws Exception {
+              return new ArrayList<>();
+            }
+          }).addAll(fileMap.keySet());
+        } catch (Exception ex) {
+          log.info(ex.toString(), ex);
+        }
+        tabletServer.removeBulkImportState(files);
       }
     }
   }
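The change to `importMapFiles` makes bulk imports idempotent per transaction id: files already recorded under the `tid` are dropped from the request, and successfully imported files are recorded afterward. The patch keeps this state in a Guava cache (`bulkImported`); the core idea can be sketched with a plain `ConcurrentHashMap` (a simplification — the real cache also evicts entries, and recording happens after the import completes):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class BulkImportDedup {
  // Simplified stand-in for the patch's Guava cache of tid -> files already imported.
  private final ConcurrentMap<Long, List<String>> bulkImported = new ConcurrentHashMap<>();

  /** Drops files already imported under this transaction id, then records the rest. */
  public List<String> filterAndRecord(long tid, Collection<String> candidates) {
    List<String> seen = bulkImported.computeIfAbsent(tid, k -> new CopyOnWriteArrayList<>());
    List<String> toImport = new ArrayList<>();
    for (String f : candidates) {
      if (seen.contains(f)) {
        // mirrors: "Ignoring import of bulk file already imported"
        continue;
      }
      toImport.add(f);
    }
    seen.addAll(toImport);
    return toImport;
  }
}
```

This is what protects against clients that time out and retry an import that actually succeeded.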
 
-  private Set<DfsLogger> currentLogs = new HashSet<DfsLogger>();
+  private ConcurrentSkipListSet<DfsLogger> currentLogs = new ConcurrentSkipListSet<>();
 
-  public synchronized Set<String> getCurrentLogFiles() {
-    Set<String> result = new HashSet<String>();
-    for (DfsLogger log : currentLogs) {
-      result.add(log.getFileName());
-    }
-    return result;
+  // currentLogs may be updated while a tablet is otherwise locked
+  public Set<DfsLogger> getCurrentLogFiles() {
+    return new HashSet<>(currentLogs);
   }
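Switching `currentLogs` from a `synchronized`-guarded `HashSet` to a `ConcurrentSkipListSet` is what lets `getCurrentLogFiles()` and `getLogCount()` above drop `synchronized`: readers take a weakly consistent snapshot while writers add logs concurrently. Note that a `ConcurrentSkipListSet` needs its elements to be `Comparable` (or a supplied `Comparator`); `String` stands in for `DfsLogger` in this sketch:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;

public class LogSet {
  // ConcurrentSkipListSet elements must be Comparable (or the set needs a Comparator).
  private final ConcurrentSkipListSet<String> currentLogs = new ConcurrentSkipListSet<>();

  public void add(String log) {
    currentLogs.add(log);
  }

  /** Weakly consistent copy; no synchronized block needed, unlike the old HashSet version. */
  public Set<String> snapshot() {
    return new HashSet<>(currentLogs);
  }

  public int size() {
    return currentLogs.size();
  }
}
```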
 
   Set<String> beginClearingUnusedLogs() {
-    Set<String> doomed = new HashSet<String>();
+    Set<String> doomed = new HashSet<>();
 
-    ArrayList<String> otherLogsCopy = new ArrayList<String>();
-    ArrayList<String> currentLogsCopy = new ArrayList<String>();
+    ArrayList<String> otherLogsCopy = new ArrayList<>();
+    ArrayList<String> currentLogsCopy = new ArrayList<>();
 
     // do not hold tablet lock while acquiring the log lock
     logLock.lock();
@@ -2459,13 +2333,13 @@
   // this lock is basically used to synchronize writing of log info to metadata
   private final ReentrantLock logLock = new ReentrantLock();
 
-  public synchronized int getLogCount() {
+  public int getLogCount() {
     return currentLogs.size();
   }
 
   // don't release the lock if this method returns true for success; instead, the caller should clean up by calling finishUpdatingLogsUsed()
   @Override
-  public boolean beginUpdatingLogsUsed(InMemoryMap memTable, Collection<DfsLogger> more, boolean mincFinish) {
+  public boolean beginUpdatingLogsUsed(InMemoryMap memTable, DfsLogger more, boolean mincFinish) {
 
     boolean releaseLock = true;
 
@@ -2502,28 +2376,26 @@
 
         int numAdded = 0;
         int numContained = 0;
-        for (DfsLogger logger : more) {
-          if (addToOther) {
-            if (otherLogs.add(logger))
-              numAdded++;
+        if (addToOther) {
+          if (otherLogs.add(more))
+            numAdded++;
 
-            if (currentLogs.contains(logger))
-              numContained++;
-          } else {
-            if (currentLogs.add(logger))
-              numAdded++;
+          if (currentLogs.contains(more))
+            numContained++;
+        } else {
+          if (currentLogs.add(more))
+            numAdded++;
 
-            if (otherLogs.contains(logger))
-              numContained++;
-          }
+          if (otherLogs.contains(more))
+            numContained++;
         }
 
-        if (numAdded > 0 && numAdded != more.size()) {
+        if (numAdded > 0 && numAdded != 1) {
           // expect to add all or none
           throw new IllegalArgumentException("Added subset of logs " + extent + " " + more + " " + currentLogs);
         }
 
-        if (numContained > 0 && numContained != more.size()) {
+        if (numContained > 0 && numContained != 1) {
           // expect to contain all or none
           throw new IllegalArgumentException("Other logs contained subset of logs " + extent + " " + more + " " + otherLogs);
         }
@@ -2772,9 +2644,19 @@
       }
 
       log.warn("Failed to create dir for tablet in table " + tableId + " in volume " + volume + " + will retry ...");
-      UtilWaitThread.sleep(3000);
+      sleepUninterruptibly(3, TimeUnit.SECONDS);
 
     }
   }
 
+  public Map<Long,List<FileRef>> getBulkIngestedFiles() {
+    return new HashMap<>(bulkImported.asMap());
+  }
+
+  public void cleanupBulkLoadedFiles(Set<Long> tids) {
+    for (Long tid : tids) {
+      bulkImported.invalidate(tid);
+    }
+  }
+
 }
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletCommitter.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletCommitter.java
index c7e3a66..934ce20 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletCommitter.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletCommitter.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.tserver.tablet;
 
-import java.util.Collection;
 import java.util.List;
 
 import org.apache.accumulo.core.client.Durability;
@@ -38,7 +37,7 @@
   /**
    * If this method returns true, the caller must call {@link #finishUpdatingLogsUsed()} to clean up
    */
-  boolean beginUpdatingLogsUsed(InMemoryMap memTable, Collection<DfsLogger> copy, boolean mincFinish);
+  boolean beginUpdatingLogsUsed(InMemoryMap memTable, DfsLogger copy, boolean mincFinish);
 
   void finishUpdatingLogsUsed();
 
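The Javadoc on `beginUpdatingLogsUsed` ("if this method returns true, the caller must call finishUpdatingLogsUsed()") describes a conditional-cleanup contract. A minimal caller sketch with a hypothetical stub committer (not the real `TabletCommitter` wiring):

```java
// Cleanup is owed only when begin returns true; a try/finally keeps the
// pairing correct even if the guarded work throws.
public class ConditionalCleanupSketch {
  interface Committer {
    boolean beginUpdatingLogsUsed(); // true => caller owes a finish call

    void finishUpdatingLogsUsed();
  }

  static int finishCalls = 0;

  static void commit(Committer c) {
    boolean mustFinish = c.beginUpdatingLogsUsed();
    try {
      // ... write mutations to the write-ahead log here ...
    } finally {
      if (mustFinish)
        c.finishUpdatingLogsUsed();
    }
  }

  public static void main(String[] args) {
    commit(new Committer() {
      public boolean beginUpdatingLogsUsed() {
        return true;
      }

      public void finishUpdatingLogsUsed() {
        finishCalls++;
      }
    });
    System.out.println(finishCalls); // 1
  }
}
```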
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletData.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletData.java
new file mode 100644
index 0000000..3874d95
--- /dev/null
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletData.java
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.tserver.tablet;
+
+import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.COMPACT_COLUMN;
+import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN;
+import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily.FLUSH_COLUMN;
+import static org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.file.FileOperations;
+import org.apache.accumulo.core.file.FileSKVIterator;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.BulkFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LastLocationColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.LogColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ScanFileColumnFamily;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily;
+import org.apache.accumulo.core.tabletserver.log.LogEntry;
+import org.apache.accumulo.fate.zookeeper.ZooReader;
+import org.apache.accumulo.server.fs.FileRef;
+import org.apache.accumulo.server.fs.VolumeManager;
+import org.apache.accumulo.server.fs.VolumeUtil;
+import org.apache.accumulo.server.master.state.TServerInstance;
+import org.apache.accumulo.server.tablets.TabletTime;
+import org.apache.accumulo.server.util.MetadataTableUtil;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Basic information needed to create a tablet.
+ */
+public class TabletData {
+  private static final Logger log = LoggerFactory.getLogger(TabletData.class);
+
+  private String time = null;
+  private SortedMap<FileRef,DataFileValue> dataFiles = new TreeMap<>();
+  private List<LogEntry> logEntris = new ArrayList<>();
+  private HashSet<FileRef> scanFiles = new HashSet<>();
+  private long flushID = -1;
+  private long compactID = -1;
+  private TServerInstance lastLocation = null;
+  private Map<Long,List<FileRef>> bulkImported = new HashMap<>();
+  private long splitTime = 0;
+  private String directory = null;
+
+  // Read tablet data from metadata tables
+  public TabletData(KeyExtent extent, VolumeManager fs, Iterator<Entry<Key,Value>> entries) {
+    final Text family = new Text();
+    Text rowName = extent.getMetadataEntry();
+    while (entries.hasNext()) {
+      Entry<Key,Value> entry = entries.next();
+      Key key = entry.getKey();
+      Value value = entry.getValue();
+      key.getColumnFamily(family);
+      if (key.compareRow(rowName) != 0) {
+        log.info("Unexpected metadata table entry for {}: {}", extent, key.getRow());
+        continue;
+      }
+      if (ServerColumnFamily.TIME_COLUMN.hasColumns(entry.getKey())) {
+        if (time == null) {
+          time = value.toString();
+        }
+      } else if (DataFileColumnFamily.NAME.equals(family)) {
+        FileRef ref = new FileRef(fs, key);
+        dataFiles.put(ref, new DataFileValue(value.get()));
+      } else if (DIRECTORY_COLUMN.hasColumns(key)) {
+        directory = value.toString();
+      } else if (family.equals(LogColumnFamily.NAME)) {
+        logEntris.add(LogEntry.fromKeyValue(key, entry.getValue()));
+      } else if (family.equals(ScanFileColumnFamily.NAME)) {
+        scanFiles.add(new FileRef(fs, key));
+      } else if (FLUSH_COLUMN.hasColumns(key)) {
+        flushID = Long.parseLong(value.toString());
+      } else if (COMPACT_COLUMN.hasColumns(key)) {
+        compactID = Long.parseLong(value.toString());
+      } else if (family.equals(LastLocationColumnFamily.NAME)) {
+        lastLocation = new TServerInstance(value, key.getColumnQualifier());
+      } else if (family.equals(BulkFileColumnFamily.NAME)) {
+        Long id = Long.decode(value.toString());
+        List<FileRef> lst = bulkImported.get(id);
+        if (lst == null) {
+          bulkImported.put(id, lst = new ArrayList<>());
+        }
+        lst.add(new FileRef(fs, key));
+      } else if (PREV_ROW_COLUMN.hasColumns(key)) {
+        KeyExtent check = new KeyExtent(key.getRow(), value);
+        if (!check.equals(extent)) {
+          throw new RuntimeException("Found bad entry for " + extent + ": " + check);
+        }
+      }
+    }
+    if (time == null && dataFiles.isEmpty() && extent.equals(RootTable.OLD_EXTENT)) {
+      // recovery... old root tablet has no data, so time doesn't matter:
+      time = TabletTime.LOGICAL_TIME_ID + "" + Long.MIN_VALUE;
+    }
+  }
+
+  // Read basic root table metadata from zookeeper
+  public TabletData(VolumeManager fs, ZooReader rdr, AccumuloConfiguration conf) throws IOException {
+    directory = VolumeUtil.switchRootTableVolume(MetadataTableUtil.getRootTabletDir());
+
+    Path location = new Path(directory);
+
+    // cleanupReplacement() has special handling for deleting files
+    FileStatus[] files = fs.listStatus(location);
+    Collection<String> goodPaths = RootFiles.cleanupReplacement(fs, files, true);
+    long rtime = Long.MIN_VALUE;
+    for (String good : goodPaths) {
+      Path path = new Path(good);
+      String filename = path.getName();
+      FileRef ref = new FileRef(location.toString() + "/" + filename, path);
+      DataFileValue dfv = new DataFileValue(0, 0);
+      dataFiles.put(ref, dfv);
+
+      FileSystem ns = fs.getVolumeByPath(path).getFileSystem();
+      FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder().forFile(path.toString(), ns, ns.getConf()).withTableConfiguration(conf)
+          .seekToBeginning().build();
+      long maxTime = -1;
+      try {
+        while (reader.hasTop()) {
+          maxTime = Math.max(maxTime, reader.getTopKey().getTimestamp());
+          reader.next();
+        }
+      } finally {
+        reader.close();
+      }
+      if (maxTime > rtime) {
+        time = TabletTime.LOGICAL_TIME_ID + "" + maxTime;
+        rtime = maxTime;
+      }
+    }
+
+    try {
+      logEntris = MetadataTableUtil.getLogEntries(null, RootTable.EXTENT);
+    } catch (Exception ex) {
+      throw new RuntimeException("Unable to read tablet log entries", ex);
+    }
+  }
+
+  // Data pulled from an existing tablet to make a split
+  public TabletData(String tabletDirectory, SortedMap<FileRef,DataFileValue> highDatafileSizes, String time, long lastFlushID, long lastCompactID,
+      TServerInstance lastLocation, Map<Long,List<FileRef>> bulkIngestedFiles) {
+    this.directory = tabletDirectory;
+    this.dataFiles = highDatafileSizes;
+    this.time = time;
+    this.flushID = lastFlushID;
+    this.compactID = lastCompactID;
+    this.lastLocation = lastLocation;
+    this.bulkImported = bulkIngestedFiles;
+    this.splitTime = System.currentTimeMillis();
+  }
+
+  public static Logger getLog() {
+    return log;
+  }
+
+  public String getTime() {
+    return time;
+  }
+
+  public SortedMap<FileRef,DataFileValue> getDataFiles() {
+    return dataFiles;
+  }
+
+  public List<LogEntry> getLogEntris() {
+    return logEntris;
+  }
+
+  public HashSet<FileRef> getScanFiles() {
+    return scanFiles;
+  }
+
+  public long getFlushID() {
+    return flushID;
+  }
+
+  public long getCompactID() {
+    return compactID;
+  }
+
+  public TServerInstance getLastLocation() {
+    return lastLocation;
+  }
+
+  public Map<Long,List<FileRef>> getBulkImported() {
+    return bulkImported;
+  }
+
+  public String getDirectory() {
+    return directory;
+  }
+
+  public long getSplitTime() {
+    return splitTime;
+  }
+}
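The metadata-reading constructor in `TabletData` groups bulk-loaded files by transaction id with a get/put-if-null/add sequence. The same grouping can be sketched with `computeIfAbsent`; the tids and file names below are made-up stand-ins for `Long.decode(value.toString())` and `new FileRef(fs, key)`:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Group file paths by the bulk-load transaction id found in the
// metadata value, equivalent to the diff's get()/put-if-null/add idiom.
public class BulkImportGrouping {
  static Map<Long,List<String>> group(long[] tids, String[] files) {
    Map<Long,List<String>> bulkImported = new HashMap<>();
    for (int i = 0; i < tids.length; i++) {
      bulkImported.computeIfAbsent(tids[i], k -> new ArrayList<>()).add(files[i]);
    }
    return bulkImported;
  }

  public static void main(String[] args) {
    Map<Long,List<String>> m = group(new long[] {7, 7, 9}, new String[] {"f1.rf", "f2.rf", "f3.rf"});
    System.out.println(m.get(7L)); // [f1.rf, f2.rf]
  }
}
```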
diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletMemory.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletMemory.java
index 0b39d40..f66e457 100644
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletMemory.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/tablet/TabletMemory.java
@@ -22,6 +22,7 @@
 import java.util.List;
 
 import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.util.LocalityGroupUtil.LocalityGroupConfigurationError;
 import org.apache.accumulo.tserver.InMemoryMap;
 import org.apache.accumulo.tserver.InMemoryMap.MemoryIterator;
@@ -156,11 +157,11 @@
     tablet.updateMemoryUsageStats(memTable.estimatedSizeInBytes(), other);
   }
 
-  public List<MemoryIterator> getIterators() {
-    List<MemoryIterator> toReturn = new ArrayList<MemoryIterator>(2);
-    toReturn.add(memTable.skvIterator());
+  public List<MemoryIterator> getIterators(SamplerConfigurationImpl samplerConfig) {
+    List<MemoryIterator> toReturn = new ArrayList<>(2);
+    toReturn.add(memTable.skvIterator(samplerConfig));
     if (otherMemTable != null)
-      toReturn.add(otherMemTable.skvIterator());
+      toReturn.add(otherMemTable.skvIterator(samplerConfig));
     return toReturn;
   }
 
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/AssignmentWatcherTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/AssignmentWatcherTest.java
index b809a35..5dd3c19 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/AssignmentWatcherTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/AssignmentWatcherTest.java
@@ -24,7 +24,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.server.util.time.SimpleTimer;
 import org.apache.accumulo.tserver.TabletServerResourceManager.AssignmentWatcher;
-import org.apache.hadoop.io.Text;
 import org.easymock.EasyMock;
 import org.junit.Before;
 import org.junit.Test;
@@ -38,7 +37,7 @@
 
   @Before
   public void setup() {
-    assignments = new HashMap<KeyExtent,RunnableStartedAt>();
+    assignments = new HashMap<>();
     timer = EasyMock.createMock(SimpleTimer.class);
     conf = EasyMock.createMock(AccumuloConfiguration.class);
     watcher = new AssignmentWatcher(conf, assignments, timer);
@@ -50,7 +49,7 @@
     RunnableStartedAt run = new RunnableStartedAt(task, System.currentTimeMillis());
     EasyMock.expect(conf.getTimeInMillis(Property.TSERV_ASSIGNMENT_DURATION_WARNING)).andReturn(0l);
 
-    assignments.put(new KeyExtent(new Text("1"), null, null), run);
+    assignments.put(new KeyExtent("1", null, null), run);
 
     EasyMock.expect(task.getException()).andReturn(new Exception("Assignment warning happened"));
     EasyMock.expect(timer.schedule(watcher, 5000l)).andReturn(null);
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java
index 8c00ece..f474972 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/CheckTabletMetadataTest.java
@@ -58,7 +58,7 @@
   }
 
   private static void assertFail(TreeMap<Key,Value> tabletMeta, KeyExtent ke, TServerInstance tsi, Key keyToDelete) {
-    TreeMap<Key,Value> copy = new TreeMap<Key,Value>(tabletMeta);
+    TreeMap<Key,Value> copy = new TreeMap<>(tabletMeta);
     Assert.assertNotNull(copy.remove(keyToDelete));
     try {
       Assert.assertNull(TabletServer.checkTabletMetadata(ke, tsi, copy, ke.getMetadataEntry()));
@@ -70,9 +70,9 @@
   @Test
   public void testBadTabletMetadata() throws Exception {
 
-    KeyExtent ke = new KeyExtent(new Text("1"), null, null);
+    KeyExtent ke = new KeyExtent("1", null, null);
 
-    TreeMap<Key,Value> tabletMeta = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tabletMeta = new TreeMap<>();
 
     put(tabletMeta, "1<", TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN, KeyExtent.encodePrevEndRow(null).get());
     put(tabletMeta, "1<", TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN, "/t1".getBytes());
@@ -89,9 +89,9 @@
     assertFail(tabletMeta, ke, new TServerInstance("127.0.0.2:9997", 4));
     assertFail(tabletMeta, ke, new TServerInstance("127.0.0.2:9997", 5));
 
-    assertFail(tabletMeta, new KeyExtent(new Text("1"), null, new Text("m")), tsi);
+    assertFail(tabletMeta, new KeyExtent("1", null, new Text("m")), tsi);
 
-    assertFail(tabletMeta, new KeyExtent(new Text("1"), new Text("r"), new Text("m")), tsi);
+    assertFail(tabletMeta, new KeyExtent("1", new Text("r"), new Text("m")), tsi);
 
     assertFail(tabletMeta, ke, tsi, nk("1<", TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN));
 
@@ -101,18 +101,18 @@
 
     assertFail(tabletMeta, ke, tsi, nk("1<", TabletsSection.FutureLocationColumnFamily.NAME, "4"));
 
-    TreeMap<Key,Value> copy = new TreeMap<Key,Value>(tabletMeta);
+    TreeMap<Key,Value> copy = new TreeMap<>(tabletMeta);
     put(copy, "1<", TabletsSection.CurrentLocationColumnFamily.NAME, "4", "127.0.0.1:9997");
     assertFail(copy, ke, tsi);
     assertFail(copy, ke, tsi, nk("1<", TabletsSection.FutureLocationColumnFamily.NAME, "4"));
 
-    copy = new TreeMap<Key,Value>(tabletMeta);
+    copy = new TreeMap<>(tabletMeta);
     put(copy, "1<", TabletsSection.CurrentLocationColumnFamily.NAME, "5", "127.0.0.1:9998");
     assertFail(copy, ke, tsi);
     put(copy, "1<", TabletsSection.CurrentLocationColumnFamily.NAME, "6", "127.0.0.1:9999");
     assertFail(copy, ke, tsi);
 
-    copy = new TreeMap<Key,Value>(tabletMeta);
+    copy = new TreeMap<>(tabletMeta);
     put(copy, "1<", TabletsSection.FutureLocationColumnFamily.NAME, "5", "127.0.0.1:9998");
     assertFail(copy, ke, tsi);
 
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/CountingIteratorTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/CountingIteratorTest.java
index 8a1dca5..423d522 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/CountingIteratorTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/CountingIteratorTest.java
@@ -36,7 +36,7 @@
 public class CountingIteratorTest {
   @Test
   public void testDeepCopyCount() throws IOException {
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     tm.put(new Key("r1", "cf1", "cq1"), new Value("data1".getBytes()));
     tm.put(new Key("r2", "cf1", "cq1"), new Value("data2".getBytes()));
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java
index da7157a..59dc68e 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/InMemoryMapTest.java
@@ -26,16 +26,24 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
-import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
-import java.util.Map;
+import java.util.Map.Entry;
 import java.util.Set;
+import java.util.TreeMap;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.impl.BaseIteratorEnvironment;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.Sampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.conf.DefaultConfiguration;
+import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
@@ -45,21 +53,54 @@
 import org.apache.accumulo.core.iterators.IterationInterruptedException;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.system.ColumnFamilySkippingIterator;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.accumulo.core.sample.impl.SamplerFactory;
 import org.apache.accumulo.core.util.LocalityGroupUtil;
+import org.apache.accumulo.core.util.LocalityGroupUtil.LocalityGroupConfigurationError;
 import org.apache.accumulo.server.client.HdfsZooInstance;
 import org.apache.accumulo.server.conf.ZooConfiguration;
 import org.apache.accumulo.tserver.InMemoryMap.MemoryIterator;
 import org.apache.hadoop.io.Text;
 import org.apache.log4j.Level;
 import org.apache.log4j.Logger;
+import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Ignore;
 import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.ExpectedException;
 import org.junit.rules.TemporaryFolder;
 
+import com.google.common.collect.ImmutableMap;
+
 public class InMemoryMapTest {
 
+  private static class SampleIE extends BaseIteratorEnvironment {
+
+    private final SamplerConfiguration sampleConfig;
+
+    public SampleIE() {
+      this.sampleConfig = null;
+    }
+
+    public SampleIE(SamplerConfigurationImpl sampleConfig) {
+      this.sampleConfig = sampleConfig.toSamplerConfiguration();
+    }
+
+    @Override
+    public boolean isSamplingEnabled() {
+      return sampleConfig != null;
+    }
+
+    @Override
+    public SamplerConfiguration getSamplerConfiguration() {
+      return sampleConfig;
+    }
+  }
+
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
   @BeforeClass
   public static void setUp() throws Exception {
     // suppress log messages having to do with not having an instance
@@ -101,20 +142,42 @@
   }
 
   static Set<ByteSequence> newCFSet(String... cfs) {
-    HashSet<ByteSequence> cfSet = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> cfSet = new HashSet<>();
     for (String cf : cfs) {
       cfSet.add(new ArrayByteSequence(cf));
     }
     return cfSet;
   }
 
+  static Set<Text> toTextSet(String... cfs) {
+    HashSet<Text> cfSet = new HashSet<>();
+    for (String cf : cfs) {
+      cfSet.add(new Text(cf));
+    }
+    return cfSet;
+  }
+
+  static ConfigurationCopy newConfig(String memDumpDir) {
+    ConfigurationCopy config = new ConfigurationCopy(DefaultConfiguration.getInstance());
+    config.set(Property.TSERV_NATIVEMAP_ENABLED, "" + false);
+    config.set(Property.TSERV_MEMDUMP_DIR, memDumpDir);
+    return config;
+  }
+
+  static InMemoryMap newInMemoryMap(boolean useNative, String memDumpDir) throws LocalityGroupConfigurationError {
+    ConfigurationCopy config = new ConfigurationCopy(DefaultConfiguration.getInstance());
+    config.set(Property.TSERV_NATIVEMAP_ENABLED, "" + useNative);
+    config.set(Property.TSERV_MEMDUMP_DIR, memDumpDir);
+    return new InMemoryMap(config);
+  }
+
   @Test
   public void test2() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
-    MemoryIterator ski1 = imm.skvIterator();
+    MemoryIterator ski1 = imm.skvIterator(null);
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
-    MemoryIterator ski2 = imm.skvIterator();
+    MemoryIterator ski2 = imm.skvIterator(null);
 
     ski1.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
     assertFalse(ski1.hasTop());
@@ -128,17 +191,17 @@
 
   @Test
   public void test3() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
     mutate(imm, "r1", "foo:cq1", 3, "bar2");
-    MemoryIterator ski1 = imm.skvIterator();
+    MemoryIterator ski1 = imm.skvIterator(null);
     mutate(imm, "r1", "foo:cq1", 3, "bar3");
 
     mutate(imm, "r3", "foo:cq1", 3, "bar9");
     mutate(imm, "r3", "foo:cq1", 3, "bara");
 
-    MemoryIterator ski2 = imm.skvIterator();
+    MemoryIterator ski2 = imm.skvIterator(null);
 
     ski1.seek(new Range(new Text("r1")), LocalityGroupUtil.EMPTY_CF_SET, false);
     ae(ski1, "r1", "foo:cq1", 3, "bar2");
@@ -154,11 +217,11 @@
 
   @Test
   public void test4() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
     mutate(imm, "r1", "foo:cq1", 3, "bar2");
-    MemoryIterator ski1 = imm.skvIterator();
+    MemoryIterator ski1 = imm.skvIterator(null);
     mutate(imm, "r1", "foo:cq1", 3, "bar3");
 
     imm.delete(0);
@@ -186,13 +249,13 @@
 
   @Test
   public void test5() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
     mutate(imm, "r1", "foo:cq1", 3, "bar2");
     mutate(imm, "r1", "foo:cq1", 3, "bar3");
 
-    MemoryIterator ski1 = imm.skvIterator();
+    MemoryIterator ski1 = imm.skvIterator(null);
     ski1.seek(new Range(new Text("r1")), LocalityGroupUtil.EMPTY_CF_SET, false);
     ae(ski1, "r1", "foo:cq1", 3, "bar3");
 
@@ -204,13 +267,13 @@
 
     ski1.close();
 
-    imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
     mutate(imm, "r1", "foo:cq2", 3, "bar2");
     mutate(imm, "r1", "foo:cq3", 3, "bar3");
 
-    ski1 = imm.skvIterator();
+    ski1 = imm.skvIterator(null);
     ski1.seek(new Range(new Text("r1")), LocalityGroupUtil.EMPTY_CF_SET, false);
     ae(ski1, "r1", "foo:cq1", 3, "bar1");
 
@@ -225,18 +288,18 @@
 
   @Test
   public void test6() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
     mutate(imm, "r1", "foo:cq2", 3, "bar2");
     mutate(imm, "r1", "foo:cq3", 3, "bar3");
     mutate(imm, "r1", "foo:cq4", 3, "bar4");
 
-    MemoryIterator ski1 = imm.skvIterator();
+    MemoryIterator ski1 = imm.skvIterator(null);
 
     mutate(imm, "r1", "foo:cq5", 3, "bar5");
 
-    SortedKeyValueIterator<Key,Value> dc = ski1.deepCopy(null);
+    SortedKeyValueIterator<Key,Value> dc = ski1.deepCopy(new SampleIE());
 
     ski1.seek(new Range(nk("r1", "foo:cq1", 3), null), LocalityGroupUtil.EMPTY_CF_SET, false);
     ae(ski1, "r1", "foo:cq1", 3, "bar1");
@@ -271,12 +334,12 @@
   private void deepCopyAndDelete(int interleaving, boolean interrupt) throws Exception {
     // interleaving == 0 intentionally omitted, this runs the test w/o deleting in mem map
 
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
     mutate(imm, "r1", "foo:cq2", 3, "bar2");
 
-    MemoryIterator ski1 = imm.skvIterator();
+    MemoryIterator ski1 = imm.skvIterator(null);
 
     AtomicBoolean iflag = new AtomicBoolean(false);
     ski1.setInterruptFlag(iflag);
@@ -287,7 +350,7 @@
         iflag.set(true);
     }
 
-    SortedKeyValueIterator<Key,Value> dc = ski1.deepCopy(null);
+    SortedKeyValueIterator<Key,Value> dc = ski1.deepCopy(new SampleIE());
 
     if (interleaving == 2) {
       imm.delete(0);
@@ -338,7 +401,7 @@
 
   @Test
   public void testBug1() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     for (int i = 0; i < 20; i++) {
       mutate(imm, "r1", "foo:cq" + i, 3, "bar" + i);
@@ -348,12 +411,12 @@
       mutate(imm, "r2", "foo:cq" + i, 3, "bar" + i);
     }
 
-    MemoryIterator ski1 = imm.skvIterator();
+    MemoryIterator ski1 = imm.skvIterator(null);
     ColumnFamilySkippingIterator cfsi = new ColumnFamilySkippingIterator(ski1);
 
     imm.delete(0);
 
-    ArrayList<ByteSequence> columns = new ArrayList<ByteSequence>();
+    ArrayList<ByteSequence> columns = new ArrayList<>();
     columns.add(new ArrayByteSequence("bar"));
 
     // this seek resulted in an infinite loop before a bug was fixed
@@ -366,14 +429,14 @@
 
   @Test
   public void testSeekBackWards() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     mutate(imm, "r1", "foo:cq1", 3, "bar1");
     mutate(imm, "r1", "foo:cq2", 3, "bar2");
     mutate(imm, "r1", "foo:cq3", 3, "bar3");
     mutate(imm, "r1", "foo:cq4", 3, "bar4");
 
-    MemoryIterator skvi1 = imm.skvIterator();
+    MemoryIterator skvi1 = imm.skvIterator(null);
 
     skvi1.seek(new Range(nk("r1", "foo:cq3", 3), null), LocalityGroupUtil.EMPTY_CF_SET, false);
     ae(skvi1, "r1", "foo:cq3", 3, "bar3");
@@ -385,14 +448,14 @@
 
   @Test
   public void testDuplicateKey() throws Exception {
-    InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
 
     Mutation m = new Mutation(new Text("r1"));
     m.put(new Text("foo"), new Text("cq"), 3, new Value("v1".getBytes()));
     m.put(new Text("foo"), new Text("cq"), 3, new Value("v2".getBytes()));
     imm.mutate(Collections.singletonList(m));
 
-    MemoryIterator skvi1 = imm.skvIterator();
+    MemoryIterator skvi1 = imm.skvIterator(null);
     skvi1.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
     ae(skvi1, "r1", "foo:cq", 3, "v2");
     ae(skvi1, "r1", "foo:cq", 3, "v1");
@@ -410,12 +473,12 @@
   // - hard to get this timing test to run well on apache build machines
   @Test
   @Ignore
-  public void parallelWriteSpeed() throws InterruptedException, IOException {
-    List<Double> timings = new ArrayList<Double>();
+  public void parallelWriteSpeed() throws Exception {
+    List<Double> timings = new ArrayList<>();
     for (int threads : new int[] {1, 2, 16, /* 64, 256 */}) {
       final long now = System.currentTimeMillis();
       final long counts[] = new long[threads];
-      final InMemoryMap imm = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+      final InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
       ExecutorService e = Executors.newFixedThreadPool(threads);
       for (int j = 0; j < threads; j++) {
         final int threadId = j;
@@ -451,12 +514,12 @@
 
   @Test
   public void testLocalityGroups() throws Exception {
+    ConfigurationCopy config = newConfig(tempFolder.newFolder().getAbsolutePath());
+    config.set(Property.TABLE_LOCALITY_GROUP_PREFIX + "lg1", LocalityGroupUtil.encodeColumnFamilies(toTextSet("cf1", "cf2")));
+    config.set(Property.TABLE_LOCALITY_GROUP_PREFIX + "lg2", LocalityGroupUtil.encodeColumnFamilies(toTextSet("cf3", "cf4")));
+    config.set(Property.TABLE_LOCALITY_GROUPS.getKey(), "lg1,lg2");
 
-    Map<String,Set<ByteSequence>> lggroups1 = new HashMap<String,Set<ByteSequence>>();
-    lggroups1.put("lg1", newCFSet("cf1", "cf2"));
-    lggroups1.put("lg2", newCFSet("cf3", "cf4"));
-
-    InMemoryMap imm = new InMemoryMap(lggroups1, false, tempFolder.newFolder().getAbsolutePath());
+    InMemoryMap imm = new InMemoryMap(config);
 
     Mutation m1 = new Mutation("r1");
     m1.put("cf1", "x", 2, "1");
@@ -480,10 +543,10 @@
 
     imm.mutate(Arrays.asList(m1, m2, m3, m4, m5));
 
-    MemoryIterator iter1 = imm.skvIterator();
+    MemoryIterator iter1 = imm.skvIterator(null);
 
     seekLocalityGroups(iter1);
-    SortedKeyValueIterator<Key,Value> dc1 = iter1.deepCopy(null);
+    SortedKeyValueIterator<Key,Value> dc1 = iter1.deepCopy(new SampleIE());
     seekLocalityGroups(dc1);
 
     assertTrue(imm.getNumEntries() == 10);
@@ -497,6 +560,254 @@
     // seekLocalityGroups(iter1.deepCopy(null));
   }
 
+  @Test
+  public void testSample() throws Exception {
+
+    SamplerConfigurationImpl sampleConfig = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "7"));
+    Sampler sampler = SamplerFactory.newSampler(sampleConfig, DefaultConfiguration.getInstance());
+
+    ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
+    for (Entry<String,String> entry : sampleConfig.toTablePropertiesMap().entrySet()) {
+      config1.set(entry.getKey(), entry.getValue());
+    }
+
+    ConfigurationCopy config2 = newConfig(tempFolder.newFolder().getAbsolutePath());
+    config2.set(Property.TABLE_LOCALITY_GROUP_PREFIX + "lg1", LocalityGroupUtil.encodeColumnFamilies(toTextSet("cf2")));
+    config2.set(Property.TABLE_LOCALITY_GROUPS.getKey(), "lg1");
+    for (Entry<String,String> entry : sampleConfig.toTablePropertiesMap().entrySet()) {
+      config2.set(entry.getKey(), entry.getValue());
+    }
+
+    for (ConfigurationCopy config : Arrays.asList(config1, config2)) {
+
+      InMemoryMap imm = new InMemoryMap(config);
+
+      TreeMap<Key,Value> expectedSample = new TreeMap<>();
+      TreeMap<Key,Value> expectedAll = new TreeMap<>();
+      TreeMap<Key,Value> expectedNone = new TreeMap<>();
+
+      MemoryIterator iter0 = imm.skvIterator(sampleConfig);
+
+      for (int r = 0; r < 100; r++) {
+        String row = String.format("r%06d", r);
+        mutate(imm, row, "cf1:cq1", 5, "v" + (2 * r), sampler, expectedSample, expectedAll);
+        mutate(imm, row, "cf2:cq2", 5, "v" + ((2 * r) + 1), sampler, expectedSample, expectedAll);
+      }
+
+      assertTrue(expectedSample.size() > 0);
+
+      MemoryIterator iter1 = imm.skvIterator(sampleConfig);
+      MemoryIterator iter2 = imm.skvIterator(null);
+      SortedKeyValueIterator<Key,Value> iter0dc1 = iter0.deepCopy(new SampleIE());
+      SortedKeyValueIterator<Key,Value> iter0dc2 = iter0.deepCopy(new SampleIE(sampleConfig));
+      SortedKeyValueIterator<Key,Value> iter1dc1 = iter1.deepCopy(new SampleIE());
+      SortedKeyValueIterator<Key,Value> iter1dc2 = iter1.deepCopy(new SampleIE(sampleConfig));
+      SortedKeyValueIterator<Key,Value> iter2dc1 = iter2.deepCopy(new SampleIE());
+      SortedKeyValueIterator<Key,Value> iter2dc2 = iter2.deepCopy(new SampleIE(sampleConfig));
+
+      assertEquals(expectedNone, readAll(iter0));
+      assertEquals(expectedNone, readAll(iter0dc1));
+      assertEquals(expectedNone, readAll(iter0dc2));
+      assertEquals(expectedSample, readAll(iter1));
+      assertEquals(expectedAll, readAll(iter2));
+      assertEquals(expectedAll, readAll(iter1dc1));
+      assertEquals(expectedAll, readAll(iter2dc1));
+      assertEquals(expectedSample, readAll(iter1dc2));
+      assertEquals(expectedSample, readAll(iter2dc2));
+
+      imm.delete(0);
+
+      assertEquals(expectedNone, readAll(iter0));
+      assertEquals(expectedNone, readAll(iter0dc1));
+      assertEquals(expectedNone, readAll(iter0dc2));
+      assertEquals(expectedSample, readAll(iter1));
+      assertEquals(expectedAll, readAll(iter2));
+      assertEquals(expectedAll, readAll(iter1dc1));
+      assertEquals(expectedAll, readAll(iter2dc1));
+      assertEquals(expectedSample, readAll(iter1dc2));
+      assertEquals(expectedSample, readAll(iter2dc2));
+
+      SortedKeyValueIterator<Key,Value> iter0dc3 = iter0.deepCopy(new SampleIE());
+      SortedKeyValueIterator<Key,Value> iter0dc4 = iter0.deepCopy(new SampleIE(sampleConfig));
+      SortedKeyValueIterator<Key,Value> iter1dc3 = iter1.deepCopy(new SampleIE());
+      SortedKeyValueIterator<Key,Value> iter1dc4 = iter1.deepCopy(new SampleIE(sampleConfig));
+      SortedKeyValueIterator<Key,Value> iter2dc3 = iter2.deepCopy(new SampleIE());
+      SortedKeyValueIterator<Key,Value> iter2dc4 = iter2.deepCopy(new SampleIE(sampleConfig));
+
+      assertEquals(expectedNone, readAll(iter0dc3));
+      assertEquals(expectedNone, readAll(iter0dc4));
+      assertEquals(expectedAll, readAll(iter1dc3));
+      assertEquals(expectedAll, readAll(iter2dc3));
+      assertEquals(expectedSample, readAll(iter1dc4));
+      assertEquals(expectedSample, readAll(iter2dc4));
+
+      iter1.close();
+      iter2.close();
+    }
+  }
+
+  @Test
+  public void testInterruptingSample() throws Exception {
+    runInterruptSampleTest(false, false, false);
+    runInterruptSampleTest(false, true, false);
+    runInterruptSampleTest(true, false, false);
+    runInterruptSampleTest(true, true, false);
+    runInterruptSampleTest(true, true, true);
+  }
+
+  private void runInterruptSampleTest(boolean deepCopy, boolean delete, boolean dcAfterDelete) throws Exception {
+    SamplerConfigurationImpl sampleConfig1 = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "2"));
+    Sampler sampler = SamplerFactory.newSampler(sampleConfig1, DefaultConfiguration.getInstance());
+
+    ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
+    for (Entry<String,String> entry : sampleConfig1.toTablePropertiesMap().entrySet()) {
+      config1.set(entry.getKey(), entry.getValue());
+    }
+
+    InMemoryMap imm = new InMemoryMap(config1);
+
+    TreeMap<Key,Value> expectedSample = new TreeMap<>();
+    TreeMap<Key,Value> expectedAll = new TreeMap<>();
+
+    for (int r = 0; r < 1000; r++) {
+      String row = String.format("r%06d", r);
+      mutate(imm, row, "cf1:cq1", 5, "v" + (2 * r), sampler, expectedSample, expectedAll);
+      mutate(imm, row, "cf2:cq2", 5, "v" + ((2 * r) + 1), sampler, expectedSample, expectedAll);
+    }
+
+    assertTrue(expectedSample.size() > 0);
+
+    MemoryIterator miter = imm.skvIterator(sampleConfig1);
+    AtomicBoolean iFlag = new AtomicBoolean(false);
+    miter.setInterruptFlag(iFlag);
+    SortedKeyValueIterator<Key,Value> iter = miter;
+
+    if (delete && !dcAfterDelete) {
+      imm.delete(0);
+    }
+
+    if (deepCopy) {
+      iter = iter.deepCopy(new SampleIE(sampleConfig1));
+    }
+
+    if (delete && dcAfterDelete) {
+      imm.delete(0);
+    }
+
+    assertEquals(expectedSample, readAll(iter));
+    iFlag.set(true);
+    try {
+      readAll(iter);
+      Assert.fail();
+    } catch (IterationInterruptedException iie) {} // expected once the interrupt flag is set
+
+    miter.close();
+  }
+
+  private void mutate(InMemoryMap imm, String row, String cols, int ts, String val, Sampler sampler, TreeMap<Key,Value> expectedSample,
+      TreeMap<Key,Value> expectedAll) {
+    mutate(imm, row, cols, ts, val);
+    Key k1 = nk(row, cols, ts);
+    if (sampler.accept(k1)) {
+      expectedSample.put(k1, new Value(val.getBytes()));
+    }
+    expectedAll.put(k1, new Value(val.getBytes()));
+  }
+
+  @Test(expected = SampleNotPresentException.class)
+  public void testDifferentSampleConfig() throws Exception {
+    SamplerConfigurationImpl sampleConfig = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "7"));
+
+    ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
+    for (Entry<String,String> entry : sampleConfig.toTablePropertiesMap().entrySet()) {
+      config1.set(entry.getKey(), entry.getValue());
+    }
+
+    InMemoryMap imm = new InMemoryMap(config1);
+
+    mutate(imm, "r", "cf:cq", 5, "b");
+
+    SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+    MemoryIterator iter = imm.skvIterator(sampleConfig2);
+    iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+  }
+
+  @Test(expected = SampleNotPresentException.class)
+  public void testNoSampleConfig() throws Exception {
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+
+    mutate(imm, "r", "cf:cq", 5, "b");
+
+    SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+    MemoryIterator iter = imm.skvIterator(sampleConfig2);
+    iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+  }
+
+  @Test
+  public void testEmptyNoSampleConfig() throws Exception {
+    InMemoryMap imm = newInMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
+
+    SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+
+    // when the in-memory map is empty, it should be possible to get a sample iterator with any sample config
+    MemoryIterator iter = imm.skvIterator(sampleConfig2);
+    iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+    Assert.assertFalse(iter.hasTop());
+  }
+
+  @Test
+  public void testDeferredSamplerCreation() throws Exception {
+    SamplerConfigurationImpl sampleConfig1 = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "9"));
+
+    ConfigurationCopy config1 = newConfig(tempFolder.newFolder().getAbsolutePath());
+    for (Entry<String,String> entry : sampleConfig1.toTablePropertiesMap().entrySet()) {
+      config1.set(entry.getKey(), entry.getValue());
+    }
+
+    InMemoryMap imm = new InMemoryMap(config1);
+
+    // change the sampler config after creating the in-memory map.
+    SamplerConfigurationImpl sampleConfig2 = new SamplerConfigurationImpl(RowSampler.class.getName(), ImmutableMap.of("hasher", "murmur3_32", "modulus", "7"));
+    for (Entry<String,String> entry : sampleConfig2.toTablePropertiesMap().entrySet()) {
+      config1.set(entry.getKey(), entry.getValue());
+    }
+
+    TreeMap<Key,Value> expectedSample = new TreeMap<>();
+    TreeMap<Key,Value> expectedAll = new TreeMap<>();
+    Sampler sampler = SamplerFactory.newSampler(sampleConfig2, config1);
+
+    for (int i = 0; i < 100; i++) {
+      mutate(imm, "r" + i, "cf:cq", 5, "v" + i, sampler, expectedSample, expectedAll);
+    }
+
+    MemoryIterator iter = imm.skvIterator(sampleConfig2);
+    iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+    Assert.assertEquals(expectedSample, readAll(iter));
+
+    SortedKeyValueIterator<Key,Value> dc = iter.deepCopy(new SampleIE(sampleConfig2));
+    dc.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+    Assert.assertEquals(expectedSample, readAll(dc));
+
+    iter = imm.skvIterator(null);
+    iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+    Assert.assertEquals(expectedAll, readAll(iter));
+
+    iter = imm.skvIterator(sampleConfig1);
+    thrown.expect(SampleNotPresentException.class);
+    iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+  }
+
+  private TreeMap<Key,Value> readAll(SortedKeyValueIterator<Key,Value> iter) throws IOException {
+    iter.seek(new Range(), LocalityGroupUtil.EMPTY_CF_SET, false);
+
+    TreeMap<Key,Value> actual = new TreeMap<>();
+    while (iter.hasTop()) {
+      actual.put(iter.getTopKey(), iter.getTopValue());
+      iter.next();
+    }
+    return actual;
+  }
+
   private void seekLocalityGroups(SortedKeyValueIterator<Key,Value> iter1) throws IOException {
     iter1.seek(new Range(), newCFSet("cf1"), true);
     ae(iter1, "r1", "cf1:x", 2, "1");
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java
index f3bd220..82ec8ec 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/LargestFirstMemoryManagerTest.java
@@ -22,7 +22,6 @@
 import java.util.List;
 
 import org.apache.accumulo.core.client.Instance;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.data.impl.KeyExtent;
@@ -34,6 +33,8 @@
 import org.apache.accumulo.server.tabletserver.MemoryManagementActions;
 import org.apache.accumulo.server.tabletserver.TabletState;
 import org.apache.hadoop.io.Text;
+import org.easymock.EasyMock;
+import org.junit.Before;
 import org.junit.Test;
 
 import com.google.common.base.Function;
@@ -47,11 +48,18 @@
   private static final long QGIG = ONE_GIG / 4;
   private static final long ONE_MINUTE = 60 * 1000;
 
+  private Instance inst;
+
+  @Before
+  public void mockInstance() {
+    inst = EasyMock.createMock(Instance.class);
+  }
+
   @Test
   public void test() throws Exception {
     LargestFirstMemoryManagerUnderTest mgr = new LargestFirstMemoryManagerUnderTest();
     ServerConfiguration config = new ServerConfiguration() {
-      ServerConfigurationFactory delegate = new ServerConfigurationFactory(new MockInstance());
+      ServerConfigurationFactory delegate = new ServerConfigurationFactory(inst);
 
       @Override
       public AccumuloConfiguration getConfiguration() {
@@ -176,7 +184,7 @@
     };
     LargestFirstMemoryManagerWithExistenceCheck mgr = new LargestFirstMemoryManagerWithExistenceCheck(existenceCheck);
     ServerConfiguration config = new ServerConfiguration() {
-      ServerConfigurationFactory delegate = new ServerConfigurationFactory(new MockInstance());
+      ServerConfigurationFactory delegate = new ServerConfigurationFactory(inst);
 
       @Override
       public AccumuloConfiguration getConfiguration() {
@@ -206,7 +214,7 @@
     mgr.init(config);
     MemoryManagementActions result;
     // one tablet is really big and the other is for a nonexistent table
-    KeyExtent extent = new KeyExtent(new Text("2"), new Text("j"), null);
+    KeyExtent extent = new KeyExtent("2", new Text("j"), null);
     result = mgr.getMemoryManagementActions(tablets(t(extent, ZERO, ONE_GIG, 0), t(k("j"), ZERO, ONE_GIG, 0)));
     assertEquals(1, result.tabletsToMinorCompact.size());
     assertEquals(extent, result.tabletsToMinorCompact.get(0));
@@ -248,7 +256,7 @@
   }
 
   private static KeyExtent k(String endRow) {
-    return new KeyExtent(new Text("1"), new Text(endRow), null);
+    return new KeyExtent("1", new Text(endRow), null);
   }
 
   private static class TestTabletState implements TabletState {
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategyTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategyTest.java
index 55226fb..e54e1c8 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategyTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/DefaultCompactionStrategyTest.java
@@ -41,6 +41,7 @@
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
 import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.server.fs.FileRef;
 import org.apache.hadoop.io.Text;
@@ -55,10 +56,11 @@
     Key second = null;
     if (secondString != null)
       second = new Key(new Text(secondString));
-    return new Pair<Key,Key>(first, second);
+    return new Pair<>(first, second);
   }
 
-  static final Map<String,Pair<Key,Key>> fakeFiles = new HashMap<String,Pair<Key,Key>>();
+  static final Map<String,Pair<Key,Key>> fakeFiles = new HashMap<>();
+
   static {
     fakeFiles.put("file1", keys("b", "m"));
     fakeFiles.put("file2", keys("n", "z"));
@@ -133,6 +135,11 @@
     @Override
     public void close() throws IOException {}
 
+    @Override
+    public FileSKVIterator getSample(SamplerConfigurationImpl sampleConfig) {
+      return null;
+    }
+
   }
 
   static final DefaultConfiguration dfault = AccumuloConfiguration.getDefaultConfiguration();
@@ -151,11 +158,11 @@
   }
 
   private MajorCompactionRequest createRequest(MajorCompactionReason reason, Object... objs) throws IOException {
-    return createRequest(new KeyExtent(new Text("0"), null, null), reason, objs);
+    return createRequest(new KeyExtent("0", null, null), reason, objs);
   }
 
   private MajorCompactionRequest createRequest(KeyExtent extent, MajorCompactionReason reason, Object... objs) throws IOException {
-    Map<FileRef,DataFileValue> files = new HashMap<FileRef,DataFileValue>();
+    Map<FileRef,DataFileValue> files = new HashMap<>();
     for (int i = 0; i < objs.length; i += 2) {
       files.put(new FileRef("hdfs://nn1/accumulo/tables/5/t-0001/" + (String) objs[i]), new DataFileValue(((Number) objs[i + 1]).longValue(), 0));
     }
@@ -167,7 +174,7 @@
   }
 
   private static Set<String> asStringSet(Collection<FileRef> refs) {
-    HashSet<String> result = new HashSet<String>();
+    HashSet<String> result = new HashSet<>();
     for (FileRef ref : refs) {
       result.add(ref.path().toString());
     }
@@ -175,7 +182,7 @@
   }
 
   private static Set<String> asSet(Collection<String> strings) {
-    HashSet<String> result = new HashSet<String>();
+    HashSet<String> result = new HashSet<>();
     for (String string : strings)
       result.add("hdfs://nn1/accumulo/tables/5/t-0001/" + string);
     return result;
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategyTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategyTest.java
index e5cdd72..648f451 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategyTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/SizeLimitCompactionStrategyTest.java
@@ -25,7 +25,6 @@
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.server.fs.FileRef;
-import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -35,7 +34,7 @@
 public class SizeLimitCompactionStrategyTest {
   private Map<FileRef,DataFileValue> nfl(String... sa) {
 
-    HashMap<FileRef,DataFileValue> ret = new HashMap<FileRef,DataFileValue>();
+    HashMap<FileRef,DataFileValue> ret = new HashMap<>();
     for (int i = 0; i < sa.length; i += 2) {
       ret.put(new FileRef("hdfs://nn1/accumulo/tables/5/t-0001/" + sa[i]), new DataFileValue(AccumuloConfiguration.getMemoryInBytes(sa[i + 1]), 1));
     }
@@ -46,12 +45,12 @@
   @Test
   public void testLimits() throws IOException {
     SizeLimitCompactionStrategy slcs = new SizeLimitCompactionStrategy();
-    HashMap<String,String> opts = new HashMap<String,String>();
+    HashMap<String,String> opts = new HashMap<>();
     opts.put(SizeLimitCompactionStrategy.SIZE_LIMIT_OPT, "1G");
 
     slcs.init(opts);
 
-    KeyExtent ke = new KeyExtent(new Text("0"), null, null);
+    KeyExtent ke = new KeyExtent("0", null, null);
     MajorCompactionRequest mcr = new MajorCompactionRequest(ke, MajorCompactionReason.NORMAL, null, AccumuloConfiguration.getDefaultConfiguration());
 
     mcr.setFiles(nfl("f1", "2G", "f2", "2G", "f3", "2G", "f4", "2G"));
@@ -63,7 +62,7 @@
     mcr.setFiles(nfl("f1", "2G", "f2", "2G", "f3", "2G", "f4", "2G", "f5", "500M", "f6", "500M", "f7", "500M", "f8", "500M"));
 
     Assert.assertTrue(slcs.shouldCompact(mcr));
-    Assert.assertEquals(nfl("f5", "500M", "f6", "500M", "f7", "500M", "f8", "500M").keySet(), new HashSet<FileRef>(slcs.getCompactionPlan(mcr).inputFiles));
+    Assert.assertEquals(nfl("f5", "500M", "f6", "500M", "f7", "500M", "f8", "500M").keySet(), new HashSet<>(slcs.getCompactionPlan(mcr).inputFiles));
     Assert.assertEquals(8, mcr.getFiles().size());
   }
 }
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategyTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategyTest.java
index 62962db..d2a1fe4 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategyTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/compaction/strategies/ConfigurableCompactionStrategyTest.java
@@ -28,7 +28,6 @@
 import org.apache.accumulo.tserver.compaction.CompactionPlan;
 import org.apache.accumulo.tserver.compaction.MajorCompactionReason;
 import org.apache.accumulo.tserver.compaction.MajorCompactionRequest;
-import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -38,7 +37,7 @@
 
   @Test
   public void testOutputOptions() throws Exception {
-    MajorCompactionRequest mcr = new MajorCompactionRequest(new KeyExtent(new Text("1"), null, null), MajorCompactionReason.USER, null, null);
+    MajorCompactionRequest mcr = new MajorCompactionRequest(new KeyExtent("1", null, null), MajorCompactionReason.USER, null, null);
 
     Map<FileRef,DataFileValue> files = new HashMap<>();
     files.put(new FileRef("hdfs://nn1/accumulo/tables/1/t-009/F00001.rf"), new DataFileValue(50000, 400));
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java
index c590242..0765b5d 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/constraints/ConstraintCheckerTest.java
@@ -53,7 +53,7 @@
   @Before
   public void setup() throws NoSuchMethodException, SecurityException {
     cc = createMockBuilder(ConstraintChecker.class).addMockedMethod("getConstraints").createMock();
-    constraints = new ArrayList<Constraint>();
+    constraints = new ArrayList<>();
     expect(cc.getConstraints()).andReturn(constraints);
 
     env = createMock(Environment.class);
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/log/DfsLoggerTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/DfsLoggerTest.java
index cd652e4..6f10a48 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/log/DfsLoggerTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/DfsLoggerTest.java
@@ -31,7 +31,7 @@
 
   @Test
   public void testDurabilityForGroupCommit() {
-    List<TabletMutations> lst = new ArrayList<TabletMutations>();
+    List<TabletMutations> lst = new ArrayList<>();
     assertEquals(Durability.NONE, DfsLogger.chooseDurabilityForGroupCommit(lst));
     TabletMutations m1 = new TabletMutations(0, 1, Collections.<Mutation> emptyList(), Durability.NONE);
     lst.add(m1);
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/log/LogEntryTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/LogEntryTest.java
new file mode 100644
index 0000000..2edcaa9
--- /dev/null
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/LogEntryTest.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.tserver.log;
+
+import static org.junit.Assert.assertEquals;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.tabletserver.log.LogEntry;
+import org.apache.hadoop.io.Text;
+import org.junit.Test;
+
+public class LogEntryTest {
+
+  @Test
+  public void test() throws Exception {
+    KeyExtent extent = new KeyExtent("1", null, new Text(""));
+    long ts = 12345678L;
+    String server = "localhost:1234";
+    String filename = "default/foo";
+    LogEntry entry = new LogEntry(extent, ts, server, filename);
+    assertEquals(extent, entry.extent);
+    assertEquals(server, entry.server);
+    assertEquals(filename, entry.filename);
+    assertEquals(ts, entry.timestamp);
+    assertEquals("1<; default/foo", entry.toString());
+    assertEquals(new Text("log"), entry.getColumnFamily());
+    assertEquals(new Text("localhost:1234/default/foo"), entry.getColumnQualifier());
+    LogEntry copy = LogEntry.fromBytes(entry.toBytes());
+    assertEquals(entry.toString(), copy.toString());
+    Key key = new Key(new Text("1<"), new Text("log"), new Text("localhost:1234/default/foo"));
+    key.setTimestamp(ts);
+    LogEntry copy2 = LogEntry.fromKeyValue(key, entry.getValue());
+    assertEquals(entry.toString(), copy2.toString());
+    assertEquals(entry.timestamp, copy2.timestamp);
+    assertEquals("foo", entry.getUniqueID());
+    assertEquals("localhost:1234/default/foo", entry.getName());
+    assertEquals(new Value("default/foo".getBytes()), entry.getValue());
+  }
+
+}
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java
index b47b376..b65d5ce 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/log/SortedLogRecoveryTest.java
@@ -56,7 +56,7 @@
 
 public class SortedLogRecoveryTest {
 
-  static final KeyExtent extent = new KeyExtent(new Text("table"), null, null);
+  static final KeyExtent extent = new KeyExtent("table", null, null);
   static final Text cf = new Text("cf");
   static final Text cq = new Text("cq");
   static final Value value = new Value("value".getBytes());
@@ -113,7 +113,7 @@
   }
 
   private static class CaptureMutations implements MutationReceiver {
-    public ArrayList<Mutation> result = new ArrayList<Mutation>();
+    public ArrayList<Mutation> result = new ArrayList<>();
 
     @Override
     public void receive(Mutation m) {
@@ -133,7 +133,7 @@
     VolumeManager fs = VolumeManagerImpl.getLocal(workdir);
     final Path workdirPath = new Path("file://" + workdir);
     fs.deleteRecursively(workdirPath);
-    ArrayList<Path> dirs = new ArrayList<Path>();
+    ArrayList<Path> dirs = new ArrayList<>();
     try {
       for (Entry<String,KeyValue[]> entry : logs.entrySet()) {
         String path = workdir + "/" + entry.getKey();
@@ -177,7 +177,7 @@
     KeyValue entries5[] = new KeyValue[] {createKeyValue(OPEN, 0, 4, "70"), createKeyValue(DEFINE_TABLET, 1, 4, extent),
         createKeyValue(COMPACTION_START, 3, 4, "/t1/f3"), createKeyValue(MUTATION, 2, 4, ignored), createKeyValue(MUTATION, 6, 4, m2),};
 
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     logs.put("entries3", entries3);
@@ -217,7 +217,7 @@
         createKeyValue(MUTATION, 11, 1, ignored), createKeyValue(MUTATION, 15, 1, m), createKeyValue(MUTATION, 16, 1, m2),};
     KeyValue entries4[] = new KeyValue[] {createKeyValue(OPEN, 17, -1, "4"), createKeyValue(DEFINE_TABLET, 18, 1, extent),
         createKeyValue(COMPACTION_START, 20, 1, "/t1/f3"), createKeyValue(MUTATION, 19, 1, m3), createKeyValue(MUTATION, 21, 1, m4),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     logs.put("entries3", entries3);
@@ -252,7 +252,7 @@
     KeyValue entries2[] = new KeyValue[] {createKeyValue(OPEN, 0, 1, "2"), createKeyValue(DEFINE_TABLET, 1, 1, extent),
         createKeyValue(COMPACTION_START, 2, 1, "/t1/f1"), createKeyValue(COMPACTION_FINISH, 3, 1, "/t1/f1"), createKeyValue(MUTATION, 3, 1, m2),};
 
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
 
@@ -269,7 +269,7 @@
   public void testEmpty() throws IOException {
     // Create a test log
     KeyValue entries[] = new KeyValue[] {createKeyValue(OPEN, 0, -1, "1"), createKeyValue(DEFINE_TABLET, 1, 1, extent),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("testlog", entries);
     // Recover
     List<Mutation> mutations = recover(logs, extent);
@@ -282,7 +282,7 @@
   public void testMissingDefinition() {
     // Create a test log
     KeyValue entries[] = new KeyValue[] {createKeyValue(OPEN, 0, -1, "1"),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("testlog", entries);
     // Recover
     try {
@@ -297,7 +297,7 @@
     Mutation m = new ServerMutation(new Text("row1"));
     m.put(cf, cq, value);
     KeyValue entries[] = new KeyValue[] {createKeyValue(OPEN, 0, -1, "1"), createKeyValue(DEFINE_TABLET, 1, 1, extent), createKeyValue(MUTATION, 2, 1, m),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("testlog", entries);
     // Recover
     List<Mutation> mutations = recover(logs, extent);
@@ -316,7 +316,7 @@
     KeyValue entries[] = new KeyValue[] {createKeyValue(OPEN, 0, -1, "1"), createKeyValue(DEFINE_TABLET, 1, 1, extent),
         createKeyValue(COMPACTION_START, 3, 1, "/t1/f1"), createKeyValue(COMPACTION_FINISH, 4, 1, null), createKeyValue(MUTATION, 2, 1, ignored),
         createKeyValue(MUTATION, 5, 1, m),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("testlog", entries);
     // Recover
     List<Mutation> mutations = recover(logs, extent);
@@ -336,7 +336,7 @@
         createKeyValue(COMPACTION_START, 3, 1, "/t1/f1"), createKeyValue(MUTATION, 2, 1, ignored),};
     KeyValue entries2[] = new KeyValue[] {createKeyValue(OPEN, 4, -1, "1"), createKeyValue(DEFINE_TABLET, 5, 1, extent),
         createKeyValue(COMPACTION_FINISH, 6, 1, null), createKeyValue(MUTATION, 7, 1, m),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     // Recover
@@ -359,7 +359,7 @@
         createKeyValue(COMPACTION_START, 3, 1, "/t1/f1"), createKeyValue(MUTATION, 2, 1, ignored), createKeyValue(MUTATION, 4, 1, m),};
     KeyValue entries2[] = new KeyValue[] {createKeyValue(OPEN, 5, -1, "1"), createKeyValue(DEFINE_TABLET, 6, 1, extent),
         createKeyValue(COMPACTION_FINISH, 7, 1, null), createKeyValue(MUTATION, 8, 1, m2),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     // Recover
@@ -382,7 +382,7 @@
     KeyValue entries[] = new KeyValue[] {createKeyValue(OPEN, 0, -1, "1"), createKeyValue(DEFINE_TABLET, 1, 1, extent),
         createKeyValue(COMPACTION_FINISH, 2, 1, null), createKeyValue(COMPACTION_START, 4, 1, "/t1/f1"), createKeyValue(COMPACTION_FINISH, 6, 1, null),
         createKeyValue(MUTATION, 3, 1, ignored), createKeyValue(MUTATION, 5, 1, m), createKeyValue(MUTATION, 7, 1, m2),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     // Recover
     List<Mutation> mutations = recover(logs, extent);
@@ -408,7 +408,7 @@
     KeyValue entries2[] = new KeyValue[] {createKeyValue(OPEN, 5, -1, "1"), createKeyValue(DEFINE_TABLET, 6, 1, extent), createKeyValue(MUTATION, 7, 1, m2),};
     KeyValue entries3[] = new KeyValue[] {createKeyValue(OPEN, 8, -1, "1"), createKeyValue(DEFINE_TABLET, 9, 1, extent),
         createKeyValue(COMPACTION_FINISH, 10, 1, null), createKeyValue(MUTATION, 11, 1, m3),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     logs.put("entries3", entries3);
@@ -431,7 +431,7 @@
     KeyValue entries[] = new KeyValue[] {createKeyValue(OPEN, 0, -1, "1"), createKeyValue(DEFINE_TABLET, 1, 1, extent),
         createKeyValue(COMPACTION_START, 30, 1, "/t1/f1"), createKeyValue(COMPACTION_FINISH, 32, 1, "/t1/f1"), createKeyValue(MUTATION, 29, 1, m1),
         createKeyValue(MUTATION, 30, 1, m2),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("testlog", entries);
     // Recover
     List<Mutation> mutations = recover(logs, extent);
@@ -454,7 +454,7 @@
         createKeyValue(COMPACTION_START, 2, 1, "/t1/f1"), createKeyValue(COMPACTION_FINISH, 4, 1, null), createKeyValue(MUTATION, 3, 1, m),};
     KeyValue entries2[] = new KeyValue[] {createKeyValue(OPEN, 5, -1, "1"), createKeyValue(DEFINE_TABLET, 6, 1, extent),
         createKeyValue(COMPACTION_START, 8, 1, "/t1/f1"), createKeyValue(MUTATION, 7, 1, m2), createKeyValue(MUTATION, 9, 1, m3),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     // Recover
@@ -498,7 +498,7 @@
         // createKeyValue(COMPACTION_START, 18, 1, "somefile"),
         // createKeyValue(COMPACTION_FINISH, 19, 1, null),
         createKeyValue(MUTATION, 8, 1, m5), createKeyValue(MUTATION, 20, 1, m6),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     logs.put("entries3", entries3);
@@ -535,7 +535,7 @@
         createKeyValue(MUTATION, 3, 1, m), createKeyValue(MUTATION, 3, 1, m2), createKeyValue(MUTATION, 3, 1, m3),};
     KeyValue entries2[] = new KeyValue[] {createKeyValue(OPEN, 0, -1, "2"), createKeyValue(DEFINE_TABLET, 1, 1, extent),
         createKeyValue(COMPACTION_START, 2, 1, "/t1/f12"), createKeyValue(MUTATION, 3, 1, m4), createKeyValue(MUTATION, 3, 1, m5),};
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
     logs.put("entries2", entries2);
     // Recover
@@ -566,7 +566,7 @@
 
     Arrays.sort(entries);
 
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
 
     List<Mutation> mutations = recover(logs, extent);
@@ -586,7 +586,7 @@
         createKeyValue(MUTATION, 2, 2, ignored), createKeyValue(COMPACTION_START, 3, 2, "/t/f1")};
 
     Arrays.sort(entries);
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
 
     List<Mutation> mutations = recover(logs, Collections.singleton("/t/f1"), extent);
@@ -607,7 +607,7 @@
         createKeyValue(MUTATION, 2, 2, ignored), createKeyValue(COMPACTION_START, 3, 2, "/t/f1"), createKeyValue(MUTATION, 4, 2, m),};
 
     Arrays.sort(entries);
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
 
     List<Mutation> mutations = recover(logs, Collections.singleton("/t/f1"), extent);
@@ -626,10 +626,10 @@
         createKeyValue(COMPACTION_START, 3, 2, compactionStartFile), createKeyValue(MUTATION, 4, 2, m2),};
 
     Arrays.sort(entries);
-    Map<String,KeyValue[]> logs = new TreeMap<String,KeyValue[]>();
+    Map<String,KeyValue[]> logs = new TreeMap<>();
     logs.put("entries", entries);
 
-    HashSet<String> filesSet = new HashSet<String>();
+    HashSet<String> filesSet = new HashSet<>();
     filesSet.addAll(Arrays.asList(tabletFiles));
     List<Mutation> mutations = recover(logs, filesSet, extent);
 
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/logger/LogFileTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/logger/LogFileTest.java
index 06fe1d5..582d6df 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/logger/LogFileTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/logger/LogFileTest.java
@@ -80,7 +80,7 @@
     assertEquals(key.seq, 3);
     assertEquals(key.tid, 4);
     assertEquals(key.filename, "some file");
-    KeyExtent tablet = new KeyExtent(new Text("table"), new Text("bbbb"), new Text("aaaa"));
+    KeyExtent tablet = new KeyExtent("table", new Text("bbbb"), new Text("aaaa"));
     readWrite(DEFINE_TABLET, 5, 6, null, tablet, null, key, value);
     assertEquals(key.event, DEFINE_TABLET);
     assertEquals(key.seq, 5);
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystemTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystemTest.java
index 76c8bdf..71d8c50 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystemTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/replication/AccumuloReplicaSystemTest.java
@@ -77,7 +77,7 @@
      * look like in a WAL. They are solely for testing that each LogEvents is handled, order is not important.
      */
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text("1"), null, null);
+    key.tablet = new KeyExtent("1", null, null);
     key.tid = 1;
 
     key.write(dos);
@@ -92,7 +92,7 @@
     value.write(dos);
 
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text("2"), null, null);
+    key.tablet = new KeyExtent("2", null, null);
     key.tid = 2;
     value.mutations = Collections.emptyList();
 
@@ -123,7 +123,7 @@
     value.write(dos);
 
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text("1"), null, null);
+    key.tablet = new KeyExtent("1", null, null);
     key.tid = 3;
     value.mutations = Collections.emptyList();
 
@@ -183,7 +183,7 @@
      * look like in a WAL. They are solely for testing that each LogEvents is handled, order is not important.
      */
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text("1"), null, null);
+    key.tablet = new KeyExtent("1", null, null);
     key.tid = 1;
 
     key.write(dos);
@@ -198,7 +198,7 @@
     value.write(dos);
 
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text("2"), null, null);
+    key.tablet = new KeyExtent("2", null, null);
     key.tid = 2;
     value.mutations = Collections.emptyList();
 
@@ -229,7 +229,7 @@
     value.write(dos);
 
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text("1"), null, null);
+    key.tablet = new KeyExtent("1", null, null);
     key.tid = 3;
     value.mutations = Collections.emptyList();
 
@@ -380,7 +380,7 @@
      * look like in a WAL. They are solely for testing that each LogEvents is handled, order is not important.
      */
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text("1"), null, null);
+    key.tablet = new KeyExtent("1", null, null);
     key.tid = 1;
 
     key.write(dos);
diff --git a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
index e5d893a..d9c6862 100644
--- a/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
+++ b/server/tserver/src/test/java/org/apache/accumulo/tserver/tablet/RootFilesTest.java
@@ -60,7 +60,7 @@
 
       rootTabletDir = new File(tempFolder.newFolder(), "accumulo/tables/+r/root_tablet");
       assertTrue(rootTabletDir.mkdirs() || rootTabletDir.isDirectory());
-      oldDatafiles = new HashSet<FileRef>();
+      oldDatafiles = new HashSet<>();
       for (String filename : inputFiles) {
         File file = new File(rootTabletDir, filename);
         assertTrue(file.createNewFile());
@@ -91,17 +91,17 @@
     public Collection<String> cleanupReplacement(String... expectedFiles) throws IOException {
       Collection<String> ret = RootFiles.cleanupReplacement(vm, vm.listStatus(new Path(rootTabletDir.toURI())), true);
 
-      HashSet<String> expected = new HashSet<String>();
+      HashSet<String> expected = new HashSet<>();
       for (String efile : expectedFiles)
         expected.add(new File(rootTabletDir, efile).toURI().toString());
 
-      Assert.assertEquals(expected, new HashSet<String>(ret));
+      Assert.assertEquals(expected, new HashSet<>(ret));
 
       return ret;
     }
 
     public void assertFiles(String... files) {
-      HashSet<String> actual = new HashSet<String>();
+      HashSet<String> actual = new HashSet<>();
       File[] children = rootTabletDir.listFiles();
       if (children != null) {
         for (File file : children) {
@@ -109,7 +109,7 @@
         }
       }
 
-      HashSet<String> expected = new HashSet<String>();
+      HashSet<String> expected = new HashSet<>();
       expected.addAll(Arrays.asList(files));
 
       Assert.assertEquals(expected, actual);
diff --git a/shell/pom.xml b/shell/pom.xml
index a102d1e..2dee4c2 100644
--- a/shell/pom.xml
+++ b/shell/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-shell</artifactId>
   <name>Apache Accumulo Shell</name>
@@ -36,6 +36,10 @@
       <optional>true</optional>
     </dependency>
     <dependency>
+      <groupId>com.google.code.gson</groupId>
+      <artifactId>gson</artifactId>
+    </dependency>
+    <dependency>
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
     </dependency>
diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
index 393fe26..7678ead 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
@@ -55,7 +55,6 @@
 import org.apache.accumulo.core.client.ZooKeeperInstance;
 import org.apache.accumulo.core.client.impl.ClientContext;
 import org.apache.accumulo.core.client.impl.Tables;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
@@ -67,9 +66,10 @@
 import org.apache.accumulo.core.tabletserver.thrift.ConstraintViolationException;
 import org.apache.accumulo.core.trace.DistributedTrace;
 import org.apache.accumulo.core.util.BadArgumentException;
-import org.apache.accumulo.core.util.format.BinaryFormatter;
+import org.apache.accumulo.core.util.DeprecationUtil;
 import org.apache.accumulo.core.util.format.DefaultFormatter;
 import org.apache.accumulo.core.util.format.Formatter;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.core.util.format.FormatterFactory;
 import org.apache.accumulo.core.volume.VolumeConfiguration;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
@@ -90,6 +90,7 @@
 import org.apache.accumulo.shell.commands.CreateUserCommand;
 import org.apache.accumulo.shell.commands.DUCommand;
 import org.apache.accumulo.shell.commands.DebugCommand;
+import org.apache.accumulo.shell.commands.DeleteAuthsCommand;
 import org.apache.accumulo.shell.commands.DeleteCommand;
 import org.apache.accumulo.shell.commands.DeleteIterCommand;
 import org.apache.accumulo.shell.commands.DeleteManyCommand;
@@ -122,6 +123,7 @@
 import org.apache.accumulo.shell.commands.InfoCommand;
 import org.apache.accumulo.shell.commands.InsertCommand;
 import org.apache.accumulo.shell.commands.InterpreterCommand;
+import org.apache.accumulo.shell.commands.ListBulkCommand;
 import org.apache.accumulo.shell.commands.ListCompactionsCommand;
 import org.apache.accumulo.shell.commands.ListIterCommand;
 import org.apache.accumulo.shell.commands.ListScansCommand;
@@ -205,13 +207,12 @@
   protected ConsoleReader reader;
   private AuthenticationToken token;
   private final Class<? extends Formatter> defaultFormatterClass = DefaultFormatter.class;
-  private final Class<? extends Formatter> binaryFormatterClass = BinaryFormatter.class;
-  public Map<String,List<IteratorSetting>> scanIteratorOptions = new HashMap<String,List<IteratorSetting>>();
-  public Map<String,List<IteratorSetting>> iteratorProfiles = new HashMap<String,List<IteratorSetting>>();
+  public Map<String,List<IteratorSetting>> scanIteratorOptions = new HashMap<>();
+  public Map<String,List<IteratorSetting>> iteratorProfiles = new HashMap<>();
 
   private Token rootToken;
-  public final Map<String,Command> commandFactory = new TreeMap<String,Command>();
-  public final Map<String,Command[]> commandGrouping = new TreeMap<String,Command[]>();
+  public final Map<String,Command> commandFactory = new TreeMap<>();
+  public final Map<String,Command[]> commandGrouping = new TreeMap<>();
 
   // exit if true
   private boolean exit = false;
@@ -227,29 +228,41 @@
   private long authTimeout;
   private long lastUserActivity = System.nanoTime();
   private boolean logErrorsToConsole = false;
-  private PrintWriter writer = null;
   private boolean masking = false;
 
+  {
+    // set the JLine output encoding to some reasonable default if it isn't already set
+    // despite the misleading property name, "input.encoding" is the property jline uses for the encoding of the output stream writer
+    String prop = "input.encoding";
+    if (System.getProperty(prop) == null) {
+      String value = System.getProperty("jline.WindowsTerminal.output.encoding");
+      if (value == null) {
+        value = System.getProperty("file.encoding");
+      }
+      if (value != null) {
+        System.setProperty(prop, value);
+      }
+    }
+  }
+
   // no arg constructor should do minimal work since its used in Main ServiceLoader
   public Shell() {}
 
-  public Shell(ConsoleReader reader, PrintWriter writer) {
+  public Shell(ConsoleReader reader) {
     super();
     this.reader = reader;
-    this.writer = writer;
   }
 
   /**
    * Configures the shell using the provided options. Not for client use.
    *
    * @return true if the shell was successfully configured, false otherwise.
+   * @throws IOException
+   *           if problems occur creating the ConsoleReader
    */
   public boolean config(String... args) throws IOException {
     if (this.reader == null)
       this.reader = new ConsoleReader();
-    if (this.writer == null)
-      this.writer = new PrintWriter(new OutputStreamWriter(System.out, Charset.forName(System.getProperty("jline.WindowsTerminal.output.encoding",
-          System.getProperty("file.encoding")))));
     ShellOptionsJC options = new ShellOptionsJC();
     JCommander jc = new JCommander();
 
@@ -306,7 +319,7 @@
 
     tabCompletion = !options.isTabCompletionDisabled();
 
-    // Use a fake (Mock), ZK, or HdfsZK Accumulo instance
+    // Use a ZK or HdfsZK Accumulo instance
     setInstance(options);
 
     // AuthenticationToken options
@@ -389,7 +402,7 @@
     Command[] dataCommands = {new DeleteCommand(), new DeleteManyCommand(), new DeleteRowsCommand(), new EGrepCommand(), new FormatterCommand(),
         new InterpreterCommand(), new GrepCommand(), new ImportDirectoryCommand(), new InsertCommand(), new MaxRowCommand(), new ScanCommand()};
     Command[] debuggingCommands = {new ClasspathCommand(), new DebugCommand(), new ListScansCommand(), new ListCompactionsCommand(), new TraceCommand(),
-        new PingCommand()};
+        new PingCommand(), new ListBulkCommand()};
     Command[] execCommands = {new ExecfileCommand(), new HistoryCommand(), new ExtensionCommand(), new ScriptCommand()};
     Command[] exitCommands = {new ByeCommand(), new ExitCommand(), new QuitCommand()};
     Command[] helpCommands = {new AboutCommand(), new HelpCommand(), new InfoCommand(), new QuestionCommand()};
@@ -406,7 +419,7 @@
     Command[] tableControlCommands = {new AddSplitsCommand(), new CompactCommand(), new ConstraintCommand(), new FlushCommand(), new GetGroupsCommand(),
         new GetSplitsCommand(), new MergeCommand(), new SetGroupsCommand()};
     Command[] userCommands = {new AddAuthsCommand(), new CreateUserCommand(), new DeleteUserCommand(), new DropUserCommand(), new GetAuthsCommand(),
-        new PasswdCommand(), new SetAuthsCommand(), new UsersCommand()};
+        new PasswdCommand(), new SetAuthsCommand(), new UsersCommand(), new DeleteAuthsCommand()};
     commandGrouping.put("-- Writing, Reading, and Removing Data --", dataCommands);
     commandGrouping.put("-- Debugging Commands -------------------", debuggingCommands);
     commandGrouping.put("-- Shell Execution Commands -------------", execCommands);
@@ -439,7 +452,7 @@
     // should only be one set of instance options set
     instance = null;
     if (options.isFake()) {
-      instance = new MockInstance("fake");
+      instance = DeprecationUtil.makeMockInstance("fake");
     } else {
       String instanceName, hosts;
       if (options.isHdfsZooInstance()) {
@@ -586,8 +599,7 @@
   }
 
   public static void main(String args[]) throws IOException {
-    new Shell(new ConsoleReader(), new PrintWriter(new OutputStreamWriter(System.out, Charset.forName(System.getProperty(
-        "jline.WindowsTerminal.output.encoding", System.getProperty("file.encoding")))))).execute(args);
+    new Shell(new ConsoleReader()).execute(args);
   }
 
   public int start() throws IOException {
@@ -869,15 +881,15 @@
       namespaces = Collections.emptySet();
     }
 
-    Map<Command.CompletionSet,Set<String>> options = new HashMap<Command.CompletionSet,Set<String>>();
+    Map<Command.CompletionSet,Set<String>> options = new HashMap<>();
 
-    Set<String> commands = new HashSet<String>();
+    Set<String> commands = new HashSet<>();
     for (String a : commandFactory.keySet())
       commands.add(a);
 
-    Set<String> modifiedUserlist = new HashSet<String>();
-    Set<String> modifiedTablenames = new HashSet<String>();
-    Set<String> modifiedNamespaces = new HashSet<String>();
+    Set<String> modifiedUserlist = new HashSet<>();
+    Set<String> modifiedTablenames = new HashSet<>();
+    Set<String> modifiedNamespaces = new HashSet<>();
 
     for (String a : tableNames)
       modifiedTablenames.add(a.replaceAll("([\\s'\"])", "\\\\$1"));
@@ -970,11 +982,11 @@
 
     // The general version of this method uses the HelpFormatter
     // that comes with the apache Options package to print out the help
-    public final void printHelp(Shell shellState) {
+    public final void printHelp(Shell shellState) throws IOException {
       shellState.printHelp(usage(), "description: " + this.description(), getOptionsWithHelp());
     }
 
-    public final void printHelp(Shell shellState, int width) {
+    public final void printHelp(Shell shellState, int width) throws IOException {
       shellState.printHelp(usage(), "description: " + this.description(), getOptionsWithHelp(), width);
     }
 
@@ -1020,7 +1032,7 @@
 
     @Override
     public void close() {}
-  };
+  }
 
   public static class PrintFile implements PrintLine {
     PrintWriter writer;
@@ -1038,7 +1050,7 @@
     public void close() {
       writer.close();
     }
-  };
+  }
 
   public final void printLines(Iterator<String> lines, boolean paginate) throws IOException {
     printLines(lines, paginate, null);
@@ -1095,22 +1107,14 @@
     }
   }
 
-  public final void printRecords(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps, boolean paginate, Class<? extends Formatter> formatterClass,
+  public final void printRecords(Iterable<Entry<Key,Value>> scanner, FormatterConfig config, boolean paginate, Class<? extends Formatter> formatterClass,
       PrintLine outFile) throws IOException {
-    printLines(FormatterFactory.getFormatter(formatterClass, scanner, printTimestamps), paginate, outFile);
+    printLines(FormatterFactory.getFormatter(formatterClass, scanner, config), paginate, outFile);
   }
 
-  public final void printRecords(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps, boolean paginate, Class<? extends Formatter> formatterClass)
+  public final void printRecords(Iterable<Entry<Key,Value>> scanner, FormatterConfig config, boolean paginate, Class<? extends Formatter> formatterClass)
       throws IOException {
-    printLines(FormatterFactory.getFormatter(formatterClass, scanner, printTimestamps), paginate);
-  }
-
-  public final void printBinaryRecords(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps, boolean paginate, PrintLine outFile) throws IOException {
-    printLines(FormatterFactory.getFormatter(binaryFormatterClass, scanner, printTimestamps), paginate, outFile);
-  }
-
-  public final void printBinaryRecords(Iterable<Entry<Key,Value>> scanner, boolean printTimestamps, boolean paginate) throws IOException {
-    printLines(FormatterFactory.getFormatter(binaryFormatterClass, scanner, printTimestamps), paginate);
+    printLines(FormatterFactory.getFormatter(formatterClass, scanner, config), paginate);
   }
 
   public static String repeat(String s, int c) {
@@ -1156,14 +1160,13 @@
     return Logger.getLogger(Constants.CORE_PACKAGE_NAME).isTraceEnabled();
   }
 
-  private final void printHelp(String usage, String description, Options opts) {
+  private final void printHelp(String usage, String description, Options opts) throws IOException {
     printHelp(usage, description, opts, Integer.MAX_VALUE);
   }
 
-  private final void printHelp(String usage, String description, Options opts, int width) {
-    // TODO Use the OutputStream from the JLine ConsoleReader if we can ever get access to it
-    new HelpFormatter().printHelp(writer, width, usage, description, opts, 2, 5, null, true);
-    writer.flush();
+  private final void printHelp(String usage, String description, Options opts, int width) throws IOException {
+    new HelpFormatter().printHelp(new PrintWriter(reader.getOutput()), width, usage, description, opts, 2, 5, null, true);
+    reader.getOutput().flush();
   }
 
   public int getExitCode() {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java b/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
index 01b7ce3..1e5156f 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/ShellOptionsJC.java
@@ -105,7 +105,7 @@
       String process(String value) {
         return value;
       }
-    };
+    }
 
     @Override
     public String convert(String value) {
@@ -141,7 +141,7 @@
   private AuthenticationToken authenticationToken;
 
   @DynamicParameter(names = {"-l", "--tokenProperty"}, description = "login properties in the format key=value. Reuse -l for each property")
-  private Map<String,String> tokenProperties = new TreeMap<String,String>();
+  private Map<String,String> tokenProperties = new TreeMap<>();
 
   @Parameter(names = "--disable-tab-completion", description = "disables tab completion (for less overhead when scripting)")
   private boolean tabCompletionDisabled;
@@ -169,7 +169,7 @@
   private boolean hdfsZooInstance;
 
   @Parameter(names = {"-z", "--zooKeeperInstance"}, description = "use a zookeeper instance with the given instance name and list of zoo hosts", arity = 2)
-  private List<String> zooKeeperInstance = new ArrayList<String>();
+  private List<String> zooKeeperInstance = new ArrayList<>();
 
   @Parameter(names = {"--ssl"}, description = "use ssl to connect to accumulo")
   private boolean useSsl = false;
diff --git a/shell/src/main/java/org/apache/accumulo/shell/Token.java b/shell/src/main/java/org/apache/accumulo/shell/Token.java
index fe29537..41921e5 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Token.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Token.java
@@ -30,8 +30,8 @@
  */
 
 public class Token {
-  private Set<String> command = new HashSet<String>();
-  private Set<Token> subcommands = new HashSet<Token>();
+  private Set<String> command = new HashSet<>();
+  private Set<Token> subcommands = new HashSet<>();
   private boolean caseSensitive = false;
 
   public Token() {}
@@ -73,7 +73,7 @@
   }
 
   public Set<String> getSubcommandNames() {
-    HashSet<String> set = new HashSet<String>();
+    HashSet<String> set = new HashSet<>();
     for (Token t : subcommands)
       set.addAll(t.getCommandNames());
     return set;
@@ -81,7 +81,7 @@
 
   public Set<String> getSubcommandNames(String startsWith) {
     Iterator<Token> iter = subcommands.iterator();
-    HashSet<String> set = new HashSet<String>();
+    HashSet<String> set = new HashSet<>();
     while (iter.hasNext()) {
       Token t = iter.next();
       Set<String> subset = t.getCommandNames();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveCompactionIterator.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveCompactionIterator.java
index bd039b0..27a5bc8 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveCompactionIterator.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveCompactionIterator.java
@@ -54,7 +54,7 @@
   }
 
   private void readNext() {
-    final List<String> compactions = new ArrayList<String>();
+    final List<String> compactions = new ArrayList<>();
 
     while (tsIter.hasNext()) {
 
@@ -62,7 +62,7 @@
       try {
         List<ActiveCompaction> acl = instanceOps.getActiveCompactions(tserver);
 
-        acl = new ArrayList<ActiveCompaction>(acl);
+        acl = new ArrayList<>(acl);
 
         Collections.sort(acl, new Comparator<ActiveCompaction>() {
           @Override
@@ -80,8 +80,8 @@
 
           ac.getIterators();
 
-          List<String> iterList = new ArrayList<String>();
-          Map<String,Map<String,String>> iterOpts = new HashMap<String,Map<String,String>>();
+          List<String> iterList = new ArrayList<>();
+          Map<String,Map<String,String>> iterOpts = new HashMap<>();
           for (IteratorSetting is : ac.getIterators()) {
             iterList.add(is.getName() + "=" + is.getPriority() + "," + is.getIteratorClass());
             iterOpts.put(is.getName(), is.getOptions());
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveScanIterator.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveScanIterator.java
index 1089d78..ab0b344 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveScanIterator.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ActiveScanIterator.java
@@ -33,7 +33,7 @@
   private Iterator<String> scansIter;
 
   private void readNext() {
-    final List<String> scans = new ArrayList<String>();
+    final List<String> scans = new ArrayList<>();
 
     while (tsIter.hasNext()) {
 
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/AddSplitsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/AddSplitsCommand.java
index 964ec41..86d08a0 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/AddSplitsCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/AddSplitsCommand.java
@@ -36,7 +36,7 @@
     final String tableName = OptUtil.getTableOpt(cl, shellState);
     final boolean decode = cl.hasOption(base64Opt.getOpt());
 
-    final TreeSet<Text> splits = new TreeSet<Text>();
+    final TreeSet<Text> splits = new TreeSet<>();
 
     if (cl.hasOption(optSplitsFile.getOpt())) {
       splits.addAll(ShellUtil.scanFile(cl.getOptionValue(optSplitsFile.getOpt()), decode));
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/BulkImportListIterator.java b/shell/src/main/java/org/apache/accumulo/shell/commands/BulkImportListIterator.java
new file mode 100644
index 0000000..b1cc72d
--- /dev/null
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/BulkImportListIterator.java
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.shell.commands;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.accumulo.core.master.thrift.BulkImportStatus;
+import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
+import org.apache.accumulo.core.master.thrift.TabletServerStatus;
+import org.apache.accumulo.core.util.Duration;
+
+public class BulkImportListIterator implements Iterator<String> {
+
+  private final Iterator<String> iter;
+
+  public BulkImportListIterator(List<String> tservers, MasterMonitorInfo stats) {
+    List<String> result = new ArrayList<>();
+    for (BulkImportStatus status : stats.bulkImports) {
+      result.add(format(status));
+    }
+    if (!tservers.isEmpty()) {
+      for (TabletServerStatus tserver : stats.tServerInfo) {
+        if (tservers.contains(tserver.name)) {
+          result.add(tserver.name + ":");
+          for (BulkImportStatus status : tserver.bulkImports) {
+            result.add(format(status));
+          }
+        }
+      }
+    }
+    iter = result.iterator();
+  }
+
+  private String format(BulkImportStatus status) {
+    long diff = System.currentTimeMillis() - status.startTime;
+    return String.format("%25s | %4s | %s", status.filename, Duration.format(diff, " ", "-"), status.state);
+  }
+
+  @Override
+  public boolean hasNext() {
+    return iter.hasNext();
+  }
+
+  @Override
+  public String next() {
+    return iter.next();
+  }
+
+  @Override
+  public void remove() {
+    iter.remove();
+  }
+
+}
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/CloneTableCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/CloneTableCommand.java
index daca82c..8f99f9c 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/CloneTableCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/CloneTableCommand.java
@@ -42,8 +42,8 @@
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws AccumuloException, AccumuloSecurityException,
       TableNotFoundException, TableExistsException {
 
-    final HashMap<String,String> props = new HashMap<String,String>();
-    final HashSet<String> exclude = new HashSet<String>();
+    final HashMap<String,String> props = new HashMap<>();
+    final HashSet<String> exclude = new HashSet<>();
     boolean flush = true;
 
     if (cl.hasOption(setPropsOption.getOpt())) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/CompactCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/CompactCommand.java
index f183b25..c8b0e11 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/CompactCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/CompactCommand.java
@@ -38,7 +38,7 @@
 
   // file selection and file output options
   private Option enameOption, epathOption, sizeLtOption, sizeGtOption, minFilesOption, outBlockSizeOpt, outHdfsBlockSizeOpt, outIndexBlockSizeOpt,
-      outCompressionOpt, outReplication;
+      outCompressionOpt, outReplication, enoSampleOption;
 
   private CompactionConfig compactionConfig = null;
 
@@ -89,6 +89,7 @@
   private Map<String,String> getConfigurableCompactionStrategyOpts(CommandLine cl) {
     Map<String,String> opts = new HashMap<>();
 
+    put(cl, opts, enoSampleOption, CompactionSettings.SF_NO_SAMPLE);
     put(cl, opts, enameOption, CompactionSettings.SF_NAME_RE_OPT);
     put(cl, opts, epathOption, CompactionSettings.SF_PATH_RE_OPT);
     put(cl, opts, sizeLtOption, CompactionSettings.SF_LT_ESIZE_OPT);
@@ -190,6 +191,9 @@
     cancelOpt = new Option(null, "cancel", false, "cancel user initiated compactions");
     opts.addOption(cancelOpt);
 
+    enoSampleOption = new Option(null, "sf-no-sample", false,
+        "Select files that have no sample data or sample data that differs from the table configuration.");
+    opts.addOption(enoSampleOption);
     enameOption = newLAO("sf-ename", "Select files using regular expression to match file names. Only matches against last part of path.");
     opts.addOption(enameOption);
     epathOption = newLAO("sf-epath", "Select files using regular expression to match file paths to compact. Matches against full path.");
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ConfigCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ConfigCommand.java
index ec3f276..482bd84 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ConfigCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ConfigCommand.java
@@ -140,21 +140,21 @@
       }
     } else {
       // display properties
-      final TreeMap<String,String> systemConfig = new TreeMap<String,String>();
+      final TreeMap<String,String> systemConfig = new TreeMap<>();
       systemConfig.putAll(shellState.getConnector().instanceOperations().getSystemConfiguration());
 
       final String outputFile = cl.getOptionValue(outputFileOpt.getOpt());
       final PrintFile printFile = outputFile == null ? null : new PrintFile(outputFile);
 
-      final TreeMap<String,String> siteConfig = new TreeMap<String,String>();
+      final TreeMap<String,String> siteConfig = new TreeMap<>();
       siteConfig.putAll(shellState.getConnector().instanceOperations().getSiteConfiguration());
 
-      final TreeMap<String,String> defaults = new TreeMap<String,String>();
+      final TreeMap<String,String> defaults = new TreeMap<>();
       for (Entry<String,String> defaultEntry : AccumuloConfiguration.getDefaultConfiguration()) {
         defaults.put(defaultEntry.getKey(), defaultEntry.getValue());
       }
 
-      final TreeMap<String,String> namespaceConfig = new TreeMap<String,String>();
+      final TreeMap<String,String> namespaceConfig = new TreeMap<>();
       if (tableName != null) {
         String n = Namespaces.getNamespaceName(shellState.getInstance(),
             Tables.getNamespaceId(shellState.getInstance(), Tables.getTableId(shellState.getInstance(), tableName)));
@@ -169,7 +169,7 @@
       } else if (namespace != null) {
         acuconf = shellState.getConnector().namespaceOperations().getProperties(namespace);
       }
-      final TreeMap<String,String> sortedConf = new TreeMap<String,String>();
+      final TreeMap<String,String> sortedConf = new TreeMap<>();
       for (Entry<String,String> propEntry : acuconf) {
         sortedConf.put(propEntry.getKey(), propEntry.getValue());
       }
@@ -187,7 +187,7 @@
         COL2 = Math.max(COL2, propEntry.getKey().length() + 3);
       }
 
-      final ArrayList<String> output = new ArrayList<String>();
+      final ArrayList<String> output = new ArrayList<>();
       printConfHeader(output);
 
       for (Entry<String,String> propEntry : sortedConf.entrySet()) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/CreateTableCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/CreateTableCommand.java
index b0aca92..eac16fa 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/CreateTableCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/CreateTableCommand.java
@@ -62,7 +62,7 @@
       TableExistsException, TableNotFoundException, IOException, ClassNotFoundException {
 
     final String testTableName = cl.getArgs()[0];
-    final HashMap<String,String> props = new HashMap<String,String>();
+    final HashMap<String,String> props = new HashMap<>();
 
     if (!testTableName.matches(Tables.VALID_NAME_REGEX)) {
       shellState.getReader().println("Only letters, numbers and underscores are allowed for use in table names.");
@@ -73,7 +73,7 @@
     if (shellState.getConnector().tableOperations().exists(tableName)) {
       throw new TableExistsException(null, tableName, null);
     }
-    final SortedSet<Text> partitions = new TreeSet<Text>();
+    final SortedSet<Text> partitions = new TreeSet<>();
     final boolean decode = cl.hasOption(base64Opt.getOpt());
 
     if (cl.hasOption(createTableOptSplit.getOpt())) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/DUCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/DUCommand.java
index 2adcc81..4d312fc 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/DUCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/DUCommand.java
@@ -42,7 +42,7 @@
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws IOException, TableNotFoundException,
       NamespaceNotFoundException {
 
-    final SortedSet<String> tables = new TreeSet<String>(Arrays.asList(cl.getArgs()));
+    final SortedSet<String> tables = new TreeSet<>(Arrays.asList(cl.getArgs()));
 
     if (cl.hasOption(ShellOptions.tableOption)) {
       tables.add(cl.getOptionValue(ShellOptions.tableOption));
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteAuthsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteAuthsCommand.java
new file mode 100644
index 0000000..995db0c
--- /dev/null
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteAuthsCommand.java
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.shell.commands;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.shell.Shell;
+import org.apache.accumulo.shell.Shell.Command;
+import org.apache.accumulo.shell.ShellOptions;
+import org.apache.accumulo.shell.Token;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionGroup;
+import org.apache.commons.cli.Options;
+
+public class DeleteAuthsCommand extends Command {
+  private Option userOpt;
+  private Option scanOptAuths;
+
+  @Override
+  public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws AccumuloException, AccumuloSecurityException {
+    final Connector connector = shellState.getConnector();
+    final String user = cl.getOptionValue(userOpt.getOpt(), connector.whoami());
+    final String scanOpts = cl.getOptionValue(scanOptAuths.getOpt());
+
+    final Authorizations auths = connector.securityOperations().getUserAuthorizations(user);
+    final StringBuilder userAuths = new StringBuilder();
+    final String[] toBeRemovedAuths = scanOpts.split(",");
+    final Set<String> toBeRemovedSet = new HashSet<>();
+    for (String auth : toBeRemovedAuths) {
+      toBeRemovedSet.add(auth);
+    }
+    final String[] existingAuths = auths.toString().split(",");
+    for (String auth : existingAuths) {
+      if (!toBeRemovedSet.contains(auth)) {
+        userAuths.append(auth);
+        userAuths.append(",");
+      }
+    }
+    if (userAuths.length() > 0) {
+      connector.securityOperations().changeUserAuthorizations(user, ScanCommand.parseAuthorizations(userAuths.substring(0, userAuths.length() - 1)));
+    } else {
+      connector.securityOperations().changeUserAuthorizations(user, new Authorizations());
+    }
+
+    Shell.log.debug("Changed record-level authorizations for user " + user);
+    return 0;
+  }
+
+  @Override
+  public String description() {
+    return "remove authorizations from the maximum scan authorizations for a user";
+  }
+
+  @Override
+  public void registerCompletion(final Token root, final Map<Command.CompletionSet,Set<String>> completionSet) {
+    registerCompletionForUsers(root, completionSet);
+  }
+
+  @Override
+  public Options getOptions() {
+    final Options o = new Options();
+    final OptionGroup setOrClear = new OptionGroup();
+    scanOptAuths = new Option("s", "scan-authorizations", true, "scan authorizations to remove");
+    scanOptAuths.setArgName("comma-separated-authorizations");
+    setOrClear.addOption(scanOptAuths);
+    setOrClear.setRequired(true);
+    o.addOptionGroup(setOrClear);
+    userOpt = new Option(ShellOptions.userOption, "user", true, "user to operate on");
+    userOpt.setArgName("user");
+    o.addOption(userOpt);
+    return o;
+  }
+
+  @Override
+  public int numArgs() {
+    return 0;
+  }
+}
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteManyCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteManyCommand.java
index 3400680..b8782f0 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteManyCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/DeleteManyCommand.java
@@ -24,6 +24,7 @@
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.iterators.SortedKeyIterator;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.core.util.interpret.ScanInterpreter;
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.shell.format.DeleterFormatter;
@@ -61,7 +62,9 @@
     // output / delete the records
     final BatchWriter writer = shellState.getConnector()
         .createBatchWriter(tableName, new BatchWriterConfig().setTimeout(getTimeout(cl), TimeUnit.MILLISECONDS));
-    shellState.printLines(new DeleterFormatter(writer, scanner, cl.hasOption(timestampOpt.getOpt()), shellState, cl.hasOption(forceOpt.getOpt())), false);
+    FormatterConfig config = new FormatterConfig();
+    config.setPrintTimestamps(cl.hasOption(timestampOpt.getOpt()));
+    shellState.printLines(new DeleterFormatter(writer, scanner, config, shellState, cl.hasOption(forceOpt.getOpt())), false);
 
     return 0;
   }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ExtensionCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ExtensionCommand.java
index 3cff189..f78a503 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ExtensionCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ExtensionCommand.java
@@ -34,9 +34,9 @@
 
   private ServiceLoader<ShellExtension> extensions = null;
 
-  private Set<String> loadedHeaders = new HashSet<String>();
-  private Set<String> loadedCommands = new HashSet<String>();
-  private Set<String> loadedExtensions = new TreeSet<String>();
+  private Set<String> loadedHeaders = new HashSet<>();
+  private Set<String> loadedCommands = new HashSet<>();
+  private Set<String> loadedExtensions = new TreeSet<>();
 
   @Override
   public int execute(String fullCommand, CommandLine cl, Shell shellState) throws Exception {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
index cef9a6d..88eeefd 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/FateCommand.java
@@ -17,10 +17,14 @@
 package org.apache.accumulo.shell.commands;
 
 import java.io.IOException;
+import java.lang.reflect.Type;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.EnumSet;
 import java.util.Formatter;
 import java.util.HashSet;
+import java.util.List;
 import java.util.Set;
 
 import org.apache.accumulo.core.Constants;
@@ -29,9 +33,12 @@
 import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.conf.SiteConfiguration;
+import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.AdminUtil;
+import org.apache.accumulo.fate.ReadOnlyRepo;
 import org.apache.accumulo.fate.ReadOnlyTStore.TStatus;
+import org.apache.accumulo.fate.Repo;
 import org.apache.accumulo.fate.ZooStore;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooReaderWriter;
@@ -43,6 +50,13 @@
 import org.apache.commons.cli.ParseException;
 import org.apache.zookeeper.KeeperException;
 
+import com.google.gson.Gson;
+import com.google.gson.GsonBuilder;
+import com.google.gson.JsonElement;
+import com.google.gson.JsonObject;
+import com.google.gson.JsonSerializationContext;
+import com.google.gson.JsonSerializer;
+
 /**
  * Manage FATE transactions
  *
@@ -53,6 +67,47 @@
 
   private static final String USER = "accumulo";
 
+  // this class serializes references to interfaces with the concrete class name
+  private static class InterfaceSerializer<T> implements JsonSerializer<T> {
+    @Override
+    public JsonElement serialize(T link, Type type, JsonSerializationContext context) {
+      JsonElement je = context.serialize(link, link.getClass());
+      JsonObject jo = new JsonObject();
+      jo.add(link.getClass().getName(), je);
+      return jo;
+    }
+  }
+
+  // the purpose of this class is to be serialized as JSON for display
+  public static class ByteArrayContainer {
+    public String asUtf8;
+    public String asBase64;
+
+    ByteArrayContainer(byte[] ba) {
+      asUtf8 = new String(ba, StandardCharsets.UTF_8);
+      asBase64 = Base64.encodeBase64URLSafeString(ba);
+    }
+  }
+
+  // serialize byte arrays in human- and machine-readable ways
+  private static class ByteArraySerializer implements JsonSerializer<byte[]> {
+    @Override
+    public JsonElement serialize(byte[] link, Type type, JsonSerializationContext context) {
+      return context.serialize(new ByteArrayContainer(link));
+    }
+  }
+
+  // the purpose of this class is to be serialized as JSON for display
+  public static class FateStack {
+    String txid;
+    List<ReadOnlyRepo<FateCommand>> stack;
+
+    FateStack(Long txid, List<ReadOnlyRepo<FateCommand>> stack) {
+      this.txid = String.format("%016x", txid);
+      this.stack = stack;
+    }
+  }
+
   private Option secretOption;
   private Option statusOption;
   private Option disablePaginationOpt;
@@ -68,12 +123,12 @@
     String cmd = args[0];
     boolean failedCommand = false;
 
-    AdminUtil<FateCommand> admin = new AdminUtil<FateCommand>(false);
+    AdminUtil<FateCommand> admin = new AdminUtil<>(false);
 
     String path = ZooUtil.getRoot(instance) + Constants.ZFATE;
     String masterPath = ZooUtil.getRoot(instance) + Constants.ZMASTER_LOCK;
     IZooReaderWriter zk = getZooReaderWriter(shellState.getInstance(), cl.getOptionValue(secretOption.getOpt()));
-    ZooStore<FateCommand> zs = new ZooStore<FateCommand>(path, zk);
+    ZooStore<FateCommand> zs = new ZooStore<>(path, zk);
 
     if ("fail".equals(cmd)) {
       if (args.length <= 1) {
@@ -101,7 +156,7 @@
       // Parse transaction ID filters for print display
       Set<Long> filterTxid = null;
       if (args.length >= 2) {
-        filterTxid = new HashSet<Long>(args.length);
+        filterTxid = new HashSet<>(args.length);
         for (int i = 1; i < args.length; i++) {
           try {
             Long val = Long.parseLong(args[i], 16);
@@ -133,6 +188,30 @@
       Formatter fmt = new Formatter(buf);
       admin.print(zs, zk, ZooUtil.getRoot(instance) + Constants.ZTABLE_LOCKS, fmt, filterTxid, filterStatus);
       shellState.printLines(Collections.singletonList(buf.toString()).iterator(), !cl.hasOption(disablePaginationOpt.getOpt()));
+    } else if ("dump".equals(cmd)) {
+      List<Long> txids;
+
+      if (args.length == 1) {
+        txids = zs.list();
+      } else {
+        txids = new ArrayList<>();
+        for (int i = 1; i < args.length; i++) {
+          txids.add(Long.parseLong(args[i], 16));
+        }
+      }
+
+      Gson gson = new GsonBuilder().registerTypeAdapter(ReadOnlyRepo.class, new InterfaceSerializer<>())
+          .registerTypeAdapter(Repo.class, new InterfaceSerializer<>()).registerTypeAdapter(byte[].class, new ByteArraySerializer()).setPrettyPrinting()
+          .create();
+
+      List<FateStack> txStacks = new ArrayList<>();
+
+      for (Long txid : txids) {
+        List<ReadOnlyRepo<FateCommand>> repoStack = zs.getStack(txid);
+        txStacks.add(new FateStack(txid, repoStack));
+      }
+
+      System.out.println(gson.toJson(txStacks));
     } else {
       throw new ParseException("Invalid command option");
     }
@@ -157,7 +236,7 @@
 
   @Override
   public String usage() {
-    return getName() + " fail <txid>... | delete <txid>... | print [<txid>...]";
+    return getName() + " fail <txid>... | delete <txid>... | print [<txid>...] | dump [<txid>...]";
   }
 
   @Override
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java
index 5c6a4eb..1f6cec0 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/GetAuthsCommand.java
@@ -46,7 +46,7 @@
   }
 
   protected List<String> sortAuthorizations(Authorizations auths) {
-    List<String> list = new ArrayList<String>();
+    List<String> list = new ArrayList<>();
     for (byte[] auth : auths) {
       list.add(new String(auth));
     }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java
index 9d82269..17b7db4 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/GetSplitsCommand.java
@@ -36,7 +36,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.TextUtil;
-import org.apache.accumulo.core.util.format.BinaryFormatter;
+import org.apache.accumulo.core.util.format.DefaultFormatter;
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.shell.Shell.Command;
 import org.apache.accumulo.shell.Shell.PrintFile;
@@ -102,8 +102,8 @@
     if (text == null) {
       return null;
     }
-    BinaryFormatter.getlength(text.getLength());
-    return encode ? Base64.encodeBase64String(TextUtil.getBytes(text)) : BinaryFormatter.appendText(new StringBuilder(), text).toString();
+    final int length = text.getLength();
+    return encode ? Base64.encodeBase64String(TextUtil.getBytes(text)) : DefaultFormatter.appendText(new StringBuilder(), text, length).toString();
   }
 
   private static String obscuredTabletName(final KeyExtent extent) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/GrepCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/GrepCommand.java
index 97bddc9..70c5db2 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/GrepCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/GrepCommand.java
@@ -19,12 +19,12 @@
 import java.io.IOException;
 import java.util.Collections;
 import java.util.concurrent.TimeUnit;
-
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.iterators.user.GrepIterator;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.format.Formatter;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.core.util.interpret.ScanInterpreter;
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.shell.Shell.PrintFile;
@@ -61,6 +61,8 @@
 
     scanner.setTimeout(getTimeout(cl), TimeUnit.MILLISECONDS);
 
+    setupSampling(tableName, cl, shellState, scanner);
+
     for (int i = 0; i < cl.getArgs().length; i++) {
       setUpIterator(Integer.MAX_VALUE - cl.getArgs().length + i, "grep" + i, cl.getArgs()[i], scanner, cl);
     }
@@ -69,7 +71,9 @@
       fetchColumns(cl, scanner, interpeter);
 
       // output the records
-      printRecords(cl, shellState, scanner, formatter, printFile);
+      final FormatterConfig config = new FormatterConfig();
+      config.setPrintTimestamps(cl.hasOption(timestampOpt.getOpt()));
+      printRecords(cl, shellState, config, scanner, formatter, printFile);
     } finally {
       scanner.close();
     }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/HelpCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/HelpCommand.java
index 5945cc3..bf26a45 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/HelpCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/HelpCommand.java
@@ -49,7 +49,7 @@
       if (numColumns < 40) {
         throw new IllegalArgumentException("numColumns must be at least 40 (was " + numColumns + ")");
       }
-      final ArrayList<String> output = new ArrayList<String>();
+      final ArrayList<String> output = new ArrayList<>();
       for (Entry<String,Command[]> cmdGroup : shellState.commandGrouping.entrySet()) {
         output.add(cmdGroup.getKey());
         for (Command c : cmdGroup.getValue()) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/InsertCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/InsertCommand.java
index af5f810..c9da47e 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/InsertCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/InsertCommand.java
@@ -106,7 +106,7 @@
     try {
       bw.close();
     } catch (MutationsRejectedException e) {
-      final ArrayList<String> lines = new ArrayList<String>();
+      final ArrayList<String> lines = new ArrayList<>();
       if (e.getSecurityErrorCodes().isEmpty() == false) {
         lines.add("\tAuthorization Failures:");
       }
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ListBulkCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ListBulkCommand.java
new file mode 100644
index 0000000..8f09e8a
--- /dev/null
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ListBulkCommand.java
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.shell.commands;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.accumulo.core.client.impl.MasterClient;
+import org.apache.accumulo.core.master.thrift.MasterClientService;
+import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
+import org.apache.accumulo.core.trace.Tracer;
+import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.accumulo.server.conf.ServerConfigurationFactory;
+import org.apache.accumulo.shell.Shell;
+import org.apache.accumulo.shell.Shell.Command;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+
+public class ListBulkCommand extends Command {
+
+  private Option tserverOption, disablePaginationOpt;
+
+  @Override
+  public String description() {
+    return "lists what bulk imports are currently running in Accumulo.";
+  }
+
+  @Override
+  public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws Exception {
+
+    List<String> tservers;
+
+    MasterMonitorInfo stats;
+    MasterClientService.Iface client = null;
+    try {
+      AccumuloServerContext context = new AccumuloServerContext(new ServerConfigurationFactory(shellState.getInstance()));
+      client = MasterClient.getConnectionWithRetry(context);
+      stats = client.getMasterStats(Tracer.traceInfo(), context.rpcCreds());
+    } finally {
+      if (client != null)
+        MasterClient.close(client);
+    }
+
+    final boolean paginate = !cl.hasOption(disablePaginationOpt.getOpt());
+
+    if (cl.hasOption(tserverOption.getOpt())) {
+      tservers = new ArrayList<>();
+      tservers.add(cl.getOptionValue(tserverOption.getOpt()));
+    } else {
+      tservers = Collections.emptyList();
+    }
+
+    shellState.printLines(new BulkImportListIterator(tservers, stats), paginate);
+    return 0;
+  }
+
+  @Override
+  public int numArgs() {
+    return 0;
+  }
+
+  @Override
+  public Options getOptions() {
+    final Options opts = new Options();
+
+    tserverOption = new Option("ts", "tabletServer", true, "tablet server to list bulk imports");
+    tserverOption.setArgName("tablet server");
+    opts.addOption(tserverOption);
+
+    disablePaginationOpt = new Option("np", "no-pagination", false, "disable pagination of output");
+    opts.addOption(disablePaginationOpt);
+
+    return opts;
+  }
+
+}
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ListCompactionsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ListCompactionsCommand.java
index e4b3410..f59c481 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ListCompactionsCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ListCompactionsCommand.java
@@ -45,7 +45,7 @@
     final boolean paginate = !cl.hasOption(disablePaginationOpt.getOpt());
 
     if (cl.hasOption(tserverOption.getOpt())) {
-      tservers = new ArrayList<String>();
+      tservers = new ArrayList<>();
       tservers.add(cl.getOptionValue(tserverOption.getOpt()));
     } else {
       tservers = instanceOps.getTabletServers();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ListIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ListIterCommand.java
index 6187300..4a1c0bf 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ListIterCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ListIterCommand.java
@@ -63,7 +63,7 @@
     }
 
     final boolean allScopes = cl.hasOption(allScopesOpt.getOpt());
-    Set<IteratorScope> desiredScopes = new HashSet<IteratorScope>();
+    Set<IteratorScope> desiredScopes = new HashSet<>();
     for (IteratorScope scope : IteratorScope.values()) {
       if (allScopes || cl.hasOption(scopeOpts.get(scope).getOpt()))
         desiredScopes.add(scope);
@@ -120,7 +120,7 @@
     allScopesOpt = new Option("all", "all-scopes", false, "list from all scopes");
     o.addOption(allScopesOpt);
 
-    scopeOpts = new EnumMap<IteratorScope,Option>(IteratorScope.class);
+    scopeOpts = new EnumMap<>(IteratorScope.class);
     scopeOpts.put(IteratorScope.minc, new Option(IteratorScope.minc.name(), "minor-compaction", false, "list iterator for minor compaction scope"));
     scopeOpts.put(IteratorScope.majc, new Option(IteratorScope.majc.name(), "major-compaction", false, "list iterator for major compaction scope"));
     scopeOpts.put(IteratorScope.scan, new Option(IteratorScope.scan.name(), "scan-time", false, "list iterator for scan scope"));
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ListScansCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ListScansCommand.java
index f89b6fc..261db07 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ListScansCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ListScansCommand.java
@@ -45,7 +45,7 @@
     final boolean paginate = !cl.hasOption(disablePaginationOpt.getOpt());
 
     if (cl.hasOption(tserverOption.getOpt())) {
-      tservers = new ArrayList<String>();
+      tservers = new ArrayList<>();
       tservers.add(cl.getOptionValue(tserverOption.getOpt()));
     } else {
       tservers = instanceOps.getTabletServers();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java
index d88d6f1..cb37505 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/NamespacesCommand.java
@@ -41,7 +41,7 @@
 
   @Override
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws AccumuloException, AccumuloSecurityException, IOException {
-    Map<String,String> namespaces = new TreeMap<String,String>(shellState.getConnector().namespaceOperations().namespaceIdMap());
+    Map<String,String> namespaces = new TreeMap<>(shellState.getConnector().namespaceOperations().namespaceIdMap());
 
     Iterator<String> it = Iterators.transform(namespaces.entrySet().iterator(), new Function<Entry<String,String>,String>() {
       @Override
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/PingCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/PingCommand.java
index ba6de68..c0271b7 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/PingCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/PingCommand.java
@@ -48,7 +48,7 @@
     final boolean paginate = !cl.hasOption(disablePaginationOpt.getOpt());
 
     if (cl.hasOption(tserverOption.getOpt())) {
-      tservers = new ArrayList<String>();
+      tservers = new ArrayList<>();
       tservers.add(cl.getOptionValue(tserverOption.getOpt()));
     } else {
       tservers = instanceOps.getTabletServers();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/QuotedStringTokenizer.java b/shell/src/main/java/org/apache/accumulo/shell/commands/QuotedStringTokenizer.java
index 74397de..a66188e 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/QuotedStringTokenizer.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/QuotedStringTokenizer.java
@@ -38,7 +38,7 @@
   private String input;
 
   public QuotedStringTokenizer(final String t) throws BadArgumentException {
-    tokens = new ArrayList<String>();
+    tokens = new ArrayList<>();
     this.input = t;
     try {
       createTokens();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java
index 6f1ddd3..e1da444 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ScanCommand.java
@@ -22,21 +22,22 @@
 import java.util.List;
 import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
-
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.SampleNotPresentException;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.ScannerBase;
 import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.format.BinaryFormatter;
 import org.apache.accumulo.core.util.format.Formatter;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.core.util.interpret.DefaultScanInterpreter;
 import org.apache.accumulo.core.util.interpret.ScanInterpreter;
 import org.apache.accumulo.shell.Shell;
@@ -61,6 +62,20 @@
   private Option optEndRowExclusive;
   private Option timeoutOption;
   private Option profileOpt;
+  private Option sampleOpt;
+  private Option contextOpt;
+
+  protected void setupSampling(final String tableName, final CommandLine cl, final Shell shellState, ScannerBase scanner) throws TableNotFoundException,
+      AccumuloException, AccumuloSecurityException {
+    if (getUseSample(cl)) {
+      SamplerConfiguration samplerConfig = shellState.getConnector().tableOperations().getSamplerConfiguration(tableName);
+      if (samplerConfig == null) {
+        throw new SampleNotPresentException("Table " + tableName + " does not have sampling configured");
+      }
+      Shell.log.debug("Using sampling configuration: " + samplerConfig);
+      scanner.setSamplerConfiguration(samplerConfig);
+    }
+  }
 
   @Override
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws Exception {
@@ -70,11 +85,17 @@
     final Class<? extends Formatter> formatter = getFormatter(cl, tableName, shellState);
     final ScanInterpreter interpeter = getInterpreter(cl, tableName, shellState);
 
+    String classLoaderContext = null;
+    if (cl.hasOption(contextOpt.getOpt())) {
+      classLoaderContext = cl.getOptionValue(contextOpt.getOpt());
+    }
     // handle first argument, if present, the authorizations list to
     // scan with
     final Authorizations auths = getAuths(cl, shellState);
     final Scanner scanner = shellState.getConnector().createScanner(tableName, auths);
-
+    if (null != classLoaderContext) {
+      scanner.setClassLoaderContext(classLoaderContext);
+    }
     // handle session-specific scan iterators
     addScanIterators(shellState, cl, scanner, tableName);
 
@@ -87,25 +108,24 @@
     // set timeout
     scanner.setTimeout(getTimeout(cl), TimeUnit.MILLISECONDS);
 
+    setupSampling(tableName, cl, shellState, scanner);
+
     // output the records
+
+    final FormatterConfig config = new FormatterConfig();
+    config.setPrintTimestamps(cl.hasOption(timestampOpt.getOpt()));
     if (cl.hasOption(showFewOpt.getOpt())) {
       final String showLength = cl.getOptionValue(showFewOpt.getOpt());
       try {
         final int length = Integer.parseInt(showLength);
-        if (length < 1) {
-          throw new IllegalArgumentException();
-        }
-        BinaryFormatter.getlength(length);
-        printBinaryRecords(cl, shellState, scanner, printFile);
+        config.setShownLength(length);
       } catch (NumberFormatException nfe) {
         shellState.getReader().println("Arg must be an integer.");
       } catch (IllegalArgumentException iae) {
         shellState.getReader().println("Arg must be greater than one.");
       }
-
-    } else {
-      printRecords(cl, shellState, scanner, formatter, printFile);
     }
+    printRecords(cl, shellState, config, scanner, formatter, printFile);
     if (printFile != null) {
       printFile.close();
     }
@@ -113,6 +133,10 @@
     return 0;
   }
 
+  protected boolean getUseSample(CommandLine cl) {
+    return cl.hasOption(sampleOpt.getLongOpt());
+  }
+
   protected long getTimeout(final CommandLine cl) {
     if (cl.hasOption(timeoutOption.getLongOpt())) {
       return AccumuloConfiguration.getTimeInMillis(cl.getOptionValue(timeoutOption.getLongOpt()));
@@ -163,21 +187,12 @@
     }
   }
 
-  protected void printRecords(final CommandLine cl, final Shell shellState, final Iterable<Entry<Key,Value>> scanner,
+  protected void printRecords(final CommandLine cl, final Shell shellState, FormatterConfig config, final Iterable<Entry<Key,Value>> scanner,
       final Class<? extends Formatter> formatter, PrintFile outFile) throws IOException {
     if (outFile == null) {
-      shellState.printRecords(scanner, cl.hasOption(timestampOpt.getOpt()), !cl.hasOption(disablePaginationOpt.getOpt()), formatter);
+      shellState.printRecords(scanner, config, !cl.hasOption(disablePaginationOpt.getOpt()), formatter);
     } else {
-      shellState.printRecords(scanner, cl.hasOption(timestampOpt.getOpt()), !cl.hasOption(disablePaginationOpt.getOpt()), formatter, outFile);
-    }
-  }
-
-  protected void printBinaryRecords(final CommandLine cl, final Shell shellState, final Iterable<Entry<Key,Value>> scanner, PrintFile outFile)
-      throws IOException {
-    if (outFile == null) {
-      shellState.printBinaryRecords(scanner, cl.hasOption(timestampOpt.getOpt()), !cl.hasOption(disablePaginationOpt.getOpt()));
-    } else {
-      shellState.printBinaryRecords(scanner, cl.hasOption(timestampOpt.getOpt()), !cl.hasOption(disablePaginationOpt.getOpt()), outFile);
+      shellState.printRecords(scanner, config, !cl.hasOption(disablePaginationOpt.getOpt()), formatter, outFile);
     }
   }
 
@@ -295,6 +310,8 @@
     timeoutOption = new Option(null, "timeout", true,
         "time before scan should fail if no data is returned. If no unit is given assumes seconds.  Units d,h,m,s,and ms are supported.  e.g. 30s or 100ms");
     outputFileOpt = new Option("o", "output", true, "local file to write the scan output to");
+    sampleOpt = new Option(null, "sample", false, "Show sample");
+    contextOpt = new Option("cc", "context", true, "name of the classloader context");
 
     scanOptAuths.setArgName("comma-separated-authorizations");
     scanOptRow.setArgName("row");
@@ -304,6 +321,7 @@
     formatterOpt.setArgName("className");
     timeoutOption.setArgName("timeout");
     outputFileOpt.setArgName("file");
+    contextOpt.setArgName("context");
 
     profileOpt = new Option("pn", "profile", true, "iterator profile name");
     profileOpt.setArgName("profile");
@@ -327,6 +345,8 @@
     o.addOption(timeoutOption);
     o.addOption(outputFileOpt);
     o.addOption(profileOpt);
+    o.addOption(sampleOpt);
+    o.addOption(contextOpt);
 
     return o;
   }
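The ScanCommand hunks above replace `BinaryFormatter.getlength(length)`, which stashed the shown length in static state, with a per-invocation `FormatterConfig` that carries both the timestamp flag and the shown length. A self-contained sketch of that refactoring pattern (the `Config` class and `format` method below are simplified stand-ins, not the real Accumulo classes):

```java
public class FormatterConfigSketch {
    // Hypothetical stand-in for Accumulo's FormatterConfig: formatting
    // options travel with the call instead of living in static fields.
    static final class Config {
        boolean printTimestamps = false;
        int shownLength = Integer.MAX_VALUE;

        Config setPrintTimestamps(boolean print) {
            this.printTimestamps = print;
            return this;
        }

        Config setShownLength(int length) {
            if (length < 0) {
                throw new IllegalArgumentException("shown length must be non-negative");
            }
            this.shownLength = length;
            return this;
        }
    }

    // Truncates a value to the configured shown length, as "scan -f" does
    static String format(String value, Config config) {
        return value.length() > config.shownLength
            ? value.substring(0, config.shownLength)
            : value;
    }

    public static void main(String[] args) {
        Config config = new Config().setShownLength(5);
        System.out.println(format("1234567890", config));
    }
}
```

Because each scan builds its own config object, concurrent shell sessions can no longer clobber one another's shown length through shared static formatter state.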
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/ScriptCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/ScriptCommand.java
index 3059b52..82b7d57 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/ScriptCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/ScriptCommand.java
@@ -88,7 +88,7 @@
       Bindings b = engine.getBindings(ScriptContext.ENGINE_SCOPE);
       b.put("connection", shellState.getConnector());
 
-      List<Object> argValues = new ArrayList<Object>();
+      List<Object> argValues = new ArrayList<>();
       if (cl.hasOption(args.getOpt())) {
         String[] argList = cl.getOptionValue(args.getOpt()).split(",");
         for (String arg : argList) {
@@ -249,7 +249,7 @@
 
   private void listJSREngineInfo(ScriptEngineManager mgr, Shell shellState) throws IOException {
     List<ScriptEngineFactory> factories = mgr.getEngineFactories();
-    Set<String> lines = new TreeSet<String>();
+    Set<String> lines = new TreeSet<>();
     for (ScriptEngineFactory factory : factories) {
       lines.add("ScriptEngineFactory Info");
       String engName = factory.getEngineName();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/SetGroupsCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/SetGroupsCommand.java
index 62ed1a2..b095bcf 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetGroupsCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/SetGroupsCommand.java
@@ -36,7 +36,7 @@
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws Exception {
     final String tableName = OptUtil.getTableOpt(cl, shellState);
 
-    final HashMap<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
+    final HashMap<String,Set<Text>> groups = new HashMap<>();
 
     for (String arg : cl.getArgs()) {
       final String sa[] = arg.split("=", 2);
@@ -44,7 +44,7 @@
         throw new IllegalArgumentException("Missing '='");
       }
       final String group = sa[0];
-      final HashSet<Text> colFams = new HashSet<Text>();
+      final HashSet<Text> colFams = new HashSet<>();
 
       for (String family : sa[1].split(",")) {
         colFams.add(new Text(family.getBytes(Shell.CHARSET)));
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
index 23b98a6..fffdf21 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
@@ -64,7 +64,7 @@
 
     final int priority = Integer.parseInt(cl.getOptionValue(priorityOpt.getOpt()));
 
-    final Map<String,String> options = new HashMap<String,String>();
+    final Map<String,String> options = new HashMap<>();
     String classname = cl.getOptionValue(classnameTypeOpt.getOpt());
     if (cl.hasOption(aggTypeOpt.getOpt())) {
       Shell.log.warn("aggregators are deprecated");
@@ -240,7 +240,7 @@
       if (className.contains(".")) {
         shortClassName = className.substring(className.lastIndexOf('.') + 1);
       }
-      final Map<String,String> localOptions = new HashMap<String,String>();
+      final Map<String,String> localOptions = new HashMap<>();
       do {
         // clean up the overall options that caused things to fail
         for (String key : localOptions.keySet()) {
@@ -311,7 +311,7 @@
       reader.flush();
       reader.println("Optional, configure name-value options for iterator:");
       String prompt = Shell.repeat("-", 10) + "> set option (<name> <value>, hit enter to skip): ";
-      final HashMap<String,String> localOptions = new HashMap<String,String>();
+      final HashMap<String,String> localOptions = new HashMap<>();
 
       while (true) {
         reader.flush();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/SetScanIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/SetScanIterCommand.java
index e7d2793..2399d0e 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetScanIterCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/SetScanIterCommand.java
@@ -63,7 +63,7 @@
 
     List<IteratorSetting> tableScanIterators = shellState.scanIteratorOptions.get(tableName);
     if (tableScanIterators == null) {
-      tableScanIterators = new ArrayList<IteratorSetting>();
+      tableScanIterators = new ArrayList<>();
       shellState.scanIteratorOptions.put(tableName, tableScanIterators);
     }
     final IteratorSetting setting = new IteratorSetting(priority, name, classname);
@@ -92,7 +92,7 @@
   public Options getOptions() {
     // Remove the options that specify which type of iterator this is, since
     // they are all scan iterators with this command.
-    final HashSet<OptionGroup> groups = new HashSet<OptionGroup>();
+    final HashSet<OptionGroup> groups = new HashSet<>();
     final Options parentOptions = super.getOptions();
     final Options modifiedOptions = new Options();
     for (Iterator<?> it = parentOptions.getOptions().iterator(); it.hasNext();) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/SetShellIterCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/SetShellIterCommand.java
index ad66995..e1fa5e0 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetShellIterCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/SetShellIterCommand.java
@@ -62,7 +62,7 @@
 
     List<IteratorSetting> tableScanIterators = shellState.iteratorProfiles.get(profile);
     if (tableScanIterators == null) {
-      tableScanIterators = new ArrayList<IteratorSetting>();
+      tableScanIterators = new ArrayList<>();
       shellState.iteratorProfiles.put(profile, tableScanIterators);
     }
     final IteratorSetting setting = new IteratorSetting(priority, name, classname);
@@ -87,7 +87,7 @@
   public Options getOptions() {
     // Remove the options that specify which type of iterator this is, since
     // they are all scan iterators with this command.
-    final HashSet<OptionGroup> groups = new HashSet<OptionGroup>();
+    final HashSet<OptionGroup> groups = new HashSet<>();
     final Options parentOptions = super.getOptions();
     final Options modifiedOptions = new Options();
     for (Iterator<?> it = parentOptions.getOptions().iterator(); it.hasNext();) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/TableOperation.java b/shell/src/main/java/org/apache/accumulo/shell/commands/TableOperation.java
index b7d0f44..6ad1246 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/TableOperation.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/TableOperation.java
@@ -43,7 +43,7 @@
   @Override
   public int execute(final String fullCommand, final CommandLine cl, final Shell shellState) throws Exception {
     // populate the tableSet set with the tables you want to operate on
-    final SortedSet<String> tableSet = new TreeSet<String>();
+    final SortedSet<String> tableSet = new TreeSet<>();
     if (cl.hasOption(optTablePattern.getOpt())) {
       String tablePattern = cl.getOptionValue(optTablePattern.getOpt());
       for (String table : shellState.getConnector().tableOperations().list())
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java
index a70cc13..397b450 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/TablesCommand.java
@@ -62,7 +62,7 @@
     });
 
     final boolean sortByTableId = cl.hasOption(sortByTableIdOption.getOpt());
-    tables = new TreeMap<String,String>((sortByTableId ? MapUtils.invertMap(tables) : tables));
+    tables = new TreeMap<>((sortByTableId ? MapUtils.invertMap(tables) : tables));
 
     Iterator<String> it = Iterators.transform(tables.entrySet().iterator(), new Function<Entry<String,String>,String>() {
       @Override
diff --git a/shell/src/main/java/org/apache/accumulo/shell/commands/TraceCommand.java b/shell/src/main/java/org/apache/accumulo/shell/commands/TraceCommand.java
index f5e2a0d..edc6550 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/TraceCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/TraceCommand.java
@@ -18,6 +18,7 @@
 
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.conf.Property;
@@ -25,13 +26,14 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.trace.Trace;
 import org.apache.accumulo.core.util.BadArgumentException;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.tracer.TraceDump;
 import org.apache.accumulo.tracer.TraceDump.Printer;
 import org.apache.commons.cli.CommandLine;
 import org.apache.hadoop.io.Text;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class TraceCommand extends DebugCommand {
 
   @Override
@@ -74,7 +76,7 @@
             }
             shellState.getReader().println("Waiting for trace information");
             shellState.getReader().flush();
-            UtilWaitThread.sleep(500);
+            sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
           }
           if (traceCount < 0) {
             // display the trace even though there are unrooted spans
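The TraceCommand hunk swaps Accumulo's internal `UtilWaitThread.sleep(500)` for Guava's `Uninterruptibles.sleepUninterruptibly`, which sleeps the full duration even if the thread is interrupted and restores the interrupt flag afterwards. A standalone sketch of that guarantee (this re-implements the idea rather than calling Guava, so it needs no extra dependency):

```java
import java.util.concurrent.TimeUnit;

public class SleepDemo {
    // Sketch of what Uninterruptibles.sleepUninterruptibly guarantees:
    // the full duration elapses even across interrupts, and the thread's
    // interrupt status is restored before returning.
    static void sleepUninterruptibly(long duration, TimeUnit unit) {
        boolean interrupted = false;
        try {
            long remainingNanos = unit.toNanos(duration);
            long end = System.nanoTime() + remainingNanos;
            while (true) {
                try {
                    // Sleeps for at most the remaining time; a negative
                    // remainder makes this return immediately.
                    TimeUnit.NANOSECONDS.sleep(remainingNanos);
                    return;
                } catch (InterruptedException e) {
                    interrupted = true;
                    remainingNanos = end - System.nanoTime();
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
        System.out.println(System.nanoTime() - start >= TimeUnit.MILLISECONDS.toNanos(50));
    }
}
```

In the trace-wait loop this means an interrupt can no longer silently shorten the 500 ms poll interval, while the preserved interrupt status still lets callers observe that an interrupt occurred.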
diff --git a/shell/src/main/java/org/apache/accumulo/shell/format/DeleterFormatter.java b/shell/src/main/java/org/apache/accumulo/shell/format/DeleterFormatter.java
index 1dd2234..275592e 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/format/DeleterFormatter.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/format/DeleterFormatter.java
@@ -18,7 +18,6 @@
 
 import java.io.IOException;
 import java.util.Map.Entry;
-
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.data.ConstraintViolationSummary;
@@ -27,7 +26,9 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.util.format.DefaultFormatter;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.shell.Shell;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -36,15 +37,13 @@
   private static final Logger log = LoggerFactory.getLogger(DeleterFormatter.class);
   private BatchWriter writer;
   private Shell shellState;
-  private boolean printTimestamps;
   private boolean force;
   private boolean more;
 
-  public DeleterFormatter(BatchWriter writer, Iterable<Entry<Key,Value>> scanner, boolean printTimestamps, Shell shellState, boolean force) {
-    super.initialize(scanner, printTimestamps);
+  public DeleterFormatter(BatchWriter writer, Iterable<Entry<Key,Value>> scanner, FormatterConfig config, Shell shellState, boolean force) {
+    super.initialize(scanner, config);
     this.writer = writer;
     this.shellState = shellState;
-    this.printTimestamps = printTimestamps;
     this.force = force;
     this.more = true;
   }
@@ -73,7 +72,7 @@
     Entry<Key,Value> next = getScannerIterator().next();
     Key key = next.getKey();
     Mutation m = new Mutation(key.getRow());
-    String entryStr = formatEntry(next, printTimestamps);
+    String entryStr = formatEntry(next, isDoTimestamps());
     boolean delete = force;
     try {
       if (!force) {
diff --git a/shell/src/main/java/org/apache/accumulo/shell/mock/MockShell.java b/shell/src/main/java/org/apache/accumulo/shell/mock/MockShell.java
index 8e63eb8..ebc92f7 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/mock/MockShell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/mock/MockShell.java
@@ -28,7 +28,6 @@
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.shell.ShellOptionsJC;
 import org.apache.commons.cli.CommandLine;
@@ -36,7 +35,10 @@
 
 /**
  * An Accumulo Shell implementation that allows a developer to attach an InputStream and Writer to the Shell for testing purposes.
+ *
+ * @deprecated since 1.8.0; use MiniAccumuloCluster or a standard mock framework instead.
  */
+@Deprecated
 public class MockShell extends Shell {
   private static final String NEWLINE = "\n";
 
@@ -76,7 +78,7 @@
   @Override
   protected void setInstance(ShellOptionsJC options) {
     // We always want a MockInstance for this test
-    instance = new MockInstance();
+    instance = new org.apache.accumulo.core.client.mock.MockInstance();
   }
 
   @Override
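The MockShell hunks deprecate the class and drop the `MockInstance` import, referencing the class by its fully qualified name instead, so the deprecated type appears only at its (equally deprecated) use site. A small self-contained illustration of that pattern (the class names here are made-up stand-ins, not Accumulo's):

```java
public class DeprecatedUseDemo {
    // Stand-in for a deprecated dependency such as MockInstance
    @Deprecated
    static class LegacyInstance {
        String getInstanceName() {
            return "fake";
        }
    }

    // The deprecated type is confined to this method, which is itself
    // marked @Deprecated, giving callers one clear deprecation boundary.
    @Deprecated
    static String instanceName() {
        return new LegacyInstance().getInstanceName();
    }

    public static void main(String[] args) {
        System.out.println(instanceName());
    }
}
```

Keeping the deprecated reference localized like this makes it easy to find and delete every remaining use when the deprecated API is finally removed.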
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java
index 49f22a6..8bef14d 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/ShellConfigTest.java
@@ -25,7 +25,6 @@
 import java.io.FileInputStream;
 import java.io.IOException;
 import java.io.PrintStream;
-import java.io.PrintWriter;
 import java.nio.file.Files;
 import java.util.HashMap;
 import java.util.Map;
@@ -63,7 +62,7 @@
     System.setOut(new PrintStream(output));
     config = Files.createTempFile(null, null).toFile();
 
-    shell = new Shell(new ConsoleReader(new FileInputStream(FileDescriptor.in), output), new PrintWriter(output));
+    shell = new Shell(new ConsoleReader(new FileInputStream(FileDescriptor.in), output));
     shell.setLogErrorsToConsole();
   }
 
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java
index 1bf03b8..428481a 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/ShellSetInstanceTest.java
@@ -29,7 +29,6 @@
 import java.io.FileInputStream;
 import java.io.IOException;
 import java.io.OutputStream;
-import java.io.PrintWriter;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
@@ -40,7 +39,6 @@
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.ConfigSanityCheck;
 import org.apache.accumulo.core.conf.Property;
@@ -51,15 +49,18 @@
 import org.easymock.EasyMock;
 import org.junit.After;
 import org.junit.AfterClass;
+import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.runner.RunWith;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
 import org.powermock.core.classloader.annotations.PrepareForTest;
 import org.powermock.modules.junit4.PowerMockRunner;
 
 @RunWith(PowerMockRunner.class)
+@PowerMockIgnore("javax.security.*")
 @PrepareForTest({Shell.class, ZooUtil.class, ConfigSanityCheck.class})
 public class ShellSetInstanceTest {
   public static class TestOutputStream extends OutputStream {
@@ -110,7 +111,7 @@
   public void setup() throws IOException {
     Shell.log.setLevel(Level.OFF);
     output = new TestOutputStream();
-    shell = new Shell(new ConsoleReader(new FileInputStream(FileDescriptor.in), output), new PrintWriter(output));
+    shell = new Shell(new ConsoleReader(new FileInputStream(FileDescriptor.in), output));
     shell.setLogErrorsToConsole();
   }
 
@@ -120,17 +121,15 @@
     SiteConfiguration.clearInstance();
   }
 
+  @Deprecated
   @Test
   public void testSetInstance_Fake() throws Exception {
     ShellOptionsJC opts = createMock(ShellOptionsJC.class);
     expect(opts.isFake()).andReturn(true);
     replay(opts);
-    MockInstance theInstance = createMock(MockInstance.class);
-    expectNew(MockInstance.class, "fake").andReturn(theInstance);
-    replay(theInstance, MockInstance.class);
 
     shell.setInstance(opts);
-    verify(theInstance, MockInstance.class);
+    Assert.assertTrue(shell.getInstance() instanceof org.apache.accumulo.core.client.mock.MockInstance);
   }
 
   @Test
@@ -244,7 +243,7 @@
       expect(clientConf.getString(ClientProperty.INSTANCE_NAME.getKey())).andReturn("foo");
       expect(clientConf.withZkHosts("host1,host2")).andReturn(clientConf);
       expect(clientConf.getString(ClientProperty.INSTANCE_ZK_HOST.getKey())).andReturn("host1,host2");
-      List<String> zl = new java.util.ArrayList<String>();
+      List<String> zl = new java.util.ArrayList<>();
       zl.add("foo");
       zl.add("host1,host2");
       expect(opts.getZooKeeperInstance()).andReturn(zl);
diff --git a/shell/src/test/java/org/apache/accumulo/shell/ShellTest.java b/shell/src/test/java/org/apache/accumulo/shell/ShellTest.java
index 7a4a87e..dc902ce 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/ShellTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/ShellTest.java
@@ -23,8 +23,6 @@
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
-import java.io.OutputStreamWriter;
-import java.io.PrintWriter;
 import java.nio.file.Files;
 import java.text.DateFormat;
 import java.text.SimpleDateFormat;
@@ -33,9 +31,6 @@
 import java.util.List;
 import java.util.TimeZone;
 
-import jline.console.ConsoleReader;
-
-import org.apache.accumulo.core.util.format.DateStringFormatter;
 import org.apache.log4j.Level;
 import org.junit.After;
 import org.junit.Before;
@@ -43,6 +38,8 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import jline.console.ConsoleReader;
+
 public class ShellTest {
   private static final Logger log = LoggerFactory.getLogger(ShellTest.class);
 
@@ -131,8 +128,7 @@
     output = new TestOutputStream();
     input = new StringInputStream();
     config = Files.createTempFile(null, null).toFile();
-    PrintWriter pw = new PrintWriter(new OutputStreamWriter(output));
-    shell = new Shell(new ConsoleReader(input, output), pw);
+    shell = new Shell(new ConsoleReader(input, output));
     shell.setLogErrorsToConsole();
     shell.config("--config-file", config.toString(), "--fake", "-u", "test", "-p", "secret");
   }
@@ -177,6 +173,8 @@
     exec("createtable test", true);
     exec("addsplits 1 \\x80", true);
     exec("getsplits", true, "1\n\\x80");
+    exec("getsplits -m 1", true, "1");
+    exec("getsplits -b64", true, "MQ==\ngA==");
     exec("deletetable test -f", true, "Table: [test] has been deleted");
   }
 
@@ -206,6 +204,36 @@
   }
 
   @Test
+  public void deleteManyTest() throws IOException {
+    exec("deletemany", false, "java.lang.IllegalStateException: Not in a table context");
+    exec("createtable test", true);
+    exec("deletemany", true, "\n");
+
+    exec("insert 0 0 0 0 -ts 0");
+    exec("insert 0 0 0 0 -l 0 -ts 0");
+    exec("insert 1 1 1 1 -ts 1");
+    exec("insert 2 2 2 2 -ts 2");
+
+    // prompts for delete, and rejects by default
+    exec("deletemany", true, "[SKIPPED] 0 0:0 []");
+    exec("deletemany -r 0", true, "[SKIPPED] 0 0:0 []");
+    exec("deletemany -r 0 -f", true, "[DELETED] 0 0:0 []");
+
+    // with auths, can delete the other record
+    exec("setauths -s 0");
+    exec("deletemany -r 0 -f", true, "[DELETED] 0 0:0 [0]");
+
+    // delete will show the timestamp
+    exec("deletemany -r 1 -f -st", true, "[DELETED] 1 1:1 [] 1");
+
+    // DeleteManyCommand has its own Formatter (DeleterFormatter), so it does not honor the -fm flag
+    exec("deletemany -r 2 -f -st -fm org.apache.accumulo.core.util.format.DateStringFormatter", true, "[DELETED] 2 2:2 [] 2");
+
+    exec("setauths -c ", true);
+    exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
   public void authsTest() throws Exception {
     Shell.log.debug("Starting auths test --------------------------");
     exec("setauths x,y,z", false, "Missing required option");
@@ -254,13 +282,70 @@
   }
 
   @Test
+  public void scanTimestampTest() throws IOException {
+    Shell.log.debug("Starting scanTimestamp test ------------------------");
+    exec("createtable test", true);
+    exec("insert r f q v -ts 0", true);
+    exec("scan -st", true, "r f:q [] 0    v");
+    exec("scan -st -f 0", true, " : [] 0   ");
+    exec("deletemany -f", true);
+    exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
+  public void scanFewTest() throws IOException {
+    Shell.log.debug("Starting scanFew test ------------------------");
+    exec("createtable test", true);
+    // historically, showing few did not pertain to ColVis or Timestamp
+    exec("insert 1 123 123456 -l '12345678' -ts 123456789 1234567890", true);
+    exec("setauths -s 12345678", true);
+    String expected = "1 123:123456 [12345678] 123456789    1234567890";
+    String expectedFew = "1 123:12345 [12345678] 123456789    12345";
+    exec("scan -st", true, expected);
+    exec("scan -st -f 5", true, expectedFew);
+    // also prove that BinaryFormatter behaves same as the default
+    exec("scan -st -fm org.apache.accumulo.core.util.format.BinaryFormatter", true, expected);
+    exec("scan -st -f 5 -fm org.apache.accumulo.core.util.format.BinaryFormatter", true, expectedFew);
+    exec("setauths -c", true);
+    exec("deletetable test -f", true, "Table: [test] has been deleted");
+  }
+
+  @Test
   public void scanDateStringFormatterTest() throws IOException {
     Shell.log.debug("Starting scan dateStringFormatter test --------------------------");
     exec("createtable t", true);
     exec("insert r f q v -ts 0", true);
-    DateFormat dateFormat = new SimpleDateFormat(DateStringFormatter.DATE_FORMAT);
+    @SuppressWarnings("deprecation")
+    DateFormat dateFormat = new SimpleDateFormat(org.apache.accumulo.core.util.format.DateStringFormatter.DATE_FORMAT);
     String expected = String.format("r f:q [] %s    v", dateFormat.format(new Date(0)));
+    // historically, showing few did not pertain to ColVis or Timestamp
+    String expectedFew = expected;
+    String expectedNoTimestamp = String.format("r f:q []    v");
     exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter -st", true, expected);
+    exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter -st -f 1000", true, expected);
+    exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter -st -f 5", true, expectedFew);
+    exec("scan -fm org.apache.accumulo.core.util.format.DateStringFormatter", true, expectedNoTimestamp);
+    exec("deletetable t -f", true, "Table: [t] has been deleted");
+  }
+
+  @Test
+  public void grepTest() throws IOException {
+    Shell.log.debug("Starting grep test --------------------------");
+    exec("grep", false, "java.lang.IllegalStateException: Not in a table context");
+    exec("createtable t", true);
+    exec("setauths -s vis", true);
+    exec("insert r f q v -ts 0 -l vis", true);
+
+    String expected = "r f:q [vis]    v";
+    String expectedTimestamp = "r f:q [vis] 0    v";
+    exec("grep", false, "No terms specified");
+    exec("grep non_matching_string", true, "");
+    // historically, showing few did not pertain to ColVis or Timestamp
+    exec("grep r", true, expected);
+    exec("grep r -f 1", true, expected);
+    exec("grep r -st", true, expectedTimestamp);
+    exec("grep r -st -f 1", true, expectedTimestamp);
+    exec("setauths -c", true);
     exec("deletetable t -f", true, "Table: [t] has been deleted");
   }
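The scan test above builds its expected string with `dateFormat.format(new Date(0))` rather than hard-coding a date, because the rendered timestamp depends on the formatter's pattern and timezone. A minimal stdlib sketch of that step (the `"yyyy/MM/dd HH:mm:ss.SSS"` pattern and UTC zone are assumptions for illustration; the actual `DateStringFormatter.DATE_FORMAT` constant may differ):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateFormatSketch {
  // Format epoch millis the way the scan test does, with an assumed
  // pattern standing in for DateStringFormatter.DATE_FORMAT.
  static String format(long ts) {
    SimpleDateFormat f = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS");
    f.setTimeZone(TimeZone.getTimeZone("UTC"));
    return f.format(new Date(ts));
  }

  public static void main(String[] args) {
    // expected scan entry, shaped as in the test: "r f:q [] <ts>    v"
    System.out.println(String.format("r f:q [] %s    v", format(0)));
  }
}
```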
 
diff --git a/shell/src/test/java/org/apache/accumulo/shell/commands/DeleteAuthsCommandTest.java b/shell/src/test/java/org/apache/accumulo/shell/commands/DeleteAuthsCommandTest.java
new file mode 100644
index 0000000..d19e4d0
--- /dev/null
+++ b/shell/src/test/java/org/apache/accumulo/shell/commands/DeleteAuthsCommandTest.java
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.shell.commands;
+
+import jline.console.ConsoleReader;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.admin.SecurityOperations;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.shell.Shell;
+import org.apache.commons.cli.CommandLine;
+import org.easymock.EasyMock;
+import org.junit.Before;
+import org.junit.Test;
+
+public class DeleteAuthsCommandTest {
+
+  private DeleteAuthsCommand cmd;
+
+  @Before
+  public void setup() {
+    cmd = new DeleteAuthsCommand();
+
+    // Initialize that internal state
+    cmd.getOptions();
+  }
+
+  @Test
+  public void deleteExistingAuth() throws Exception {
+    Connector conn = EasyMock.createMock(Connector.class);
+    CommandLine cli = EasyMock.createMock(CommandLine.class);
+    Shell shellState = EasyMock.createMock(Shell.class);
+    ConsoleReader reader = EasyMock.createMock(ConsoleReader.class);
+    SecurityOperations secOps = EasyMock.createMock(SecurityOperations.class);
+
+    EasyMock.expect(shellState.getConnector()).andReturn(conn);
+
+    // We're the root user
+    EasyMock.expect(conn.whoami()).andReturn("root");
+    EasyMock.expect(cli.getOptionValue("u", "root")).andReturn("foo");
+    EasyMock.expect(cli.getOptionValue("s")).andReturn("abc");
+
+    EasyMock.expect(conn.securityOperations()).andReturn(secOps);
+    EasyMock.expect(conn.securityOperations()).andReturn(secOps);
+    EasyMock.expect(secOps.getUserAuthorizations("foo")).andReturn(new Authorizations("abc", "123"));
+    secOps.changeUserAuthorizations("foo", new Authorizations("123"));
+    EasyMock.expectLastCall();
+
+    EasyMock.replay(conn, cli, shellState, reader, secOps);
+
+    cmd.execute("deleteauths -u foo -s abc", cli, shellState);
+
+    EasyMock.verify(conn, cli, shellState, reader, secOps);
+  }
+
+  @Test
+  public void deleteNonExistingAuth() throws Exception {
+    Connector conn = EasyMock.createMock(Connector.class);
+    CommandLine cli = EasyMock.createMock(CommandLine.class);
+    Shell shellState = EasyMock.createMock(Shell.class);
+    ConsoleReader reader = EasyMock.createMock(ConsoleReader.class);
+    SecurityOperations secOps = EasyMock.createMock(SecurityOperations.class);
+
+    EasyMock.expect(shellState.getConnector()).andReturn(conn);
+
+    // We're the root user
+    EasyMock.expect(conn.whoami()).andReturn("root");
+    EasyMock.expect(cli.getOptionValue("u", "root")).andReturn("foo");
+    EasyMock.expect(cli.getOptionValue("s")).andReturn("def");
+
+    EasyMock.expect(conn.securityOperations()).andReturn(secOps);
+    EasyMock.expect(conn.securityOperations()).andReturn(secOps);
+    EasyMock.expect(secOps.getUserAuthorizations("foo")).andReturn(new Authorizations("abc", "123"));
+    secOps.changeUserAuthorizations("foo", new Authorizations("abc", "123"));
+    EasyMock.expectLastCall();
+
+    EasyMock.replay(conn, cli, shellState, reader, secOps);
+
+    cmd.execute("deleteauths -u foo -s def", cli, shellState);
+
+    EasyMock.verify(conn, cli, shellState, reader, secOps);
+  }
+
+  @Test
+  public void deleteAllAuth() throws Exception {
+    Connector conn = EasyMock.createMock(Connector.class);
+    CommandLine cli = EasyMock.createMock(CommandLine.class);
+    Shell shellState = EasyMock.createMock(Shell.class);
+    ConsoleReader reader = EasyMock.createMock(ConsoleReader.class);
+    SecurityOperations secOps = EasyMock.createMock(SecurityOperations.class);
+
+    EasyMock.expect(shellState.getConnector()).andReturn(conn);
+
+    // We're the root user
+    EasyMock.expect(conn.whoami()).andReturn("root");
+    EasyMock.expect(cli.getOptionValue("u", "root")).andReturn("foo");
+    EasyMock.expect(cli.getOptionValue("s")).andReturn("abc,123");
+
+    EasyMock.expect(conn.securityOperations()).andReturn(secOps);
+    EasyMock.expect(conn.securityOperations()).andReturn(secOps);
+    EasyMock.expect(secOps.getUserAuthorizations("foo")).andReturn(new Authorizations("abc", "123"));
+    secOps.changeUserAuthorizations("foo", new Authorizations());
+    EasyMock.expectLastCall();
+
+    EasyMock.replay(conn, cli, shellState, reader, secOps);
+
+    cmd.execute("deleteauths -u foo -s abc,123", cli, shellState);
+
+    EasyMock.verify(conn, cli, shellState, reader, secOps);
+  }
+
+}
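The three tests above all assert the same underlying behavior: the user's new authorization set is the old set minus the requested auths, and auths not currently held are simply ignored (the `deleteNonExistingAuth` case). A stdlib sketch of that set difference, with plain strings standing in for the `Authorizations` class:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DeleteAuthsSketch {
  // Remove the requested comma-separated auths from the current set;
  // auths the user does not hold are ignored rather than erroring.
  static Set<String> remaining(Set<String> current, String csvToDelete) {
    Set<String> result = new LinkedHashSet<>(current);
    result.removeAll(Arrays.asList(csvToDelete.split(",")));
    return result;
  }

  public static void main(String[] args) {
    Set<String> current = new LinkedHashSet<>(Arrays.asList("abc", "123"));
    System.out.println(remaining(current, "abc"));     // deleteExistingAuth
    System.out.println(remaining(current, "def"));     // deleteNonExistingAuth
    System.out.println(remaining(current, "abc,123")); // deleteAllAuth
  }
}
```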
diff --git a/shell/src/test/java/org/apache/accumulo/shell/commands/DeleteTableCommandTest.java b/shell/src/test/java/org/apache/accumulo/shell/commands/DeleteTableCommandTest.java
index 4877552..ad9624f 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/commands/DeleteTableCommandTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/commands/DeleteTableCommandTest.java
@@ -32,11 +32,11 @@
 
   @Test
   public void removeAccumuloNamespaceTables() {
-    Set<String> tables = new HashSet<String>(Arrays.asList(MetadataTable.NAME, RootTable.NAME, "a1", "a2"));
+    Set<String> tables = new HashSet<>(Arrays.asList(MetadataTable.NAME, RootTable.NAME, "a1", "a2"));
     DeleteTableCommand cmd = new DeleteTableCommand();
     cmd.pruneTables("a.*", tables);
 
-    Assert.assertEquals(new HashSet<String>(Arrays.asList("a1", "a2")), tables);
+    Assert.assertEquals(new HashSet<>(Arrays.asList("a1", "a2")), tables);
   }
 
 }
diff --git a/shell/src/test/java/org/apache/accumulo/shell/commands/FormatterCommandTest.java b/shell/src/test/java/org/apache/accumulo/shell/commands/FormatterCommandTest.java
deleted file mode 100644
index 704d0c3..0000000
--- a/shell/src/test/java/org/apache/accumulo/shell/commands/FormatterCommandTest.java
+++ /dev/null
@@ -1,190 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.shell.commands;
-
-import static org.junit.Assert.assertTrue;
-
-import java.io.ByteArrayOutputStream;
-import java.io.File;
-import java.io.IOException;
-import java.io.InputStream;
-import java.nio.file.Files;
-import java.util.Iterator;
-import java.util.Map.Entry;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.TableExistsException;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.util.format.Formatter;
-import org.apache.accumulo.shell.Shell;
-import org.apache.accumulo.shell.mock.MockShell;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
-import org.junit.Assert;
-import org.junit.Test;
-
-/**
- * Uses the MockShell to test the shell output with Formatters
- */
-public class FormatterCommandTest {
-  ByteArrayOutputStream out = null;
-  InputStream in = null;
-
-  @Test
-  public void test() throws IOException, AccumuloException, AccumuloSecurityException, TableExistsException, ClassNotFoundException {
-    // Keep the Shell AUDIT log off the test output
-    Logger.getLogger(Shell.class).setLevel(Level.WARN);
-
-    File config = Files.createTempFile(null, null).toFile();
-    config.deleteOnExit();
-    final String[] args = new String[] {"--config-file", config.toString(), "--fake", "-u", "root", "-p", ""};
-
-    final String[] commands = createCommands();
-
-    in = MockShell.makeCommands(commands);
-    out = new ByteArrayOutputStream();
-
-    final MockShell shell = new MockShell(in, out);
-    assertTrue("Failed to configure shell without error", shell.config(args));
-
-    // Can't call createtable in the shell with MockAccumulo
-    shell.getConnector().tableOperations().create("test");
-
-    try {
-      shell.start();
-    } catch (Exception e) {
-      Assert.fail("Exception while running commands: " + e.getMessage());
-    }
-
-    shell.getReader().flush();
-
-    final String[] output = new String(out.toByteArray()).split("\n\r");
-
-    boolean formatterOn = false;
-
-    final String[] expectedDefault = new String[] {"row cf:cq []    1234abcd", "row cf1:cq1 []    9876fedc", "row2 cf:cq []    13579bdf",
-        "row2 cf1:cq []    2468ace"};
-
-    final String[] expectedFormatted = new String[] {"row cf:cq []    0x31 0x32 0x33 0x34 0x61 0x62 0x63 0x64",
-        "row cf1:cq1 []    0x39 0x38 0x37 0x36 0x66 0x65 0x64 0x63", "row2 cf:cq []    0x31 0x33 0x35 0x37 0x39 0x62 0x64 0x66",
-        "row2 cf1:cq []    0x32 0x34 0x36 0x38 0x61 0x63 0x65"};
-
-    int outputIndex = 0;
-    while (outputIndex < output.length) {
-      final String line = output[outputIndex];
-
-      if (line.startsWith("root@mock-instance")) {
-        if (line.contains("formatter")) {
-          formatterOn = true;
-        }
-
-        outputIndex++;
-      } else if (line.startsWith("row")) {
-        int expectedIndex = 0;
-        String[] comparisonData;
-
-        // Pick the type of data we expect (formatted or default)
-        if (formatterOn) {
-          comparisonData = expectedFormatted;
-        } else {
-          comparisonData = expectedDefault;
-        }
-
-        // Ensure each output is what we expected
-        while (expectedIndex + outputIndex < output.length && expectedIndex < expectedFormatted.length) {
-          Assert.assertEquals(comparisonData[expectedIndex].trim(), output[expectedIndex + outputIndex].trim());
-          expectedIndex++;
-        }
-
-        outputIndex += expectedIndex;
-      }
-    }
-  }
-
-  private String[] createCommands() {
-    return new String[] {"table test", "insert row cf cq 1234abcd", "insert row cf1 cq1 9876fedc", "insert row2 cf cq 13579bdf", "insert row2 cf1 cq 2468ace",
-        "scan", "formatter -t test -f org.apache.accumulo.core.util.shell.command.FormatterCommandTest$HexFormatter", "scan"};
-  }
-
-  /**
-   * <p>
-   * Simple <code>Formatter</code> that will convert each character in the Value from decimal to hexadecimal. Will automatically skip over characters in the
-   * value which do not fall within the [0-9,a-f] range.
-   * </p>
-   *
-   * <p>
-   * Example: <code>'0'</code> will be displayed as <code>'0x30'</code>
-   * </p>
-   */
-  public static class HexFormatter implements Formatter {
-    private Iterator<Entry<Key,Value>> iter = null;
-    private boolean printTs = false;
-
-    private final static String tab = "\t";
-    private final static String newline = "\n";
-
-    public HexFormatter() {}
-
-    @Override
-    public boolean hasNext() {
-      return this.iter.hasNext();
-    }
-
-    @Override
-    public String next() {
-      final Entry<Key,Value> entry = iter.next();
-
-      String key;
-
-      // Observe the timestamps
-      if (printTs) {
-        key = entry.getKey().toString();
-      } else {
-        key = entry.getKey().toStringNoTime();
-      }
-
-      final Value v = entry.getValue();
-
-      // Approximate how much space we'll need
-      final StringBuilder sb = new StringBuilder(key.length() + v.getSize() * 5);
-
-      sb.append(key).append(tab);
-
-      for (byte b : v.get()) {
-        if ((b >= 48 && b <= 57) || (b >= 97 && b <= 102)) {
-          sb.append(String.format("0x%x ", Integer.valueOf(b)));
-        }
-      }
-
-      sb.append(newline);
-
-      return sb.toString();
-    }
-
-    @Override
-    public void remove() {}
-
-    @Override
-    public void initialize(final Iterable<Entry<Key,Value>> scanner, final boolean printTimestamps) {
-      this.iter = scanner.iterator();
-      this.printTs = printTimestamps;
-    }
-  }
-
-}
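The removed `HexFormatter`'s core conversion is still visible in the test's expected-output arrays: each value byte in the ASCII ranges `[0-9]` and `[a-f]` is rendered as a `0x`-prefixed hex token, and all other bytes are skipped. A standalone sketch of that conversion:

```java
public class HexSketch {
  // Render bytes in '0'-'9' and 'a'-'f' as "0x.." tokens, skipping the
  // rest, as the deleted HexFormatter did for each Value.
  static String toHex(byte[] value) {
    StringBuilder sb = new StringBuilder(value.length * 5);
    for (byte b : value) {
      if ((b >= '0' && b <= '9') || (b >= 'a' && b <= 'f')) {
        sb.append(String.format("0x%x ", (int) b));
      }
    }
    return sb.toString().trim();
  }

  public static void main(String[] args) {
    System.out.println(toHex("1234abcd".getBytes())); // 0x31 0x32 0x33 0x34 0x61 0x62 0x63 0x64
  }
}
```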
diff --git a/shell/src/test/java/org/apache/accumulo/shell/commands/HistoryCommandTest.java b/shell/src/test/java/org/apache/accumulo/shell/commands/HistoryCommandTest.java
index 638af3f..f5c887b 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/commands/HistoryCommandTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/commands/HistoryCommandTest.java
@@ -64,7 +64,7 @@
     reader = new ConsoleReader(new ByteArrayInputStream(input.getBytes()), baos);
     reader.setHistory(history);
 
-    shell = new Shell(reader, null);
+    shell = new Shell(reader);
   }
 
   @Test
diff --git a/shell/src/test/java/org/apache/accumulo/shell/format/DeleterFormatterTest.java b/shell/src/test/java/org/apache/accumulo/shell/format/DeleterFormatterTest.java
index d99c0f2..2eba15e 100644
--- a/shell/src/test/java/org/apache/accumulo/shell/format/DeleterFormatterTest.java
+++ b/shell/src/test/java/org/apache/accumulo/shell/format/DeleterFormatterTest.java
@@ -35,18 +35,19 @@
 import java.util.Map;
 import java.util.TreeMap;
 
-import jline.UnsupportedTerminal;
-import jline.console.ConsoleReader;
-
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.util.format.FormatterConfig;
 import org.apache.accumulo.shell.Shell;
 import org.junit.Before;
 import org.junit.Test;
 
+import jline.UnsupportedTerminal;
+import jline.console.ConsoleReader;
+
 public class DeleterFormatterTest {
   DeleterFormatter formatter;
   Map<Key,Value> data;
@@ -70,7 +71,7 @@
     public void set(String in) {
       bais = new ByteArrayInputStream(in.getBytes(UTF_8));
     }
-  };
+  }
 
   @Before
   public void setUp() throws IOException, MutationsRejectedException {
@@ -93,19 +94,19 @@
 
     replay(writer, exceptionWriter, shellState);
 
-    data = new TreeMap<Key,Value>();
+    data = new TreeMap<>();
     data.put(new Key("r", "cf", "cq"), new Value("value".getBytes(UTF_8)));
   }
 
   @Test
   public void testEmpty() {
-    formatter = new DeleterFormatter(writer, Collections.<Key,Value> emptyMap().entrySet(), true, shellState, true);
+    formatter = new DeleterFormatter(writer, Collections.<Key,Value> emptyMap().entrySet(), new FormatterConfig().setPrintTimestamps(true), shellState, true);
     assertFalse(formatter.hasNext());
   }
 
   @Test
   public void testSingle() throws IOException {
-    formatter = new DeleterFormatter(writer, data.entrySet(), true, shellState, true);
+    formatter = new DeleterFormatter(writer, data.entrySet(), new FormatterConfig().setPrintTimestamps(true), shellState, true);
 
     assertTrue(formatter.hasNext());
     assertNull(formatter.next());
@@ -117,7 +118,7 @@
   public void testNo() throws IOException {
     input.set("no\n");
     data.put(new Key("z"), new Value("v2".getBytes(UTF_8)));
-    formatter = new DeleterFormatter(writer, data.entrySet(), true, shellState, false);
+    formatter = new DeleterFormatter(writer, data.entrySet(), new FormatterConfig().setPrintTimestamps(true), shellState, false);
 
     assertTrue(formatter.hasNext());
     assertNull(formatter.next());
@@ -131,7 +132,7 @@
   public void testNoConfirmation() throws IOException {
     input.set("");
     data.put(new Key("z"), new Value("v2".getBytes(UTF_8)));
-    formatter = new DeleterFormatter(writer, data.entrySet(), true, shellState, false);
+    formatter = new DeleterFormatter(writer, data.entrySet(), new FormatterConfig().setPrintTimestamps(true), shellState, false);
 
     assertTrue(formatter.hasNext());
     assertNull(formatter.next());
@@ -145,7 +146,7 @@
   public void testYes() throws IOException {
     input.set("y\nyes\n");
     data.put(new Key("z"), new Value("v2".getBytes(UTF_8)));
-    formatter = new DeleterFormatter(writer, data.entrySet(), true, shellState, false);
+    formatter = new DeleterFormatter(writer, data.entrySet(), new FormatterConfig().setPrintTimestamps(true), shellState, false);
 
     assertTrue(formatter.hasNext());
     assertNull(formatter.next());
@@ -158,7 +159,7 @@
 
   @Test
   public void testMutationException() {
-    formatter = new DeleterFormatter(exceptionWriter, data.entrySet(), true, shellState, true);
+    formatter = new DeleterFormatter(exceptionWriter, data.entrySet(), new FormatterConfig().setPrintTimestamps(true), shellState, true);
 
     assertTrue(formatter.hasNext());
     assertNull(formatter.next());
diff --git a/start/.gitignore b/start/.gitignore
index f97b5ca..e77a822 100644
--- a/start/.gitignore
+++ b/start/.gitignore
@@ -26,4 +26,3 @@
 /nbproject/
 /nbactions.xml
 /nb-configuration.xml
-
diff --git a/start/pom.xml b/start/pom.xml
index 881ff14..bc6c4b6 100644
--- a/start/pom.xml
+++ b/start/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-start</artifactId>
   <name>Apache Accumulo Start</name>
@@ -104,5 +104,33 @@
         </plugin>
       </plugins>
     </pluginManagement>
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>exec-maven-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>Build Test jars</id>
+            <goals>
+              <goal>exec</goal>
+            </goals>
+            <phase>process-test-classes</phase>
+            <configuration>
+              <executable>${project.basedir}/src/test/shell/makeTestJars.sh</executable>
+            </configuration>
+          </execution>
+          <execution>
+            <id>Build HelloWorld jars</id>
+            <goals>
+              <goal>exec</goal>
+            </goals>
+            <phase>process-test-classes</phase>
+            <configuration>
+              <executable>${project.basedir}/src/test/shell/makeHelloWorldJars.sh</executable>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
   </build>
 </project>
diff --git a/start/src/main/java/org/apache/accumulo/start/Main.java b/start/src/main/java/org/apache/accumulo/start/Main.java
index 90eb5f4..414394a 100644
--- a/start/src/main/java/org/apache/accumulo/start/Main.java
+++ b/start/src/main/java/org/apache/accumulo/start/Main.java
@@ -210,14 +210,14 @@
     System.out.println("accumulo " + kwString + " | <accumulo class> args");
   }
 
-  static synchronized Map<String,KeywordExecutable> getExecutables(final ClassLoader cl) {
+  public static synchronized Map<String,KeywordExecutable> getExecutables(final ClassLoader cl) {
     if (servicesMap == null) {
       servicesMap = checkDuplicates(ServiceLoader.load(KeywordExecutable.class, cl));
     }
     return servicesMap;
   }
 
-  static Map<String,KeywordExecutable> checkDuplicates(final Iterable<? extends KeywordExecutable> services) {
+  public static Map<String,KeywordExecutable> checkDuplicates(final Iterable<? extends KeywordExecutable> services) {
     TreeSet<String> blacklist = new TreeSet<>();
     TreeMap<String,KeywordExecutable> results = new TreeMap<>();
     for (KeywordExecutable service : services) {
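The `checkDuplicates` method made public above registers each keyword once and, when two services claim the same keyword, blacklists it so that neither ambiguous entry is used. A hedged sketch of that shape, with `String[]` pairs standing in for `KeywordExecutable` instances (the real method's details may differ):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class KeywordDedupSketch {
  // s[0] is the keyword, s[1] the implementation name. A keyword claimed
  // by more than one service is blacklisted and removed from the results.
  static Map<String, String> checkDuplicates(List<String[]> services) {
    Set<String> blacklist = new TreeSet<>();
    Map<String, String> results = new TreeMap<>();
    for (String[] s : services) {
      String keyword = s[0];
      if (blacklist.contains(keyword)) {
        continue; // already known to be ambiguous
      }
      if (results.containsKey(keyword)) {
        blacklist.add(keyword);
        results.remove(keyword);
      } else {
        results.put(keyword, s[1]);
      }
    }
    return results;
  }
}
```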
diff --git a/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java b/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java
index 9ebbae0..991e89e 100644
--- a/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java
+++ b/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java
@@ -224,9 +224,9 @@
   private static ArrayList<URL> findAccumuloURLs() throws IOException {
     String cp = getAccumuloString(AccumuloClassLoader.CLASSPATH_PROPERTY_NAME, AccumuloClassLoader.ACCUMULO_CLASSPATH_VALUE);
     if (cp == null)
-      return new ArrayList<URL>();
+      return new ArrayList<>();
     String[] cps = replaceEnvVars(cp, System.getenv()).split(",");
-    ArrayList<URL> urls = new ArrayList<URL>();
+    ArrayList<URL> urls = new ArrayList<>();
     for (String classpath : getMavenClasspaths())
       addUrl(classpath, urls);
     for (String classpath : cps) {
@@ -241,7 +241,7 @@
     String baseDirname = AccumuloClassLoader.getAccumuloString(MAVEN_PROJECT_BASEDIR_PROPERTY_NAME, DEFAULT_MAVEN_PROJECT_BASEDIR_VALUE);
     if (baseDirname == null || baseDirname.trim().isEmpty())
       return Collections.emptySet();
-    Set<String> paths = new TreeSet<String>();
+    Set<String> paths = new TreeSet<>();
     findMavenTargetClasses(paths, new File(baseDirname.trim()), 0);
     return paths;
   }
diff --git a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoader.java b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoader.java
index 539f9f5..6a884e9 100644
--- a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoader.java
+++ b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloReloadingVFSClassLoader.java
@@ -56,7 +56,7 @@
   private final boolean preDelegate;
   private final ThreadPoolExecutor executor;
   {
-    BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(2);
+    BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(2);
     ThreadFactory factory = new ThreadFactory() {
 
       @Override
@@ -140,7 +140,7 @@
     this.parent = parent;
     this.preDelegate = preDelegate;
 
-    ArrayList<FileObject> pathsToMonitor = new ArrayList<FileObject>();
+    ArrayList<FileObject> pathsToMonitor = new ArrayList<>();
     files = AccumuloVFSClassLoader.resolve(vfs, uris, pathsToMonitor);
 
     if (preDelegate)
diff --git a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java
index 8c7067f..db070ec 100644
--- a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java
+++ b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java
@@ -125,7 +125,7 @@
     if (uris == null)
       return new FileObject[0];
 
-    ArrayList<FileObject> classpath = new ArrayList<FileObject>();
+    ArrayList<FileObject> classpath = new ArrayList<>();
 
     pathsToMonitor.clear();
 
@@ -281,7 +281,7 @@
     vfs.setReplicator(new UniqueFileReplicator(cacheDir));
     vfs.setCacheStrategy(CacheStrategy.ON_RESOLVE);
     vfs.init();
-    vfsInstances.add(new WeakReference<DefaultFileSystemManager>(vfs));
+    vfsInstances.add(new WeakReference<>(vfs));
     return vfs;
   }
 
@@ -307,7 +307,7 @@
   public static void printClassPath(Printer out) {
     try {
       ClassLoader cl = getClassLoader();
-      ArrayList<ClassLoader> classloaders = new ArrayList<ClassLoader>();
+      ArrayList<ClassLoader> classloaders = new ArrayList<>();
 
       while (cl != null) {
         classloaders.add(cl);
diff --git a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java
index c873fa6..7145b4a 100644
--- a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java
+++ b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/ContextManager.java
@@ -58,7 +58,7 @@
     }
   }
 
-  private Map<String,Context> contexts = new HashMap<String,Context>();
+  private Map<String,Context> contexts = new HashMap<>();
 
   private volatile ContextsConfig config;
   private FileSystemManager vfs;
@@ -197,7 +197,7 @@
     // the set of currently configured contexts. We will close the contexts that are
     // no longer in the configuration.
     synchronized (this) {
-      unused = new HashMap<String,Context>(contexts);
+      unused = new HashMap<>(contexts);
       unused.keySet().removeAll(configuredContexts);
       contexts.keySet().removeAll(unused.keySet());
     }
diff --git a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java
index 2ba4ecf..1efd7b5 100644
--- a/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java
+++ b/start/src/test/java/org/apache/accumulo/start/classloader/vfs/providers/VfsClassLoaderTest.java
@@ -18,7 +18,7 @@
 
 import java.net.URL;
 
-import org.apache.accumulo.test.AccumuloDFSBase;
+import org.apache.accumulo.start.test.AccumuloDFSBase;
 import org.apache.commons.vfs2.FileChangeEvent;
 import org.apache.commons.vfs2.FileListener;
 import org.apache.commons.vfs2.FileObject;
diff --git a/start/src/test/java/org/apache/accumulo/test/AccumuloDFSBase.java b/start/src/test/java/org/apache/accumulo/start/test/AccumuloDFSBase.java
similarity index 99%
rename from start/src/test/java/org/apache/accumulo/test/AccumuloDFSBase.java
rename to start/src/test/java/org/apache/accumulo/start/test/AccumuloDFSBase.java
index 9b823a1..b405466 100644
--- a/start/src/test/java/org/apache/accumulo/test/AccumuloDFSBase.java
+++ b/start/src/test/java/org/apache/accumulo/start/test/AccumuloDFSBase.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.test;
+package org.apache.accumulo.start.test;
 
 import java.io.File;
 import java.io.IOException;
diff --git a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java b/start/src/test/java/test/HelloWorldTemplate
similarity index 67%
copy from core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
copy to start/src/test/java/test/HelloWorldTemplate
index 01f5fa8..2d4d766 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/UtilWaitThread.java
+++ b/start/src/test/java/test/HelloWorldTemplate
@@ -14,19 +14,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.util;
+package test;
 
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
+public class HelloWorld {
 
-public class UtilWaitThread {
-  private static final Logger log = LoggerFactory.getLogger(UtilWaitThread.class);
-
-  public static void sleep(long millis) {
-    try {
-      Thread.sleep(millis);
-    } catch (InterruptedException e) {
-      log.error("{}", e.getMessage(), e);
-    }
+  @Override
+  public String toString() {
+    return "%%";
   }
 }
diff --git a/start/src/test/resources/ClassLoaderTestA/Test.jar b/start/src/test/resources/ClassLoaderTestA/Test.jar
deleted file mode 100644
index 8b9c462..0000000
--- a/start/src/test/resources/ClassLoaderTestA/Test.jar
+++ /dev/null
Binary files differ
diff --git a/start/src/test/resources/ClassLoaderTestB/Test.jar b/start/src/test/resources/ClassLoaderTestB/Test.jar
deleted file mode 100644
index 4ced46f..0000000
--- a/start/src/test/resources/ClassLoaderTestB/Test.jar
+++ /dev/null
Binary files differ
diff --git a/start/src/test/resources/ClassLoaderTestC/Test.jar b/start/src/test/resources/ClassLoaderTestC/Test.jar
deleted file mode 100644
index 87b077e..0000000
--- a/start/src/test/resources/ClassLoaderTestC/Test.jar
+++ /dev/null
Binary files differ
diff --git a/start/src/test/resources/HelloWorld.jar b/start/src/test/resources/HelloWorld.jar
deleted file mode 100644
index 4e7028f..0000000
--- a/start/src/test/resources/HelloWorld.jar
+++ /dev/null
Binary files differ
diff --git a/start/src/test/resources/HelloWorld2.jar b/start/src/test/resources/HelloWorld2.jar
deleted file mode 100644
index 2dc06ea..0000000
--- a/start/src/test/resources/HelloWorld2.jar
+++ /dev/null
Binary files differ
diff --git a/start/src/test/shell/makeHelloWorldJars.sh b/start/src/test/shell/makeHelloWorldJars.sh
new file mode 100755
index 0000000..9f4a990
--- /dev/null
+++ b/start/src/test/shell/makeHelloWorldJars.sh
@@ -0,0 +1,26 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+mkdir -p target/generated-sources/HelloWorld/test
+sed "s/%%/Hello World\!/" < src/test/java/test/HelloWorldTemplate > target/generated-sources/HelloWorld/test/HelloWorld.java
+$JAVA_HOME/bin/javac target/generated-sources/HelloWorld/test/HelloWorld.java -d target/generated-sources/HelloWorld
+$JAVA_HOME/bin/jar -cf target/test-classes/HelloWorld.jar -C target/generated-sources/HelloWorld test/HelloWorld.class
+
+mkdir -p target/generated-sources/HalloWelt/test
+sed "s/%%/Hallo Welt/" < src/test/java/test/HelloWorldTemplate > target/generated-sources/HalloWelt/test/HelloWorld.java
+$JAVA_HOME/bin/javac target/generated-sources/HalloWelt/test/HelloWorld.java -d target/generated-sources/HalloWelt
+$JAVA_HOME/bin/jar -cf target/test-classes/HelloWorld2.jar -C target/generated-sources/HalloWelt test/HelloWorld.class
diff --git a/start/src/test/shell/makeTestJars.sh b/start/src/test/shell/makeTestJars.sh
index d3b92a1..d4dc9fc 100755
--- a/start/src/test/shell/makeTestJars.sh
+++ b/start/src/test/shell/makeTestJars.sh
@@ -15,13 +15,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-
-cd "$ACCUMULO_HOME/src/start"
 for x in A B C
 do
-    sed "s/testX/test$x/" < src/test/java/test/TestTemplate > src/test/java/test/TestObject.java
-    mkdir -p target/$x
-    javac -cp src/test/java src/test/java/test/TestObject.java -d target/$x
-    jar -cf src/test/resources/ClassLoaderTest$x/Test.jar -C target/$x test/TestObject.class
-    rm -f src/test/java/test/TestObject.java
+    mkdir -p target/generated-sources/$x/test target/test-classes/ClassLoaderTest$x
+    sed "s/testX/test$x/" < src/test/java/test/TestTemplate > target/generated-sources/$x/test/TestObject.java
+    $JAVA_HOME/bin/javac -cp target/test-classes target/generated-sources/$x/test/TestObject.java -d target/generated-sources/$x
+    $JAVA_HOME/bin/jar -cf target/test-classes/ClassLoaderTest$x/Test.jar -C target/generated-sources/$x test/TestObject.class
 done
diff --git a/test/pom.xml b/test/pom.xml
index a78ba7e..92aeeaf 100644
--- a/test/pom.xml
+++ b/test/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-test</artifactId>
   <name>Apache Accumulo Testing</name>
@@ -43,6 +43,10 @@
       <artifactId>guava</artifactId>
     </dependency>
     <dependency>
+      <groupId>commons-cli</groupId>
+      <artifactId>commons-cli</artifactId>
+    </dependency>
+    <dependency>
       <groupId>commons-codec</groupId>
       <artifactId>commons-codec</artifactId>
     </dependency>
@@ -51,6 +55,10 @@
       <artifactId>commons-configuration</artifactId>
     </dependency>
     <dependency>
+      <groupId>commons-httpclient</groupId>
+      <artifactId>commons-httpclient</artifactId>
+    </dependency>
+    <dependency>
       <groupId>commons-io</groupId>
       <artifactId>commons-io</artifactId>
     </dependency>
@@ -63,6 +71,10 @@
       <artifactId>jline</artifactId>
     </dependency>
     <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+    </dependency>
+    <dependency>
       <groupId>log4j</groupId>
       <artifactId>log4j</artifactId>
     </dependency>
@@ -72,6 +84,10 @@
     </dependency>
     <dependency>
       <groupId>org.apache.accumulo</groupId>
+      <artifactId>accumulo-examples-simple</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.accumulo</groupId>
       <artifactId>accumulo-fate</artifactId>
     </dependency>
     <dependency>
@@ -121,13 +137,36 @@
     </dependency>
     <dependency>
       <groupId>org.apache.commons</groupId>
-      <artifactId>commons-math</artifactId>
+      <artifactId>commons-lang3</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.commons</groupId>
+      <artifactId>commons-math3</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-client</artifactId>
     </dependency>
     <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-distcp</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-minicluster</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-minikdc</artifactId>
+      <exclusions>
+        <!-- Pulls in an older bouncycastle version -->
+        <exclusion>
+          <groupId>bouncycastle</groupId>
+          <artifactId>bcprov-jdk15</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
       <groupId>org.apache.thrift</groupId>
       <artifactId>libthrift</artifactId>
     </dependency>
@@ -136,60 +175,28 @@
       <artifactId>zookeeper</artifactId>
     </dependency>
     <dependency>
-      <groupId>commons-cli</groupId>
-      <artifactId>commons-cli</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>commons-httpclient</groupId>
-      <artifactId>commons-httpclient</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.accumulo</groupId>
-      <artifactId>accumulo-examples-simple</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-minicluster</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-minikdc</artifactId>
-      <scope>test</scope>
-      <exclusions>
-        <!-- Pulls in an older bouncycastle version -->
-        <exclusion>
-          <groupId>bouncycastle</groupId>
-          <artifactId>bcprov-jdk15</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-    <dependency>
       <groupId>org.bouncycastle</groupId>
       <artifactId>bcpkix-jdk15on</artifactId>
-      <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>org.bouncycastle</groupId>
       <artifactId>bcprov-jdk15on</artifactId>
-      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.easymock</groupId>
+      <artifactId>easymock</artifactId>
     </dependency>
     <dependency>
       <groupId>org.eclipse.jetty</groupId>
       <artifactId>jetty-server</artifactId>
-      <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-log4j12</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.accumulo</groupId>
+      <artifactId>accumulo-iterator-test-harness</artifactId>
       <scope>test</scope>
     </dependency>
   </dependencies>
@@ -211,6 +218,8 @@
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-failsafe-plugin</artifactId>
           <configuration>
+            <testSourceDirectory>${project.basedir}/src/main/java/</testSourceDirectory>
+            <testClassesDirectory>${project.build.directory}/classes/</testClassesDirectory>
             <systemPropertyVariables>
               <timeout.factor>${timeout.factor}</timeout.factor>
               <org.apache.accumulo.test.functional.useCredProviderForIT>${useCredProviderForIT}</org.apache.accumulo.test.functional.useCredProviderForIT>
@@ -230,48 +239,109 @@
             </systemPropertyVariables>
           </configuration>
         </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-jar-plugin</artifactId>
+          <executions>
+            <execution>
+              <id>create-iterator-test-jar</id>
+              <goals>
+                <goal>test-jar</goal>
+              </goals>
+              <phase>pre-integration-test</phase>
+              <configuration>
+                <finalName>TestIterators</finalName>
+                <classifier />
+                <includes>
+                  <include>org/apache/accumulo/test/functional/ValueReversingIterator.class</include>
+                </includes>
+              </configuration>
+            </execution>
+          </executions>
+        </plugin>
       </plugins>
     </pluginManagement>
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>exec-maven-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>check-for-misplaced-ITs</id>
+            <goals>
+              <goal>exec</goal>
+            </goals>
+            <phase>validate</phase>
+            <configuration>
+              <executable>bash</executable>
+              <arguments>
+                <argument>-c</argument>
+                <argument>! find src/test/java -name '*IT.java' -exec echo '[ERROR] {} should be in src/main/java' \; | grep 'src/test/java'</argument>
+              </arguments>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
   </build>
   <profiles>
     <profile>
-      <id>hadoop-default</id>
+      <!-- create shaded test jar appropriate for running ITs on MapReduce -->
+      <id>mrit</id>
       <activation>
         <property>
-          <name>!hadoop.profile</name>
+          <name>mrit</name>
         </property>
       </activation>
-      <properties>
-        <!-- Denotes intention and allows the enforcer plugin to pass when
-             the user is relying on default behavior; won't work to activate profile -->
-        <hadoop.profile>2</hadoop.profile>
-      </properties>
-      <dependencies>
-        <dependency>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-distcp</artifactId>
-          <scope>test</scope>
-        </dependency>
-      </dependencies>
-    </profile>
-    <!-- profile for building against Hadoop 2.x
-     XXX Since this is the default, make sure to sync hadoop-default when changing.
-    Activate using: mvn -Dhadoop.profile=2 -->
-    <profile>
-      <id>hadoop-2</id>
-      <activation>
-        <property>
-          <name>hadoop.profile</name>
-          <value>2</value>
-        </property>
-      </activation>
-      <dependencies>
-        <dependency>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-distcp</artifactId>
-          <scope>test</scope>
-        </dependency>
-      </dependencies>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-shade-plugin</artifactId>
+            <configuration>
+              <artifactSet>
+                <excludes>
+                  <exclude>com.google.auto.service</exclude>
+                  <exclude>com.google.auto</exclude>
+                  <exclude>javax.servlet:servlet-api</exclude>
+                  <exclude>org.apache.accumulo:accumulo-native</exclude>
+                  <exclude>org.slf4j:slf4j-log4j12</exclude>
+                </excludes>
+              </artifactSet>
+              <shadedArtifactAttached>true</shadedArtifactAttached>
+              <shadedClassifierName>mrit</shadedClassifierName>
+              <createDependencyReducedPom>false</createDependencyReducedPom>
+              <filters>
+                <filter>
+                  <artifact>*:*</artifact>
+                  <excludes>
+                    <exclude>META-INF/*.DSA</exclude>
+                    <exclude>META-INF/*.RSA</exclude>
+                    <exclude>META-INF/*.SF</exclude>
+                    <exclude>META-INF/DEPENDENCIES</exclude>
+                  </excludes>
+                </filter>
+              </filters>
+              <transformers>
+                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
+                  <manifestEntries>
+                    <Sealed>false</Sealed>
+                    <Main-Class>org.apache.accumulo.test.mrit.IntegrationTestMapReduce</Main-Class>
+                  </manifestEntries>
+                </transformer>
+              </transformers>
+            </configuration>
+            <executions>
+              <execution>
+                <id>create-shaded-mrit</id>
+                <goals>
+                  <goal>shade</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
     </profile>
   </profiles>
 </project>
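The new `check-for-misplaced-ITs` execution in the POM above runs at `validate` and fails the build if any `*IT.java` remains under `src/test/java` instead of `src/main/java` (the ITs must live in main sources so the shaded `mrit` jar can package them). The same walk-and-flag logic, sketched in plain Java against a hypothetical temp layout:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class MisplacedItCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical layout: one IT file left behind under src/test/java
        Path root = Files.createTempDirectory("itcheck");
        Path testJava = Files.createDirectories(root.resolve("src/test/java"));
        Files.createFile(testJava.resolve("FooIT.java"));

        // Mirror of the POM's `find src/test/java -name '*IT.java'` check
        boolean misplaced = false;
        try (Stream<Path> walk = Files.walk(testJava)) {
            for (Path p : (Iterable<Path>) walk::iterator) {
                if (p.getFileName().toString().endsWith("IT.java")) {
                    System.out.println("[ERROR] " + root.relativize(p) + " should be in src/main/java");
                    misplaced = true;
                }
            }
        }
        System.out.println(misplaced ? "build would fail" : "ok");
    }
}
```

In the POM itself this is a one-line bash `find | grep` negated with `!`, so a non-empty match makes `exec-maven-plugin` return non-zero and abort the build.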
diff --git a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java b/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
rename to test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
index e2b35f4..7d7b73a 100644
--- a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
+++ b/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
@@ -58,8 +58,8 @@
 /**
  * General Integration-Test base class that provides access to an Accumulo instance for testing. This instance could be MAC or a standalone instance.
  */
-public abstract class AccumuloClusterIT extends AccumuloIT implements MiniClusterConfigurationCallback, ClusterUsers {
-  private static final Logger log = LoggerFactory.getLogger(AccumuloClusterIT.class);
+public abstract class AccumuloClusterHarness extends AccumuloITBase implements MiniClusterConfigurationCallback, ClusterUsers {
+  private static final Logger log = LoggerFactory.getLogger(AccumuloClusterHarness.class);
   private static final String TRUE = Boolean.toString(true);
 
   public static enum ClusterType {
@@ -170,7 +170,8 @@
           UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
 
           // Open a connector as the system user (ensures the user will exist for us to assign permissions to)
-          Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken(systemUser.getPrincipal(), systemUser.getKeytab(), true));
+          UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
+          Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken());
 
           // Then, log back in as the "root" user and do the grant
           UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
diff --git a/test/src/test/java/org/apache/accumulo/harness/AccumuloIT.java b/test/src/main/java/org/apache/accumulo/harness/AccumuloITBase.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/harness/AccumuloIT.java
rename to test/src/main/java/org/apache/accumulo/harness/AccumuloITBase.java
index 03ee44c..d516c44 100644
--- a/test/src/test/java/org/apache/accumulo/harness/AccumuloIT.java
+++ b/test/src/main/java/org/apache/accumulo/harness/AccumuloITBase.java
@@ -19,6 +19,7 @@
 import static org.junit.Assert.assertTrue;
 
 import java.io.File;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.io.FileUtils;
 import org.junit.Rule;
@@ -30,8 +31,8 @@
 /**
  * Methods, setup and/or infrastructure which are common to any Accumulo integration test.
  */
-public class AccumuloIT {
-  private static final Logger log = LoggerFactory.getLogger(AccumuloIT.class);
+public class AccumuloITBase {
+  private static final Logger log = LoggerFactory.getLogger(AccumuloITBase.class);
 
   @Rule
   public TestName testName = new TestName();
@@ -92,7 +93,8 @@
     } catch (NumberFormatException exception) {
       log.warn("Could not parse timeout.factor, defaulting to no timeout.");
     }
-    return new Timeout(waitLonger * defaultTimeoutSeconds() * 1000);
+
+    return Timeout.builder().withTimeout(waitLonger * defaultTimeoutSeconds(), TimeUnit.SECONDS).withLookingForStuckThread(true).build();
   }
 
   /**
diff --git a/test/src/test/java/org/apache/accumulo/harness/MiniClusterConfigurationCallback.java b/test/src/main/java/org/apache/accumulo/harness/MiniClusterConfigurationCallback.java
similarity index 100%
rename from test/src/test/java/org/apache/accumulo/harness/MiniClusterConfigurationCallback.java
rename to test/src/main/java/org/apache/accumulo/harness/MiniClusterConfigurationCallback.java
diff --git a/test/src/test/java/org/apache/accumulo/harness/MiniClusterHarness.java b/test/src/main/java/org/apache/accumulo/harness/MiniClusterHarness.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/harness/MiniClusterHarness.java
rename to test/src/main/java/org/apache/accumulo/harness/MiniClusterHarness.java
index 57da534..e1df97e 100644
--- a/test/src/test/java/org/apache/accumulo/harness/MiniClusterHarness.java
+++ b/test/src/main/java/org/apache/accumulo/harness/MiniClusterHarness.java
@@ -72,24 +72,24 @@
     return create(MiniClusterHarness.class.getName(), Long.toString(COUNTER.incrementAndGet()), token, kdc);
   }
 
-  public MiniAccumuloClusterImpl create(AccumuloIT testBase, AuthenticationToken token) throws Exception {
+  public MiniAccumuloClusterImpl create(AccumuloITBase testBase, AuthenticationToken token) throws Exception {
     return create(testBase.getClass().getName(), testBase.testName.getMethodName(), token);
   }
 
-  public MiniAccumuloClusterImpl create(AccumuloIT testBase, AuthenticationToken token, TestingKdc kdc) throws Exception {
+  public MiniAccumuloClusterImpl create(AccumuloITBase testBase, AuthenticationToken token, TestingKdc kdc) throws Exception {
     return create(testBase, token, kdc, MiniClusterConfigurationCallback.NO_CALLBACK);
   }
 
-  public MiniAccumuloClusterImpl create(AccumuloIT testBase, AuthenticationToken token, TestingKdc kdc, MiniClusterConfigurationCallback configCallback)
+  public MiniAccumuloClusterImpl create(AccumuloITBase testBase, AuthenticationToken token, TestingKdc kdc, MiniClusterConfigurationCallback configCallback)
       throws Exception {
     return create(testBase.getClass().getName(), testBase.testName.getMethodName(), token, configCallback, kdc);
   }
 
-  public MiniAccumuloClusterImpl create(AccumuloClusterIT testBase, AuthenticationToken token, TestingKdc kdc) throws Exception {
+  public MiniAccumuloClusterImpl create(AccumuloClusterHarness testBase, AuthenticationToken token, TestingKdc kdc) throws Exception {
     return create(testBase.getClass().getName(), testBase.testName.getMethodName(), token, testBase, kdc);
   }
 
-  public MiniAccumuloClusterImpl create(AccumuloClusterIT testBase, AuthenticationToken token, MiniClusterConfigurationCallback callback) throws Exception {
+  public MiniAccumuloClusterImpl create(AccumuloClusterHarness testBase, AuthenticationToken token, MiniClusterConfigurationCallback callback) throws Exception {
     return create(testBase.getClass().getName(), testBase.testName.getMethodName(), token, callback);
   }
 
@@ -118,7 +118,7 @@
       rootPasswd = UUID.randomUUID().toString();
     }
 
-    File baseDir = AccumuloClusterIT.createTestDir(testClassName + "_" + testMethodName);
+    File baseDir = AccumuloClusterHarness.createTestDir(testClassName + "_" + testMethodName);
     MiniAccumuloConfigImpl cfg = new MiniAccumuloConfigImpl(baseDir, rootPasswd);
 
     // Enable native maps by default
@@ -128,7 +128,7 @@
     Configuration coreSite = new Configuration(false);
 
     // Setup SSL and credential providers if the properties request such
-    configureForEnvironment(cfg, getClass(), AccumuloClusterIT.getSslDir(baseDir), coreSite, kdc);
+    configureForEnvironment(cfg, getClass(), AccumuloClusterHarness.getSslDir(baseDir), coreSite, kdc);
 
     // Invoke the callback for tests to configure MAC before it starts
     configCallback.configureMiniCluster(cfg, coreSite);
diff --git a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java b/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
similarity index 86%
rename from test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
rename to test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
index f66a192..544b5de 100644
--- a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
+++ b/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
@@ -40,16 +40,16 @@
 /**
  * Convenience class which starts a single MAC instance for a test to leverage.
  *
- * There isn't a good way to build this off of the {@link AccumuloClusterIT} (as would be the logical place) because we need to start the MiniAccumuloCluster in
- * a static BeforeClass-annotated method. Because it is static and invoked before any other BeforeClass methods in the implementation, the actual test classes
- * can't expose any information to tell the base class that it is to perform the one-MAC-per-class semantics.
+ * There isn't a good way to build this off of the {@link AccumuloClusterHarness} (as would be the logical place) because we need to start the
+ * MiniAccumuloCluster in a static BeforeClass-annotated method. Because it is static and invoked before any other BeforeClass methods in the implementation,
+ * the actual test classes can't expose any information to tell the base class that it is to perform the one-MAC-per-class semantics.
  *
  * Implementations of this class must be sure to invoke {@link #startMiniCluster()} or {@link #startMiniClusterWithConfig(MiniClusterConfigurationCallback)} in
  * a method annotated with the {@link org.junit.BeforeClass} JUnit annotation and {@link #stopMiniCluster()} in a method annotated with the
  * {@link org.junit.AfterClass} JUnit annotation.
  */
-public abstract class SharedMiniClusterIT extends AccumuloIT implements ClusterUsers {
-  private static final Logger log = LoggerFactory.getLogger(SharedMiniClusterIT.class);
+public abstract class SharedMiniClusterBase extends AccumuloITBase implements ClusterUsers {
+  private static final Logger log = LoggerFactory.getLogger(SharedMiniClusterBase.class);
   public static final String TRUE = Boolean.toString(true);
 
   private static String principal = "root";
@@ -89,14 +89,14 @@
       // Login as the client
       ClusterUser rootUser = krb.getRootUser();
       // Get the krb token
-      principal = rootUser.getPrincipal();
-      token = new KerberosToken(principal, rootUser.getKeytab(), true);
+      UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+      token = new KerberosToken();
     } else {
       rootPassword = "rootPasswordShared1";
       token = new PasswordToken(rootPassword);
     }
 
-    cluster = harness.create(SharedMiniClusterIT.class.getName(), System.currentTimeMillis() + "_" + new Random().nextInt(Short.MAX_VALUE), token,
+    cluster = harness.create(SharedMiniClusterBase.class.getName(), System.currentTimeMillis() + "_" + new Random().nextInt(Short.MAX_VALUE), token,
         miniClusterCallback, krb);
     cluster.start();
 
@@ -105,7 +105,8 @@
       final ClusterUser systemUser = krb.getAccumuloServerUser(), rootUser = krb.getRootUser();
       // Login as the trace user
       // Open a connector as the system user (ensures the user will exist for us to assign permissions to)
-      Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken(systemUser.getPrincipal(), systemUser.getKeytab(), true));
+      UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
+      Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken());
 
       // Then, log back in as the "root" user and do the grant
       UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
@@ -193,7 +194,7 @@
   @Override
   public ClusterUser getUser(int offset) {
     if (null == krb) {
-      String user = SharedMiniClusterIT.class.getName() + "_" + testName.getMethodName() + "_" + offset;
+      String user = SharedMiniClusterBase.class.getName() + "_" + testName.getMethodName() + "_" + offset;
       // Password is the username
       return new ClusterUser(user, user);
     } else {
diff --git a/test/src/test/java/org/apache/accumulo/harness/TestingKdc.java b/test/src/main/java/org/apache/accumulo/harness/TestingKdc.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/harness/TestingKdc.java
rename to test/src/main/java/org/apache/accumulo/harness/TestingKdc.java
index 0b3e395..8e976a8 100644
--- a/test/src/test/java/org/apache/accumulo/harness/TestingKdc.java
+++ b/test/src/main/java/org/apache/accumulo/harness/TestingKdc.java
@@ -57,6 +57,8 @@
 
   public static File computeKdcDir() {
     File targetDir = new File(System.getProperty("user.dir"), "target");
+    if (!targetDir.exists())
+      Assert.assertTrue(targetDir.mkdirs());
     Assert.assertTrue("Could not find Maven target directory: " + targetDir, targetDir.exists() && targetDir.isDirectory());
 
     // Create the directories: target/kerberos/minikdc
diff --git a/test/src/test/java/org/apache/accumulo/harness/conf/AccumuloClusterConfiguration.java b/test/src/main/java/org/apache/accumulo/harness/conf/AccumuloClusterConfiguration.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/harness/conf/AccumuloClusterConfiguration.java
rename to test/src/main/java/org/apache/accumulo/harness/conf/AccumuloClusterConfiguration.java
index 3ce83b8..31ed94a 100644
--- a/test/src/test/java/org/apache/accumulo/harness/conf/AccumuloClusterConfiguration.java
+++ b/test/src/main/java/org/apache/accumulo/harness/conf/AccumuloClusterConfiguration.java
@@ -18,7 +18,7 @@
 
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.harness.AccumuloClusterIT.ClusterType;
+import org.apache.accumulo.harness.AccumuloClusterHarness.ClusterType;
 
 /**
  * Base functionality that must be provided as configuration to the test
diff --git a/test/src/test/java/org/apache/accumulo/harness/conf/AccumuloClusterPropertyConfiguration.java b/test/src/main/java/org/apache/accumulo/harness/conf/AccumuloClusterPropertyConfiguration.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/harness/conf/AccumuloClusterPropertyConfiguration.java
rename to test/src/main/java/org/apache/accumulo/harness/conf/AccumuloClusterPropertyConfiguration.java
index 4e04db9..5062384 100644
--- a/test/src/test/java/org/apache/accumulo/harness/conf/AccumuloClusterPropertyConfiguration.java
+++ b/test/src/main/java/org/apache/accumulo/harness/conf/AccumuloClusterPropertyConfiguration.java
@@ -27,7 +27,7 @@
 import java.util.Map.Entry;
 import java.util.Properties;
 
-import org.apache.accumulo.harness.AccumuloClusterIT.ClusterType;
+import org.apache.accumulo.harness.AccumuloClusterHarness.ClusterType;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -139,7 +139,7 @@
         throw new IllegalArgumentException("Unknown ClusterType: " + type);
     }
 
-    Map<String,String> configuration = new HashMap<String,String>();
+    Map<String,String> configuration = new HashMap<>();
 
     Properties systemProperties = System.getProperties();
 
diff --git a/test/src/test/java/org/apache/accumulo/harness/conf/AccumuloMiniClusterConfiguration.java b/test/src/main/java/org/apache/accumulo/harness/conf/AccumuloMiniClusterConfiguration.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/harness/conf/AccumuloMiniClusterConfiguration.java
rename to test/src/main/java/org/apache/accumulo/harness/conf/AccumuloMiniClusterConfiguration.java
index 4579182..dea72e5 100644
--- a/test/src/test/java/org/apache/accumulo/harness/conf/AccumuloMiniClusterConfiguration.java
+++ b/test/src/main/java/org/apache/accumulo/harness/conf/AccumuloMiniClusterConfiguration.java
@@ -26,8 +26,8 @@
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.harness.AccumuloClusterIT;
-import org.apache.accumulo.harness.AccumuloClusterIT.ClusterType;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.harness.AccumuloClusterHarness.ClusterType;
 import org.apache.accumulo.harness.MiniClusterHarness;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
@@ -65,7 +65,7 @@
   @Override
   public String getAdminPrincipal() {
     if (saslEnabled) {
-      return AccumuloClusterIT.getKdc().getRootUser().getPrincipal();
+      return AccumuloClusterHarness.getKdc().getRootUser().getPrincipal();
     } else {
       String principal = conf.get(ACCUMULO_MINI_PRINCIPAL_KEY);
       if (null == principal) {
@@ -84,9 +84,10 @@
       conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
       UserGroupInformation.setConfiguration(conf);
 
-      ClusterUser rootUser = AccumuloClusterIT.getKdc().getRootUser();
+      ClusterUser rootUser = AccumuloClusterHarness.getKdc().getRootUser();
       try {
-        return new KerberosToken(rootUser.getPrincipal(), rootUser.getKeytab(), true);
+        UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+        return new KerberosToken();
       } catch (IOException e) {
         throw new RuntimeException(e);
       }
diff --git a/test/src/test/java/org/apache/accumulo/harness/conf/StandaloneAccumuloClusterConfiguration.java b/test/src/main/java/org/apache/accumulo/harness/conf/StandaloneAccumuloClusterConfiguration.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/harness/conf/StandaloneAccumuloClusterConfiguration.java
rename to test/src/main/java/org/apache/accumulo/harness/conf/StandaloneAccumuloClusterConfiguration.java
index 4cf145b..99a1cdc 100644
--- a/test/src/test/java/org/apache/accumulo/harness/conf/StandaloneAccumuloClusterConfiguration.java
+++ b/test/src/main/java/org/apache/accumulo/harness/conf/StandaloneAccumuloClusterConfiguration.java
@@ -34,9 +34,10 @@
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.harness.AccumuloClusterIT.ClusterType;
+import org.apache.accumulo.harness.AccumuloClusterHarness.ClusterType;
 import org.apache.commons.configuration.ConfigurationException;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -162,7 +163,8 @@
     if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
       File keytab = getAdminKeytab();
       try {
-        return new KerberosToken(getAdminPrincipal(), keytab, true);
+        UserGroupInformation.loginUserFromKeytab(getAdminPrincipal(), keytab.getAbsolutePath());
+        return new KerberosToken();
       } catch (IOException e) {
         // The user isn't logged in
         throw new RuntimeException("Failed to create KerberosToken", e);
diff --git a/test/src/test/java/org/apache/accumulo/test/ArbitraryTablePropertiesIT.java b/test/src/main/java/org/apache/accumulo/test/ArbitraryTablePropertiesIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/ArbitraryTablePropertiesIT.java
rename to test/src/main/java/org/apache/accumulo/test/ArbitraryTablePropertiesIT.java
index 40b7e18..0c38464 100644
--- a/test/src/test/java/org/apache/accumulo/test/ArbitraryTablePropertiesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ArbitraryTablePropertiesIT.java
@@ -25,7 +25,7 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -33,7 +33,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class ArbitraryTablePropertiesIT extends SharedMiniClusterIT {
+public class ArbitraryTablePropertiesIT extends SharedMiniClusterBase {
   private static final Logger log = LoggerFactory.getLogger(ArbitraryTablePropertiesIT.class);
 
   @Override
@@ -43,12 +43,12 @@
 
   @BeforeClass
   public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
   }
 
   @AfterClass
   public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   // Test set, get, and remove arbitrary table properties on the root account
diff --git a/test/src/test/java/org/apache/accumulo/test/AssignmentThreadsIT.java b/test/src/main/java/org/apache/accumulo/test/AssignmentThreadsIT.java
similarity index 87%
rename from test/src/test/java/org/apache/accumulo/test/AssignmentThreadsIT.java
rename to test/src/main/java/org/apache/accumulo/test/AssignmentThreadsIT.java
index fd7ea6c..7253e45 100644
--- a/test/src/test/java/org/apache/accumulo/test/AssignmentThreadsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/AssignmentThreadsIT.java
@@ -21,18 +21,22 @@
 import java.util.Random;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 // ACCUMULO-1177
-public class AssignmentThreadsIT extends ConfigurableMacIT {
+@Category(PerformanceTest.class)
+public class AssignmentThreadsIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -63,7 +67,7 @@
     Connector c = getConnector();
     log.info("Creating table");
     c.tableOperations().create(tableName);
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 1000; i++) {
       splits.add(new Text(randomHex(8)));
     }
@@ -82,7 +86,7 @@
     log.info("Taking table offline, again");
     c.tableOperations().offline(tableName, true);
     // wait >10 seconds for thread pool to update
-    UtilWaitThread.sleep(Math.max(0, now + 11 * 1000 - System.currentTimeMillis()));
+    sleepUninterruptibly(Math.max(0, now + 11 * 1000 - System.currentTimeMillis()), TimeUnit.MILLISECONDS);
     now = System.currentTimeMillis();
     log.info("Bringing table back online");
     c.tableOperations().online(tableName, true);
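The hunks above (and several that follow) replace Accumulo's removed `UtilWaitThread.sleep` with Guava's `Uninterruptibles.sleepUninterruptibly`. A minimal stdlib sketch of the contract that method provides (this is an illustration of the behavior, not Guava's implementation): keep sleeping until the full duration has elapsed even if interrupts arrive, then restore the thread's interrupt flag on exit.

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
  // Sketch of the contract of Guava's Uninterruptibles.sleepUninterruptibly:
  // sleep the full duration even across interrupts, and if an interrupt
  // arrived while sleeping, re-set the interrupt flag before returning.
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remaining = unit.toNanos(duration);
      long end = System.nanoTime() + remaining;
      while (remaining > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remaining);
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, but keep sleeping
        }
        remaining = end - System.nanoTime();
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
```

This is why the callers no longer declare or swallow `InterruptedException`: the interrupt is deferred, not lost.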
diff --git a/test/src/test/java/org/apache/accumulo/test/AuditMessageIT.java b/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/AuditMessageIT.java
rename to test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
index 268965f..1f47793 100644
--- a/test/src/test/java/org/apache/accumulo/test/AuditMessageIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
@@ -50,7 +50,7 @@
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.security.AuditedSecurityOperation;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.io.LineIterator;
 import org.apache.hadoop.io.Text;
@@ -64,7 +64,7 @@
  * MiniAccumuloClusterTest sets up the log4j stuff differently to an installed instance, instead piping everything through stdout and writing to a set location
  * so we have to find the logs and grep the bits we need out.
  */
-public class AuditMessageIT extends ConfigurableMacIT {
+public class AuditMessageIT extends ConfigurableMacBase {
 
   private static final String AUDIT_USER_1 = "AuditUser1";
   private static final String AUDIT_USER_2 = "AuditUser2";
@@ -94,7 +94,7 @@
   private Connector conn;
 
   private static ArrayList<String> findAuditMessage(ArrayList<String> input, String pattern) {
-    ArrayList<String> result = new ArrayList<String>();
+    ArrayList<String> result = new ArrayList<>();
     for (String s : input) {
       if (s.matches(".*" + pattern + ".*"))
         result.add(s);
@@ -125,7 +125,7 @@
     // Grab the audit messages
     System.out.println("Start of captured audit messages for step " + stepName);
 
-    ArrayList<String> result = new ArrayList<String>();
+    ArrayList<String> result = new ArrayList<>();
     File[] files = getCluster().getConfig().getLogDir().listFiles();
     assertNotNull(files);
     for (File file : files) {
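The `findAuditMessage` helper shown in the hunk above filters captured log lines by wrapping the pattern in `".*" … ".*"`, because `String.matches` requires a full-line match. A standalone version of that filter, lifted out of the test for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class AuditFilter {
  // Standalone version of AuditMessageIT.findAuditMessage(): keep the lines
  // whose text matches the pattern anywhere in the line. matches() anchors
  // at both ends, hence the ".*" padding on each side.
  public static List<String> findAuditMessage(List<String> input, String pattern) {
    List<String> result = new ArrayList<>();
    for (String s : input) {
      if (s.matches(".*" + pattern + ".*")) {
        result.add(s);
      }
    }
    return result;
  }
}
```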
diff --git a/test/src/test/java/org/apache/accumulo/test/BadDeleteMarkersCreatedIT.java b/test/src/main/java/org/apache/accumulo/test/BadDeleteMarkersCreatedIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/BadDeleteMarkersCreatedIT.java
rename to test/src/main/java/org/apache/accumulo/test/BadDeleteMarkersCreatedIT.java
index 25337b2..aa1ad54 100644
--- a/test/src/test/java/org/apache/accumulo/test/BadDeleteMarkersCreatedIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/BadDeleteMarkersCreatedIT.java
@@ -20,6 +20,7 @@
 import java.util.Map.Entry;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.Connector;
@@ -32,11 +33,10 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
@@ -48,13 +48,15 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 // Accumulo3047
-public class BadDeleteMarkersCreatedIT extends AccumuloClusterIT {
+public class BadDeleteMarkersCreatedIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(BadDeleteMarkersCreatedIT.class);
 
   @Override
   public int defaultTimeoutSeconds() {
-    return 60;
+    return 120;
   }
 
   @Override
@@ -147,7 +149,7 @@
     Assert.assertNotNull("Expected to find a tableId", tableId);
 
     // add some splits
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 10; i++) {
       splits.add(new Text("" + i));
     }
@@ -158,7 +160,7 @@
     c.tableOperations().delete(tableName);
     log.info("Sleeping to let garbage collector run");
     // let gc run
-    UtilWaitThread.sleep(timeoutFactor * 15 * 1000);
+    sleepUninterruptibly(timeoutFactor * 15, TimeUnit.SECONDS);
     log.info("Verifying that delete markers were deleted");
     // look for delete markers
     Scanner scanner = c.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
diff --git a/test/src/test/java/org/apache/accumulo/test/BalanceFasterIT.java b/test/src/main/java/org/apache/accumulo/test/BalanceFasterIT.java
similarity index 80%
rename from test/src/test/java/org/apache/accumulo/test/BalanceFasterIT.java
rename to test/src/main/java/org/apache/accumulo/test/BalanceFasterIT.java
index 2cc5d34..327b42b 100644
--- a/test/src/test/java/org/apache/accumulo/test/BalanceFasterIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/BalanceFasterIT.java
@@ -17,6 +17,7 @@
 package org.apache.accumulo.test;
 
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeFalse;
 
 import java.util.HashMap;
 import java.util.Iterator;
@@ -24,6 +25,7 @@
 import java.util.Map.Entry;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
@@ -32,39 +34,49 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 // ACCUMULO-2952
-public class BalanceFasterIT extends ConfigurableMacIT {
+@Category(PerformanceTest.class)
+public class BalanceFasterIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
     cfg.setNumTservers(3);
   }
 
+  @BeforeClass
+  public static void checkMR() {
+    assumeFalse(IntegrationTestMapReduce.isMapReduce());
+  }
+
   @Test(timeout = 90 * 1000)
   public void test() throws Exception {
     // create a table, add a bunch of splits
     String tableName = getUniqueNames(1)[0];
     Connector conn = getConnector();
     conn.tableOperations().create(tableName);
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 1000; i++) {
       splits.add(new Text("" + i));
     }
     conn.tableOperations().addSplits(tableName, splits);
     // give a short wait for balancing
-    UtilWaitThread.sleep(10 * 1000);
+    sleepUninterruptibly(10, TimeUnit.SECONDS);
     // find out where the tablets are
     Scanner s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     s.fetchColumnFamily(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME);
     s.setRange(MetadataSchema.TabletsSection.getRange());
-    Map<String,Integer> counts = new HashMap<String,Integer>();
+    Map<String,Integer> counts = new HashMap<>();
     while (true) {
       int total = 0;
       counts.clear();
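The loop at the end of BalanceFasterIT tallies tablet locations per tablet server from the metadata scan and waits until the counts even out across the three tservers. A self-contained sketch of that tally-and-compare step (the `isBalanced` helper and the tolerance value are illustrative, not from the original test):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BalanceCheck {
  // Count how many tablets each server hosts, then compare the extremes:
  // the table is considered balanced when no server is more than
  // `tolerance` tablets away from any other.
  public static boolean isBalanced(List<String> tabletLocations, int tolerance) {
    Map<String,Integer> counts = new HashMap<>();
    for (String server : tabletLocations) {
      counts.merge(server, 1, Integer::sum);
    }
    int min = Collections.min(counts.values());
    int max = Collections.max(counts.values());
    return max - min <= tolerance;
  }
}
```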
diff --git a/test/src/test/java/org/apache/accumulo/test/BalanceIT.java b/test/src/main/java/org/apache/accumulo/test/BalanceIT.java
similarity index 73%
rename from test/src/test/java/org/apache/accumulo/test/BalanceIT.java
rename to test/src/main/java/org/apache/accumulo/test/BalanceIT.java
index f793925..fa1d857 100644
--- a/test/src/test/java/org/apache/accumulo/test/BalanceIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/BalanceIT.java
@@ -20,25 +20,33 @@
 import java.util.TreeSet;
 
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
-public class BalanceIT extends ConfigurableMacIT {
+public class BalanceIT extends AccumuloClusterHarness {
+  private static final Logger log = LoggerFactory.getLogger(BalanceIT.class);
 
-  @Test(timeout = 60 * 1000)
+  @Override
+  public int defaultTimeoutSeconds() {
+    return 60;
+  }
+
+  @Test
   public void testBalance() throws Exception {
     String tableName = getUniqueNames(1)[0];
     Connector c = getConnector();
-    System.out.println("Creating table");
+    log.info("Creating table");
     c.tableOperations().create(tableName);
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 10; i++) {
       splits.add(new Text("" + i));
     }
-    System.out.println("Adding splits");
+    log.info("Adding splits");
     c.tableOperations().addSplits(tableName, splits);
-    System.out.println("Waiting for balance");
+    log.info("Waiting for balance");
     c.instanceOperations().waitForBalance();
   }
 }
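Alongside the harness rename, this hunk (like many in this diff) rewrites `new TreeSet<Text>()` as `new TreeSet<>()`: the Java 7 diamond operator lets the compiler infer the type argument from the declaration. A sketch mirroring BalanceIT's split generation, with `String` standing in for Hadoop's `Text`:

```java
import java.util.SortedSet;
import java.util.TreeSet;

public class SplitPoints {
  // Mirrors the split generation in BalanceIT. The diamond operator
  // (new TreeSet<>()) infers the element type from the declared variable,
  // which is why the diff rewrites every new TreeSet<Text>() and
  // new ArrayList<String>() this way.
  public static SortedSet<String> tenSplits() {
    SortedSet<String> splits = new TreeSet<>();
    for (int i = 0; i < 10; i++) {
      splits.add("" + i);
    }
    return splits;
  }
}
```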
diff --git a/test/src/test/java/org/apache/accumulo/test/BalanceWithOfflineTableIT.java b/test/src/main/java/org/apache/accumulo/test/BalanceWithOfflineTableIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/BalanceWithOfflineTableIT.java
rename to test/src/main/java/org/apache/accumulo/test/BalanceWithOfflineTableIT.java
index 2d79dd8..9dad4e9 100644
--- a/test/src/test/java/org/apache/accumulo/test/BalanceWithOfflineTableIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/BalanceWithOfflineTableIT.java
@@ -25,13 +25,13 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.util.SimpleThreadPool;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
 // ACCUMULO-3692
-public class BalanceWithOfflineTableIT extends ConfigurableMacIT {
+public class BalanceWithOfflineTableIT extends ConfigurableMacBase {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -50,7 +50,7 @@
     final Connector c = getConnector();
     log.info("Creating table " + tableName);
     c.tableOperations().create(tableName);
-    ;
+
     final SortedSet<Text> splits = new TreeSet<>();
     for (String split : "a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z".split(",")) {
       splits.add(new Text(split));
diff --git a/test/src/test/java/org/apache/accumulo/test/BatchWriterIT.java b/test/src/main/java/org/apache/accumulo/test/BatchWriterIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/BatchWriterIT.java
rename to test/src/main/java/org/apache/accumulo/test/BatchWriterIT.java
index b1fe900..11fc595 100644
--- a/test/src/test/java/org/apache/accumulo/test/BatchWriterIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/BatchWriterIT.java
@@ -21,10 +21,10 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Test;
 
-public class BatchWriterIT extends AccumuloClusterIT {
+public class BatchWriterIT extends AccumuloClusterHarness {
 
   @Override
   public int defaultTimeoutSeconds() {
diff --git a/test/src/main/java/org/apache/accumulo/test/BatchWriterInTabletServerIT.java b/test/src/main/java/org/apache/accumulo/test/BatchWriterInTabletServerIT.java
new file mode 100644
index 0000000..6bd5da4
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/BatchWriterInTabletServerIT.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import com.google.common.collect.Iterators;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.PartialKey;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.LongCombiner;
+import org.apache.accumulo.core.iterators.user.SummingCombiner;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.io.Text;
+import org.apache.log4j.Logger;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Map;
+
+/**
+ * Test writing to another table from inside an iterator.
+ *
+ * @see BatchWriterIterator
+ */
+public class BatchWriterInTabletServerIT extends AccumuloClusterHarness {
+  private static final Logger log = Logger.getLogger(BatchWriterInTabletServerIT.class);
+
+  @Override
+  public boolean canRunTest(ClusterType type) {
+    return ClusterType.MINI == type;
+  }
+
+  /**
+   * This test should succeed.
+   */
+  @Test
+  public void testNormalWrite() throws Exception {
+    String[] uniqueNames = getUniqueNames(2);
+    String t1 = uniqueNames[0], t2 = uniqueNames[1];
+    Connector c = getConnector();
+    int numEntriesToWritePerEntry = 50;
+    IteratorSetting itset = BatchWriterIterator.iteratorSetting(6, 0, 15, 1000, numEntriesToWritePerEntry, t2, c, getAdminToken(), false, false);
+    test(t1, t2, c, itset, numEntriesToWritePerEntry);
+  }
+
+  /**
+   * Fixed by ACCUMULO-4229.
+   * <p>
+   * This tests a situation that a client which shares a LocatorCache with the tablet server may fall into. Before the problem was fixed, adding a split after
+   * the Locator cache falls out of sync caused the BatchWriter to continuously attempt to write to an old, closed tablet. It would do so for 15 seconds until a
+   * timeout on the BatchWriter.
+   */
+  @Test
+  public void testClearLocatorAndSplitWrite() throws Exception {
+    String[] uniqueNames = getUniqueNames(2);
+    String t1 = uniqueNames[0], t2 = uniqueNames[1];
+    Connector c = getConnector();
+    int numEntriesToWritePerEntry = 50;
+    IteratorSetting itset = BatchWriterIterator.iteratorSetting(6, 0, 15, 1000, numEntriesToWritePerEntry, t2, c, getAdminToken(), true, true);
+    test(t1, t2, c, itset, numEntriesToWritePerEntry);
+  }
+
+  private void test(String t1, String t2, Connector c, IteratorSetting itset, int numEntriesToWritePerEntry) throws Exception {
+    // Write an entry to t1
+    c.tableOperations().create(t1);
+    Key k = new Key(new Text("row"), new Text("cf"), new Text("cq"));
+    Value v = new Value("1".getBytes());
+    {
+      BatchWriterConfig config = new BatchWriterConfig();
+      config.setMaxMemory(0);
+      BatchWriter writer = c.createBatchWriter(t1, config);
+      Mutation m = new Mutation(k.getRow());
+      m.put(k.getColumnFamily(), k.getColumnQualifier(), v);
+      writer.addMutation(m);
+      writer.close();
+    }
+
+    // Create t2 with a combiner to count entries written to it
+    c.tableOperations().create(t2);
+    IteratorSetting summer = new IteratorSetting(2, "summer", SummingCombiner.class);
+    LongCombiner.setEncodingType(summer, LongCombiner.Type.STRING);
+    LongCombiner.setCombineAllColumns(summer, true);
+    c.tableOperations().attachIterator(t2, summer);
+
+    Map.Entry<Key,Value> actual;
+    // Scan t1 with an iterator that writes to table t2
+    Scanner scanner = c.createScanner(t1, Authorizations.EMPTY);
+    scanner.addScanIterator(itset);
+    actual = Iterators.getOnlyElement(scanner.iterator());
+    Assert.assertTrue(actual.getKey().equals(k, PartialKey.ROW_COLFAM_COLQUAL));
+    Assert.assertEquals(BatchWriterIterator.SUCCESS_VALUE, actual.getValue());
+    scanner.close();
+
+    // ensure entries correctly wrote to table t2
+    scanner = c.createScanner(t2, Authorizations.EMPTY);
+    actual = Iterators.getOnlyElement(scanner.iterator());
+    log.debug("t2 entry is " + actual.getKey().toStringNoTime() + " -> " + actual.getValue());
+    Assert.assertTrue(actual.getKey().equals(k, PartialKey.ROW_COLFAM_COLQUAL));
+    Assert.assertEquals(numEntriesToWritePerEntry, Integer.parseInt(actual.getValue().toString()));
+    scanner.close();
+
+    c.tableOperations().delete(t1);
+    c.tableOperations().delete(t2);
+  }
+
+}
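The test above attaches a SummingCombiner to `t2` so that the 50 identical entries written by the iterator collapse into a single entry whose value is their sum, which the final scan then asserts. A stdlib sketch of that collapsing behavior (keys simplified to strings; this illustrates the effect, not Accumulo's combiner itself):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SumCombineSketch {
  // Collapse entries that share a key by summing their values, the way the
  // SummingCombiner on t2 presents the data to the verifying scanner.
  public static Map<String,Long> combine(List<Map.Entry<String,Long>> entries) {
    Map<String,Long> out = new LinkedHashMap<>();
    for (Map.Entry<String,Long> e : entries) {
      out.merge(e.getKey(), e.getValue(), Long::sum);
    }
    return out;
  }
}
```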
diff --git a/test/src/main/java/org/apache/accumulo/test/BatchWriterIterator.java b/test/src/main/java/org/apache/accumulo/test/BatchWriterIterator.java
new file mode 100644
index 0000000..6a6604f
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/BatchWriterIterator.java
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.ClientConfiguration;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.ZooKeeperInstance;
+import org.apache.accumulo.core.client.impl.TabletLocator;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.apache.accumulo.test.util.SerializationUtil;
+import org.apache.hadoop.io.Text;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Iterator that opens a BatchWriter and writes to another table.
+ * <p>
+ * For each entry passed to this iterator, this writes a configured number of entries with the same key to another table and passes the entry downstream of this
+ * iterator with its value replaced by either "{@value #SUCCESS_STRING}" or a description of what failed. Success means all entries were written to the result
+ * table within the timeout period; failure means at least one entry took longer than the timeout.
+ * <p>
+ * Configure this iterator by calling the static {@link #iteratorSetting} method.
+ */
+public class BatchWriterIterator extends WrappingIterator {
+  private static final Logger log = LoggerFactory.getLogger(BatchWriterIterator.class);
+
+  private Map<String,String> originalOptions; // remembered for deepCopy
+
+  private int sleepAfterFirstWrite = 0;
+  private int numEntriesToWritePerEntry = 10;
+  private long batchWriterTimeout = 0;
+  private long batchWriterMaxMemory = 0;
+  private boolean clearCacheAfterFirstWrite = false;
+  private boolean splitAfterFirstWrite = false;
+
+  public static final String OPT_sleepAfterFirstWrite = "sleepAfterFirstWrite", OPT_numEntriesToWritePerEntry = "numEntriesToWritePerEntry",
+      OPT_batchWriterTimeout = "batchWriterTimeout", OPT_batchWriterMaxMemory = "batchWriterMaxMemory",
+      OPT_clearCacheAfterFirstWrite = "clearCacheAfterFirstWrite", OPT_splitAfterFirstWrite = "splitAfterFirstWrite";
+
+  private String instanceName;
+  private String tableName;
+  private String zookeeperHost;
+  private int zookeeperTimeout = -1;
+  private String username;
+  private AuthenticationToken auth = null;
+
+  public static final String ZOOKEEPERHOST = "zookeeperHost", INSTANCENAME = "instanceName", TABLENAME = "tableName", USERNAME = "username",
+      ZOOKEEPERTIMEOUT = "zookeeperTimeout", AUTHENTICATION_TOKEN = "authenticationToken", // base64 encoding of token
+      AUTHENTICATION_TOKEN_CLASS = "authenticationTokenClass"; // class of token
+
+  private BatchWriter batchWriter;
+  private boolean firstWrite = true;
+  private Value topValue = null;
+  private Connector connector;
+
+  public static final String SUCCESS_STRING = "success";
+  public static final Value SUCCESS_VALUE = new Value(SUCCESS_STRING.getBytes());
+
+  public static IteratorSetting iteratorSetting(int priority, int sleepAfterFirstWrite, long batchWriterTimeout, long batchWriterMaxMemory,
+      int numEntriesToWrite, String tableName, Connector connector, AuthenticationToken token, boolean clearCacheAfterFirstWrite, boolean splitAfterFirstWrite) {
+    return iteratorSetting(priority, sleepAfterFirstWrite, batchWriterTimeout, batchWriterMaxMemory, numEntriesToWrite, tableName, connector.getInstance()
+        .getZooKeepers(), connector.getInstance().getInstanceName(), connector.getInstance().getZooKeepersSessionTimeOut(), connector.whoami(), token,
+        clearCacheAfterFirstWrite, splitAfterFirstWrite);
+  }
+
+  public static IteratorSetting iteratorSetting(int priority, int sleepAfterFirstWrite, long batchWriterTimeout, long batchWriterMaxMemory,
+      int numEntriesToWrite, String tableName, String zookeeperHost, String instanceName, int zookeeperTimeout, String username, AuthenticationToken token,
+      boolean clearCacheAfterFirstWrite, boolean splitAfterFirstWrite) {
+    IteratorSetting itset = new IteratorSetting(priority, BatchWriterIterator.class);
+    itset.addOption(OPT_sleepAfterFirstWrite, Integer.toString(sleepAfterFirstWrite));
+    itset.addOption(OPT_numEntriesToWritePerEntry, Integer.toString(numEntriesToWrite));
+    itset.addOption(OPT_batchWriterTimeout, Long.toString(batchWriterTimeout));
+    itset.addOption(OPT_batchWriterMaxMemory, Long.toString(batchWriterMaxMemory));
+    itset.addOption(OPT_clearCacheAfterFirstWrite, Boolean.toString(clearCacheAfterFirstWrite));
+    itset.addOption(OPT_splitAfterFirstWrite, Boolean.toString(splitAfterFirstWrite));
+
+    itset.addOption(TABLENAME, tableName);
+    itset.addOption(ZOOKEEPERHOST, zookeeperHost);
+    itset.addOption(ZOOKEEPERTIMEOUT, Integer.toString(zookeeperTimeout));
+    itset.addOption(INSTANCENAME, instanceName);
+    itset.addOption(USERNAME, username);
+    itset.addOption(AUTHENTICATION_TOKEN_CLASS, token.getClass().getName());
+    itset.addOption(AUTHENTICATION_TOKEN, SerializationUtil.serializeWritableBase64(token));
+
+    return itset;
+  }
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+    super.init(source, options, env);
+    parseOptions(options);
+    initBatchWriter();
+  }
+
+  private void parseOptions(Map<String,String> options) {
+    this.originalOptions = new HashMap<>(options);
+
+    if (options.containsKey(OPT_numEntriesToWritePerEntry))
+      numEntriesToWritePerEntry = Integer.parseInt(options.get(OPT_numEntriesToWritePerEntry));
+    if (options.containsKey(OPT_sleepAfterFirstWrite))
+      sleepAfterFirstWrite = Integer.parseInt(options.get(OPT_sleepAfterFirstWrite));
+    if (options.containsKey(OPT_batchWriterTimeout))
+      batchWriterTimeout = Long.parseLong(options.get(OPT_batchWriterTimeout));
+    if (options.containsKey(OPT_batchWriterMaxMemory))
+      batchWriterMaxMemory = Long.parseLong(options.get(OPT_batchWriterMaxMemory));
+    if (options.containsKey(OPT_clearCacheAfterFirstWrite))
+      clearCacheAfterFirstWrite = Boolean.parseBoolean(options.get(OPT_clearCacheAfterFirstWrite));
+    if (options.containsKey(OPT_splitAfterFirstWrite))
+      splitAfterFirstWrite = Boolean.parseBoolean(options.get(OPT_splitAfterFirstWrite));
+
+    instanceName = options.get(INSTANCENAME);
+    tableName = options.get(TABLENAME);
+    zookeeperHost = options.get(ZOOKEEPERHOST);
+    zookeeperTimeout = Integer.parseInt(options.get(ZOOKEEPERTIMEOUT));
+    username = options.get(USERNAME);
+    String authClass = options.get(AUTHENTICATION_TOKEN_CLASS);
+    String authString = options.get(AUTHENTICATION_TOKEN);
+    auth = SerializationUtil.subclassNewInstance(authClass, AuthenticationToken.class);
+    SerializationUtil.deserializeWritableBase64(auth, authString);
+  }
+
+  private void initBatchWriter() {
+    ClientConfiguration cc = ClientConfiguration.loadDefault().withInstance(instanceName).withZkHosts(zookeeperHost).withZkTimeout(zookeeperTimeout);
+    Instance instance = new ZooKeeperInstance(cc);
+    try {
+      connector = instance.getConnector(username, auth);
+    } catch (Exception e) {
+      log.error("failed to connect to Accumulo instance " + instanceName, e);
+      throw new RuntimeException(e);
+    }
+
+    BatchWriterConfig bwc = new BatchWriterConfig();
+    bwc.setMaxMemory(batchWriterMaxMemory);
+    bwc.setTimeout(batchWriterTimeout, TimeUnit.SECONDS);
+
+    try {
+      batchWriter = connector.createBatchWriter(tableName, bwc);
+    } catch (TableNotFoundException e) {
+      log.error(tableName + " does not exist in instance " + instanceName, e);
+      throw new RuntimeException(e);
+    }
+  }
+
+  /**
+   * Write numEntriesToWritePerEntry. Flush. Set topValue accordingly.
+   */
+  private void processNext() {
+    assert hasTop();
+    Key k = getTopKey();
+    Text row = k.getRow(), cf = k.getColumnFamily(), cq = k.getColumnQualifier();
+    Value v = super.getTopValue();
+    String failure = null;
+    try {
+      for (int i = 0; i < numEntriesToWritePerEntry; i++) {
+        Mutation m = new Mutation(row);
+        m.put(cf, cq, v);
+        batchWriter.addMutation(m);
+
+        if (firstWrite) {
+          batchWriter.flush();
+          if (clearCacheAfterFirstWrite)
+            TabletLocator.clearLocators();
+          if (splitAfterFirstWrite) {
+            SortedSet<Text> splits = new TreeSet<>();
+            splits.add(new Text(row));
+            connector.tableOperations().addSplits(tableName, splits);
+          }
+          if (sleepAfterFirstWrite > 0)
+            try {
+              Thread.sleep(sleepAfterFirstWrite);
+            } catch (InterruptedException ignored) {}
+          firstWrite = false;
+        }
+      }
+
+      batchWriter.flush();
+    } catch (Exception e) {
+      // in particular: watching for TimedOutException
+      log.error("Problem while BatchWriting to target table " + tableName, e);
+      failure = e.getClass().getSimpleName() + ": " + e.getMessage();
+    }
+    topValue = failure == null ? SUCCESS_VALUE : new Value(failure.getBytes());
+  }
+
+  @Override
+  protected void finalize() throws Throwable {
+    super.finalize();
+    try {
+      batchWriter.close();
+    } catch (MutationsRejectedException e) {
+      log.error("Failed to close BatchWriter; some mutations may not be applied", e);
+    }
+  }
+
+  @Override
+  public void next() throws IOException {
+    super.next();
+    if (hasTop())
+      processNext();
+  }
+
+  @Override
+  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
+    super.seek(range, columnFamilies, inclusive);
+    if (hasTop())
+      processNext();
+  }
+
+  @Override
+  public Value getTopValue() {
+    return topValue;
+  }
+
+  @Override
+  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
+    BatchWriterIterator newInstance;
+    try {
+      newInstance = this.getClass().getDeclaredConstructor().newInstance();
+      newInstance.init(getSource().deepCopy(env), originalOptions, env);
+      return newInstance;
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+}
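The `deepCopy` above clones the iterator by reflectively constructing a fresh instance of its own class and re-initializing it from the saved options. A minimal JDK-only sketch of that pattern (the `Node` class and its fields are hypothetical stand-ins, not Accumulo's `SortedKeyValueIterator` API):

```java
import java.util.HashMap;
import java.util.Map;

public class DeepCopyDemo {
  static class Node {
    Map<String,String> options = new HashMap<>();

    void init(Map<String,String> opts) {
      // defensive copy, like stashing originalOptions in init() for later deepCopy calls
      options = new HashMap<>(opts);
    }

    Node deepCopy() {
      try {
        // getDeclaredConstructor().newInstance() clones via the no-arg constructor,
        // so the copy works even for subclasses of Node
        Node copy = getClass().getDeclaredConstructor().newInstance();
        copy.init(options);
        return copy;
      } catch (ReflectiveOperationException e) {
        throw new RuntimeException(e);
      }
    }
  }

  public static boolean copyIsIndependent() {
    Node original = new Node();
    Map<String,String> opts = new HashMap<>();
    opts.put("tableName", "demo");
    original.init(opts);
    Node copy = original.deepCopy();
    copy.options.put("extra", "x");
    // mutating the copy must not leak back into the original
    return original.options.size() == 1 && copy.options.size() == 2;
  }

  public static void main(String[] args) {
    System.out.println(copyIsIndependent()); // prints "true"
  }
}
```

Because the copy is re-initialized from the saved options rather than sharing mutable state, each deep copy can be advanced independently, which is what iterator environments expect.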
diff --git a/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java b/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java
index a0cc26e..4cbba8e 100644
--- a/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java
+++ b/test/src/main/java/org/apache/accumulo/test/BulkImportDirectory.java
@@ -41,7 +41,7 @@
     @Parameter(names = {"-f", "--failures"}, description = "directory to copy failures into: will be deleted before the bulk import")
     String failures = null;
     @Parameter(description = "<username> <password> <tablename> <sourcedir> <failuredir>")
-    List<String> args = new ArrayList<String>();
+    List<String> args = new ArrayList<>();
   }
 
   public static void main(String[] args) throws IOException, AccumuloException, AccumuloSecurityException, TableNotFoundException {
diff --git a/test/src/main/java/org/apache/accumulo/test/BulkImportMonitoringIT.java b/test/src/main/java/org/apache/accumulo/test/BulkImportMonitoringIT.java
new file mode 100644
index 0000000..9c7abf3
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/BulkImportMonitoringIT.java
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileOperations;
+import org.apache.accumulo.core.file.FileSKVWriter;
+import org.apache.accumulo.core.file.rfile.RFile;
+import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
+import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.junit.Test;
+
+public class BulkImportMonitoringIT extends ConfigurableMacBase {
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setNumTservers(1);
+    cfg.useMiniDFS(true);
+    cfg.setProperty(Property.GC_FILE_ARCHIVE, "false");
+  }
+
+  @Test
+  public void test() throws Exception {
+    getCluster().getClusterControl().start(ServerType.MONITOR);
+    final Connector c = getConnector();
+    final String tableName = getUniqueNames(1)[0];
+    c.tableOperations().create(tableName);
+    c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "1");
+    // splits to slow down bulk import
+    SortedSet<Text> splits = new TreeSet<>();
+    for (int i = 1; i < 0xf; i++) {
+      splits.add(new Text(Integer.toHexString(i)));
+    }
+    c.tableOperations().addSplits(tableName, splits);
+
+    MasterMonitorInfo stats = getCluster().getMasterMonitorInfo();
+    assertEquals(1, stats.tServerInfo.size());
+    assertEquals(0, stats.bulkImports.size());
+    assertEquals(0, stats.tServerInfo.get(0).bulkImports.size());
+
+    log.info("Creating lots of bulk import files");
+    final FileSystem fs = getCluster().getFileSystem();
+    final Path basePath = getCluster().getTemporaryPath();
+    CachedConfiguration.setInstance(fs.getConf());
+
+    final Path base = new Path(basePath, "testBulkLoad" + tableName);
+    fs.delete(base, true);
+    fs.mkdirs(base);
+
+    ExecutorService es = Executors.newFixedThreadPool(5);
+    List<Future<Pair<String,String>>> futures = new ArrayList<>();
+    for (int i = 0; i < 10; i++) {
+      final int which = i;
+      futures.add(es.submit(new Callable<Pair<String,String>>() {
+        @Override
+        public Pair<String,String> call() throws Exception {
+          Path bulkFailures = new Path(base, "failures" + which);
+          Path files = new Path(base, "files" + which);
+          fs.mkdirs(bulkFailures);
+          fs.mkdirs(files);
+          for (int i = 0; i < 10; i++) {
+            FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder()
+                .forFile(files.toString() + "/bulk_" + i + "." + RFile.EXTENSION, fs, fs.getConf())
+                .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
+            writer.startDefaultLocalityGroup();
+            for (int j = 0x100; j < 0xfff; j += 3) {
+              writer.append(new Key(Integer.toHexString(j)), new Value(new byte[0]));
+            }
+            writer.close();
+          }
+          return new Pair<>(files.toString(), bulkFailures.toString());
+        }
+      }));
+    }
+    List<Pair<String,String>> dirs = new ArrayList<>();
+    for (Future<Pair<String,String>> f : futures) {
+      dirs.add(f.get());
+    }
+    log.info("Importing");
+    long now = System.currentTimeMillis();
+    List<Future<Object>> errs = new ArrayList<>();
+    for (Pair<String,String> entry : dirs) {
+      final String dir = entry.getFirst();
+      final String err = entry.getSecond();
+      errs.add(es.submit(new Callable<Object>() {
+        @Override
+        public Object call() throws Exception {
+          c.tableOperations().importDirectory(tableName, dir, err, false);
+          return null;
+        }
+      }));
+    }
+    es.shutdown();
+    while (!es.isTerminated() && stats.bulkImports.size() + stats.tServerInfo.get(0).bulkImports.size() == 0) {
+      es.awaitTermination(10, TimeUnit.MILLISECONDS);
+      stats = getCluster().getMasterMonitorInfo();
+    }
+    log.info(stats.bulkImports.toString());
+    assertTrue(stats.bulkImports.size() > 0);
+    // look for exception
+    for (Future<Object> err : errs) {
+      err.get();
+    }
+    es.awaitTermination(2, TimeUnit.MINUTES);
+    assertTrue(es.isTerminated());
+    log.info(String.format("Completed in %.2f seconds", (System.currentTimeMillis() - now) / 1000.));
+  }
+}
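The IT's wait loop above polls the master's bulk-import stats in short `awaitTermination` slices until either the imports finish or a stat becomes visible. A self-contained sketch of that polling pattern (plain JDK; the `AtomicInteger` stands in for the monitor stats, which is an assumption for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PollWhileRunningDemo {
  // Poll a stat until the work terminates or the stat becomes non-zero,
  // mirroring the test's loop over getMasterMonitorInfo().
  public static int pollUntilSeen(ExecutorService es, AtomicInteger stat) {
    while (!es.isTerminated() && stat.get() == 0) {
      try {
        // short slice instead of a busy spin; returns early if the pool finishes
        es.awaitTermination(10, TimeUnit.MILLISECONDS);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
      }
    }
    // re-read after the loop: task completion happens-before isTerminated() == true
    return stat.get();
  }

  public static void main(String[] args) {
    AtomicInteger stat = new AtomicInteger();
    ExecutorService es = Executors.newSingleThreadExecutor();
    es.submit(() -> stat.set(3));
    es.shutdown();
    System.out.println(pollUntilSeen(es, stat)); // 3 once the task has run
  }
}
```

Re-reading the stat after the loop avoids the race where the pool terminates between a stale read and the loop-condition check.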
diff --git a/test/src/test/java/org/apache/accumulo/test/BulkImportSequentialRowsIT.java b/test/src/main/java/org/apache/accumulo/test/BulkImportSequentialRowsIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/BulkImportSequentialRowsIT.java
rename to test/src/main/java/org/apache/accumulo/test/BulkImportSequentialRowsIT.java
index 2c04607..1cfd3e6 100644
--- a/test/src/test/java/org/apache/accumulo/test/BulkImportSequentialRowsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/BulkImportSequentialRowsIT.java
@@ -23,7 +23,7 @@
 
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
@@ -38,7 +38,7 @@
 import com.google.common.collect.Iterables;
 
 // ACCUMULO-3967
-public class BulkImportSequentialRowsIT extends AccumuloClusterIT {
+public class BulkImportSequentialRowsIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(BulkImportSequentialRowsIT.class);
 
   private static final long NR = 24;
diff --git a/test/src/test/java/org/apache/accumulo/test/BulkImportVolumeIT.java b/test/src/main/java/org/apache/accumulo/test/BulkImportVolumeIT.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/BulkImportVolumeIT.java
rename to test/src/main/java/org/apache/accumulo/test/BulkImportVolumeIT.java
index cc8fcc9..e892f6a 100644
--- a/test/src/test/java/org/apache/accumulo/test/BulkImportVolumeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/BulkImportVolumeIT.java
@@ -23,11 +23,10 @@
 
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FsShell;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.junit.Test;
@@ -35,7 +34,7 @@
 import org.slf4j.LoggerFactory;
 
 // ACCUMULO-118/ACCUMULO-2504
-public class BulkImportVolumeIT extends AccumuloClusterIT {
+public class BulkImportVolumeIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(BulkImportVolumeIT.class);
 
   File volDirBase = null;
@@ -85,8 +84,6 @@
     fs.create(bogus).close();
     log.info("bogus: {}", bogus);
     assertTrue(fs.exists(bogus));
-    FsShell fsShell = new FsShell(fs.getConf());
-    assertEquals("Failed to chmod " + rootPath, 0, fsShell.run(new String[] {"-chmod", "-R", "777", rootPath.toString()}));
     log.info("Importing {} into {} with failures directory {}", bulk, tableName, err);
     to.importDirectory(tableName, bulk.toString(), err.toString(), false);
     assertEquals(1, fs.listStatus(err).length);
diff --git a/test/src/test/java/org/apache/accumulo/test/CleanWalIT.java b/test/src/main/java/org/apache/accumulo/test/CleanWalIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/CleanWalIT.java
rename to test/src/main/java/org/apache/accumulo/test/CleanWalIT.java
index 08e3c09..7146a9f 100644
--- a/test/src/test/java/org/apache/accumulo/test/CleanWalIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/CleanWalIT.java
@@ -19,6 +19,7 @@
 import static org.junit.Assert.assertEquals;
 
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -33,8 +34,7 @@
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
@@ -46,8 +46,9 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class CleanWalIT extends AccumuloClusterIT {
+public class CleanWalIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(CleanWalIT.class);
 
   @Override
@@ -119,7 +120,7 @@
     conn.tableOperations().flush(RootTable.NAME, null, null, true);
     try {
       getCluster().getClusterControl().stopAllServers(ServerType.TABLET_SERVER);
-      UtilWaitThread.sleep(3 * 1000);
+      sleepUninterruptibly(3, TimeUnit.SECONDS);
     } finally {
       getCluster().getClusterControl().startAllServers(ServerType.TABLET_SERVER);
     }
@@ -129,6 +130,7 @@
   private int countLogs(String tableName, Connector conn) throws TableNotFoundException {
     Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     scanner.fetchColumnFamily(MetadataSchema.TabletsSection.LogColumnFamily.NAME);
+    scanner.setRange(MetadataSchema.TabletsSection.getRange());
     int count = 0;
     for (Entry<Key,Value> entry : scanner) {
       log.debug("Saw " + entry.getKey() + "=" + entry.getValue());
diff --git a/core/src/test/java/org/apache/accumulo/core/client/ClientSideIteratorTest.java b/test/src/main/java/org/apache/accumulo/test/ClientSideIteratorIT.java
similarity index 67%
rename from core/src/test/java/org/apache/accumulo/core/client/ClientSideIteratorTest.java
rename to test/src/main/java/org/apache/accumulo/test/ClientSideIteratorIT.java
index 60f668b..180eed1 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/ClientSideIteratorTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/ClientSideIteratorIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client;
+package org.apache.accumulo.test;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -24,8 +24,12 @@
 import java.util.List;
 import java.util.Map.Entry;
 
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.ClientSideIteratorScanner;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.PartialKey;
@@ -33,28 +37,32 @@
 import org.apache.accumulo.core.iterators.user.IntersectingIterator;
 import org.apache.accumulo.core.iterators.user.VersioningIterator;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
+import org.junit.Before;
 import org.junit.Test;
 
-public class ClientSideIteratorTest {
-  List<Key> resultSet1;
-  List<Key> resultSet2;
-  List<Key> resultSet3;
-  {
-    resultSet1 = new ArrayList<Key>();
+public class ClientSideIteratorIT extends AccumuloClusterHarness {
+  private List<Key> resultSet1;
+  private List<Key> resultSet2;
+  private List<Key> resultSet3;
+
+  @Before
+  public void setupData() {
+    resultSet1 = new ArrayList<>();
     resultSet1.add(new Key("row1", "colf", "colq", 4l));
     resultSet1.add(new Key("row1", "colf", "colq", 3l));
-    resultSet2 = new ArrayList<Key>();
+    resultSet2 = new ArrayList<>();
     resultSet2.add(new Key("row1", "colf", "colq", 4l));
     resultSet2.add(new Key("row1", "colf", "colq", 3l));
     resultSet2.add(new Key("row1", "colf", "colq", 2l));
     resultSet2.add(new Key("row1", "colf", "colq", 1l));
-    resultSet3 = new ArrayList<Key>();
+    resultSet3 = new ArrayList<>();
     resultSet3.add(new Key("part1", "", "doc2"));
     resultSet3.add(new Key("part2", "", "DOC2"));
   }
 
-  public void checkResults(final Iterable<Entry<Key,Value>> scanner, final List<Key> results, final PartialKey pk) {
+  private void checkResults(final Iterable<Entry<Key,Value>> scanner, final List<Key> results, final PartialKey pk) {
     int i = 0;
     for (Entry<Key,Value> entry : scanner) {
       assertTrue(entry.getKey().equals(results.get(i++), pk));
@@ -62,12 +70,19 @@
     assertEquals(i, results.size());
   }
 
+  private Connector conn;
+  private String tableName;
+
+  @Before
+  public void setupInstance() throws Exception {
+    conn = getConnector();
+    tableName = getUniqueNames(1)[0];
+  }
+
   @Test
   public void testIntersect() throws Exception {
-    Instance instance = new MockInstance("local");
-    Connector conn = instance.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("intersect");
-    BatchWriter bw = conn.createBatchWriter("intersect", new BatchWriterConfig());
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
     Mutation m = new Mutation("part1");
     m.put("bar", "doc1", "value");
     m.put("bar", "doc2", "value");
@@ -84,8 +99,8 @@
     bw.addMutation(m);
     bw.flush();
 
-    final ClientSideIteratorScanner csis = new ClientSideIteratorScanner(conn.createScanner("intersect", new Authorizations()));
-    final IteratorSetting si = new IteratorSetting(10, "intersect", IntersectingIterator.class);
+    final ClientSideIteratorScanner csis = new ClientSideIteratorScanner(conn.createScanner(tableName, new Authorizations()));
+    final IteratorSetting si = new IteratorSetting(10, tableName, IntersectingIterator.class);
     IntersectingIterator.setColumnFamilies(si, new Text[] {new Text("bar"), new Text("foo")});
     csis.addScanIterator(si);
 
@@ -94,13 +109,11 @@
 
   @Test
   public void testVersioning() throws Exception {
-    final Instance instance = new MockInstance("local");
-    final Connector conn = instance.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("table");
-    conn.tableOperations().removeProperty("table", "table.iterator.scan.vers");
-    conn.tableOperations().removeProperty("table", "table.iterator.majc.vers");
-    conn.tableOperations().removeProperty("table", "table.iterator.minc.vers");
-    final BatchWriter bw = conn.createBatchWriter("table", new BatchWriterConfig());
+    conn.tableOperations().create(tableName);
+    conn.tableOperations().removeProperty(tableName, "table.iterator.scan.vers");
+    conn.tableOperations().removeProperty(tableName, "table.iterator.majc.vers");
+    conn.tableOperations().removeProperty(tableName, "table.iterator.minc.vers");
+    final BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
     Mutation m = new Mutation("row1");
     m.put("colf", "colq", 1l, "value");
     m.put("colf", "colq", 2l, "value");
@@ -112,7 +125,7 @@
     bw.addMutation(m);
     bw.flush();
 
-    final Scanner scanner = conn.createScanner("table", new Authorizations());
+    final Scanner scanner = conn.createScanner(tableName, new Authorizations());
 
     final ClientSideIteratorScanner csis = new ClientSideIteratorScanner(scanner);
     final IteratorSetting si = new IteratorSetting(10, "localvers", VersioningIterator.class);
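The idea `ClientSideIteratorScanner` tests here is that entries are pulled from the server raw and the iterator stack (e.g. versioning) runs locally on the client. A JDK-only sketch of client-side versioning, not Accumulo's API: entries are hypothetical `{row, timestamp}` pairs sorted newest-first, and only the newest `maxVersions` per row are kept, matching what `testVersioning` expects (2 of the 4 versions survive):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ClientSideVersioningDemo {
  // Keep only the newest maxVersions entries per row, filtering on the client
  // after all versions have been fetched.
  public static List<String> keepNewest(List<String[]> entries, int maxVersions) {
    Map<String,Integer> seen = new HashMap<>();
    List<String> kept = new ArrayList<>();
    for (String[] e : entries) {
      int n = seen.merge(e[0], 1, Integer::sum); // count versions seen for this row
      if (n <= maxVersions)
        kept.add(e[0] + "@" + e[1]);
    }
    return kept;
  }

  public static void main(String[] args) {
    List<String[]> entries = Arrays.asList(
        new String[] {"row1", "4"}, new String[] {"row1", "3"},
        new String[] {"row1", "2"}, new String[] {"row1", "1"});
    System.out.println(keepNewest(entries, 2)); // [row1@4, row1@3]
  }
}
```

The trade-off the real scanner makes is the same: all versions cross the wire, but the filter can be changed without touching server-side table configuration.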
diff --git a/server/base/src/test/java/org/apache/accumulo/server/util/CloneTest.java b/test/src/main/java/org/apache/accumulo/test/CloneIT.java
similarity index 64%
rename from server/base/src/test/java/org/apache/accumulo/server/util/CloneTest.java
rename to test/src/main/java/org/apache/accumulo/test/CloneIT.java
index 9d33935..713be3b 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/util/CloneTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/CloneIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.server.util;
+package org.apache.accumulo.test;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -26,44 +26,45 @@
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.data.impl.KeyExtent;
-import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.DataFileValue;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.server.util.MetadataTableUtil;
+import org.apache.accumulo.server.util.TabletIterator;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class CloneTest {
+public class CloneIT extends AccumuloClusterHarness {
 
   @Test
   public void testNoFiles() throws Exception {
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
 
-    KeyExtent ke = new KeyExtent(new Text("0"), null, null);
+    KeyExtent ke = new KeyExtent("0", null, null);
     Mutation mut = ke.getPrevRowUpdateMutation();
 
     TabletsSection.ServerColumnFamily.TIME_COLUMN.put(mut, new Value("M0".getBytes()));
     TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mut, new Value("/default_tablet".getBytes()));
 
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw1 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw1.addMutation(mut);
 
     bw1.close();
 
-    BatchWriter bw2 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw2 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
-    MetadataTableUtil.initializeClone("0", "1", conn, bw2);
+    MetadataTableUtil.initializeClone(tableName, "0", "1", conn, bw2);
 
-    int rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    int rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(0, rc);
 
@@ -73,25 +74,26 @@
 
   @Test
   public void testFilesChange() throws Exception {
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
 
-    KeyExtent ke = new KeyExtent(new Text("0"), null, null);
+    KeyExtent ke = new KeyExtent("0", null, null);
     Mutation mut = ke.getPrevRowUpdateMutation();
 
     TabletsSection.ServerColumnFamily.TIME_COLUMN.put(mut, new Value("M0".getBytes()));
     TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mut, new Value("/default_tablet".getBytes()));
     mut.put(DataFileColumnFamily.NAME.toString(), "/default_tablet/0_0.rf", new DataFileValue(1, 200).encodeAsString());
 
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw1 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw1.addMutation(mut);
 
     bw1.flush();
 
-    BatchWriter bw2 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw2 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
-    MetadataTableUtil.initializeClone("0", "1", conn, bw2);
+    MetadataTableUtil.initializeClone(tableName, "0", "1", conn, bw2);
 
     Mutation mut2 = new Mutation(ke.getMetadataEntry());
     mut2.putDelete(DataFileColumnFamily.NAME.toString(), "/default_tablet/0_0.rf");
@@ -100,18 +102,18 @@
     bw1.addMutation(mut2);
     bw1.flush();
 
-    int rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    int rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(1, rc);
 
-    rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(0, rc);
 
-    Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    scanner.setRange(new KeyExtent(new Text("1"), null, null).toMetadataRange());
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    scanner.setRange(new KeyExtent("1", null, null).toMetadataRange());
 
-    HashSet<String> files = new HashSet<String>();
+    HashSet<String> files = new HashSet<>();
 
     for (Entry<Key,Value> entry : scanner) {
       if (entry.getKey().getColumnFamily().equals(DataFileColumnFamily.NAME))
@@ -126,32 +128,33 @@
   // test split where files of children are the same
   @Test
   public void testSplit1() throws Exception {
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
 
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw1 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw1.addMutation(createTablet("0", null, null, "/default_tablet", "/default_tablet/0_0.rf"));
 
     bw1.flush();
 
-    BatchWriter bw2 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw2 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
-    MetadataTableUtil.initializeClone("0", "1", conn, bw2);
+    MetadataTableUtil.initializeClone(tableName, "0", "1", conn, bw2);
 
     bw1.addMutation(createTablet("0", "m", null, "/default_tablet", "/default_tablet/0_0.rf"));
     bw1.addMutation(createTablet("0", null, "m", "/t-1", "/default_tablet/0_0.rf"));
 
     bw1.flush();
 
-    int rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    int rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(0, rc);
 
-    Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    scanner.setRange(new KeyExtent(new Text("1"), null, null).toMetadataRange());
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    scanner.setRange(new KeyExtent("1", null, null).toMetadataRange());
 
-    HashSet<String> files = new HashSet<String>();
+    HashSet<String> files = new HashSet<>();
 
     int count = 0;
     for (Entry<Key,Value> entry : scanner) {
@@ -169,18 +172,19 @@
   // test split where files of children differ... like majc and split occurred
   @Test
   public void testSplit2() throws Exception {
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
 
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw1 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw1.addMutation(createTablet("0", null, null, "/default_tablet", "/default_tablet/0_0.rf"));
 
     bw1.flush();
 
-    BatchWriter bw2 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw2 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
-    MetadataTableUtil.initializeClone("0", "1", conn, bw2);
+    MetadataTableUtil.initializeClone(tableName, "0", "1", conn, bw2);
 
     bw1.addMutation(createTablet("0", "m", null, "/default_tablet", "/default_tablet/1_0.rf"));
     Mutation mut3 = createTablet("0", null, "m", "/t-1", "/default_tablet/1_0.rf");
@@ -189,18 +193,18 @@
 
     bw1.flush();
 
-    int rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    int rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(1, rc);
 
-    rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(0, rc);
 
-    Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    scanner.setRange(new KeyExtent(new Text("1"), null, null).toMetadataRange());
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    scanner.setRange(new KeyExtent("1", null, null).toMetadataRange());
 
-    HashSet<String> files = new HashSet<String>();
+    HashSet<String> files = new HashSet<>();
 
     int count = 0;
 
@@ -217,7 +221,7 @@
   }
 
   private static Mutation deleteTablet(String tid, String endRow, String prevRow, String dir, String file) throws Exception {
-    KeyExtent ke = new KeyExtent(new Text(tid), endRow == null ? null : new Text(endRow), prevRow == null ? null : new Text(prevRow));
+    KeyExtent ke = new KeyExtent(tid, endRow == null ? null : new Text(endRow), prevRow == null ? null : new Text(prevRow));
     Mutation mut = new Mutation(ke.getMetadataEntry());
     TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.putDelete(mut);
     TabletsSection.ServerColumnFamily.TIME_COLUMN.putDelete(mut);
@@ -228,7 +232,7 @@
   }
 
   private static Mutation createTablet(String tid, String endRow, String prevRow, String dir, String file) throws Exception {
-    KeyExtent ke = new KeyExtent(new Text(tid), endRow == null ? null : new Text(endRow), prevRow == null ? null : new Text(prevRow));
+    KeyExtent ke = new KeyExtent(tid, endRow == null ? null : new Text(endRow), prevRow == null ? null : new Text(prevRow));
     Mutation mut = ke.getPrevRowUpdateMutation();
 
     TabletsSection.ServerColumnFamily.TIME_COLUMN.put(mut, new Value("M0".getBytes()));
@@ -241,19 +245,20 @@
   // test two tablets splitting into four
   @Test
   public void testSplit3() throws Exception {
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
 
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw1 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw1.addMutation(createTablet("0", "m", null, "/d1", "/d1/file1"));
     bw1.addMutation(createTablet("0", null, "m", "/d2", "/d2/file2"));
 
     bw1.flush();
 
-    BatchWriter bw2 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw2 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
-    MetadataTableUtil.initializeClone("0", "1", conn, bw2);
+    MetadataTableUtil.initializeClone(tableName, "0", "1", conn, bw2);
 
     bw1.addMutation(createTablet("0", "f", null, "/d1", "/d1/file3"));
     bw1.addMutation(createTablet("0", "m", "f", "/d3", "/d1/file1"));
@@ -262,14 +267,14 @@
 
     bw1.flush();
 
-    int rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    int rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(0, rc);
 
-    Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    scanner.setRange(new KeyExtent(new Text("1"), null, null).toMetadataRange());
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    scanner.setRange(new KeyExtent("1", null, null).toMetadataRange());
 
-    HashSet<String> files = new HashSet<String>();
+    HashSet<String> files = new HashSet<>();
 
     int count = 0;
     for (Entry<Key,Value> entry : scanner) {
@@ -288,20 +293,20 @@
   // test cloned marker
   @Test
   public void testClonedMarker() throws Exception {
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
 
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
-
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw1 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw1.addMutation(createTablet("0", "m", null, "/d1", "/d1/file1"));
     bw1.addMutation(createTablet("0", null, "m", "/d2", "/d2/file2"));
 
     bw1.flush();
 
-    BatchWriter bw2 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw2 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
-    MetadataTableUtil.initializeClone("0", "1", conn, bw2);
+    MetadataTableUtil.initializeClone(tableName, "0", "1", conn, bw2);
 
     bw1.addMutation(deleteTablet("0", "m", null, "/d1", "/d1/file1"));
     bw1.addMutation(deleteTablet("0", null, "m", "/d2", "/d2/file2"));
@@ -315,7 +320,7 @@
 
     bw1.flush();
 
-    int rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    int rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(1, rc);
 
@@ -327,14 +332,14 @@
 
     bw1.flush();
 
-    rc = MetadataTableUtil.checkClone("0", "1", conn, bw2);
+    rc = MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
 
     assertEquals(0, rc);
 
-    Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    scanner.setRange(new KeyExtent(new Text("1"), null, null).toMetadataRange());
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    scanner.setRange(new KeyExtent("1", null, null).toMetadataRange());
 
-    HashSet<String> files = new HashSet<String>();
+    HashSet<String> files = new HashSet<>();
 
     int count = 0;
     for (Entry<Key,Value> entry : scanner) {
@@ -354,19 +359,20 @@
   // test two tablets splitting into four
   @Test
   public void testMerge() throws Exception {
-    MockInstance mi = new MockInstance();
-    Connector conn = mi.getConnector("", new PasswordToken(""));
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
 
-    BatchWriter bw1 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw1 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw1.addMutation(createTablet("0", "m", null, "/d1", "/d1/file1"));
     bw1.addMutation(createTablet("0", null, "m", "/d2", "/d2/file2"));
 
     bw1.flush();
 
-    BatchWriter bw2 = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    BatchWriter bw2 = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
-    MetadataTableUtil.initializeClone("0", "1", conn, bw2);
+    MetadataTableUtil.initializeClone(tableName, "0", "1", conn, bw2);
 
     bw1.addMutation(deleteTablet("0", "m", null, "/d1", "/d1/file1"));
     Mutation mut = createTablet("0", null, null, "/d2", "/d2/file2");
@@ -376,7 +382,7 @@
     bw1.flush();
 
     try {
-      MetadataTableUtil.checkClone("0", "1", conn, bw2);
+      MetadataTableUtil.checkClone(tableName, "0", "1", conn, bw2);
       assertTrue(false);
     } catch (TabletIterator.TabletDeletedException tde) {}
 
diff --git a/test/src/main/java/org/apache/accumulo/test/CompactionRateLimitingIT.java b/test/src/main/java/org/apache/accumulo/test/CompactionRateLimitingIT.java
new file mode 100644
index 0000000..6aa6930
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/CompactionRateLimitingIT.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import java.util.Random;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class CompactionRateLimitingIT extends ConfigurableMacBase {
+  public static final long BYTES_TO_WRITE = 10 * 1024 * 1024;
+  public static final long RATE = 1 * 1024 * 1024;
+
+  @Override
+  public void configure(MiniAccumuloConfigImpl cfg, Configuration fsConf) {
+    cfg.setProperty(Property.TSERV_MAJC_THROUGHPUT, RATE + "B");
+    cfg.setProperty(Property.TABLE_MAJC_RATIO, "20");
+    cfg.setProperty(Property.TABLE_FILE_COMPRESSION_TYPE, "none");
+  }
+
+  @Test
+  public void majorCompactionsAreRateLimited() throws Exception {
+    long bytesWritten = 0;
+    String tableName = getUniqueNames(1)[0];
+    Connector conn = getCluster().getConnector("root", new PasswordToken(ROOT_PASSWORD));
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+    try {
+      Random r = new Random();
+      while (bytesWritten < BYTES_TO_WRITE) {
+        byte[] rowKey = new byte[32];
+        r.nextBytes(rowKey);
+
+        byte[] qual = new byte[32];
+        r.nextBytes(qual);
+
+        byte[] value = new byte[1024];
+        r.nextBytes(value);
+
+        Mutation m = new Mutation(rowKey);
+        m.put(new byte[0], qual, value);
+        bw.addMutation(m);
+
+        bytesWritten += rowKey.length + qual.length + value.length;
+      }
+    } finally {
+      bw.close();
+    }
+
+    conn.tableOperations().flush(tableName, null, null, true);
+
+    long compactionStart = System.currentTimeMillis();
+    conn.tableOperations().compact(tableName, null, null, false, true);
+    long duration = System.currentTimeMillis() - compactionStart;
+    Assert.assertTrue(
+        String.format("Expected a compaction rate of no more than %,d bytes/sec, but saw a rate of %,f bytes/sec", RATE, 1000.0 * bytesWritten / duration),
+        duration > 1000L * BYTES_TO_WRITE / RATE);
+  }
+}
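The final assertion above checks that the compaction took at least as long as the rate limit demands: processing `BYTES_TO_WRITE` bytes at no more than `RATE` bytes/sec cannot finish in fewer than `1000 * BYTES_TO_WRITE / RATE` milliseconds. A minimal standalone sketch of that lower-bound arithmetic (class and method names are illustrative, not part of the test):

```java
public class RateLimitBound {
  public static final long BYTES_TO_WRITE = 10 * 1024 * 1024; // 10 MiB, as in the test
  public static final long RATE = 1 * 1024 * 1024;            // 1 MiB/sec

  // Minimum wall-clock time (ms) a compaction limited to `rate` bytes/sec
  // can take to process `bytes` bytes; the IT asserts duration exceeds this.
  static long minDurationMillis(long bytes, long rate) {
    return 1000L * bytes / rate;
  }

  public static void main(String[] args) {
    // 10 MiB at 1 MiB/sec => at least 10 seconds
    System.out.println(minDurationMillis(BYTES_TO_WRITE, RATE)); // 10000
  }
}
```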
diff --git a/test/src/main/java/org/apache/accumulo/test/ConditionalWriterIT.java b/test/src/main/java/org/apache/accumulo/test/ConditionalWriterIT.java
new file mode 100644
index 0000000..f0b46b5
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/ConditionalWriterIT.java
@@ -0,0 +1,1476 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.test;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Random;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.UUID;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.accumulo.cluster.AccumuloCluster;
+import org.apache.accumulo.cluster.ClusterUser;
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.ClientConfiguration;
+import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
+import org.apache.accumulo.core.client.ConditionalWriter;
+import org.apache.accumulo.core.client.ConditionalWriter.Result;
+import org.apache.accumulo.core.client.ConditionalWriter.Status;
+import org.apache.accumulo.core.client.ConditionalWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IsolatedScanner;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.RowIterator;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.TableDeletedException;
+import org.apache.accumulo.core.client.TableExistsException;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.TableOfflineException;
+import org.apache.accumulo.core.client.admin.NewTableConfiguration;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.data.ArrayByteSequence;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Condition;
+import org.apache.accumulo.core.data.ConditionalMutation;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
+import org.apache.accumulo.core.iterators.LongCombiner.Type;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.apache.accumulo.core.iterators.user.SummingCombiner;
+import org.apache.accumulo.core.iterators.user.VersioningIterator;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.security.SystemPermission;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.core.trace.DistributedTrace;
+import org.apache.accumulo.core.trace.Span;
+import org.apache.accumulo.core.trace.Trace;
+import org.apache.accumulo.core.util.FastFormat;
+import org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.test.functional.BadIterator;
+import org.apache.accumulo.test.functional.SlowIterator;
+import org.apache.accumulo.tracer.TraceDump;
+import org.apache.accumulo.tracer.TraceDump.Printer;
+import org.apache.accumulo.tracer.TraceServer;
+import org.apache.hadoop.io.Text;
+import org.junit.Assert;
+import org.junit.Assume;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.Iterables;
+
+public class ConditionalWriterIT extends AccumuloClusterHarness {
+  private static final Logger log = LoggerFactory.getLogger(ConditionalWriterIT.class);
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 60;
+  }
+
+  public static long abs(long l) {
+    l = Math.abs(l); // abs(Long.MIN_VALUE) == Long.MIN_VALUE...
+    if (l < 0)
+      return 0;
+    return l;
+  }
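The clamp in the helper above exists because `Math.abs(Long.MIN_VALUE)` overflows: two's-complement `long` has no positive counterpart for `Long.MIN_VALUE`, so `Math.abs` returns the same negative value. A small standalone demonstration (class name is illustrative):

```java
public class AbsClamp {
  // Same logic as the test's abs() helper: clamp the overflow case to 0.
  static long safeAbs(long l) {
    l = Math.abs(l); // Math.abs(Long.MIN_VALUE) == Long.MIN_VALUE
    return l < 0 ? 0 : l;
  }

  public static void main(String[] args) {
    System.out.println(Math.abs(Long.MIN_VALUE) == Long.MIN_VALUE); // true: overflow
    System.out.println(safeAbs(Long.MIN_VALUE)); // 0
    System.out.println(safeAbs(-5L));            // 5
  }
}
```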
+
+  @Before
+  public void deleteUsers() throws Exception {
+    Connector conn = getConnector();
+    Set<String> users = conn.securityOperations().listLocalUsers();
+    ClusterUser user = getUser(0);
+    if (users.contains(user.getPrincipal())) {
+      conn.securityOperations().dropLocalUser(user.getPrincipal());
+    }
+  }
+
+  @Test
+  public void testBasic() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig())) {
+
+      // mutation conditional on column tx:seq not existing
+      ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq"));
+      cm0.put("name", "last", "doe");
+      cm0.put("name", "first", "john");
+      cm0.put("tx", "seq", "1");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
+      Assert.assertEquals(Status.REJECTED, cw.write(cm0).getStatus());
+
+      // mutation conditional on column tx:seq being 1
+      ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("1"));
+      cm1.put("name", "last", "Doe");
+      cm1.put("tx", "seq", "2");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
+
+      // test condition where value differs
+      ConditionalMutation cm2 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("1"));
+      cm2.put("name", "last", "DOE");
+      cm2.put("tx", "seq", "2");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm2).getStatus());
+
+      // test condition where column does not exist
+      ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("txtypo", "seq").setValue("1"));
+      cm3.put("name", "last", "deo");
+      cm3.put("tx", "seq", "2");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm3).getStatus());
+
+      // test two conditions, where one should fail
+      ConditionalMutation cm4 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("2"), new Condition("name", "last").setValue("doe"));
+      cm4.put("name", "last", "deo");
+      cm4.put("tx", "seq", "3");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm4).getStatus());
+
+      // test two conditions, where one should fail
+      ConditionalMutation cm5 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("1"), new Condition("name", "last").setValue("Doe"));
+      cm5.put("name", "last", "deo");
+      cm5.put("tx", "seq", "3");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm5).getStatus());
+
+      // ensure rejected mutations did not write
+      Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+      scanner.fetchColumn(new Text("name"), new Text("last"));
+      scanner.setRange(new Range("99006"));
+      Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("Doe", entry.getValue().toString());
+
+      // test w/ two conditions that are met
+      ConditionalMutation cm6 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("2"), new Condition("name", "last").setValue("Doe"));
+      cm6.put("name", "last", "DOE");
+      cm6.put("tx", "seq", "3");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm6).getStatus());
+
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("DOE", entry.getValue().toString());
+
+      // test a conditional mutation that deletes
+      ConditionalMutation cm7 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("3"));
+      cm7.putDelete("name", "last");
+      cm7.putDelete("name", "first");
+      cm7.putDelete("tx", "seq");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm7).getStatus());
+
+      Assert.assertFalse("Did not expect to find any results", scanner.iterator().hasNext());
+
+      // add the row back
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
+      Assert.assertEquals(Status.REJECTED, cw.write(cm0).getStatus());
+
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("doe", entry.getValue().toString());
+    }
+  }
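The sequence in testBasic exercises check-and-set semantics: a mutation is ACCEPTED only when the guarded column's current value matches the condition, where a condition with no value means "column must not exist". A minimal in-memory sketch of those semantics (not the Accumulo API; all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class CheckAndSetSketch {
  enum Status { ACCEPTED, REJECTED }

  private final Map<String,String> store = new HashMap<>();

  // expected == null means the column must not exist, mirroring
  // new Condition("tx", "seq") with no setValue().
  Status write(String column, String expected, String newValue) {
    String current = store.get(column);
    boolean matches = (expected == null) ? current == null : expected.equals(current);
    if (!matches)
      return Status.REJECTED;
    store.put(column, newValue);
    return Status.ACCEPTED;
  }

  public static void main(String[] args) {
    CheckAndSetSketch s = new CheckAndSetSketch();
    System.out.println(s.write("tx:seq", null, "1")); // ACCEPTED: column absent
    System.out.println(s.write("tx:seq", null, "1")); // REJECTED: column now exists
    System.out.println(s.write("tx:seq", "1", "2"));  // ACCEPTED: value matched
  }
}
```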
+
+  @Test
+  public void testFields() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    String user = null;
+    ClientConfiguration clientConf = cluster.getClientConfig();
+    final boolean saslEnabled = clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false);
+
+    ClusterUser user1 = getUser(0);
+    user = user1.getPrincipal();
+    if (saslEnabled) {
+      // The token is pointless for kerberos
+      conn.securityOperations().createLocalUser(user, null);
+    } else {
+      conn.securityOperations().createLocalUser(user, new PasswordToken(user1.getPassword()));
+    }
+
+    Authorizations auths = new Authorizations("A", "B");
+
+    conn.securityOperations().changeUserAuthorizations(user, auths);
+    conn.securityOperations().grantSystemPermission(user, SystemPermission.CREATE_TABLE);
+
+    conn = conn.getInstance().getConnector(user, user1.getToken());
+
+    conn.tableOperations().create(tableName);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(auths))) {
+
+      ColumnVisibility cva = new ColumnVisibility("A");
+      ColumnVisibility cvb = new ColumnVisibility("B");
+
+      ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva));
+      cm0.put("name", "last", cva, "doe");
+      cm0.put("name", "first", cva, "john");
+      cm0.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
+
+      Scanner scanner = conn.createScanner(tableName, auths);
+      scanner.setRange(new Range("99006"));
+      // TODO verify all columns
+      scanner.fetchColumn(new Text("tx"), new Text("seq"));
+      Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("1", entry.getValue().toString());
+      long ts = entry.getKey().getTimestamp();
+
+      // test wrong colf
+      ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("txA", "seq").setVisibility(cva).setValue("1"));
+      cm1.put("name", "last", cva, "Doe");
+      cm1.put("name", "first", cva, "John");
+      cm1.put("tx", "seq", cva, "2");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm1).getStatus());
+
+      // test wrong colq
+      ConditionalMutation cm2 = new ConditionalMutation("99006", new Condition("tx", "seqA").setVisibility(cva).setValue("1"));
+      cm2.put("name", "last", cva, "Doe");
+      cm2.put("name", "first", cva, "John");
+      cm2.put("tx", "seq", cva, "2");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm2).getStatus());
+
+      // test wrong colv
+      ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"));
+      cm3.put("name", "last", cva, "Doe");
+      cm3.put("name", "first", cva, "John");
+      cm3.put("tx", "seq", cva, "2");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm3).getStatus());
+
+      // test wrong timestamp
+      ConditionalMutation cm4 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva).setTimestamp(ts + 1).setValue("1"));
+      cm4.put("name", "last", cva, "Doe");
+      cm4.put("name", "first", cva, "John");
+      cm4.put("tx", "seq", cva, "2");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm4).getStatus());
+
+      // test wrong timestamp
+      ConditionalMutation cm5 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva).setTimestamp(ts - 1).setValue("1"));
+      cm5.put("name", "last", cva, "Doe");
+      cm5.put("name", "first", cva, "John");
+      cm5.put("tx", "seq", cva, "2");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm5).getStatus());
+
+      // ensure no updates were made
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("1", entry.getValue().toString());
+
+      // set all columns correctly
+      ConditionalMutation cm6 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva).setTimestamp(ts).setValue("1"));
+      cm6.put("name", "last", cva, "Doe");
+      cm6.put("name", "first", cva, "John");
+      cm6.put("tx", "seq", cva, "2");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm6).getStatus());
+
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("2", entry.getValue().toString());
+    }
+  }
+
+  @Test
+  public void testBadColVis() throws Exception {
+    // test when a user sets a col vis in a condition that can never be seen
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+
+    Authorizations auths = new Authorizations("A", "B");
+
+    conn.securityOperations().changeUserAuthorizations(getAdminPrincipal(), auths);
+
+    Authorizations filteredAuths = new Authorizations("A");
+
+    ColumnVisibility cva = new ColumnVisibility("A");
+    ColumnVisibility cvb = new ColumnVisibility("B");
+    ColumnVisibility cvc = new ColumnVisibility("C");
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(filteredAuths))) {
+
+      // User has authorization, but didn't include it in the writer
+      ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb));
+      cm0.put("name", "last", cva, "doe");
+      cm0.put("name", "first", cva, "john");
+      cm0.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm0).getStatus());
+
+      ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"));
+      cm1.put("name", "last", cva, "doe");
+      cm1.put("name", "first", cva, "john");
+      cm1.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm1).getStatus());
+
+      // User does not have the authorization
+      ConditionalMutation cm2 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvc));
+      cm2.put("name", "last", cva, "doe");
+      cm2.put("name", "first", cva, "john");
+      cm2.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm2).getStatus());
+
+      ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvc).setValue("1"));
+      cm3.put("name", "last", cva, "doe");
+      cm3.put("name", "first", cva, "john");
+      cm3.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm3).getStatus());
+
+      // if any visibility is bad, good visibilities don't override
+      ConditionalMutation cm4 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb), new Condition("tx", "seq").setVisibility(cva));
+
+      cm4.put("name", "last", cva, "doe");
+      cm4.put("name", "first", cva, "john");
+      cm4.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm4).getStatus());
+
+      ConditionalMutation cm5 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"), new Condition("tx", "seq")
+          .setVisibility(cva).setValue("1"));
+      cm5.put("name", "last", cva, "doe");
+      cm5.put("name", "first", cva, "john");
+      cm5.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm5).getStatus());
+
+      ConditionalMutation cm6 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"),
+          new Condition("tx", "seq").setVisibility(cva));
+      cm6.put("name", "last", cva, "doe");
+      cm6.put("name", "first", cva, "john");
+      cm6.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm6).getStatus());
+
+      ConditionalMutation cm7 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb), new Condition("tx", "seq").setVisibility(cva)
+          .setValue("1"));
+      cm7.put("name", "last", cva, "doe");
+      cm7.put("name", "first", cva, "john");
+      cm7.put("tx", "seq", cva, "1");
+      Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm7).getStatus());
+
+    }
+
+    // test passing auths that exceed users configured auths
+
+    Authorizations exceedingAuths = new Authorizations("A", "B", "D");
+    try (ConditionalWriter cw2 = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(exceedingAuths))) {
+
+      ConditionalMutation cm8 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb), new Condition("tx", "seq").setVisibility(cva)
+          .setValue("1"));
+      cm8.put("name", "last", cva, "doe");
+      cm8.put("name", "first", cva, "john");
+      cm8.put("tx", "seq", cva, "1");
+
+      try {
+        Status status = cw2.write(cm8).getStatus();
+        Assert.fail("Writing mutation with Authorizations the user doesn't have should fail. Got status: " + status);
+      } catch (AccumuloSecurityException ase) {
+        // expected, check specific failure?
+      }
+    }
+  }
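The INVISIBLE_VISIBILITY cases above hinge on one rule: a condition's visibility must be satisfiable by the authorizations configured on the writer, not the user's full authorization set. A minimal sketch of that check for single-label visibilities (a simplification; real ColumnVisibility expressions are boolean trees, and all names here are illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class VisibilitySketch {
  // A condition label outside the writer's effective auths can never be
  // evaluated, so the mutation is reported as invisible rather than written.
  static boolean invisible(Set<String> writerAuths, String conditionLabel) {
    return !writerAuths.contains(conditionLabel);
  }

  public static void main(String[] args) {
    Set<String> writerAuths = new HashSet<>(Arrays.asList("A")); // writer built with only "A"
    System.out.println(invisible(writerAuths, "B")); // true: user holds B, the writer does not
    System.out.println(invisible(writerAuths, "C")); // true: user lacks C entirely
    System.out.println(invisible(writerAuths, "A")); // false: visible to this writer
  }
}
```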
+
+  @Test
+  public void testConstraints() throws Exception {
+    // ensure constraint violations are properly reported
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+    conn.tableOperations().addConstraint(tableName, AlphaNumKeyConstraint.class.getName());
+    conn.tableOperations().clone(tableName, tableName + "_clone", true, new HashMap<String,String>(), new HashSet<String>());
+
+    Scanner scanner = conn.createScanner(tableName + "_clone", new Authorizations());
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName + "_clone", new ConditionalWriterConfig())) {
+
+      ConditionalMutation cm0 = new ConditionalMutation("99006+", new Condition("tx", "seq"));
+      cm0.put("tx", "seq", "1");
+
+      Assert.assertEquals(Status.VIOLATED, cw.write(cm0).getStatus());
+      Assert.assertFalse("Should find no results in the table if the mutation was violated", scanner.iterator().hasNext());
+
+      ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("tx", "seq"));
+      cm1.put("tx", "seq", "1");
+
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
+      Assert.assertTrue("Accepted result should be returned when reading table", scanner.iterator().hasNext());
+    }
+  }
+
+  @Test
+  public void testIterators() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName, new NewTableConfiguration().withoutDefaultIterators());
+
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+
+    Mutation m = new Mutation("ACCUMULO-1000");
+    m.put("count", "comments", "1");
+    bw.addMutation(m);
+    bw.addMutation(m);
+    bw.addMutation(m);
+
+    m = new Mutation("ACCUMULO-1001");
+    m.put("count2", "comments", "1");
+    bw.addMutation(m);
+    bw.addMutation(m);
+
+    m = new Mutation("ACCUMULO-1002");
+    m.put("count2", "comments", "1");
+    bw.addMutation(m);
+    bw.addMutation(m);
+
+    bw.close();
+
+    IteratorSetting iterConfig = new IteratorSetting(10, SummingCombiner.class);
+    SummingCombiner.setEncodingType(iterConfig, Type.STRING);
+    SummingCombiner.setColumns(iterConfig, Collections.singletonList(new IteratorSetting.Column("count")));
+
+    IteratorSetting iterConfig2 = new IteratorSetting(10, SummingCombiner.class);
+    SummingCombiner.setEncodingType(iterConfig2, Type.STRING);
+    SummingCombiner.setColumns(iterConfig2, Collections.singletonList(new IteratorSetting.Column("count2", "comments")));
+
+    IteratorSetting iterConfig3 = new IteratorSetting(5, VersioningIterator.class);
+    VersioningIterator.setMaxVersions(iterConfig3, 1);
+
+    Scanner scanner = conn.createScanner(tableName, new Authorizations());
+    scanner.addScanIterator(iterConfig);
+    scanner.setRange(new Range("ACCUMULO-1000"));
+    scanner.fetchColumn(new Text("count"), new Text("comments"));
+
+    Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+    Assert.assertEquals("3", entry.getValue().toString());
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig())) {
+
+      ConditionalMutation cm0 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setValue("3"));
+      cm0.put("count", "comments", "1");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm0).getStatus());
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("3", entry.getValue().toString());
+
+      ConditionalMutation cm1 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(iterConfig).setValue("3"));
+      cm1.put("count", "comments", "1");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("4", entry.getValue().toString());
+
+      ConditionalMutation cm2 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setValue("4"));
+      cm2.put("count", "comments", "1");
+      Assert.assertEquals(Status.REJECTED, cw.write(cm2).getStatus());
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("4", entry.getValue().toString());
+
+      // run test with multiple iterators passed in same batch and condition with two iterators
+
+      ConditionalMutation cm3 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(iterConfig).setValue("4"));
+      cm3.put("count", "comments", "1");
+
+      ConditionalMutation cm4 = new ConditionalMutation("ACCUMULO-1001", new Condition("count2", "comments").setIterators(iterConfig2).setValue("2"));
+      cm4.put("count2", "comments", "1");
+
+      ConditionalMutation cm5 = new ConditionalMutation("ACCUMULO-1002", new Condition("count2", "comments").setIterators(iterConfig2, iterConfig3).setValue(
+          "2"));
+      cm5.put("count2", "comments", "1");
+
+      Iterator<Result> results = cw.write(Arrays.asList(cm3, cm4, cm5).iterator());
+      Map<String,Status> actual = new HashMap<>();
+
+      while (results.hasNext()) {
+        Result result = results.next();
+        String k = new String(result.getMutation().getRow(), UTF_8);
+        Assert.assertFalse("Did not expect to see multiple results for the row: " + k, actual.containsKey(k));
+        actual.put(k, result.getStatus());
+      }
+
+      Map<String,Status> expected = new HashMap<>();
+      expected.put("ACCUMULO-1000", Status.ACCEPTED);
+      expected.put("ACCUMULO-1001", Status.ACCEPTED);
+      expected.put("ACCUMULO-1002", Status.REJECTED);
+
+      Assert.assertEquals(expected, actual);
+    }
+  }
+
+  public static class AddingIterator extends WrappingIterator {
+    long amount = 0;
+
+    @Override
+    public Value getTopValue() {
+      Value val = super.getTopValue();
+      long l = Long.parseLong(val.toString());
+      String newVal = Long.toString(l + amount);
+      return new Value(newVal.getBytes(UTF_8));
+    }
+
+    @Override
+    public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+      this.setSource(source);
+      amount = Long.parseLong(options.get("amount"));
+    }
+  }
+
+  public static class MultiplyingIterator extends WrappingIterator {
+    long amount = 0;
+
+    @Override
+    public Value getTopValue() {
+      Value val = super.getTopValue();
+      long l = Long.parseLong(val.toString());
+      String newVal = Long.toString(l * amount);
+      return new Value(newVal.getBytes(UTF_8));
+    }
+
+    @Override
+    public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+      this.setSource(source);
+      amount = Long.parseLong(options.get("amount"));
+    }
+  }
+
+  @Test
+  public void testTableAndConditionIterators() throws Exception {
+
+    // test w/ table that has iterators configured
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    IteratorSetting aiConfig1 = new IteratorSetting(30, "AI1", AddingIterator.class);
+    aiConfig1.addOption("amount", "2");
+    IteratorSetting aiConfig2 = new IteratorSetting(35, "MI1", MultiplyingIterator.class);
+    aiConfig2.addOption("amount", "3");
+    IteratorSetting aiConfig3 = new IteratorSetting(40, "AI2", AddingIterator.class);
+    aiConfig3.addOption("amount", "5");
+
+    conn.tableOperations().create(tableName);
+
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+
+    Mutation m = new Mutation("ACCUMULO-1000");
+    m.put("count", "comments", "6");
+    bw.addMutation(m);
+
+    m = new Mutation("ACCUMULO-1001");
+    m.put("count", "comments", "7");
+    bw.addMutation(m);
+
+    m = new Mutation("ACCUMULO-1002");
+    m.put("count", "comments", "8");
+    bw.addMutation(m);
+
+    bw.close();
+
+    conn.tableOperations().attachIterator(tableName, aiConfig1, EnumSet.of(IteratorScope.scan));
+    conn.tableOperations().offline(tableName, true);
+    conn.tableOperations().online(tableName, true);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig())) {
+
+      ConditionalMutation cm6 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setValue("8"));
+      cm6.put("count", "comments", "7");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm6).getStatus());
+
+      Scanner scanner = conn.createScanner(tableName, new Authorizations());
+      scanner.setRange(new Range("ACCUMULO-1000"));
+      scanner.fetchColumn(new Text("count"), new Text("comments"));
+
+      Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("9", entry.getValue().toString());
+
+      ConditionalMutation cm7 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(aiConfig2).setValue("27"));
+      cm7.put("count", "comments", "8");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm7).getStatus());
+
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("10", entry.getValue().toString());
+
+      ConditionalMutation cm8 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(aiConfig2, aiConfig3).setValue("35"));
+      cm8.put("count", "comments", "9");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm8).getStatus());
+
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("11", entry.getValue().toString());
+
+      ConditionalMutation cm3 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(aiConfig2).setValue("33"));
+      cm3.put("count", "comments", "3");
+
+      ConditionalMutation cm4 = new ConditionalMutation("ACCUMULO-1001", new Condition("count", "comments").setIterators(aiConfig3).setValue("14"));
+      cm4.put("count", "comments", "3");
+
+      ConditionalMutation cm5 = new ConditionalMutation("ACCUMULO-1002", new Condition("count", "comments").setIterators(aiConfig3).setValue("10"));
+      cm5.put("count", "comments", "3");
+
+      Iterator<Result> results = cw.write(Arrays.asList(cm3, cm4, cm5).iterator());
+      Map<String,Status> actual = new HashMap<>();
+
+      while (results.hasNext()) {
+        Result result = results.next();
+        String k = new String(result.getMutation().getRow(), UTF_8);
+        Assert.assertFalse("Did not expect to see multiple results for the row: " + k, actual.containsKey(k));
+        actual.put(k, result.getStatus());
+      }
+
+      Map<String,Status> expected = new HashMap<>();
+      expected.put("ACCUMULO-1000", Status.ACCEPTED);
+      expected.put("ACCUMULO-1001", Status.ACCEPTED);
+      expected.put("ACCUMULO-1002", Status.REJECTED);
+
+      Assert.assertEquals(expected, actual);
+    }
+  }
+
+  @Test
+  public void testBatch() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+
+    conn.securityOperations().changeUserAuthorizations(getAdminPrincipal(), new Authorizations("A", "B"));
+
+    ColumnVisibility cvab = new ColumnVisibility("A|B");
+
+    ArrayList<ConditionalMutation> mutations = new ArrayList<>();
+
+    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvab));
+    cm0.put("name", "last", cvab, "doe");
+    cm0.put("name", "first", cvab, "john");
+    cm0.put("tx", "seq", cvab, "1");
+    mutations.add(cm0);
+
+    ConditionalMutation cm1 = new ConditionalMutation("59056", new Condition("tx", "seq").setVisibility(cvab));
+    cm1.put("name", "last", cvab, "doe");
+    cm1.put("name", "first", cvab, "jane");
+    cm1.put("tx", "seq", cvab, "1");
+    mutations.add(cm1);
+
+    ConditionalMutation cm2 = new ConditionalMutation("19059", new Condition("tx", "seq").setVisibility(cvab));
+    cm2.put("name", "last", cvab, "doe");
+    cm2.put("name", "first", cvab, "jack");
+    cm2.put("tx", "seq", cvab, "1");
+    mutations.add(cm2);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(new Authorizations("A")))) {
+      Iterator<Result> results = cw.write(mutations.iterator());
+      int count = 0;
+      while (results.hasNext()) {
+        Result result = results.next();
+        Assert.assertEquals(Status.ACCEPTED, result.getStatus());
+        count++;
+      }
+
+      Assert.assertEquals(3, count);
+
+      Scanner scanner = conn.createScanner(tableName, new Authorizations("A"));
+      scanner.fetchColumn(new Text("tx"), new Text("seq"));
+
+      for (String row : new String[] {"99006", "59056", "19059"}) {
+        scanner.setRange(new Range(row));
+        Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+        Assert.assertEquals("1", entry.getValue().toString());
+      }
+
+      TreeSet<Text> splits = new TreeSet<>();
+      splits.add(new Text("7"));
+      splits.add(new Text("3"));
+      conn.tableOperations().addSplits(tableName, splits);
+
+      mutations.clear();
+
+      ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvab).setValue("1"));
+      cm3.put("name", "last", cvab, "Doe");
+      cm3.put("tx", "seq", cvab, "2");
+      mutations.add(cm3);
+
+      ConditionalMutation cm4 = new ConditionalMutation("59056", new Condition("tx", "seq").setVisibility(cvab));
+      cm4.put("name", "last", cvab, "Doe");
+      cm4.put("tx", "seq", cvab, "1");
+      mutations.add(cm4);
+
+      ConditionalMutation cm5 = new ConditionalMutation("19059", new Condition("tx", "seq").setVisibility(cvab).setValue("2"));
+      cm5.put("name", "last", cvab, "Doe");
+      cm5.put("tx", "seq", cvab, "3");
+      mutations.add(cm5);
+
+      results = cw.write(mutations.iterator());
+      int accepted = 0;
+      int rejected = 0;
+      while (results.hasNext()) {
+        Result result = results.next();
+        if (new String(result.getMutation().getRow(), UTF_8).equals("99006")) {
+          Assert.assertEquals(Status.ACCEPTED, result.getStatus());
+          accepted++;
+        } else {
+          Assert.assertEquals(Status.REJECTED, result.getStatus());
+          rejected++;
+        }
+      }
+
+      Assert.assertEquals("Expected only one accepted conditional mutation", 1, accepted);
+      Assert.assertEquals("Expected two rejected conditional mutations", 2, rejected);
+
+      for (String row : new String[] {"59056", "19059"}) {
+        scanner.setRange(new Range(row));
+        Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+        Assert.assertEquals("1", entry.getValue().toString());
+      }
+
+      scanner.setRange(new Range("99006"));
+      Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("2", entry.getValue().toString());
+
+      scanner.clearColumns();
+      scanner.fetchColumn(new Text("name"), new Text("last"));
+      entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("Doe", entry.getValue().toString());
+    }
+  }
+
+  @Test
+  public void testBigBatch() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+    conn.tableOperations().addSplits(tableName, nss("2", "4", "6"));
+
+    sleepUninterruptibly(2, TimeUnit.SECONDS);
+
+    int num = 100;
+
+    ArrayList<byte[]> rows = new ArrayList<>(num);
+    ArrayList<ConditionalMutation> cml = new ArrayList<>(num);
+
+    Random r = new Random();
+    byte[] e = new byte[0];
+
+    for (int i = 0; i < num; i++) {
+      rows.add(FastFormat.toZeroPaddedString(abs(r.nextLong()), 16, 16, e));
+    }
+
+    for (int i = 0; i < num; i++) {
+      ConditionalMutation cm = new ConditionalMutation(rows.get(i), new Condition("meta", "seq"));
+
+      cm.put("meta", "seq", "1");
+      cm.put("meta", "tx", UUID.randomUUID().toString());
+
+      cml.add(cm);
+    }
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig())) {
+
+      Iterator<Result> results = cw.write(cml.iterator());
+
+      int count = 0;
+
+      // TODO verify that a result was returned for each row
+      while (results.hasNext()) {
+        Result result = results.next();
+        Assert.assertEquals(Status.ACCEPTED, result.getStatus());
+        count++;
+      }
+
+      Assert.assertEquals("Did not receive the expected number of results", num, count);
+
+      ArrayList<ConditionalMutation> cml2 = new ArrayList<>(num);
+
+      for (int i = 0; i < num; i++) {
+        ConditionalMutation cm = new ConditionalMutation(rows.get(i), new Condition("meta", "seq").setValue("1"));
+
+        cm.put("meta", "seq", "2");
+        cm.put("meta", "tx", UUID.randomUUID().toString());
+
+        cml2.add(cm);
+      }
+
+      count = 0;
+
+      results = cw.write(cml2.iterator());
+
+      while (results.hasNext()) {
+        Result result = results.next();
+        Assert.assertEquals(Status.ACCEPTED, result.getStatus());
+        count++;
+      }
+
+      Assert.assertEquals("Did not receive the expected number of results", num, count);
+    }
+  }
+
+  @Test
+  public void testBatchErrors() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+    conn.tableOperations().addConstraint(tableName, AlphaNumKeyConstraint.class.getName());
+    conn.tableOperations().clone(tableName, tableName + "_clone", true, new HashMap<String,String>(), new HashSet<String>());
+
+    conn.securityOperations().changeUserAuthorizations(getAdminPrincipal(), new Authorizations("A", "B"));
+
+    ColumnVisibility cvaob = new ColumnVisibility("A|B");
+    ColumnVisibility cvaab = new ColumnVisibility("A&B");
+
+    // randomly add zero, one, or two splits; case 0 leaves the table unsplit
+    switch ((new Random()).nextInt(3)) {
+      case 1:
+        conn.tableOperations().addSplits(tableName, nss("6"));
+        break;
+      case 2:
+        conn.tableOperations().addSplits(tableName, nss("2", "95"));
+        break;
+    }
+
+    ArrayList<ConditionalMutation> mutations = new ArrayList<>();
+
+    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvaob));
+    cm0.put("name+", "last", cvaob, "doe");
+    cm0.put("name", "first", cvaob, "john");
+    cm0.put("tx", "seq", cvaob, "1");
+    mutations.add(cm0);
+
+    ConditionalMutation cm1 = new ConditionalMutation("59056", new Condition("tx", "seq").setVisibility(cvaab));
+    cm1.put("name", "last", cvaab, "doe");
+    cm1.put("name", "first", cvaab, "jane");
+    cm1.put("tx", "seq", cvaab, "1");
+    mutations.add(cm1);
+
+    ConditionalMutation cm2 = new ConditionalMutation("19059", new Condition("tx", "seq").setVisibility(cvaob));
+    cm2.put("name", "last", cvaob, "doe");
+    cm2.put("name", "first", cvaob, "jack");
+    cm2.put("tx", "seq", cvaob, "1");
+    mutations.add(cm2);
+
+    ConditionalMutation cm3 = new ConditionalMutation("90909", new Condition("tx", "seq").setVisibility(cvaob).setValue("1"));
+    cm3.put("name", "last", cvaob, "doe");
+    cm3.put("name", "first", cvaob, "john");
+    cm3.put("tx", "seq", cvaob, "2");
+    mutations.add(cm3);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(new Authorizations("A")))) {
+      Iterator<Result> results = cw.write(mutations.iterator());
+      HashSet<String> rows = new HashSet<>();
+      while (results.hasNext()) {
+        Result result = results.next();
+        String row = new String(result.getMutation().getRow(), UTF_8);
+        if (row.equals("19059")) {
+          Assert.assertEquals(Status.ACCEPTED, result.getStatus());
+        } else if (row.equals("59056")) {
+          Assert.assertEquals(Status.INVISIBLE_VISIBILITY, result.getStatus());
+        } else if (row.equals("99006")) {
+          Assert.assertEquals(Status.VIOLATED, result.getStatus());
+        } else if (row.equals("90909")) {
+          Assert.assertEquals(Status.REJECTED, result.getStatus());
+        }
+        rows.add(row);
+      }
+
+      Assert.assertEquals(4, rows.size());
+
+      Scanner scanner = conn.createScanner(tableName, new Authorizations("A"));
+      scanner.fetchColumn(new Text("tx"), new Text("seq"));
+
+      Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
+      Assert.assertEquals("1", entry.getValue().toString());
+    }
+  }
+
+  @Test
+  public void testSameRow() throws Exception {
+    // test multiple mutations for same row in same batch
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig())) {
+
+      ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
+      cm1.put("tx", "seq", "1");
+      cm1.put("data", "x", "a");
+
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
+
+      ConditionalMutation cm2 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
+      cm2.put("tx", "seq", "2");
+      cm2.put("data", "x", "b");
+
+      ConditionalMutation cm3 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
+      cm3.put("tx", "seq", "2");
+      cm3.put("data", "x", "c");
+
+      ConditionalMutation cm4 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
+      cm4.put("tx", "seq", "2");
+      cm4.put("data", "x", "d");
+
+      Iterator<Result> results = cw.write(Arrays.asList(cm2, cm3, cm4).iterator());
+
+      int accepted = 0;
+      int rejected = 0;
+      int total = 0;
+
+      while (results.hasNext()) {
+        Status status = results.next().getStatus();
+        if (status == Status.ACCEPTED)
+          accepted++;
+        if (status == Status.REJECTED)
+          rejected++;
+        total++;
+      }
+
+      Assert.assertEquals("Expected one accepted result", 1, accepted);
+      Assert.assertEquals("Expected two rejected results", 2, rejected);
+      Assert.assertEquals("Expected three total results", 3, total);
+    }
+  }
+
+  private static class Stats {
+
+    ByteSequence row = null;
+    int seq;
+    long sum;
+    int[] data = new int[10];
+
+    public Stats(Iterator<Entry<Key,Value>> iterator) {
+      while (iterator.hasNext()) {
+        Entry<Key,Value> entry = iterator.next();
+
+        if (row == null)
+          row = entry.getKey().getRowData();
+
+        String cf = entry.getKey().getColumnFamilyData().toString();
+        String cq = entry.getKey().getColumnQualifierData().toString();
+
+        if (cf.equals("data")) {
+          data[Integer.parseInt(cq)] = Integer.parseInt(entry.getValue().toString());
+        } else if (cf.equals("meta")) {
+          if (cq.equals("sum")) {
+            sum = Long.parseLong(entry.getValue().toString());
+          } else if (cq.equals("seq")) {
+            seq = Integer.parseInt(entry.getValue().toString());
+          }
+        }
+      }
+
+      long sum2 = 0;
+
+      for (int datum : data) {
+        sum2 += datum;
+      }
+
+      Assert.assertEquals(sum2, sum);
+    }
+
+    public Stats(ByteSequence row) {
+      this.row = row;
+      this.seq = -1;
+      this.sum = 0;
+    }
+
+    void set(int index, int value) {
+      sum -= data[index];
+      sum += value;
+      data[index] = value;
+    }
+
+    ConditionalMutation toMutation() {
+      Condition cond = new Condition("meta", "seq");
+      if (seq >= 0)
+        cond.setValue(seq + "");
+
+      ConditionalMutation cm = new ConditionalMutation(row, cond);
+
+      cm.put("meta", "seq", (seq + 1) + "");
+      cm.put("meta", "sum", (sum) + "");
+
+      for (int i = 0; i < data.length; i++) {
+        cm.put("data", i + "", data[i] + "");
+      }
+
+      return cm;
+    }
+
+    @Override
+    public String toString() {
+      return row + " " + seq + " " + sum;
+    }
+  }
+
+  private static class MutatorTask implements Runnable {
+    String tableName;
+    ArrayList<ByteSequence> rows;
+    ConditionalWriter cw;
+    Connector conn;
+    AtomicBoolean failed;
+
+    public MutatorTask(String tableName, Connector conn, ArrayList<ByteSequence> rows, ConditionalWriter cw, AtomicBoolean failed) {
+      this.tableName = tableName;
+      this.rows = rows;
+      this.conn = conn;
+      this.cw = cw;
+      this.failed = failed;
+    }
+
+    @Override
+    public void run() {
+      try (Scanner scanner = new IsolatedScanner(conn.createScanner(tableName, Authorizations.EMPTY))) {
+        Random rand = new Random();
+
+        for (int i = 0; i < 20; i++) {
+          int numRows = rand.nextInt(10) + 1;
+
+          ArrayList<ByteSequence> changes = new ArrayList<>(numRows);
+          ArrayList<ConditionalMutation> mutations = new ArrayList<>();
+
+          for (int j = 0; j < numRows; j++)
+            changes.add(rows.get(rand.nextInt(rows.size())));
+
+          for (ByteSequence row : changes) {
+            scanner.setRange(new Range(row.toString()));
+            Stats stats = new Stats(scanner.iterator());
+            stats.set(rand.nextInt(10), rand.nextInt(Integer.MAX_VALUE));
+            mutations.add(stats.toMutation());
+          }
+
+          ArrayList<ByteSequence> changed = new ArrayList<>(numRows);
+          Iterator<Result> results = cw.write(mutations.iterator());
+          while (results.hasNext()) {
+            Result result = results.next();
+            changed.add(new ArrayByteSequence(result.getMutation().getRow()));
+          }
+
+          Collections.sort(changes);
+          Collections.sort(changed);
+
+          Assert.assertEquals(changes, changed);
+        }
+      } catch (Exception e) {
+        log.error("{}", e.getMessage(), e);
+        failed.set(true);
+      }
+    }
+  }
+
+  @Test
+  public void testThreads() throws Exception {
+    // test multiple threads using a single conditional writer
+
+    String tableName = getUniqueNames(1)[0];
+    Connector conn = getConnector();
+
+    conn.tableOperations().create(tableName);
+
+    Random rand = new Random();
+
+    // randomly add zero, one, or two splits; case 0 leaves the table unsplit
+    switch (rand.nextInt(3)) {
+      case 1:
+        conn.tableOperations().addSplits(tableName, nss("4"));
+        break;
+      case 2:
+        conn.tableOperations().addSplits(tableName, nss("3", "5"));
+        break;
+    }
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig())) {
+
+      ArrayList<ByteSequence> rows = new ArrayList<>();
+
+      for (int i = 0; i < 1000; i++) {
+        rows.add(new ArrayByteSequence(FastFormat.toZeroPaddedString(abs(rand.nextLong()), 16, 16, new byte[0])));
+      }
+
+      ArrayList<ConditionalMutation> mutations = new ArrayList<>();
+
+      for (ByteSequence row : rows)
+        mutations.add(new Stats(row).toMutation());
+
+      ArrayList<ByteSequence> rows2 = new ArrayList<>();
+      Iterator<Result> results = cw.write(mutations.iterator());
+      while (results.hasNext()) {
+        Result result = results.next();
+        Assert.assertEquals(Status.ACCEPTED, result.getStatus());
+        rows2.add(new ArrayByteSequence(result.getMutation().getRow()));
+      }
+
+      Collections.sort(rows);
+      Collections.sort(rows2);
+
+      Assert.assertEquals(rows, rows2);
+
+      AtomicBoolean failed = new AtomicBoolean(false);
+
+      ExecutorService tp = Executors.newFixedThreadPool(5);
+      for (int i = 0; i < 5; i++) {
+        tp.submit(new MutatorTask(tableName, conn, rows, cw, failed));
+      }
+
+      tp.shutdown();
+
+      while (!tp.isTerminated()) {
+        tp.awaitTermination(1, TimeUnit.MINUTES);
+      }
+
+      Assert.assertFalse("A MutatorTask failed with an exception", failed.get());
+    }
+
+    try (Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY)) {
+
+      RowIterator rowIter = new RowIterator(scanner);
+
+      while (rowIter.hasNext()) {
+        Iterator<Entry<Key,Value>> row = rowIter.next();
+        new Stats(row);
+      }
+    }
+  }
+
+  private SortedSet<Text> nss(String... splits) {
+    TreeSet<Text> ret = new TreeSet<>();
+    for (String split : splits)
+      ret.add(new Text(split));
+
+    return ret;
+  }
+
+  @Test
+  public void testSecurity() throws Exception {
+    // test against tables the user does not have read and/or write permissions for
+    Connector conn = getConnector();
+    String user = null;
+    ClientConfiguration clientConf = cluster.getClientConfig();
+    final boolean saslEnabled = clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false);
+
+    // Create a new user
+    ClusterUser user1 = getUser(0);
+    user = user1.getPrincipal();
+    if (saslEnabled) {
+      conn.securityOperations().createLocalUser(user, null);
+    } else {
+      conn.securityOperations().createLocalUser(user, new PasswordToken(user1.getPassword()));
+    }
+
+    String[] tables = getUniqueNames(3);
+    String table1 = tables[0], table2 = tables[1], table3 = tables[2];
+
+    // Create three tables
+    conn.tableOperations().create(table1);
+    conn.tableOperations().create(table2);
+    conn.tableOperations().create(table3);
+
+    // Grant R on table1, W on table2, R/W on table3
+    conn.securityOperations().grantTablePermission(user, table1, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(user, table2, TablePermission.WRITE);
+    conn.securityOperations().grantTablePermission(user, table3, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(user, table3, TablePermission.WRITE);
+
+    // Login as the user
+    Connector conn2 = conn.getInstance().getConnector(user, user1.getToken());
+
+    ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
+    cm1.put("tx", "seq", "1");
+    cm1.put("data", "x", "a");
+
+    try (ConditionalWriter cw1 = conn2.createConditionalWriter(table1, new ConditionalWriterConfig());
+        ConditionalWriter cw2 = conn2.createConditionalWriter(table2, new ConditionalWriterConfig());
+        ConditionalWriter cw3 = conn2.createConditionalWriter(table3, new ConditionalWriterConfig())) {
+
+      // Should be able to conditional-update a table we have R/W on
+      Assert.assertEquals(Status.ACCEPTED, cw3.write(cm1).getStatus());
+
+      // Conditional-update to a table we only have read on should fail
+      try {
+        Status status = cw1.write(cm1).getStatus();
+        Assert.fail("Expected exception writing conditional mutation to table the user doesn't have write access to. Got status: " + status);
+      } catch (AccumuloSecurityException ase) {
+        // expected
+      }
+
+      // Conditional-update to a table we only have write on should fail
+      try {
+        Status status = cw2.write(cm1).getStatus();
+        Assert.fail("Expected exception writing conditional mutation to table the user doesn't have read access to. Got status: " + status);
+      } catch (AccumuloSecurityException ase) {
+        // expected
+      }
+    }
+  }
+
+  @Test
+  public void testTimeout() throws Exception {
+    Connector conn = getConnector();
+
+    String table = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(table);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig().setTimeout(3, TimeUnit.SECONDS))) {
+
+      ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
+      cm1.put("tx", "seq", "1");
+      cm1.put("data", "x", "a");
+
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
+
+      IteratorSetting is = new IteratorSetting(5, SlowIterator.class);
+      SlowIterator.setSeekSleepTime(is, 5000);
+
+      ConditionalMutation cm2 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1").setIterators(is));
+      cm2.put("tx", "seq", "2");
+      cm2.put("data", "x", "b");
+
+      Assert.assertEquals(Status.UNKNOWN, cw.write(cm2).getStatus());
+
+      Scanner scanner = conn.createScanner(table, Authorizations.EMPTY);
+
+      for (Entry<Key,Value> entry : scanner) {
+        String cf = entry.getKey().getColumnFamilyData().toString();
+        String cq = entry.getKey().getColumnQualifierData().toString();
+        String val = entry.getValue().toString();
+
+        if (cf.equals("tx") && cq.equals("seq"))
+          Assert.assertEquals("Unexpected value in tx:seq", "1", val);
+        else if (cf.equals("data") && cq.equals("x"))
+          Assert.assertEquals("Unexpected value in data:x", "a", val);
+        else
+          Assert.fail("Saw unexpected column family and qualifier: " + entry);
+      }
+
+      ConditionalMutation cm3 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
+      cm3.put("tx", "seq", "2");
+      cm3.put("data", "x", "b");
+
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm3).getStatus());
+    }
+  }
+
+  @Test
+  public void testDeleteTable() throws Exception {
+    String table = getUniqueNames(1)[0];
+    Connector conn = getConnector();
+
+    try {
+      conn.createConditionalWriter(table, new ConditionalWriterConfig());
+      Assert.fail("Creating conditional writer for table that doesn't exist should fail");
+    } catch (TableNotFoundException e) {}
+
+    conn.tableOperations().create(table);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig())) {
+
+      conn.tableOperations().delete(table);
+
+      ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
+      cm1.put("tx", "seq", "1");
+      cm1.put("data", "x", "a");
+
+      Result result = cw.write(cm1);
+
+      try {
+        Status status = result.getStatus();
+        Assert.fail("Expected exception writing conditional mutation to deleted table. Got status: " + status);
+      } catch (AccumuloException ae) {
+        Assert.assertEquals(TableDeletedException.class, ae.getCause().getClass());
+      }
+    }
+  }
+
+  @Test
+  public void testOffline() throws Exception {
+    String table = getUniqueNames(1)[0];
+    Connector conn = getConnector();
+
+    conn.tableOperations().create(table);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig())) {
+
+      conn.tableOperations().offline(table, true);
+
+      ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
+      cm1.put("tx", "seq", "1");
+      cm1.put("data", "x", "a");
+
+      Result result = cw.write(cm1);
+
+      try {
+        Status status = result.getStatus();
+        Assert.fail("Expected exception writing conditional mutation to offline table. Got status: " + status);
+      } catch (AccumuloException ae) {
+        Assert.assertEquals(TableOfflineException.class, ae.getCause().getClass());
+      }
+
+      try {
+        conn.createConditionalWriter(table, new ConditionalWriterConfig());
+        Assert.fail("Expected exception creating conditional writer to offline table");
+      } catch (TableOfflineException e) {}
+    }
+  }
+
+  @Test
+  public void testError() throws Exception {
+    String table = getUniqueNames(1)[0];
+    Connector conn = getConnector();
+
+    conn.tableOperations().create(table);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig())) {
+
+      IteratorSetting iterSetting = new IteratorSetting(5, BadIterator.class);
+
+      ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq").setIterators(iterSetting));
+      cm1.put("tx", "seq", "1");
+      cm1.put("data", "x", "a");
+
+      Result result = cw.write(cm1);
+
+      try {
+        Status status = result.getStatus();
+        Assert.fail("Expected exception using iterator which throws an error. Got status: " + status);
+      } catch (AccumuloException ae) {
+        // expected
+      }
+    }
+  }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testNoConditions() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
+    String table = getUniqueNames(1)[0];
+    Connector conn = getConnector();
+
+    conn.tableOperations().create(table);
+
+    try (ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig())) {
+
+      ConditionalMutation cm1 = new ConditionalMutation("r1");
+      cm1.put("tx", "seq", "1");
+      cm1.put("data", "x", "a");
+
+      cw.write(cm1);
+    }
+  }
+
+  @Test
+  public void testTrace() throws Exception {
+    // Need to add a getClientConfig() to AccumuloCluster
+    Assume.assumeTrue(getClusterType() == ClusterType.MINI);
+    Process tracer = null;
+    Connector conn = getConnector();
+    AccumuloCluster cluster = getCluster();
+    MiniAccumuloClusterImpl mac = (MiniAccumuloClusterImpl) cluster;
+    if (!conn.tableOperations().exists("trace")) {
+      tracer = mac.exec(TraceServer.class);
+      while (!conn.tableOperations().exists("trace")) {
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
+      }
+    }
+
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
+
+    DistributedTrace.enable("localhost", "testTrace", mac.getClientConfig());
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
+    Span root = Trace.on("traceTest");
+    try (ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig())) {
+
+      // mutation conditional on column tx:seq not existing
+      ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq"));
+      cm0.put("name", "last", "doe");
+      cm0.put("name", "first", "john");
+      cm0.put("tx", "seq", "1");
+      Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
+      root.stop();
+    }
+
+    final Scanner scanner = conn.createScanner("trace", Authorizations.EMPTY);
+    scanner.setRange(new Range(new Text(Long.toHexString(root.traceId()))));
+    loop: while (true) {
+      final StringBuilder finalBuffer = new StringBuilder();
+      int traceCount = TraceDump.printTrace(scanner, new Printer() {
+        @Override
+        public void print(final String line) {
+          try {
+            finalBuffer.append(line).append("\n");
+          } catch (Exception ex) {
+            throw new RuntimeException(ex);
+          }
+        }
+      });
+      String traceOutput = finalBuffer.toString();
+      log.info("Trace output: " + traceOutput);
+      if (traceCount > 0) {
+        int lastPos = 0;
+        for (String part : "traceTest, startScan,startConditionalUpdate,conditionalUpdate,Check conditions,apply conditional mutations".split(",")) {
+          log.info("Looking in trace output for '" + part + "'");
+          int pos = traceOutput.indexOf(part);
+          if (-1 == pos) {
+            log.info("Trace output doesn't contain '" + part + "'");
+            Thread.sleep(1000);
+            break loop;
+          }
+          assertTrue("Did not find '" + part + "' in output", pos > 0);
+          assertTrue("'" + part + "' occurred earlier than the previous element unexpectedly", pos > lastPos);
+          lastPos = pos;
+        }
+        break;
+      } else {
+        log.info("Ignoring trace output as traceCount not greater than zero: " + traceCount);
+        Thread.sleep(1000);
+      }
+    }
+    if (tracer != null) {
+      tracer.destroy();
+    }
+  }
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/ConfigurableMajorCompactionIT.java b/test/src/main/java/org/apache/accumulo/test/ConfigurableMajorCompactionIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/ConfigurableMajorCompactionIT.java
rename to test/src/main/java/org/apache/accumulo/test/ConfigurableMajorCompactionIT.java
index 2a13aed..20979af 100644
--- a/test/src/test/java/org/apache/accumulo/test/ConfigurableMajorCompactionIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ConfigurableMajorCompactionIT.java
@@ -34,7 +34,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.fate.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.tserver.compaction.CompactionPlan;
 import org.apache.accumulo.tserver.compaction.CompactionStrategy;
 import org.apache.accumulo.tserver.compaction.MajorCompactionRequest;
@@ -44,7 +44,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class ConfigurableMajorCompactionIT extends ConfigurableMacIT {
+public class ConfigurableMajorCompactionIT extends ConfigurableMacBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -53,7 +53,7 @@
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
-    Map<String,String> siteConfig = new HashMap<String,String>();
+    Map<String,String> siteConfig = new HashMap<>();
     siteConfig.put(Property.TSERV_MAJC_DELAY.getKey(), "1s");
     cfg.setSiteConfig(siteConfig);
   }
diff --git a/test/src/main/java/org/apache/accumulo/test/CreateRandomRFile.java b/test/src/main/java/org/apache/accumulo/test/CreateRandomRFile.java
index ada8504..2a05f4e 100644
--- a/test/src/main/java/org/apache/accumulo/test/CreateRandomRFile.java
+++ b/test/src/main/java/org/apache/accumulo/test/CreateRandomRFile.java
@@ -69,7 +69,7 @@
     FileSKVWriter mfw;
     try {
       FileSystem fs = FileSystem.get(conf);
-      mfw = new RFileOperations().openWriter(file, fs, conf, AccumuloConfiguration.getDefaultConfiguration());
+      mfw = new RFileOperations().newWriterBuilder().forFile(file, fs, conf).withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
     } catch (IOException e) {
       throw new RuntimeException(e);
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java b/test/src/main/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
rename to test/src/main/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
index 93b36ba..7fd2dd1 100644
--- a/test/src/test/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/CreateTableWithNewTableConfigIT.java
@@ -32,7 +32,7 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ServerColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -45,7 +45,7 @@
 /**
  *
  */
-public class CreateTableWithNewTableConfigIT extends SharedMiniClusterIT {
+public class CreateTableWithNewTableConfigIT extends SharedMiniClusterBase {
   static private final Logger log = LoggerFactory.getLogger(CreateTableWithNewTableConfigIT.class);
 
   @Override
@@ -55,12 +55,12 @@
 
   @BeforeClass
   public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
   }
 
   @AfterClass
   public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   public int numProperties(Connector connector, String tableName) throws AccumuloException, TableNotFoundException {
@@ -171,7 +171,7 @@
     log.info("Starting addCustomPropAndChangeExisting");
 
     // Create and populate initial properties map for creating table 1
-    Map<String,String> properties = new HashMap<String,String>();
+    Map<String,String> properties = new HashMap<>();
     String propertyName = Property.TABLE_SPLIT_THRESHOLD.getKey();
     String volume = "10K";
     properties.put(propertyName, volume);
diff --git a/test/src/test/java/org/apache/accumulo/test/DetectDeadTabletServersIT.java b/test/src/main/java/org/apache/accumulo/test/DetectDeadTabletServersIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/DetectDeadTabletServersIT.java
rename to test/src/main/java/org/apache/accumulo/test/DetectDeadTabletServersIT.java
index f7ee089..f207353 100644
--- a/test/src/test/java/org/apache/accumulo/test/DetectDeadTabletServersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/DetectDeadTabletServersIT.java
@@ -32,13 +32,13 @@
 import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.fate.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
 
-public class DetectDeadTabletServersIT extends ConfigurableMacIT {
+public class DetectDeadTabletServersIT extends ConfigurableMacBase {
 
   @Override
   protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
diff --git a/test/src/test/java/org/apache/accumulo/test/DumpConfigIT.java b/test/src/main/java/org/apache/accumulo/test/DumpConfigIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/DumpConfigIT.java
rename to test/src/main/java/org/apache/accumulo/test/DumpConfigIT.java
index 5252e68..245d61d 100644
--- a/test/src/test/java/org/apache/accumulo/test/DumpConfigIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/DumpConfigIT.java
@@ -28,17 +28,13 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.util.Admin;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.test.functional.FunctionalTestUtils;
 import org.apache.hadoop.conf.Configuration;
-import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
 
-public class DumpConfigIT extends ConfigurableMacIT {
-
-  @Rule
-  public TemporaryFolder folder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
+public class DumpConfigIT extends ConfigurableMacBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -52,6 +48,10 @@
 
   @Test
   public void test() throws Exception {
+    File target = new File(System.getProperty("user.dir"), "target");
+    assertTrue(target.exists() || target.mkdirs());
+    TemporaryFolder folder = new TemporaryFolder(target);
+    folder.create();
     File siteFileBackup = new File(folder.getRoot(), "accumulo-site.xml.bak");
     assertFalse(siteFileBackup.exists());
     assertEquals(0, exec(Admin.class, new String[] {"dumpConfig", "-a", "-d", folder.getRoot().getPath()}).waitFor());
diff --git a/test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java b/test/src/main/java/org/apache/accumulo/test/ExistingMacIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java
rename to test/src/main/java/org/apache/accumulo/test/ExistingMacIT.java
index 414cb4d..9a72051 100644
--- a/test/src/test/java/org/apache/accumulo/test/ExistingMacIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ExistingMacIT.java
@@ -25,6 +25,7 @@
 import java.util.Collection;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -39,19 +40,20 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class ExistingMacIT extends ConfigurableMacIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class ExistingMacIT extends ConfigurableMacBase {
   @Override
   public int defaultTimeoutSeconds() {
     return 2 * 60;
@@ -103,7 +105,7 @@
 
     // TODO clean out zookeeper? following sleep waits for ephemeral nodes to go away
     long zkTimeout = AccumuloConfiguration.getTimeInMillis(getCluster().getConfig().getSiteConfig().get(Property.INSTANCE_ZK_TIMEOUT.getKey()));
-    UtilWaitThread.sleep(zkTimeout + 500);
+    sleepUninterruptibly(zkTimeout + 500, TimeUnit.MILLISECONDS);
 
     File hadoopConfDir = createTestDir(ExistingMacIT.class.getSimpleName() + "_hadoop_conf");
     FileUtils.deleteQuietly(hadoopConfDir);
diff --git a/test/src/main/java/org/apache/accumulo/test/FairVolumeChooser.java b/test/src/main/java/org/apache/accumulo/test/FairVolumeChooser.java
index 2325086..7c94004 100644
--- a/test/src/main/java/org/apache/accumulo/test/FairVolumeChooser.java
+++ b/test/src/main/java/org/apache/accumulo/test/FairVolumeChooser.java
@@ -26,7 +26,7 @@
  */
 public class FairVolumeChooser implements VolumeChooser {
 
-  private final ConcurrentHashMap<Integer,Integer> optionLengthToLastChoice = new ConcurrentHashMap<Integer,Integer>();
+  private final ConcurrentHashMap<Integer,Integer> optionLengthToLastChoice = new ConcurrentHashMap<>();
 
   @Override
   public String choose(VolumeChooserEnvironment env, String[] options) {
diff --git a/test/src/main/java/org/apache/accumulo/test/FaultyConditionalWriter.java b/test/src/main/java/org/apache/accumulo/test/FaultyConditionalWriter.java
index adafdfb..b1a5e91 100644
--- a/test/src/main/java/org/apache/accumulo/test/FaultyConditionalWriter.java
+++ b/test/src/main/java/org/apache/accumulo/test/FaultyConditionalWriter.java
@@ -44,8 +44,8 @@
 
   @Override
   public Iterator<Result> write(Iterator<ConditionalMutation> mutations) {
-    ArrayList<Result> resultList = new ArrayList<Result>();
-    ArrayList<ConditionalMutation> writes = new ArrayList<ConditionalMutation>();
+    ArrayList<Result> resultList = new ArrayList<>();
+    ArrayList<ConditionalMutation> writes = new ArrayList<>();
 
     while (mutations.hasNext()) {
       ConditionalMutation cm = mutations.next();
diff --git a/test/src/test/java/org/apache/accumulo/test/FileArchiveIT.java b/test/src/main/java/org/apache/accumulo/test/FileArchiveIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/FileArchiveIT.java
rename to test/src/main/java/org/apache/accumulo/test/FileArchiveIT.java
index 2e45d80..8e51984 100644
--- a/test/src/test/java/org/apache/accumulo/test/FileArchiveIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/FileArchiveIT.java
@@ -29,13 +29,11 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.ServerConstants;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.LocalFileSystem;
 import org.apache.hadoop.fs.Path;
 import org.junit.Assert;
 import org.junit.Test;
@@ -45,7 +43,7 @@
 /**
  * Tests that files are archived instead of deleted when configured.
  */
-public class FileArchiveIT extends ConfigurableMacIT {
+public class FileArchiveIT extends ConfigurableMacBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -91,7 +89,7 @@
 
     log.info("File for table: " + file);
 
-    FileSystem fs = LocalFileSystem.get(CachedConfiguration.getInstance());
+    FileSystem fs = getCluster().getFileSystem();
     int i = 0;
     while (fs.exists(p)) {
       i++;
@@ -148,7 +146,7 @@
 
     log.info("File for table: " + file);
 
-    FileSystem fs = LocalFileSystem.get(CachedConfiguration.getInstance());
+    FileSystem fs = getCluster().getFileSystem();
     int i = 0;
     while (fs.exists(p)) {
       i++;
@@ -206,7 +204,7 @@
 
     log.info("File for table: " + file);
 
-    FileSystem fs = LocalFileSystem.get(CachedConfiguration.getInstance());
+    FileSystem fs = getCluster().getFileSystem();
     int i = 0;
     while (fs.exists(p)) {
       i++;
diff --git a/core/src/test/java/org/apache/accumulo/core/client/admin/FindMaxTest.java b/test/src/main/java/org/apache/accumulo/test/FindMaxIT.java
similarity index 60%
rename from core/src/test/java/org/apache/accumulo/core/client/admin/FindMaxTest.java
rename to test/src/main/java/org/apache/accumulo/test/FindMaxIT.java
index 2dc6ba5..96ab317 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/admin/FindMaxTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/FindMaxIT.java
@@ -14,26 +14,27 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client.admin;
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
 
 import java.util.ArrayList;
 import java.util.Map.Entry;
 
-import junit.framework.TestCase;
-
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
+import org.junit.Test;
 
-public class FindMaxTest extends TestCase {
+public class FindMaxIT extends AccumuloClusterHarness {
 
   private static Mutation nm(byte[] row) {
     Mutation m = new Mutation(new Text(row));
@@ -47,13 +48,14 @@
     return m;
   }
 
+  @Test
   public void test1() throws Exception {
-    MockInstance mi = new MockInstance();
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
 
-    Connector conn = mi.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("foo");
+    conn.tableOperations().create(tableName);
 
-    BatchWriter bw = conn.createBatchWriter("foo", new BatchWriterConfig());
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     bw.addMutation(nm(new byte[] {0}));
     bw.addMutation(nm(new byte[] {0, 0}));
@@ -64,48 +66,48 @@
     bw.addMutation(nm(new byte[] {(byte) 0xff}));
     bw.addMutation(nm(new byte[] {(byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff, (byte) 0xff}));
 
-    for (int i = 0; i < 1000; i++) {
+    for (int i = 0; i < 1000; i += 5) {
       bw.addMutation(nm(String.format("r%05d", i)));
     }
 
     bw.close();
 
-    Scanner scanner = conn.createScanner("foo", Authorizations.EMPTY);
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
 
-    ArrayList<Text> rows = new ArrayList<Text>();
+    ArrayList<Text> rows = new ArrayList<>();
 
     for (Entry<Key,Value> entry : scanner) {
       rows.add(entry.getKey().getRow());
     }
 
     for (int i = rows.size() - 1; i > 0; i--) {
-      Text max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), null, true, rows.get(i), false);
+      Text max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, null, true, rows.get(i), false);
       assertEquals(rows.get(i - 1), max);
 
-      max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), rows.get(i - 1), true, rows.get(i), false);
+      max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, rows.get(i - 1), true, rows.get(i), false);
       assertEquals(rows.get(i - 1), max);
 
-      max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), rows.get(i - 1), false, rows.get(i), false);
+      max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, rows.get(i - 1), false, rows.get(i), false);
       assertNull(max);
 
-      max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), null, true, rows.get(i), true);
+      max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, null, true, rows.get(i), true);
       assertEquals(rows.get(i), max);
 
-      max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), rows.get(i), true, rows.get(i), true);
+      max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, rows.get(i), true, rows.get(i), true);
       assertEquals(rows.get(i), max);
 
-      max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), rows.get(i - 1), false, rows.get(i), true);
+      max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, rows.get(i - 1), false, rows.get(i), true);
       assertEquals(rows.get(i), max);
 
     }
 
-    Text max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), null, true, null, true);
+    Text max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, null, true, null, true);
     assertEquals(rows.get(rows.size() - 1), max);
 
-    max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), null, true, new Text(new byte[] {0}), false);
+    max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, null, true, new Text(new byte[] {0}), false);
     assertNull(max);
 
-    max = FindMax.findMax(conn.createScanner("foo", Authorizations.EMPTY), null, true, new Text(new byte[] {0}), true);
+    max = conn.tableOperations().getMaxRow(tableName, Authorizations.EMPTY, null, true, new Text(new byte[] {0}), true);
     assertEquals(rows.get(0), max);
   }
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/GarbageCollectWALIT.java b/test/src/main/java/org/apache/accumulo/test/GarbageCollectWALIT.java
new file mode 100644
index 0000000..141ee27
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/GarbageCollectWALIT.java
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertEquals;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.fate.util.UtilWaitThread;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.junit.Test;
+
+import com.google.common.collect.Iterators;
+
+public class GarbageCollectWALIT extends ConfigurableMacBase {
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "5s");
+    cfg.setProperty(Property.GC_CYCLE_START, "1s");
+    cfg.setProperty(Property.GC_CYCLE_DELAY, "1s");
+    cfg.setNumTservers(1);
+    hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
+  }
+
+  @Test(timeout = 2 * 60 * 1000)
+  public void test() throws Exception {
+    // not yet, please
+    String tableName = getUniqueNames(1)[0];
+    cluster.getClusterControl().stop(ServerType.GARBAGE_COLLECTOR);
+    Connector c = getConnector();
+    c.tableOperations().create(tableName);
+    // count the number of WALs in the filesystem
+    assertEquals(2, countWALsInFS(cluster));
+    cluster.getClusterControl().stop(ServerType.TABLET_SERVER);
+    cluster.getClusterControl().start(ServerType.GARBAGE_COLLECTOR);
+    cluster.getClusterControl().start(ServerType.TABLET_SERVER);
+    Iterators.size(c.createScanner(MetadataTable.NAME, Authorizations.EMPTY).iterator());
+    // let GC run
+    UtilWaitThread.sleep(3 * 5 * 1000);
+    assertEquals(2, countWALsInFS(cluster));
+  }
+
+  private int countWALsInFS(MiniAccumuloClusterImpl cluster) throws Exception {
+    FileSystem fs = cluster.getFileSystem();
+    RemoteIterator<LocatedFileStatus> iterator = fs.listFiles(new Path(cluster.getConfig().getAccumuloDir() + "/wal"), true);
+    int result = 0;
+    while (iterator.hasNext()) {
+      LocatedFileStatus next = iterator.next();
+      if (!next.isDirectory()) {
+        result++;
+      }
+    }
+    return result;
+  }
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/GenerateSequentialRFile.java b/test/src/main/java/org/apache/accumulo/test/GenerateSequentialRFile.java
index 64a1aaf..16a3122 100644
--- a/test/src/main/java/org/apache/accumulo/test/GenerateSequentialRFile.java
+++ b/test/src/main/java/org/apache/accumulo/test/GenerateSequentialRFile.java
@@ -58,7 +58,8 @@
       final Configuration conf = new Configuration();
       Path p = new Path(opts.filePath);
       final FileSystem fs = p.getFileSystem(conf);
-      FileSKVWriter writer = FileOperations.getInstance().openWriter(opts.filePath, fs, conf, DefaultConfiguration.getInstance());
+      FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder().forFile(opts.filePath, fs, conf)
+          .withTableConfiguration(DefaultConfiguration.getInstance()).build();
 
       writer.startDefaultLocalityGroup();
 
diff --git a/test/src/main/java/org/apache/accumulo/test/GetFileInfoBulkIT.java b/test/src/main/java/org/apache/accumulo/test/GetFileInfoBulkIT.java
new file mode 100644
index 0000000..22a8cef
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/GetFileInfoBulkIT.java
@@ -0,0 +1,170 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileOperations;
+import org.apache.accumulo.core.file.FileSKVWriter;
+import org.apache.accumulo.core.file.rfile.RFile;
+import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
+import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.accumulo.test.functional.FunctionalTestUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.junit.Test;
+
+import com.google.common.util.concurrent.Uninterruptibles;
+import com.google.gson.Gson;
+
+// ACCUMULO-3949, ACCUMULO-3953
+public class GetFileInfoBulkIT extends ConfigurableMacBase {
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setNumTservers(1);
+    cfg.useMiniDFS(true);
+    cfg.setProperty(Property.GC_FILE_ARCHIVE, "false");
+  }
+
+  @SuppressWarnings("unchecked")
+  long getOpts() throws Exception {
+    String uri = getCluster().getMiniDfs().getHttpUri(0);
+    URL url = new URL(uri + "/jmx");
+    log.debug("Fetching web page " + url);
+    String jsonString = FunctionalTestUtils.readAll(url.openStream());
+    Gson gson = new Gson();
+    Map<Object,Object> jsonObject = (Map<Object,Object>) gson.fromJson(jsonString, Object.class);
+    List<Object> beans = (List<Object>) jsonObject.get("beans");
+    for (Object bean : beans) {
+      Map<Object,Object> map = (Map<Object,Object>) bean;
+      if (map.get("name").toString().equals("Hadoop:service=NameNode,name=NameNodeActivity")) {
+        return (long) Double.parseDouble(map.get("FileInfoOps").toString());
+      }
+    }
+    return 0;
+  }
+
+  @Test
+  public void test() throws Exception {
+    final Connector c = getConnector();
+    getCluster().getClusterControl().kill(ServerType.GARBAGE_COLLECTOR, "localhost");
+    final String tableName = getUniqueNames(1)[0];
+    c.tableOperations().create(tableName);
+    // turn off compactions
+    c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "2000");
+    c.tableOperations().setProperty(tableName, Property.TABLE_FILE_MAX.getKey(), "2000");
+    // splits to slow down bulk import
+    SortedSet<Text> splits = new TreeSet<>();
+    for (int i = 1; i < 0xf; i++) {
+      splits.add(new Text(Integer.toHexString(i)));
+    }
+    c.tableOperations().addSplits(tableName, splits);
+
+    MasterMonitorInfo stats = getCluster().getMasterMonitorInfo();
+    assertEquals(1, stats.tServerInfo.size());
+
+    log.info("Creating lots of bulk import files");
+    final FileSystem fs = getCluster().getFileSystem();
+    final Path basePath = getCluster().getTemporaryPath();
+    CachedConfiguration.setInstance(fs.getConf());
+
+    final Path base = new Path(basePath, "testBulkLoad" + tableName);
+    fs.delete(base, true);
+    fs.mkdirs(base);
+
+    ExecutorService es = Executors.newFixedThreadPool(5);
+    List<Future<Pair<String,String>>> futures = new ArrayList<>();
+    for (int i = 0; i < 10; i++) {
+      final int which = i;
+      futures.add(es.submit(new Callable<Pair<String,String>>() {
+        @Override
+        public Pair<String,String> call() throws Exception {
+          Path bulkFailures = new Path(base, "failures" + which);
+          Path files = new Path(base, "files" + which);
+          fs.mkdirs(bulkFailures);
+          fs.mkdirs(files);
+          for (int i = 0; i < 100; i++) {
+            FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder()
+                .forFile(files.toString() + "/bulk_" + i + "." + RFile.EXTENSION, fs, fs.getConf())
+                .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
+            writer.startDefaultLocalityGroup();
+            for (int j = 0x100; j < 0xfff; j += 3) {
+              writer.append(new Key(Integer.toHexString(j)), new Value(new byte[0]));
+            }
+            writer.close();
+          }
+          return new Pair<>(files.toString(), bulkFailures.toString());
+        }
+      }));
+    }
+    List<Pair<String,String>> dirs = new ArrayList<>();
+    for (Future<Pair<String,String>> f : futures) {
+      dirs.add(f.get());
+    }
+    log.info("Importing");
+    long startOps = getOpts();
+    long now = System.currentTimeMillis();
+    List<Future<Object>> errs = new ArrayList<>();
+    for (Pair<String,String> entry : dirs) {
+      final String dir = entry.getFirst();
+      final String err = entry.getSecond();
+      errs.add(es.submit(new Callable<Object>() {
+        @Override
+        public Object call() throws Exception {
+          c.tableOperations().importDirectory(tableName, dir, err, false);
+          return null;
+        }
+      }));
+    }
+    for (Future<Object> err : errs) {
+      err.get();
+    }
+    es.shutdown();
+    es.awaitTermination(2, TimeUnit.MINUTES);
+    log.info(String.format("Completed in %.2f seconds", (System.currentTimeMillis() - now) / 1000.));
+    Uninterruptibles.sleepUninterruptibly(30, TimeUnit.SECONDS);
+    long getFileInfoOpts = getOpts() - startOps;
+    log.info("# opts: {}", getFileInfoOpts);
+    assertTrue("unexpected number of getFileInfoOps", getFileInfoOpts < 2100 && getFileInfoOpts > 1000);
+  }
+
+}
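The `getOpts()` helper above scrapes the NameNode's `/jmx` servlet and walks the parsed `beans` list for the `NameNodeActivity` bean's `FileInfoOps` counter. A minimal regex-based sketch of the same extraction, assuming the flat `"FileInfoOps":<number>` shape shown in the test (the test itself uses a JSON parser; this shortcut is illustrative only):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JmxCounter {
  // Pull a numeric counter such as FileInfoOps out of a NameNode /jmx payload.
  // Returns 0 when the counter is absent, matching the test's fallback.
  public static long extract(String json, String counter) {
    Matcher m = Pattern.compile("\"" + Pattern.quote(counter) + "\"\\s*:\\s*([0-9.]+)").matcher(json);
    return m.find() ? (long) Double.parseDouble(m.group(1)) : 0;
  }

  public static void main(String[] args) {
    String sample = "{\"beans\":[{\"name\":\"Hadoop:service=NameNode,name=NameNodeActivity\",\"FileInfoOps\":1234.0}]}";
    System.out.println(extract(sample, "FileInfoOps")); // prints 1234
  }
}
```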
diff --git a/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java b/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java
index 5c2cbf3..0d0449e 100644
--- a/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java
+++ b/test/src/main/java/org/apache/accumulo/test/GetMasterStats.java
@@ -21,6 +21,7 @@
 import java.util.Map.Entry;
 
 import org.apache.accumulo.core.client.impl.MasterClient;
+import org.apache.accumulo.core.master.thrift.BulkImportStatus;
 import org.apache.accumulo.core.master.thrift.DeadServer;
 import org.apache.accumulo.core.master.thrift.MasterClientService;
 import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
@@ -67,6 +68,12 @@
       out(2, "Last report: %s", new SimpleDateFormat().format(new Date(dead.lastStatus)));
       out(2, "Cause: %s", dead.status);
     }
+    out(0, "Bulk imports: %s", stats.bulkImports.size());
+    for (BulkImportStatus bulk : stats.bulkImports) {
+      out(1, "Import directory: %s", bulk.filename);
+      out(2, "Bulk state: %s", bulk.state);
+      out(2, "Bulk start: %s", bulk.startTime);
+    }
     if (stats.tableMap != null && stats.tableMap.size() > 0) {
       out(0, "Tables");
       for (Entry<String,TableInfo> entry : stats.tableMap.entrySet()) {
@@ -117,6 +124,13 @@
           out(3, "Progress: %.2f%%", sort.progress * 100);
           out(3, "Time running: %s", sort.runtime / 1000.);
         }
+        out(3, "Bulk imports: %s", stats.bulkImports.size());
+        for (BulkImportStatus bulk : stats.bulkImports) {
+          out(4, "Import file: %s", bulk.filename);
+          out(5, "Bulk state: %s", bulk.state);
+          out(5, "Bulk start: %s", bulk.startTime);
+        }
+
       }
     }
   }
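GetMasterStats renders its whole report through an `out(indent, format, args)` helper whose body is not shown in this diff. A plausible sketch of that helper, assuming two spaces of prefix per indent level (the indent width is an assumption, not taken from the source):

```java
public class IndentOut {
  // Hypothetical stand-in for GetMasterStats.out: indent, then printf-style format.
  static String render(int indent, String fmt, Object... args) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < indent; i++) {
      sb.append("  "); // two spaces per level (assumed)
    }
    return sb.append(String.format(fmt, args)).toString();
  }

  public static void main(String[] args) {
    System.out.println(render(2, "Bulk state: %s", "LOADING"));
  }
}
```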
diff --git a/test/src/main/java/org/apache/accumulo/test/IMMLGBenchmark.java b/test/src/main/java/org/apache/accumulo/test/IMMLGBenchmark.java
index 1c3d89e..59c46ec 100644
--- a/test/src/main/java/org/apache/accumulo/test/IMMLGBenchmark.java
+++ b/test/src/main/java/org/apache/accumulo/test/IMMLGBenchmark.java
@@ -25,6 +25,7 @@
 import java.util.Random;
 import java.util.Set;
 import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -42,10 +43,10 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.FastFormat;
 import org.apache.accumulo.core.util.Stat;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 /**
  *
@@ -57,13 +58,13 @@
 
     int numlg = Integer.parseInt(args[0]);
 
-    ArrayList<byte[]> cfset = new ArrayList<byte[]>();
+    ArrayList<byte[]> cfset = new ArrayList<>();
 
     for (int i = 0; i < 32; i++) {
       cfset.add(String.format("%04x", i).getBytes());
     }
 
-    Map<String,Stat> stats = new TreeMap<String,Stat>();
+    Map<String,Stat> stats = new TreeMap<>();
 
     for (int i = 0; i < 5; i++) {
       runTest(conn, numlg, cfset, i > 1 ? stats : null);
@@ -168,9 +169,9 @@
       int gNum = 0;
 
       Iterator<byte[]> cfiter = cfset.iterator();
-      Map<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
+      Map<String,Set<Text>> groups = new HashMap<>();
       while (cfiter.hasNext()) {
-        HashSet<Text> groupCols = new HashSet<Text>();
+        HashSet<Text> groupCols = new HashSet<>();
         for (int i = 0; i < numCF && cfiter.hasNext(); i++) {
           groupCols.add(new Text(cfiter.next()));
         }
@@ -180,7 +181,7 @@
 
       conn.tableOperations().setLocalityGroups(table, groups);
       conn.tableOperations().offline(table);
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
       conn.tableOperations().online(table);
     }
   }
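The IMMLGBenchmark hunk above swaps `UtilWaitThread.sleep(1000)` for Guava's `sleepUninterruptibly(1, TimeUnit.SECONDS)`. The point of the swap is the interrupt contract: if the thread is interrupted mid-sleep, the Guava helper keeps sleeping for the remaining time and restores the thread's interrupt flag before returning. A simplified sketch of that contract (not Guava's exact code):

```java
import java.util.concurrent.TimeUnit;

public class Sleeper {
  // Sleep for the full duration even across interrupts, then restore the flag.
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    long remainingNanos = unit.toNanos(duration);
    long end = System.nanoTime() + remainingNanos;
    try {
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          remainingNanos = 0;
        } catch (InterruptedException e) {
          interrupted = true; // remember, keep sleeping the remainder
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // hand the interrupt back to callers
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    System.out.println(System.nanoTime() - start >= 40_000_000L); // prints true
  }
}
```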
diff --git a/test/src/test/java/org/apache/accumulo/test/ImportExportIT.java b/test/src/main/java/org/apache/accumulo/test/ImportExportIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/ImportExportIT.java
rename to test/src/main/java/org/apache/accumulo/test/ImportExportIT.java
index f30a970..ddc9404 100644
--- a/test/src/test/java/org/apache/accumulo/test/ImportExportIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ImportExportIT.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.test;
 
-import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 import java.io.BufferedReader;
@@ -36,11 +35,10 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileUtil;
-import org.apache.hadoop.fs.FsShell;
 import org.apache.hadoop.fs.Path;
 import org.junit.Assert;
 import org.junit.Test;
@@ -55,7 +53,7 @@
  * ACCUMULO-3215
  *
  */
-public class ImportExportIT extends AccumuloClusterIT {
+public class ImportExportIT extends AccumuloClusterHarness {
 
   private static final Logger log = LoggerFactory.getLogger(ImportExportIT.class);
 
@@ -103,9 +101,6 @@
       assertTrue("Failed to create " + baseDir, fs.mkdirs(p));
     }
 
-    FsShell fsShell = new FsShell(fs.getConf());
-    assertEquals("Failed to chmod " + baseDir, 0, fsShell.run(new String[] {"-chmod", "-R", "777", baseDir.toString()}));
-
     log.info("Exporting table to {}", exportDir);
     log.info("Importing table from {}", importDir);
 
diff --git a/test/src/test/java/org/apache/accumulo/test/InMemoryMapIT.java b/test/src/main/java/org/apache/accumulo/test/InMemoryMapIT.java
similarity index 81%
rename from test/src/test/java/org/apache/accumulo/test/InMemoryMapIT.java
rename to test/src/main/java/org/apache/accumulo/test/InMemoryMapIT.java
index 6eec2e8..8c1bc07 100644
--- a/test/src/test/java/org/apache/accumulo/test/InMemoryMapIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/InMemoryMapIT.java
@@ -27,7 +27,11 @@
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.Set;
+
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ArrayByteSequence;
 import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
@@ -201,11 +205,26 @@
     InMemoryMap localityGroupMapWithNative = null;
 
     try {
-      defaultMap = new InMemoryMap(false, tempFolder.newFolder().getAbsolutePath());
-      nativeMapWrapper = new InMemoryMap(true, tempFolder.newFolder().getAbsolutePath());
-      localityGroupMap = new InMemoryMap(getLocalityGroups(), false, tempFolder.newFolder().getAbsolutePath());
-      localityGroupMapWithNative = new InMemoryMap(getLocalityGroups(), true, tempFolder.newFolder().getAbsolutePath());
-    } catch (IOException e) {
+      Map<String,String> defaultMapConfig = new HashMap<>();
+      defaultMapConfig.put(Property.TSERV_NATIVEMAP_ENABLED.getKey(), "false");
+      defaultMapConfig.put(Property.TSERV_MEMDUMP_DIR.getKey(), tempFolder.newFolder().getAbsolutePath());
+      defaultMapConfig.put(Property.TABLE_LOCALITY_GROUPS.getKey(), "");
+      Map<String,String> nativeMapConfig = new HashMap<>();
+      nativeMapConfig.put(Property.TSERV_NATIVEMAP_ENABLED.getKey(), "true");
+      nativeMapConfig.put(Property.TSERV_MEMDUMP_DIR.getKey(), tempFolder.newFolder().getAbsolutePath());
+      nativeMapConfig.put(Property.TABLE_LOCALITY_GROUPS.getKey(), "");
+      Map<String,String> localityGroupConfig = new HashMap<>();
+      localityGroupConfig.put(Property.TSERV_NATIVEMAP_ENABLED.getKey(), "false");
+      localityGroupConfig.put(Property.TSERV_MEMDUMP_DIR.getKey(), tempFolder.newFolder().getAbsolutePath());
+      Map<String,String> localityGroupNativeConfig = new HashMap<>();
+      localityGroupNativeConfig.put(Property.TSERV_NATIVEMAP_ENABLED.getKey(), "true");
+      localityGroupNativeConfig.put(Property.TSERV_MEMDUMP_DIR.getKey(), tempFolder.newFolder().getAbsolutePath());
+
+      defaultMap = new InMemoryMap(new ConfigurationCopy(defaultMapConfig));
+      nativeMapWrapper = new InMemoryMap(new ConfigurationCopy(nativeMapConfig));
+      localityGroupMap = new InMemoryMap(updateConfigurationForLocalityGroups(new ConfigurationCopy(localityGroupConfig)));
+      localityGroupMapWithNative = new InMemoryMap(updateConfigurationForLocalityGroups(new ConfigurationCopy(localityGroupNativeConfig)));
+    } catch (Exception e) {
       log.error("Error getting new InMemoryMap ", e);
       fail(e.getMessage());
     }
@@ -268,7 +287,7 @@
   private List<MemKey> getArrayOfMemKeys(InMemoryMap imm) {
     SortedKeyValueIterator<Key,Value> skvi = imm.compactionIterator();
 
-    List<MemKey> memKeys = new ArrayList<MemKey>();
+    List<MemKey> memKeys = new ArrayList<>();
     try {
       skvi.seek(new Range(), new ArrayList<ByteSequence>(), false); // everything
       while (skvi.hasTop()) {
@@ -299,15 +318,38 @@
   }
 
   private int getUniqKVCount(List<MemKey> memKeys) {
-    List<Integer> kvCounts = new ArrayList<Integer>();
+    List<Integer> kvCounts = new ArrayList<>();
     for (MemKey m : memKeys) {
       kvCounts.add(m.getKVCount());
     }
     return ImmutableSet.copyOf(kvCounts).size();
   }
 
+  private ConfigurationCopy updateConfigurationForLocalityGroups(ConfigurationCopy configuration) {
+    Map<String,Set<ByteSequence>> locGroups = getLocalityGroups();
+    StringBuilder enabledLGs = new StringBuilder();
+
+    for (Entry<String,Set<ByteSequence>> entry : locGroups.entrySet()) {
+      if (enabledLGs.length() > 0) {
+        enabledLGs.append(",");
+      }
+
+      StringBuilder value = new StringBuilder();
+      for (ByteSequence bytes : entry.getValue()) {
+        if (value.length() > 0) {
+          value.append(",");
+        }
+        value.append(new String(bytes.toArray()));
+      }
+      configuration.set("table.group." + entry.getKey(), value.toString());
+      enabledLGs.append(entry.getKey());
+    }
+    configuration.set(Property.TABLE_LOCALITY_GROUPS, enabledLGs.toString());
+    return configuration;
+  }
+
   private Map<String,Set<ByteSequence>> getLocalityGroups() {
-    Map<String,Set<ByteSequence>> locgro = new HashMap<String,Set<ByteSequence>>();
+    Map<String,Set<ByteSequence>> locgro = new HashMap<>();
     locgro.put("a", newCFSet("cf", "cf2"));
     locgro.put("b", newCFSet("cf3", "cf4"));
     return locgro;
@@ -315,7 +357,7 @@
 
   // from InMemoryMapTest
   private Set<ByteSequence> newCFSet(String... cfs) {
-    HashSet<ByteSequence> cfSet = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> cfSet = new HashSet<>();
     for (String cf : cfs) {
       cfSet.add(new ArrayByteSequence(cf));
     }
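`updateConfigurationForLocalityGroups` above assembles two comma-separated property values with nested StringBuilders. The same joining logic can be sketched more directly with `String.join`; the map shape and `toProps` name here are illustrative, and mapping `TABLE_LOCALITY_GROUPS` to the `table.groups.enabled` key is an assumption made for this sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class GroupJoin {
  // Build table.group.<name> entries plus the enabled-groups list from a
  // group-name -> column-families map (plain strings stand in for ByteSequence).
  static Map<String,String> toProps(Map<String,Set<String>> groups) {
    Map<String,String> props = new LinkedHashMap<>();
    for (Map.Entry<String,Set<String>> e : groups.entrySet()) {
      props.put("table.group." + e.getKey(), String.join(",", e.getValue()));
    }
    props.put("table.groups.enabled", String.join(",", groups.keySet()));
    return props;
  }

  public static void main(String[] args) {
    Map<String,Set<String>> g = new TreeMap<>();
    g.put("a", new TreeSet<>(Set.of("cf", "cf2")));
    g.put("b", new TreeSet<>(Set.of("cf3", "cf4")));
    System.out.println(toProps(g));
  }
}
```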
diff --git a/test/src/main/java/org/apache/accumulo/test/InMemoryMapMemoryUsageTest.java b/test/src/main/java/org/apache/accumulo/test/InMemoryMapMemoryUsageTest.java
index fb0050f..05b405e 100644
--- a/test/src/main/java/org/apache/accumulo/test/InMemoryMapMemoryUsageTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/InMemoryMapMemoryUsageTest.java
@@ -18,9 +18,11 @@
 
 import java.util.Collections;
 
+import org.apache.accumulo.core.conf.DefaultConfiguration;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.util.LocalityGroupUtil.LocalityGroupConfigurationError;
 import org.apache.accumulo.tserver.InMemoryMap;
 import org.apache.hadoop.io.Text;
 
@@ -51,7 +53,11 @@
 
   @Override
   void init() {
-    imm = new InMemoryMap(false, "/tmp");
+    try {
+      imm = new InMemoryMap(DefaultConfiguration.getInstance());
+    } catch (LocalityGroupConfigurationError e) {
+      throw new RuntimeException(e);
+    }
     key = new Text();
 
     colf = new Text(String.format("%0" + colFamLen + "d", 0));
diff --git a/test/src/test/java/org/apache/accumulo/test/InterruptibleScannersIT.java b/test/src/main/java/org/apache/accumulo/test/InterruptibleScannersIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/InterruptibleScannersIT.java
rename to test/src/main/java/org/apache/accumulo/test/InterruptibleScannersIT.java
index 35d4048..bdf62a1 100644
--- a/test/src/test/java/org/apache/accumulo/test/InterruptibleScannersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/InterruptibleScannersIT.java
@@ -24,7 +24,7 @@
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.admin.ActiveScan;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.functional.SlowIterator;
 import org.apache.hadoop.conf.Configuration;
@@ -34,7 +34,7 @@
 import com.google.common.collect.Iterators;
 
 // ACCUMULO-3030
-public class InterruptibleScannersIT extends AccumuloClusterIT {
+public class InterruptibleScannersIT extends AccumuloClusterHarness {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -67,7 +67,7 @@
           // ensure the scan is running: not perfect, the metadata tables could be scanned, too.
           String tserver = conn.instanceOperations().getTabletServers().iterator().next();
           do {
-            ArrayList<ActiveScan> scans = new ArrayList<ActiveScan>(conn.instanceOperations().getActiveScans(tserver));
+            ArrayList<ActiveScan> scans = new ArrayList<>(conn.instanceOperations().getActiveScans(tserver));
             Iterator<ActiveScan> iter = scans.iterator();
             while (iter.hasNext()) {
               ActiveScan scan = iter.next();
diff --git a/test/src/test/java/org/apache/accumulo/test/IsolationAndDeepCopyIT.java b/test/src/main/java/org/apache/accumulo/test/IsolationAndDeepCopyIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/IsolationAndDeepCopyIT.java
rename to test/src/main/java/org/apache/accumulo/test/IsolationAndDeepCopyIT.java
index 6af1fdf..5309525 100644
--- a/test/src/test/java/org/apache/accumulo/test/IsolationAndDeepCopyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/IsolationAndDeepCopyIT.java
@@ -31,12 +31,12 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.user.IntersectingIterator;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class IsolationAndDeepCopyIT extends AccumuloClusterIT {
+public class IsolationAndDeepCopyIT extends AccumuloClusterHarness {
 
   @Test
   public void testBugFix() throws Exception {
diff --git a/test/src/test/java/org/apache/accumulo/test/KeyValueEqualityIT.java b/test/src/main/java/org/apache/accumulo/test/KeyValueEqualityIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/KeyValueEqualityIT.java
rename to test/src/main/java/org/apache/accumulo/test/KeyValueEqualityIT.java
index 1bcd82c..b0734b4 100644
--- a/test/src/test/java/org/apache/accumulo/test/KeyValueEqualityIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/KeyValueEqualityIT.java
@@ -27,11 +27,11 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class KeyValueEqualityIT extends AccumuloClusterIT {
+public class KeyValueEqualityIT extends AccumuloClusterHarness {
 
   @Override
   public int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/LargeSplitRowIT.java b/test/src/main/java/org/apache/accumulo/test/LargeSplitRowIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/LargeSplitRowIT.java
rename to test/src/main/java/org/apache/accumulo/test/LargeSplitRowIT.java
index a465955..52e331a 100644
--- a/test/src/test/java/org/apache/accumulo/test/LargeSplitRowIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/LargeSplitRowIT.java
@@ -37,7 +37,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.conf.TableConfiguration;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
@@ -45,14 +45,14 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class LargeSplitRowIT extends ConfigurableMacIT {
+public class LargeSplitRowIT extends ConfigurableMacBase {
   static private final Logger log = LoggerFactory.getLogger(LargeSplitRowIT.class);
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
     cfg.setNumTservers(1);
 
-    Map<String,String> siteConfig = new HashMap<String,String>();
+    Map<String,String> siteConfig = new HashMap<>();
     siteConfig.put(Property.TSERV_MAJC_DELAY.getKey(), "50ms");
     cfg.setSiteConfig(siteConfig);
   }
@@ -77,7 +77,7 @@
     batchWriter.close();
 
     // Create a split point that is too large to be an end row and fill it with all 'm'
-    SortedSet<Text> partitionKeys = new TreeSet<Text>();
+    SortedSet<Text> partitionKeys = new TreeSet<>();
     byte data[] = new byte[(int) (TableConfiguration.getMemoryInBytes(Property.TABLE_MAX_END_ROW_SIZE.getDefaultValue()) + 2)];
     for (int i = 0; i < data.length; i++) {
       data[i] = 'm';
diff --git a/test/src/main/java/org/apache/accumulo/test/LocatorIT.java b/test/src/main/java/org/apache/accumulo/test/LocatorIT.java
new file mode 100644
index 0000000..f5caf3c
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/LocatorIT.java
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.test;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeSet;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.TableOfflineException;
+import org.apache.accumulo.core.client.admin.Locations;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.TabletId;
+import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.data.impl.TabletIdImpl;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.io.Text;
+import org.junit.Assert;
+import org.junit.Test;
+
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+
+public class LocatorIT extends AccumuloClusterHarness {
+
+  private void assertContains(Locations locations, HashSet<String> tservers, Map<Range,ImmutableSet<TabletId>> expected1,
+      Map<TabletId,ImmutableSet<Range>> expected2) {
+
+    Map<Range,Set<TabletId>> gbr = new HashMap<>();
+    for (Entry<Range,List<TabletId>> entry : locations.groupByRange().entrySet()) {
+      gbr.put(entry.getKey(), new HashSet<>(entry.getValue()));
+    }
+
+    Assert.assertEquals(expected1, gbr);
+
+    Map<TabletId,Set<Range>> gbt = new HashMap<>();
+    for (Entry<TabletId,List<Range>> entry : locations.groupByTablet().entrySet()) {
+      gbt.put(entry.getKey(), new HashSet<>(entry.getValue()));
+
+      TabletId tid = entry.getKey();
+      String location = locations.getTabletLocation(tid);
+      Assert.assertNotNull("Location for " + tid + " was null", location);
+      Assert.assertTrue("Unknown location " + location, tservers.contains(location));
+      Assert.assertTrue("Expected <host>:<port> " + location, location.split(":").length == 2);
+
+    }
+
+    Assert.assertEquals(expected2, gbt);
+  }
+
+  private static TabletId newTabletId(String tableId, String endRow, String prevRow) {
+    return new TabletIdImpl(new KeyExtent(tableId, endRow == null ? null : new Text(endRow), prevRow == null ? null : new Text(prevRow)));
+  }
+
+  @Test
+  public void testBasic() throws Exception {
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+
+    conn.tableOperations().create(tableName);
+
+    Range r1 = new Range("m");
+    Range r2 = new Range("o", "x");
+
+    String tableId = conn.tableOperations().tableIdMap().get(tableName);
+
+    TabletId t1 = newTabletId(tableId, null, null);
+    TabletId t2 = newTabletId(tableId, "r", null);
+    TabletId t3 = newTabletId(tableId, null, "r");
+
+    ArrayList<Range> ranges = new ArrayList<>();
+
+    HashSet<String> tservers = new HashSet<>(conn.instanceOperations().getTabletServers());
+
+    ranges.add(r1);
+    Locations ret = conn.tableOperations().locate(tableName, ranges);
+    assertContains(ret, tservers, ImmutableMap.of(r1, ImmutableSet.of(t1)), ImmutableMap.of(t1, ImmutableSet.of(r1)));
+
+    ranges.add(r2);
+    ret = conn.tableOperations().locate(tableName, ranges);
+    assertContains(ret, tservers, ImmutableMap.of(r1, ImmutableSet.of(t1), r2, ImmutableSet.of(t1)), ImmutableMap.of(t1, ImmutableSet.of(r1, r2)));
+
+    TreeSet<Text> splits = new TreeSet<>();
+    splits.add(new Text("r"));
+    conn.tableOperations().addSplits(tableName, splits);
+
+    ret = conn.tableOperations().locate(tableName, ranges);
+    assertContains(ret, tservers, ImmutableMap.of(r1, ImmutableSet.of(t2), r2, ImmutableSet.of(t2, t3)),
+        ImmutableMap.of(t2, ImmutableSet.of(r1, r2), t3, ImmutableSet.of(r2)));
+
+    conn.tableOperations().offline(tableName, true);
+
+    try {
+      conn.tableOperations().locate(tableName, ranges);
+      Assert.fail();
+    } catch (TableOfflineException e) {
+      // expected
+    }
+
+    conn.tableOperations().delete(tableName);
+
+    try {
+      conn.tableOperations().locate(tableName, ranges);
+      Assert.fail();
+    } catch (TableNotFoundException e) {
+      // expected
+    }
+  }
+}
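LocatorIT's `assertContains` cross-checks `Locations.groupByRange()` against `Locations.groupByTablet()`. The two views are inverses of each other; a minimal sketch of that inversion, using plain strings in place of `Range` and `TabletId`:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Invert {
  // Turn a range -> tablets view into the equivalent tablet -> ranges view.
  static Map<String,Set<String>> invert(Map<String,List<String>> byRange) {
    Map<String,Set<String>> byTablet = new HashMap<>();
    for (Map.Entry<String,List<String>> e : byRange.entrySet()) {
      for (String tablet : e.getValue()) {
        byTablet.computeIfAbsent(tablet, k -> new HashSet<>()).add(e.getKey());
      }
    }
    return byTablet;
  }

  public static void main(String[] args) {
    // Mirrors the post-split expectation in testBasic: r1 -> {t2}, r2 -> {t2, t3}.
    Map<String,List<String>> byRange = Map.of("r1", List.of("t2"), "r2", List.of("t2", "t3"));
    System.out.println(invert(byRange).get("t2")); // contains r1 and r2
  }
}
```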
diff --git a/test/src/main/java/org/apache/accumulo/test/ManySplitIT.java b/test/src/main/java/org/apache/accumulo/test/ManySplitIT.java
new file mode 100644
index 0000000..09aa777
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/ManySplitIT.java
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeFalse;
+
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.admin.TableOperations;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.fate.util.UtilWaitThread;
+import org.apache.accumulo.minicluster.MemoryUnit;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
+import org.apache.accumulo.test.PerformanceTest;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category(PerformanceTest.class)
+public class ManySplitIT extends ConfigurableMacBase {
+
+  final int SPLITS = 10_000;
+
+  @BeforeClass
+  public static void checkMR() {
+    assumeFalse(IntegrationTestMapReduce.isMapReduce());
+  }
+
+  @Test(timeout = 4 * 60 * 1000)
+  public void test() throws Exception {
+    assumeFalse(IntegrationTestMapReduce.isMapReduce());
+
+    final String tableName = getUniqueNames(1)[0];
+
+    log.info("Creating table");
+    final TableOperations tableOperations = getConnector().tableOperations();
+
+    log.info("splitting metadata table");
+    tableOperations.create(tableName);
+    SortedSet<Text> splits = new TreeSet<>();
+    for (byte b : "123456789abcde".getBytes(UTF_8)) {
+      splits.add(new Text(new byte[] {'1', ';', b}));
+    }
+    tableOperations.addSplits(MetadataTable.NAME, splits);
+    splits.clear();
+    for (int i = 0; i < SPLITS; i++) {
+      splits.add(new Text(Integer.toHexString(i)));
+    }
+    log.info("Adding splits");
+    // print out the number of splits so we have some idea of what's going on
+    final AtomicBoolean stop = new AtomicBoolean(false);
+    Thread t = new Thread() {
+      @Override
+      public void run() {
+        while (!stop.get()) {
+          UtilWaitThread.sleep(1000);
+          try {
+            log.info("splits: " + tableOperations.listSplits(tableName).size());
+          } catch (TableNotFoundException | AccumuloException | AccumuloSecurityException e) {
+            // log and keep polling; the table may not be visible yet
+            log.warn("Could not list splits", e);
+          }
+        }
+      }
+    };
+    t.start();
+    long now = System.currentTimeMillis();
+    tableOperations.addSplits(tableName, splits);
+    long diff = System.currentTimeMillis() - now;
+    double splitsPerSec = SPLITS / (diff / 1000.);
+    log.info("Done: {} splits per second", splitsPerSec);
+    assertTrue("splits created too slowly", splitsPerSec > 100);
+    stop.set(true);
+    t.join();
+  }
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hdfs) {
+    cfg.setNumTservers(1);
+    cfg.setMemory(ServerType.TABLET_SERVER, cfg.getMemory(ServerType.TABLET_SERVER) * 2, MemoryUnit.BYTE);
+  }
+
+}
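ManySplitIT pre-splits the metadata table with rows of the form `<tableId>;<byte>`, matching how tablet entries are keyed in the metadata table. A hypothetical helper showing the same byte layout (table id `"1"` is the test's assumption for the first user table, not a guarantee):

```java
import java.nio.charset.StandardCharsets;

public class MetaSplit {
  // Build a metadata-style row: <tableId> bytes, then ';', then the end-row byte.
  static byte[] row(String tableId, byte endRowByte) {
    byte[] id = tableId.getBytes(StandardCharsets.UTF_8);
    byte[] row = new byte[id.length + 2];
    System.arraycopy(id, 0, row, 0, id.length);
    row[id.length] = ';';
    row[id.length + 1] = endRowByte;
    return row;
  }

  public static void main(String[] args) {
    System.out.println(new String(row("1", (byte) 'a'), StandardCharsets.UTF_8)); // prints 1;a
  }
}
```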
diff --git a/test/src/test/java/org/apache/accumulo/test/MasterRepairsDualAssignmentIT.java b/test/src/main/java/org/apache/accumulo/test/MasterRepairsDualAssignmentIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/MasterRepairsDualAssignmentIT.java
rename to test/src/main/java/org/apache/accumulo/test/MasterRepairsDualAssignmentIT.java
index 4d931d8..4f1a44d 100644
--- a/test/src/test/java/org/apache/accumulo/test/MasterRepairsDualAssignmentIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MasterRepairsDualAssignmentIT.java
@@ -40,11 +40,12 @@
 import org.apache.accumulo.fate.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.server.master.state.ClosableIterator;
 import org.apache.accumulo.server.master.state.MetaDataStateStore;
 import org.apache.accumulo.server.master.state.RootTabletStateStore;
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.master.state.TabletLocationState;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.apache.hadoop.io.Text;
@@ -52,7 +53,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class MasterRepairsDualAssignmentIT extends ConfigurableMacIT {
+public class MasterRepairsDualAssignmentIT extends ConfigurableMacBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -76,14 +77,14 @@
     c.securityOperations().grantTablePermission("root", MetadataTable.NAME, TablePermission.WRITE);
     c.securityOperations().grantTablePermission("root", RootTable.NAME, TablePermission.WRITE);
     c.tableOperations().create(table);
-    SortedSet<Text> partitions = new TreeSet<Text>();
+    SortedSet<Text> partitions = new TreeSet<>();
     for (String part : "a b c d e f g h i j k l m n o p q r s t u v w x y z".split(" ")) {
       partitions.add(new Text(part));
     }
     c.tableOperations().addSplits(table, partitions);
     // scan the metadata table and get the two table location states
-    Set<TServerInstance> states = new HashSet<TServerInstance>();
-    Set<TabletLocationState> oldLocations = new HashSet<TabletLocationState>();
+    Set<TServerInstance> states = new HashSet<>();
+    Set<TabletLocationState> oldLocations = new HashSet<>();
     MetaDataStateStore store = new MetaDataStateStore(context, null);
     while (states.size() < 2) {
       UtilWaitThread.sleep(250);
@@ -108,7 +109,7 @@
       for (TabletLocationState tls : store) {
         if (tls != null && tls.current != null) {
           states.add(tls.current);
-        } else if (tls != null && tls.extent.equals(new KeyExtent(new Text(ReplicationTable.ID), null, null))) {
+        } else if (tls != null && tls.extent.equals(new KeyExtent(ReplicationTable.ID, null, null))) {
           replStates.add(tls.current);
         } else {
           allAssigned = false;
@@ -138,7 +139,7 @@
     waitForCleanStore(store);
     // now jam up the metadata table
     bw = c.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
-    assignment = new Mutation(new KeyExtent(new Text(MetadataTable.ID), null, null).getMetadataEntry());
+    assignment = new Mutation(new KeyExtent(MetadataTable.ID, null, null).getMetadataEntry());
     moved.current.putLocation(assignment);
     bw.addMutation(assignment);
     bw.close();
@@ -147,8 +148,8 @@
 
   private void waitForCleanStore(MetaDataStateStore store) {
     while (true) {
-      try {
-        Iterators.size(store.iterator());
+      try (ClosableIterator<TabletLocationState> iter = store.iterator()) {
+        Iterators.size(iter);
       } catch (Exception ex) {
         System.out.println(ex);
         UtilWaitThread.sleep(250);
diff --git a/test/src/test/java/org/apache/accumulo/test/MetaConstraintRetryIT.java b/test/src/main/java/org/apache/accumulo/test/MetaConstraintRetryIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/MetaConstraintRetryIT.java
rename to test/src/main/java/org/apache/accumulo/test/MetaConstraintRetryIT.java
index dbc10af..468b41a 100644
--- a/test/src/test/java/org/apache/accumulo/test/MetaConstraintRetryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MetaConstraintRetryIT.java
@@ -25,12 +25,11 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.tabletserver.thrift.ConstraintViolationException;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.server.util.MetadataTableUtil;
-import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class MetaConstraintRetryIT extends AccumuloClusterIT {
+public class MetaConstraintRetryIT extends AccumuloClusterHarness {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -46,7 +45,7 @@
     Credentials credentials = new Credentials(getAdminPrincipal(), getAdminToken());
     ClientContext context = new ClientContext(getConnector().getInstance(), credentials, cluster.getClientConfig());
     Writer w = new Writer(context, MetadataTable.ID);
-    KeyExtent extent = new KeyExtent(new Text("5"), null, null);
+    KeyExtent extent = new KeyExtent("5", null, null);
 
     Mutation m = new Mutation(extent.getMetadataEntry());
     // unknown columns should cause constraint violation
diff --git a/test/src/test/java/org/apache/accumulo/test/MetaGetsReadersIT.java b/test/src/main/java/org/apache/accumulo/test/MetaGetsReadersIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/MetaGetsReadersIT.java
rename to test/src/main/java/org/apache/accumulo/test/MetaGetsReadersIT.java
index 6040d32..73714c5 100644
--- a/test/src/test/java/org/apache/accumulo/test/MetaGetsReadersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MetaGetsReadersIT.java
@@ -23,6 +23,7 @@
 import java.util.Iterator;
 import java.util.Map.Entry;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.accumulo.core.client.BatchWriter;
@@ -35,16 +36,16 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.test.functional.SlowIterator;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class MetaGetsReadersIT extends ConfigurableMacIT {
+public class MetaGetsReadersIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -98,7 +99,7 @@
     t1.start();
     Thread t2 = slowScan(c, tableName, stop);
     t2.start();
-    UtilWaitThread.sleep(500);
+    sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
     long now = System.currentTimeMillis();
     Scanner m = c.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     Iterators.size(m.iterator());
diff --git a/test/src/main/java/org/apache/accumulo/test/MetaRecoveryIT.java b/test/src/main/java/org/apache/accumulo/test/MetaRecoveryIT.java
new file mode 100644
index 0000000..0c16a5f
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/MetaRecoveryIT.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertEquals;
+
+import java.util.Collections;
+import java.util.SortedSet;
+import java.util.TreeSet;
+
+import org.apache.accumulo.core.client.BatchScanner;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.io.Text;
+import org.junit.Test;
+
+import com.google.common.collect.Iterators;
+
+// ACCUMULO-3211
+public class MetaRecoveryIT extends ConfigurableMacBase {
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
+    cfg.setProperty(Property.GC_CYCLE_DELAY, "1s");
+    cfg.setProperty(Property.GC_CYCLE_START, "1s");
+    cfg.setProperty(Property.TSERV_ARCHIVE_WALOGS, "true");
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
+    cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "1048576");
+  }
+
+  @Test(timeout = 4 * 60 * 1000)
+  public void test() throws Exception {
+    String[] tables = getUniqueNames(10);
+    Connector c = getConnector();
+    int i = 0;
+    for (String table : tables) {
+      log.info("Creating table {}", i);
+      c.tableOperations().create(table);
+      BatchWriter bw = c.createBatchWriter(table, null);
+      for (int j = 0; j < 1000; j++) {
+        Mutation m = new Mutation("" + j);
+        m.put("cf", "cq", "value");
+        bw.addMutation(m);
+      }
+      bw.close();
+      log.info("Data written to table {}", i);
+      i++;
+    }
+    c.tableOperations().flush(MetadataTable.NAME, null, null, true);
+    c.tableOperations().flush(RootTable.NAME, null, null, true);
+    SortedSet<Text> splits = new TreeSet<>();
+    for (i = 1; i < tables.length; i++) {
+      splits.add(new Text("" + i));
+    }
+    c.tableOperations().addSplits(MetadataTable.NAME, splits);
+    log.info("Added {} splits to {}", splits.size(), MetadataTable.NAME);
+    c.instanceOperations().waitForBalance();
+    log.info("Restarting");
+    getCluster().getClusterControl().kill(ServerType.TABLET_SERVER, "localhost");
+    getCluster().start();
+    log.info("Verifying");
+    for (String table : tables) {
+      BatchScanner scanner = c.createBatchScanner(table, Authorizations.EMPTY, 5);
+      scanner.setRanges(Collections.singletonList(new Range()));
+      assertEquals(1000, Iterators.size(scanner.iterator()));
+      scanner.close();
+    }
+  }
+
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/MetaSplitIT.java b/test/src/main/java/org/apache/accumulo/test/MetaSplitIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/MetaSplitIT.java
rename to test/src/main/java/org/apache/accumulo/test/MetaSplitIT.java
index 51b462e..045b291 100644
--- a/test/src/test/java/org/apache/accumulo/test/MetaSplitIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MetaSplitIT.java
@@ -31,7 +31,7 @@
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Before;
@@ -39,7 +39,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class MetaSplitIT extends AccumuloClusterIT {
+public class MetaSplitIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(MetaSplitIT.class);
 
   private Collection<Text> metadataSplits = null;
@@ -59,7 +59,7 @@
         log.info("Existing splits on metadata table. Saving them, and applying single original split of '~'");
         metadataSplits = splits;
         conn.tableOperations().merge(MetadataTable.NAME, null, null);
-        conn.tableOperations().addSplits(MetadataTable.NAME, new TreeSet<Text>(Collections.singleton(new Text("~"))));
+        conn.tableOperations().addSplits(MetadataTable.NAME, new TreeSet<>(Collections.singleton(new Text("~"))));
       }
     }
   }
@@ -70,14 +70,14 @@
       log.info("Restoring split on metadata table");
       Connector conn = getConnector();
       conn.tableOperations().merge(MetadataTable.NAME, null, null);
-      conn.tableOperations().addSplits(MetadataTable.NAME, new TreeSet<Text>(metadataSplits));
+      conn.tableOperations().addSplits(MetadataTable.NAME, new TreeSet<>(metadataSplits));
     }
   }
 
   @Test(expected = AccumuloException.class)
   public void testRootTableSplit() throws Exception {
     TableOperations opts = getConnector().tableOperations();
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     splits.add(new Text("5"));
     opts.addSplits(RootTable.NAME, splits);
   }
@@ -89,7 +89,7 @@
   }
 
   private void addSplits(TableOperations opts, String... points) throws Exception {
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (String point : points) {
       splits.add(new Text(point));
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/MissingWalHeaderCompletesRecoveryIT.java b/test/src/main/java/org/apache/accumulo/test/MissingWalHeaderCompletesRecoveryIT.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/MissingWalHeaderCompletesRecoveryIT.java
rename to test/src/main/java/org/apache/accumulo/test/MissingWalHeaderCompletesRecoveryIT.java
index b78a311..c04201d 100644
--- a/test/src/test/java/org/apache/accumulo/test/MissingWalHeaderCompletesRecoveryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MissingWalHeaderCompletesRecoveryIT.java
@@ -19,7 +19,6 @@
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.io.File;
-import java.util.Collections;
 import java.util.UUID;
 
 import org.apache.accumulo.core.client.BatchWriter;
@@ -28,6 +27,7 @@
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
@@ -36,7 +36,7 @@
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.ServerConstants;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.tserver.log.DfsLogger;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
@@ -56,7 +56,7 @@
 /**
  *
  */
-public class MissingWalHeaderCompletesRecoveryIT extends ConfigurableMacIT {
+public class MissingWalHeaderCompletesRecoveryIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(MissingWalHeaderCompletesRecoveryIT.class);
 
   private boolean rootHasWritePermission;
@@ -105,7 +105,7 @@
   public void testEmptyWalRecoveryCompletes() throws Exception {
     Connector conn = getConnector();
     MiniAccumuloClusterImpl cluster = getCluster();
-    FileSystem fs = FileSystem.get(new Configuration());
+    FileSystem fs = cluster.getFileSystem();
 
     // Fake out something that looks like host:port, it's irrelevant
     String fakeServer = "127.0.0.1:12345";
@@ -127,18 +127,14 @@
     String tableId = conn.tableOperations().tableIdMap().get(tableName);
     Assert.assertNotNull("Table ID was null", tableId);
 
-    LogEntry logEntry = new LogEntry();
-    logEntry.server = "127.0.0.1:12345";
-    logEntry.filename = emptyWalog.toURI().toString();
-    logEntry.tabletId = 10;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
+    LogEntry logEntry = new LogEntry(new KeyExtent(tableId, null, null), 0, "127.0.0.1:12345", emptyWalog.toURI().toString());
 
     log.info("Taking {} offline", tableName);
     conn.tableOperations().offline(tableName, true);
 
     log.info("{} is offline", tableName);
 
-    Text row = MetadataSchema.TabletsSection.getRow(new Text(tableId), null);
+    Text row = MetadataSchema.TabletsSection.getRow(tableId, null);
     Mutation m = new Mutation(row);
     m.put(logEntry.getColumnFamily(), logEntry.getColumnQualifier(), logEntry.getValue());
 
@@ -161,7 +157,7 @@
   public void testPartialHeaderWalRecoveryCompletes() throws Exception {
     Connector conn = getConnector();
     MiniAccumuloClusterImpl cluster = getCluster();
-    FileSystem fs = FileSystem.get(new Configuration());
+    FileSystem fs = getCluster().getFileSystem();
 
     // Fake out something that looks like host:port, it's irrelevant
     String fakeServer = "127.0.0.1:12345";
@@ -186,18 +182,14 @@
     String tableId = conn.tableOperations().tableIdMap().get(tableName);
     Assert.assertNotNull("Table ID was null", tableId);
 
-    LogEntry logEntry = new LogEntry();
-    logEntry.server = "127.0.0.1:12345";
-    logEntry.filename = partialHeaderWalog.toURI().toString();
-    logEntry.tabletId = 10;
-    logEntry.logSet = Collections.singleton(logEntry.filename);
+    LogEntry logEntry = new LogEntry(null, 0, "127.0.0.1:12345", partialHeaderWalog.toURI().toString());
 
     log.info("Taking {} offline", tableName);
     conn.tableOperations().offline(tableName, true);
 
     log.info("{} is offline", tableName);
 
-    Text row = MetadataSchema.TabletsSection.getRow(new Text(tableId), null);
+    Text row = MetadataSchema.TabletsSection.getRow(tableId, null);
     Mutation m = new Mutation(row);
     m.put(logEntry.getColumnFamily(), logEntry.getColumnQualifier(), logEntry.getValue());
 
diff --git a/test/src/test/java/org/apache/accumulo/test/MultiTableBatchWriterIT.java b/test/src/main/java/org/apache/accumulo/test/MultiTableBatchWriterIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/MultiTableBatchWriterIT.java
rename to test/src/main/java/org/apache/accumulo/test/MultiTableBatchWriterIT.java
index 5e99f6e..d33b12c 100644
--- a/test/src/test/java/org/apache/accumulo/test/MultiTableBatchWriterIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MultiTableBatchWriterIT.java
@@ -41,14 +41,14 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
 import com.google.common.collect.Maps;
 
-public class MultiTableBatchWriterIT extends AccumuloClusterIT {
+public class MultiTableBatchWriterIT extends AccumuloClusterHarness {
 
   private Connector connector;
   private MultiTableBatchWriter mtbw;
@@ -99,16 +99,16 @@
 
       mtbw.close();
 
-      Map<Entry<String,String>,String> table1Expectations = new HashMap<Entry<String,String>,String>();
+      Map<Entry<String,String>,String> table1Expectations = new HashMap<>();
       table1Expectations.put(Maps.immutableEntry("bar", "col1"), "val1");
 
-      Map<Entry<String,String>,String> table2Expectations = new HashMap<Entry<String,String>,String>();
+      Map<Entry<String,String>,String> table2Expectations = new HashMap<>();
       table2Expectations.put(Maps.immutableEntry("foo", "col1"), "val1");
       table2Expectations.put(Maps.immutableEntry("bar", "col1"), "val1");
 
       Scanner s = connector.createScanner(table1, new Authorizations());
       s.setRange(new Range());
-      Map<Entry<String,String>,String> actual = new HashMap<Entry<String,String>,String>();
+      Map<Entry<String,String>,String> actual = new HashMap<>();
       for (Entry<Key,Value> entry : s) {
         actual.put(Maps.immutableEntry(entry.getKey().getRow().toString(), entry.getKey().getColumnFamily().toString()), entry.getValue().toString());
       }
@@ -117,7 +117,7 @@
 
       s = connector.createScanner(table2, new Authorizations());
       s.setRange(new Range());
-      actual = new HashMap<Entry<String,String>,String>();
+      actual = new HashMap<>();
       for (Entry<Key,Value> entry : s) {
         actual.put(Maps.immutableEntry(entry.getKey().getRow().toString(), entry.getKey().getColumnFamily().toString()), entry.getValue().toString());
       }
@@ -164,7 +164,7 @@
 
       mtbw.close();
 
-      Map<Entry<String,String>,String> expectations = new HashMap<Entry<String,String>,String>();
+      Map<Entry<String,String>,String> expectations = new HashMap<>();
       expectations.put(Maps.immutableEntry("foo", "col1"), "val1");
       expectations.put(Maps.immutableEntry("foo", "col2"), "val2");
       expectations.put(Maps.immutableEntry("bar", "col1"), "val1");
@@ -173,7 +173,7 @@
       for (String table : Arrays.asList(newTable1, newTable2)) {
         Scanner s = connector.createScanner(table, new Authorizations());
         s.setRange(new Range());
-        Map<Entry<String,String>,String> actual = new HashMap<Entry<String,String>,String>();
+        Map<Entry<String,String>,String> actual = new HashMap<>();
         for (Entry<Key,Value> entry : s) {
           actual.put(Maps.immutableEntry(entry.getKey().getRow().toString(), entry.getKey().getColumnFamily().toString()), entry.getValue().toString());
         }
@@ -240,7 +240,7 @@
 
       mtbw.close();
 
-      Map<Entry<String,String>,String> expectations = new HashMap<Entry<String,String>,String>();
+      Map<Entry<String,String>,String> expectations = new HashMap<>();
       expectations.put(Maps.immutableEntry("foo", "col1"), "val1");
       expectations.put(Maps.immutableEntry("foo", "col2"), "val2");
       expectations.put(Maps.immutableEntry("bar", "col1"), "val1");
@@ -249,7 +249,7 @@
       for (String table : Arrays.asList(newTable1, newTable2)) {
         Scanner s = connector.createScanner(table, new Authorizations());
         s.setRange(new Range());
-        Map<Entry<String,String>,String> actual = new HashMap<Entry<String,String>,String>();
+        Map<Entry<String,String>,String> actual = new HashMap<>();
         for (Entry<Key,Value> entry : s) {
           actual.put(Maps.immutableEntry(entry.getKey().getRow().toString(), entry.getKey().getColumnFamily().toString()), entry.getValue().toString());
         }
diff --git a/test/src/test/java/org/apache/accumulo/test/MultiTableRecoveryIT.java b/test/src/main/java/org/apache/accumulo/test/MultiTableRecoveryIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/MultiTableRecoveryIT.java
rename to test/src/main/java/org/apache/accumulo/test/MultiTableRecoveryIT.java
index 410514e..e62b5ad 100644
--- a/test/src/test/java/org/apache/accumulo/test/MultiTableRecoveryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MultiTableRecoveryIT.java
@@ -21,6 +21,7 @@
 
 import java.util.Map.Entry;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.accumulo.core.client.BatchWriter;
@@ -32,17 +33,17 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class MultiTableRecoveryIT extends ConfigurableMacIT {
+public class MultiTableRecoveryIT extends ConfigurableMacBase {
 
   @Override
   protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -116,7 +117,7 @@
         try {
           int i = 0;
           while (!stop.get()) {
-            UtilWaitThread.sleep(10 * 1000);
+            sleepUninterruptibly(10, TimeUnit.SECONDS);
             System.out.println("Restarting");
             getCluster().getClusterControl().stop(ServerType.TABLET_SERVER);
             getCluster().start();
diff --git a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java b/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
rename to test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
index aaa6a6e..cdb3d00 100644
--- a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
@@ -34,6 +34,7 @@
 import java.util.Set;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.cluster.ClusterUser;
 import org.apache.accumulo.core.client.AccumuloException;
@@ -74,18 +75,19 @@
 import org.apache.accumulo.core.security.NamespacePermission;
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 // Testing default namespace configuration with inheritance requires altering the system state and restoring it back to normal
 // Punt on this for now and just let it use a minicluster.
-public class NamespacesIT extends AccumuloClusterIT {
+public class NamespacesIT extends AccumuloClusterHarness {
 
   private Connector c;
   private String namespace;
@@ -329,7 +331,7 @@
     // verify entry is filtered out (also, verify conflict checking API)
     c.namespaceOperations().checkIteratorConflicts(namespace, setting, EnumSet.allOf(IteratorScope.class));
     c.namespaceOperations().attachIterator(namespace, setting);
-    UtilWaitThread.sleep(2 * 1000);
+    sleepUninterruptibly(2, TimeUnit.SECONDS);
     try {
       c.namespaceOperations().checkIteratorConflicts(namespace, setting, EnumSet.allOf(IteratorScope.class));
       fail();
@@ -345,7 +347,7 @@
 
     // verify can see inserted entry again
     c.namespaceOperations().removeIterator(namespace, setting.getName(), EnumSet.allOf(IteratorScope.class));
-    UtilWaitThread.sleep(2 * 1000);
+    sleepUninterruptibly(2, TimeUnit.SECONDS);
     assertFalse(c.namespaceOperations().listIterators(namespace).containsKey(iterName));
     assertFalse(c.tableOperations().listIterators(t1).containsKey(iterName));
     s = c.createScanner(t1, Authorizations.EMPTY);
@@ -872,7 +874,7 @@
     // set the filter, verify that accumulo namespace is the only one unaffected
     c.instanceOperations().setProperty(k, v);
     // doesn't take effect immediately, needs time to propagate to tserver's ZooKeeper cache
-    UtilWaitThread.sleep(250);
+    sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
     assertTrue(c.instanceOperations().getSystemConfiguration().containsValue(v));
     assertEquals(systemNamespaceShouldInherit, checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
     assertEquals(systemNamespaceShouldInherit, checkTableHasProp(RootTable.NAME, k, v));
@@ -885,7 +887,7 @@
     // verify it is no longer inherited
     c.instanceOperations().removeProperty(k);
     // doesn't take effect immediately, needs time to propagate to tserver's ZooKeeper cache
-    UtilWaitThread.sleep(250);
+    sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
     assertFalse(c.instanceOperations().getSystemConfiguration().containsValue(v));
     assertFalse(checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
     assertFalse(checkTableHasProp(RootTable.NAME, k, v));
diff --git a/test/src/main/java/org/apache/accumulo/test/NativeMapPerformanceTest.java b/test/src/main/java/org/apache/accumulo/test/NativeMapPerformanceTest.java
index 0285092..f2a8dfe 100644
--- a/test/src/main/java/org/apache/accumulo/test/NativeMapPerformanceTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/NativeMapPerformanceTest.java
@@ -25,15 +25,17 @@
 import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.util.FastFormat;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.tserver.NativeMap;
 import org.apache.hadoop.io.Text;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class NativeMapPerformanceTest {
 
   private static final byte ROW_PREFIX[] = new byte[] {'r'};
@@ -59,7 +61,7 @@
     NativeMap nm = null;
 
     if (mapType.equals("SKIP_LIST"))
-      tm = new ConcurrentSkipListMap<Key,Value>();
+      tm = new ConcurrentSkipListMap<>();
     else if (mapType.equals("TREE_MAP"))
       tm = Collections.synchronizedSortedMap(new TreeMap<Key,Value>());
     else if (mapType.equals("NATIVE_MAP"))
@@ -170,7 +172,7 @@
     System.gc();
     System.gc();
 
-    UtilWaitThread.sleep(3000);
+    sleepUninterruptibly(3, TimeUnit.SECONDS);
 
     System.out.printf("mapType:%10s   put rate:%,6.2f  scan rate:%,6.2f  get rate:%,6.2f  delete time : %6.2f  mem : %,d%n", "" + mapType, (numRows * numCols)
         / ((tpe - tps) / 1000.0), (size) / ((tie - tis) / 1000.0), numLookups / ((tge - tgs) / 1000.0), (tde - tds) / 1000.0, memUsed);
diff --git a/test/src/main/java/org/apache/accumulo/test/NativeMapStressTest.java b/test/src/main/java/org/apache/accumulo/test/NativeMapStressTest.java
index 72831d8..27c7ab9 100644
--- a/test/src/main/java/org/apache/accumulo/test/NativeMapStressTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/NativeMapStressTest.java
@@ -25,6 +25,8 @@
 import java.util.Map.Entry;
 import java.util.Random;
 import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
 
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
@@ -32,12 +34,12 @@
 import org.apache.accumulo.core.util.OpTimer;
 import org.apache.accumulo.tserver.NativeMap;
 import org.apache.hadoop.io.Text;
-import org.apache.log4j.Level;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class NativeMapStressTest {
 
-  private static final Logger log = Logger.getLogger(NativeMapStressTest.class);
+  private static final Logger log = LoggerFactory.getLogger(NativeMapStressTest.class);
 
   public static void main(String[] args) {
     testLotsOfMapDeletes(true);
@@ -54,7 +56,7 @@
 
   private static void testLotsOfGetsAndScans() {
 
-    ArrayList<Thread> threads = new ArrayList<Thread>();
+    ArrayList<Thread> threads = new ArrayList<>();
 
     final int numThreads = 8;
     final int totalGets = 100000000;
@@ -69,9 +71,13 @@
 
           Random r = new Random();
 
-          OpTimer opTimer = new OpTimer(log, Level.INFO);
+          OpTimer timer = null;
+          AtomicLong nextOpid = new AtomicLong();
 
-          opTimer.start("Creating map of size " + mapSizePerThread);
+          if (log.isInfoEnabled()) {
+            log.info("tid={} oid={} Creating map of size {}", Thread.currentThread().getId(), nextOpid.get(), mapSizePerThread);
+            timer = new OpTimer().start();
+          }
 
           for (int i = 0; i < mapSizePerThread; i++) {
             String row = String.format("r%08d", i);
@@ -79,9 +85,17 @@
             put(nm, row, val, i);
           }
 
-          opTimer.stop("Created map of size " + nm.size() + " in %DURATION%");
+          if (timer != null) {
 
-          opTimer.start("Doing " + getsPerThread + " gets()");
+            // stop and log created elapsed time
+            timer.stop();
+            log.info("tid={} oid={} Created map of size {} in {}", Thread.currentThread().getId(), nextOpid.getAndIncrement(), nm.size(),
+                String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+
+            // start timer for gets
+            log.info("tid={} oid={} Doing {} gets()", Thread.currentThread().getId(), nextOpid.get(), getsPerThread);
+            timer.reset().start();
+          }
 
           for (int i = 0; i < getsPerThread; i++) {
             String row = String.format("r%08d", r.nextInt(mapSizePerThread));
@@ -89,16 +103,24 @@
 
             Value value = nm.get(new Key(new Text(row)));
             if (value == null || !value.toString().equals(val)) {
-              log.error("nm.get(" + row + ") failed");
+              log.error("nm.get({}) failed", row);
             }
           }
 
-          opTimer.stop("Finished " + getsPerThread + " gets in %DURATION%");
+          if (timer != null) {
+
+            // stop and log gets elapsed time
+            timer.stop();
+            log.info("tid={} oid={} Finished {} gets in {}", Thread.currentThread().getId(), nextOpid.getAndIncrement(), getsPerThread,
+                String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+
+            // start timer for random iterations
+            log.info("tid={} oid={} Doing {} random iterations", Thread.currentThread().getId(), nextOpid.get(), getsPerThread);
+            timer.reset().start();
+          }
 
           int scanned = 0;
 
-          opTimer.start("Doing " + getsPerThread + " random iterations");
-
           for (int i = 0; i < getsPerThread; i++) {
             int startRow = r.nextInt(mapSizePerThread);
             String row = String.format("r%08d", startRow);
@@ -113,7 +135,7 @@
 
               Entry<Key,Value> entry = iter.next();
               if (!entry.getValue().toString().equals(val2) || !entry.getKey().equals(new Key(new Text(row2)))) {
-                log.error("nm.iter(" + row2 + ") failed row = " + row + " count = " + count + " row2 = " + row + " val2 = " + val2);
+                log.error("nm.iter({}) failed row = {} count = {} row2 = {} val2 = {}", row2, row, count, row, val2);
               }
 
               count++;
@@ -122,7 +144,14 @@
             scanned += count;
           }
 
-          opTimer.stop("Finished " + getsPerThread + " random iterations (scanned = " + scanned + ") in %DURATION%");
+          if (timer != null) {
+
+            // stop and log random iteration elapsed time
+            timer.stop();
+            log.info("tid={} oid={} Finished {} random iterations (scanned = {}) in {}", Thread.currentThread().getId(), nextOpid.getAndIncrement(),
+                getsPerThread, scanned, String.format("%.3f secs", timer.scale(TimeUnit.SECONDS)));
+
+          }
 
           nm.delete();
         }
@@ -138,7 +167,7 @@
       try {
         thread.join();
       } catch (InterruptedException e) {
-        log.error("Could not join thread '" + thread.getName() + "'.", e);
+        log.error("Could not join thread '{}'.", thread.getName(), e);
         throw new RuntimeException(e);
       }
     }
@@ -153,7 +182,7 @@
 
     System.out.println("insertsPerMapPerThread " + insertsPerMapPerThread);
 
-    ArrayList<Thread> threads = new ArrayList<Thread>();
+    ArrayList<Thread> threads = new ArrayList<>();
 
     for (int i = 0; i < numThreads; i++) {
       Runnable r = new Runnable() {
@@ -200,21 +229,21 @@
       try {
         thread.join();
       } catch (InterruptedException e) {
-        log.error("Could not join thread '" + thread.getName() + "'.", e);
+        log.error("Could not join thread '{}'.", thread.getName(), e);
         throw new RuntimeException(e);
       }
     }
   }
 
   private static void testLotsOfOverwrites() {
-    final Map<Integer,NativeMap> nativeMaps = new HashMap<Integer,NativeMap>();
+    final Map<Integer,NativeMap> nativeMaps = new HashMap<>();
 
     int numThreads = 8;
     final int insertsPerThread = (int) (100000000 / (double) numThreads);
     final int rowRange = 10000;
     final int numMaps = 50;
 
-    ArrayList<Thread> threads = new ArrayList<Thread>();
+    ArrayList<Thread> threads = new ArrayList<>();
 
     for (int i = 0; i < numThreads; i++) {
       Runnable r = new Runnable() {
@@ -261,7 +290,7 @@
       try {
         thread.join();
       } catch (InterruptedException e) {
-        log.error("Could not join thread '" + thread.getName() + "'.", e);
+        log.error("Could not join thread '{}'.", thread.getName(), e);
         throw new RuntimeException(e);
       }
     }
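The refactor above replaces the old `OpTimer(log, Level.INFO)` constructor with `log.isInfoEnabled()` guards plus one reusable timer driven by `reset().start()` / `stop()` / `scale(TimeUnit)`. A minimal stand-in sketch of that timer pattern in plain Java (`SimpleOpTimer` and its `timeSleep` helper are hypothetical, not Accumulo's actual `OpTimer`):

```java
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for Accumulo's OpTimer; method names mirror the
// reset()/start()/stop()/scale() calls used in the diff above.
public class SimpleOpTimer {
  private long startNanos;
  private long elapsedNanos;

  public SimpleOpTimer start() {
    startNanos = System.nanoTime();
    return this;
  }

  public SimpleOpTimer stop() {
    elapsedNanos += System.nanoTime() - startNanos;
    return this;
  }

  public SimpleOpTimer reset() {
    elapsedNanos = 0;
    return this;
  }

  // Convert the accumulated time to the given unit, as a double.
  public double scale(TimeUnit unit) {
    return elapsedNanos / (double) unit.toNanos(1);
  }

  // Time a sleep; swallow interrupts so callers need no checked exception.
  public static double timeSleep(long millis) {
    SimpleOpTimer t = new SimpleOpTimer().start();
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    t.stop();
    return t.scale(TimeUnit.SECONDS);
  }
}
```

Guarding both the `log.info` call and the timer creation behind `isInfoEnabled()` means no timing work happens at all when INFO logging is off, which matters in a stress test loop.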
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java b/test/src/main/java/org/apache/accumulo/test/PerformanceTest.java
similarity index 63%
copy from test/src/test/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java
copy to test/src/main/java/org/apache/accumulo/test/PerformanceTest.java
index d3c8bc8..ab652c1 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/PerformanceTest.java
@@ -14,22 +14,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.test.proxy;
-
-import org.apache.accumulo.harness.SharedMiniClusterIT;
-import org.apache.thrift.protocol.TJSONProtocol;
-import org.junit.BeforeClass;
+package org.apache.accumulo.test;
 
 /**
- *
+ * Annotate integration tests which test performance-related aspects of Accumulo or are sensitive to timings and hardware capabilities.
+ * <p>
+ * Intended to be used with the JUnit Category annotation on integration test classes. The Category annotation should be placed at the class level. Test class
+ * names should still be suffixed with 'IT' like the rest of the integration tests.
  */
-public class TJsonProtocolProxyIT extends SimpleProxyBase {
-
-  @BeforeClass
-  public static void setProtocol() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
-    SimpleProxyBase.factory = new TJSONProtocol.Factory();
-    setUpProxy();
-  }
+public interface PerformanceTest {
 
 }
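The `PerformanceTest` marker above is meant to be referenced from JUnit's class-level `@Category`. As a rough sketch of how a runner can filter test classes by such a marker (the `@Categories` annotation and the test classes below are invented for illustration; this is not JUnit's implementation):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch: a class-level category annotation (like JUnit's
// @Category holding the PerformanceTest marker) lets a runner include or
// exclude whole test classes by reflection.
public class CategoryFilterSketch {
  public interface PerformanceTest {} // marker, mirrors the interface above

  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface Categories {
    Class<?>[] value();
  }

  @Categories(PerformanceTest.class)
  public static class ScanRatePerformanceIT {}

  public static class PlainFunctionalIT {}

  // Return true when the class is tagged with the given category marker.
  public static boolean inCategory(Class<?> testClass, Class<?> category) {
    Categories c = testClass.getAnnotation(Categories.class);
    if (c == null) {
      return false;
    }
    for (Class<?> tagged : c.value()) {
      if (category.isAssignableFrom(tagged)) {
        return true;
      }
    }
    return false;
  }
}
```

Because the marker is an empty interface, it carries no behavior; its only job is to give the build a stable name to include or exclude timing-sensitive ITs on slow hardware.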
diff --git a/test/src/main/java/org/apache/accumulo/test/QueryMetadataTable.java b/test/src/main/java/org/apache/accumulo/test/QueryMetadataTable.java
index dd58cc8..4dcd39d 100644
--- a/test/src/main/java/org/apache/accumulo/test/QueryMetadataTable.java
+++ b/test/src/main/java/org/apache/accumulo/test/QueryMetadataTable.java
@@ -104,9 +104,9 @@
     Connector connector = opts.getConnector();
     Scanner scanner = connector.createScanner(MetadataTable.NAME, opts.auths);
     scanner.setBatchSize(scanOpts.scanBatchSize);
-    Text mdrow = new Text(KeyExtent.getMetadataEntry(new Text(MetadataTable.ID), null));
+    Text mdrow = new Text(KeyExtent.getMetadataEntry(MetadataTable.ID, null));
 
-    HashSet<Text> rowSet = new HashSet<Text>();
+    HashSet<Text> rowSet = new HashSet<>();
 
     int count = 0;
 
@@ -127,7 +127,7 @@
 
     System.out.printf(" %,d%n", count);
 
-    ArrayList<Text> rows = new ArrayList<Text>(rowSet);
+    ArrayList<Text> rows = new ArrayList<>(rowSet);
 
     Random r = new Random();
 
diff --git a/test/src/test/java/org/apache/accumulo/test/RecoveryCompactionsAreFlushesIT.java b/test/src/main/java/org/apache/accumulo/test/RecoveryCompactionsAreFlushesIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/RecoveryCompactionsAreFlushesIT.java
rename to test/src/main/java/org/apache/accumulo/test/RecoveryCompactionsAreFlushesIT.java
index 57c2c34..f79e174 100644
--- a/test/src/test/java/org/apache/accumulo/test/RecoveryCompactionsAreFlushesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/RecoveryCompactionsAreFlushesIT.java
@@ -30,7 +30,7 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
@@ -41,7 +41,7 @@
 import com.google.common.collect.Iterators;
 
 // Accumulo3010
-public class RecoveryCompactionsAreFlushesIT extends AccumuloClusterIT {
+public class RecoveryCompactionsAreFlushesIT extends AccumuloClusterHarness {
 
   @Override
   public int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/RewriteTabletDirectoriesIT.java b/test/src/main/java/org/apache/accumulo/test/RewriteTabletDirectoriesIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/RewriteTabletDirectoriesIT.java
rename to test/src/main/java/org/apache/accumulo/test/RewriteTabletDirectoriesIT.java
index 5a19de9..3317558 100644
--- a/test/src/test/java/org/apache/accumulo/test/RewriteTabletDirectoriesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/RewriteTabletDirectoriesIT.java
@@ -44,7 +44,7 @@
 import org.apache.accumulo.server.init.Initialize;
 import org.apache.accumulo.server.util.Admin;
 import org.apache.accumulo.server.util.RandomizeVolumes;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RawLocalFileSystem;
@@ -52,7 +52,7 @@
 import org.junit.Test;
 
 // ACCUMULO-3263
-public class RewriteTabletDirectoriesIT extends ConfigurableMacIT {
+public class RewriteTabletDirectoriesIT extends ConfigurableMacBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -87,7 +87,7 @@
 
     // Write some data to a table and add some splits
     BatchWriter bw = c.createBatchWriter(tableName, null);
-    final SortedSet<Text> splits = new TreeSet<Text>();
+    final SortedSet<Text> splits = new TreeSet<>();
     for (String split : "a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z".split(",")) {
       splits.add(new Text(split));
       Mutation m = new Mutation(new Text(split));
diff --git a/test/src/main/java/org/apache/accumulo/test/SampleIT.java b/test/src/main/java/org/apache/accumulo/test/SampleIT.java
new file mode 100644
index 0000000..15ca4a6
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/SampleIT.java
@@ -0,0 +1,498 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.BatchScanner;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.ClientSideIteratorScanner;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IsolatedScanner;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.SampleNotPresentException;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.ScannerBase;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.admin.CompactionConfig;
+import org.apache.accumulo.core.client.admin.NewTableConfiguration;
+import org.apache.accumulo.core.client.impl.Credentials;
+import org.apache.accumulo.core.client.impl.OfflineScanner;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.WrappingIterator;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.junit.Assert;
+import org.junit.Test;
+
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+
+public class SampleIT extends AccumuloClusterHarness {
+
+  private static final Map<String,String> OPTIONS_1 = ImmutableMap.of("hasher", "murmur3_32", "modulus", "1009");
+  private static final Map<String,String> OPTIONS_2 = ImmutableMap.of("hasher", "murmur3_32", "modulus", "997");
+
+  private static final SamplerConfiguration SC1 = new SamplerConfiguration(RowSampler.class.getName()).setOptions(OPTIONS_1);
+  private static final SamplerConfiguration SC2 = new SamplerConfiguration(RowSampler.class.getName()).setOptions(OPTIONS_2);
+
+  public static class IteratorThatUsesSample extends WrappingIterator {
+    private SortedKeyValueIterator<Key,Value> sampleDC;
+    private boolean hasTop;
+
+    @Override
+    public boolean hasTop() {
+      return hasTop && super.hasTop();
+    }
+
+    @Override
+    public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
+
+      int sampleCount = 0;
+      sampleDC.seek(range, columnFamilies, inclusive);
+
+      while (sampleDC.hasTop()) {
+        sampleCount++;
+        sampleDC.next();
+      }
+
+      if (sampleCount < 10) {
+        hasTop = true;
+        super.seek(range, columnFamilies, inclusive);
+      } else {
+        // too much data in the sample; return nothing
+        hasTop = false;
+      }
+    }
+
+    @Override
+    public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+      super.init(source, options, env);
+
+      IteratorEnvironment sampleEnv = env.cloneWithSamplingEnabled();
+
+      sampleDC = source.deepCopy(sampleEnv);
+    }
+  }
+
+  @Test
+  public void testBasic() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    String clone = tableName + "_clone";
+
+    conn.tableOperations().create(tableName, new NewTableConfiguration().enableSampling(SC1));
+
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+
+    TreeMap<Key,Value> expected = new TreeMap<>();
+    String someRow = writeData(bw, SC1, expected);
+    Assert.assertEquals(20, expected.size());
+
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    Scanner isoScanner = new IsolatedScanner(conn.createScanner(tableName, Authorizations.EMPTY));
+    Scanner csiScanner = new ClientSideIteratorScanner(conn.createScanner(tableName, Authorizations.EMPTY));
+    scanner.setSamplerConfiguration(SC1);
+    csiScanner.setSamplerConfiguration(SC1);
+    isoScanner.setSamplerConfiguration(SC1);
+    isoScanner.setBatchSize(10);
+
+    BatchScanner bScanner = conn.createBatchScanner(tableName, Authorizations.EMPTY, 2);
+    bScanner.setSamplerConfiguration(SC1);
+    bScanner.setRanges(Arrays.asList(new Range()));
+
+    check(expected, scanner, bScanner, isoScanner, csiScanner);
+
+    conn.tableOperations().flush(tableName, null, null, true);
+
+    Scanner oScanner = newOfflineScanner(conn, tableName, clone, SC1);
+    check(expected, scanner, bScanner, isoScanner, csiScanner, oScanner);
+
+    // ensure non-sample data can be scanned after scanning sample data
+    for (ScannerBase sb : Arrays.asList(scanner, bScanner, isoScanner, csiScanner, oScanner)) {
+      sb.clearSamplerConfiguration();
+      Assert.assertEquals(20000, Iterables.size(sb));
+      sb.setSamplerConfiguration(SC1);
+    }
+
+    Iterator<Key> it = expected.keySet().iterator();
+    while (it.hasNext()) {
+      Key k = it.next();
+      if (k.getRow().toString().equals(someRow)) {
+        it.remove();
+      }
+    }
+
+    expected.put(new Key(someRow, "cf1", "cq1", 8), new Value("42".getBytes()));
+    expected.put(new Key(someRow, "cf1", "cq3", 8), new Value("suprise".getBytes()));
+
+    Mutation m = new Mutation(someRow);
+
+    m.put("cf1", "cq1", 8, "42");
+    m.putDelete("cf1", "cq2", 8);
+    m.put("cf1", "cq3", 8, "suprise");
+
+    bw.addMutation(m);
+    bw.close();
+
+    check(expected, scanner, bScanner, isoScanner, csiScanner);
+
+    conn.tableOperations().flush(tableName, null, null, true);
+
+    oScanner = newOfflineScanner(conn, tableName, clone, SC1);
+    check(expected, scanner, bScanner, isoScanner, csiScanner, oScanner);
+
+    scanner.setRange(new Range(someRow));
+    isoScanner.setRange(new Range(someRow));
+    csiScanner.setRange(new Range(someRow));
+    oScanner.setRange(new Range(someRow));
+    bScanner.setRanges(Arrays.asList(new Range(someRow)));
+
+    expected.clear();
+
+    expected.put(new Key(someRow, "cf1", "cq1", 8), new Value("42".getBytes()));
+    expected.put(new Key(someRow, "cf1", "cq3", 8), new Value("suprise".getBytes()));
+
+    check(expected, scanner, bScanner, isoScanner, csiScanner, oScanner);
+
+    bScanner.close();
+  }
+
+  private Scanner newOfflineScanner(Connector conn, String tableName, String clone, SamplerConfiguration sc) throws Exception {
+    if (conn.tableOperations().exists(clone)) {
+      conn.tableOperations().delete(clone);
+    }
+    Map<String,String> em = Collections.emptyMap();
+    Set<String> es = Collections.emptySet();
+    conn.tableOperations().clone(tableName, clone, false, em, es);
+    conn.tableOperations().offline(clone, true);
+    String cloneID = conn.tableOperations().tableIdMap().get(clone);
+    OfflineScanner oScanner = new OfflineScanner(conn.getInstance(), new Credentials(getAdminPrincipal(), getAdminToken()), cloneID, Authorizations.EMPTY);
+    if (sc != null) {
+      oScanner.setSamplerConfiguration(sc);
+    }
+    return oScanner;
+  }
+
+  private void updateExpected(SamplerConfiguration sc, TreeMap<Key,Value> expected) {
+    expected.clear();
+
+    RowSampler sampler = new RowSampler();
+    sampler.init(sc);
+
+    for (int i = 0; i < 10000; i++) {
+      String row = String.format("r_%06d", i);
+
+      Key k1 = new Key(row, "cf1", "cq1", 7);
+      if (sampler.accept(k1)) {
+        expected.put(k1, new Value(("" + i).getBytes()));
+      }
+
+      Key k2 = new Key(row, "cf1", "cq2", 7);
+      if (sampler.accept(k2)) {
+        expected.put(k2, new Value(("" + (100000000 - i)).getBytes()));
+      }
+    }
+  }
+
+  private String writeData(BatchWriter bw, SamplerConfiguration sc, TreeMap<Key,Value> expected) throws MutationsRejectedException {
+    int count = 0;
+    String someRow = null;
+
+    RowSampler sampler = new RowSampler();
+    sampler.init(sc);
+
+    for (int i = 0; i < 10000; i++) {
+      String row = String.format("r_%06d", i);
+      Mutation m = new Mutation(row);
+
+      m.put("cf1", "cq1", 7, "" + i);
+      m.put("cf1", "cq2", 7, "" + (100000000 - i));
+
+      bw.addMutation(m);
+
+      Key k1 = new Key(row, "cf1", "cq1", 7);
+      if (sampler.accept(k1)) {
+        expected.put(k1, new Value(("" + i).getBytes()));
+        count++;
+        if (count == 5) {
+          someRow = row;
+        }
+      }
+
+      Key k2 = new Key(row, "cf1", "cq2", 7);
+      if (sampler.accept(k2)) {
+        expected.put(k2, new Value(("" + (100000000 - i)).getBytes()));
+      }
+    }
+
+    bw.flush();
+
+    return someRow;
+  }
+
+  private int countEntries(Iterable<Entry<Key,Value>> scanner) {
+
+    int count = 0;
+    Iterator<Entry<Key,Value>> iter = scanner.iterator();
+
+    while (iter.hasNext()) {
+      iter.next();
+      count++;
+    }
+
+    return count;
+  }
+
+  private void setRange(Range range, List<? extends ScannerBase> scanners) {
+    for (ScannerBase s : scanners) {
+      if (s instanceof Scanner) {
+        ((Scanner) s).setRange(range);
+      } else {
+        ((BatchScanner) s).setRanges(Collections.singleton(range));
+      }
+
+    }
+  }
+
+  @Test
+  public void testIterator() throws Exception {
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    String clone = tableName + "_clone";
+
+    conn.tableOperations().create(tableName, new NewTableConfiguration().enableSampling(SC1));
+
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+
+    TreeMap<Key,Value> expected = new TreeMap<>();
+    writeData(bw, SC1, expected);
+
+    ArrayList<Key> keys = new ArrayList<>(expected.keySet());
+
+    Range range1 = new Range(keys.get(6), true, keys.get(11), true);
+
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    Scanner isoScanner = new IsolatedScanner(conn.createScanner(tableName, Authorizations.EMPTY));
+    ClientSideIteratorScanner csiScanner = new ClientSideIteratorScanner(conn.createScanner(tableName, Authorizations.EMPTY));
+    BatchScanner bScanner = conn.createBatchScanner(tableName, Authorizations.EMPTY, 2);
+
+    csiScanner.setIteratorSamplerConfiguration(SC1);
+
+    List<? extends ScannerBase> scanners = Arrays.asList(scanner, isoScanner, bScanner, csiScanner);
+
+    for (ScannerBase s : scanners) {
+      s.addScanIterator(new IteratorSetting(100, IteratorThatUsesSample.class));
+    }
+
+    // the iterator should see fewer than 10 entries in the sample data, and return data
+    setRange(range1, scanners);
+    for (ScannerBase s : scanners) {
+      Assert.assertEquals(2954, countEntries(s));
+    }
+
+    Range range2 = new Range(keys.get(5), true, keys.get(18), true);
+    setRange(range2, scanners);
+
+    // the iterator should see 10 or more entries in the sample data, and return no data
+    for (ScannerBase s : scanners) {
+      Assert.assertEquals(0, countEntries(s));
+    }
+
+    // flush and rerun the same test against files
+    conn.tableOperations().flush(tableName, null, null, true);
+
+    Scanner oScanner = newOfflineScanner(conn, tableName, clone, null);
+    oScanner.addScanIterator(new IteratorSetting(100, IteratorThatUsesSample.class));
+    scanners = Arrays.asList(scanner, isoScanner, bScanner, csiScanner, oScanner);
+
+    setRange(range1, scanners);
+    for (ScannerBase s : scanners) {
+      Assert.assertEquals(2954, countEntries(s));
+    }
+
+    setRange(range2, scanners);
+    for (ScannerBase s : scanners) {
+      Assert.assertEquals(0, countEntries(s));
+    }
+
+    updateSamplingConfig(conn, tableName, SC2);
+
+    csiScanner.setIteratorSamplerConfiguration(SC2);
+
+    oScanner = newOfflineScanner(conn, tableName, clone, null);
+    oScanner.addScanIterator(new IteratorSetting(100, IteratorThatUsesSample.class));
+    scanners = Arrays.asList(scanner, isoScanner, bScanner, csiScanner, oScanner);
+
+    for (ScannerBase s : scanners) {
+      try {
+        countEntries(s);
+        Assert.fail("Expected SampleNotPresentException, but it did not happen : " + s.getClass().getSimpleName());
+      } catch (SampleNotPresentException e) {
+
+      }
+    }
+  }
+
+  private void setSamplerConfig(SamplerConfiguration sc, ScannerBase... scanners) {
+    for (ScannerBase s : scanners) {
+      s.setSamplerConfiguration(sc);
+    }
+  }
+
+  @Test
+  public void testSampleNotPresent() throws Exception {
+
+    Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    String clone = tableName + "_clone";
+
+    conn.tableOperations().create(tableName);
+
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+
+    TreeMap<Key,Value> expected = new TreeMap<>();
+    writeData(bw, SC1, expected);
+
+    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
+    Scanner isoScanner = new IsolatedScanner(conn.createScanner(tableName, Authorizations.EMPTY));
+    isoScanner.setBatchSize(10);
+    Scanner csiScanner = new ClientSideIteratorScanner(conn.createScanner(tableName, Authorizations.EMPTY));
+    BatchScanner bScanner = conn.createBatchScanner(tableName, Authorizations.EMPTY, 2);
+    bScanner.setRanges(Arrays.asList(new Range()));
+
+    // ensure sample not present exception occurs when sampling is not configured
+    assertSampleNotPresent(SC1, scanner, isoScanner, bScanner, csiScanner);
+
+    conn.tableOperations().flush(tableName, null, null, true);
+
+    Scanner oScanner = newOfflineScanner(conn, tableName, clone, SC1);
+    assertSampleNotPresent(SC1, scanner, isoScanner, bScanner, csiScanner, oScanner);
+
+    // configure sampling; however, there exists an rfile w/o sample data, so we should still see a sample not present exception
+
+    updateSamplingConfig(conn, tableName, SC1);
+
+    // create clone with new config
+    oScanner = newOfflineScanner(conn, tableName, clone, SC1);
+
+    assertSampleNotPresent(SC1, scanner, isoScanner, bScanner, csiScanner, oScanner);
+
+    // create rfile with sample data present
+    conn.tableOperations().compact(tableName, new CompactionConfig().setWait(true));
+
+    // should be able to scan sample now
+    oScanner = newOfflineScanner(conn, tableName, clone, SC1);
+    setSamplerConfig(SC1, scanner, csiScanner, isoScanner, bScanner, oScanner);
+    check(expected, scanner, isoScanner, bScanner, csiScanner, oScanner);
+
+    // change sampling config
+    updateSamplingConfig(conn, tableName, SC2);
+
+    // create clone with new config
+    oScanner = newOfflineScanner(conn, tableName, clone, SC2);
+
+    // rfile should have different sample config than table, and scan should not work
+    assertSampleNotPresent(SC2, scanner, isoScanner, bScanner, csiScanner, oScanner);
+
+    // create rfile that has same sample data as table config
+    conn.tableOperations().compact(tableName, new CompactionConfig().setWait(true));
+
+    // should be able to scan sample now
+    updateExpected(SC2, expected);
+    oScanner = newOfflineScanner(conn, tableName, clone, SC2);
+    setSamplerConfig(SC2, scanner, csiScanner, isoScanner, bScanner, oScanner);
+    check(expected, scanner, isoScanner, bScanner, csiScanner, oScanner);
+
+    bScanner.close();
+  }
+
+  private void updateSamplingConfig(Connector conn, String tableName, SamplerConfiguration sc) throws TableNotFoundException, AccumuloException,
+      AccumuloSecurityException {
+    conn.tableOperations().setSamplerConfiguration(tableName, sc);
+    // wait for config change
+    conn.tableOperations().offline(tableName, true);
+    conn.tableOperations().online(tableName, true);
+  }
+
+  private void assertSampleNotPresent(SamplerConfiguration sc, ScannerBase... scanners) {
+
+    for (ScannerBase scanner : scanners) {
+      SamplerConfiguration csc = scanner.getSamplerConfiguration();
+
+      scanner.setSamplerConfiguration(sc);
+
+      try {
+        for (Iterator<Entry<Key,Value>> i = scanner.iterator(); i.hasNext();) {
+          Entry<Key,Value> entry = i.next();
+          entry.getKey();
+        }
+        Assert.fail("Expected SampleNotPresentException, but it did not happen : " + scanner.getClass().getSimpleName());
+      } catch (SampleNotPresentException e) {
+
+      }
+
+      scanner.clearSamplerConfiguration();
+      for (Iterator<Entry<Key,Value>> i = scanner.iterator(); i.hasNext();) {
+        Entry<Key,Value> entry = i.next();
+        entry.getKey();
+      }
+
+      if (csc == null) {
+        scanner.clearSamplerConfiguration();
+      } else {
+        scanner.setSamplerConfiguration(csc);
+      }
+    }
+  }
+
+  private void check(TreeMap<Key,Value> expected, ScannerBase... scanners) {
+    TreeMap<Key,Value> actual = new TreeMap<>();
+    for (ScannerBase s : scanners) {
+      actual.clear();
+      for (Entry<Key,Value> entry : s) {
+        actual.put(entry.getKey(), entry.getValue());
+      }
+      Assert.assertEquals(String.format("Saw %d instead of %d entries using %s", actual.size(), expected.size(), s.getClass().getSimpleName()), expected,
+          actual);
+    }
+  }
+}
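`RowSampler` with a `hasher`/`modulus` configuration deterministically maps each row to a hash residue, so the same subset of rows lands in the sample across scans, flushes, and compactions, and changing the modulus (SC1 vs SC2 above) changes which rows are sampled. A toy sketch of that acceptance rule, using `String.hashCode()` in place of murmur3_32 (class and method names are invented; this is not Accumulo's `RowSampler`):

```java
// Hypothetical stand-in for RowSampler: accept a row iff its hash lands
// in residue class 0 modulo the configured value. Real Accumulo hashes
// the row bytes with murmur3_32; String.hashCode() here is illustrative.
public class RowSampleSketch {
  private final int modulus;

  public RowSampleSketch(int modulus) {
    this.modulus = modulus;
  }

  // Deterministic: the same row always gets the same answer.
  public boolean accept(String row) {
    return Math.floorMod(row.hashCode(), modulus) == 0;
  }

  // Count how many of n rows (formatted like the test's r_%06d rows)
  // fall in the sample.
  public int countSampled(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
      if (accept(String.format("r_%06d", i))) {
        count++;
      }
    }
    return count;
  }
}
```

Determinism is what makes the sample usable at scan time: an rfile written under one sampler config can answer sample scans only for that config, which is why the test sees `SampleNotPresentException` until a compaction rewrites the files under the new config.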
diff --git a/test/src/main/java/org/apache/accumulo/test/ScanFlushWithTimeIT.java b/test/src/main/java/org/apache/accumulo/test/ScanFlushWithTimeIT.java
new file mode 100644
index 0000000..78ffa60
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/ScanFlushWithTimeIT.java
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertTrue;
+
+import java.util.Collections;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.BatchScanner;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IsolatedScanner;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.ScannerBase;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.fate.util.UtilWaitThread;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.test.functional.SlowIterator;
+import org.apache.hadoop.io.Text;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ScanFlushWithTimeIT extends AccumuloClusterHarness {
+
+  private static final Logger log = LoggerFactory.getLogger(ScanFlushWithTimeIT.class);
+
+  @Test(timeout = 30 * 1000)
+  public void test() throws Exception {
+    log.info("Creating table");
+    String tableName = getUniqueNames(1)[0];
+    Connector c = getConnector();
+    c.tableOperations().create(tableName);
+    log.info("Adding slow iterator");
+    IteratorSetting setting = new IteratorSetting(50, SlowIterator.class);
+    SlowIterator.setSleepTime(setting, 1000);
+    c.tableOperations().attachIterator(tableName, setting);
+    log.info("Splitting the table");
+    SortedSet<Text> partitionKeys = new TreeSet<>();
+    partitionKeys.add(new Text("5"));
+    c.tableOperations().addSplits(tableName, partitionKeys);
+    log.info("Waiting for ZooKeeper propagation");
+    UtilWaitThread.sleep(5 * 1000);
+    log.info("Adding a few entries");
+    BatchWriter bw = c.createBatchWriter(tableName, null);
+    for (int i = 0; i < 10; i++) {
+      Mutation m = new Mutation("" + i);
+      m.put("", "", "");
+      bw.addMutation(m);
+    }
+    bw.close();
+    log.info("Fetching some entries: should timeout and return something");
+
+    log.info("Scanner");
+    Scanner s = c.createScanner(tableName, Authorizations.EMPTY);
+    s.setBatchTimeout(500, TimeUnit.MILLISECONDS);
+    testScanner(s, 1200);
+
+    log.info("IsolatedScanner");
+    IsolatedScanner is = new IsolatedScanner(s);
+    is.setReadaheadThreshold(1);
+    // IsolatedScanner buffers an entire row, so allow extra time
+    testScanner(is, 2200);
+
+    log.info("BatchScanner");
+    BatchScanner bs = c.createBatchScanner(tableName, Authorizations.EMPTY, 5);
+    bs.setBatchTimeout(500, TimeUnit.MILLISECONDS);
+    bs.setRanges(Collections.singletonList(new Range()));
+    testScanner(bs, 1200);
+  }
+
+  private void testScanner(ScannerBase s, long expected) {
+    long now = System.currentTimeMillis();
+    try {
+      s.iterator().next();
+    } finally {
+      s.close();
+    }
+    long diff = System.currentTimeMillis() - now;
+    log.info("Diff = {}", diff);
+    assertTrue("Scanner taking too long to return intermediate results: " + diff, diff < expected);
+  }
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/ShellConfigIT.java b/test/src/main/java/org/apache/accumulo/test/ShellConfigIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/ShellConfigIT.java
rename to test/src/main/java/org/apache/accumulo/test/ShellConfigIT.java
index 96f87f5..ae2e4cc 100644
--- a/test/src/test/java/org/apache/accumulo/test/ShellConfigIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ShellConfigIT.java
@@ -27,7 +27,7 @@
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.harness.conf.StandaloneAccumuloClusterConfiguration;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.test.ShellServerIT.TestShell;
@@ -36,7 +36,7 @@
 import org.junit.Before;
 import org.junit.Test;
 
-public class ShellConfigIT extends AccumuloClusterIT {
+public class ShellConfigIT extends AccumuloClusterHarness {
   @Override
   public int defaultTimeoutSeconds() {
     return 30;
diff --git a/test/src/test/java/org/apache/accumulo/test/ShellServerIT.java b/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
similarity index 82%
rename from test/src/test/java/org/apache/accumulo/test/ShellServerIT.java
rename to test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
index ced4a6a..61d3d4a 100644
--- a/test/src/test/java/org/apache/accumulo/test/ShellServerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ShellServerIT.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
@@ -28,18 +29,17 @@
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
-import java.io.OutputStreamWriter;
 import java.io.PrintWriter;
 import java.lang.reflect.Constructor;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Random;
-
-import jline.console.ConsoleReader;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.ClientConfiguration;
@@ -50,6 +50,7 @@
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.client.impl.Namespaces;
+import org.apache.accumulo.core.client.sample.RowSampler;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
@@ -61,9 +62,10 @@
 import org.apache.accumulo.core.file.FileSKVWriter;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
+import org.apache.accumulo.core.util.format.Formatter;
+import org.apache.accumulo.core.util.format.FormatterConfig;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.shell.Shell;
 import org.apache.accumulo.test.functional.SlowIterator;
@@ -88,9 +90,12 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.collect.Iterables;
 import com.google.common.collect.Iterators;
 
-public class ShellServerIT extends SharedMiniClusterIT {
+import jline.console.ConsoleReader;
+
+public class ShellServerIT extends SharedMiniClusterBase {
   public static class TestOutputStream extends OutputStream {
     StringBuilder sb = new StringBuilder();
 
@@ -157,8 +162,7 @@
       // start the shell
       output = new TestOutputStream();
       input = new StringInputStream();
-      PrintWriter pw = new PrintWriter(new OutputStreamWriter(output));
-      shell = new Shell(new ConsoleReader(input, output), pw);
+      shell = new Shell(new ConsoleReader(input, output));
       shell.setLogErrorsToConsole();
       if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
         // Pull the kerberos principal out when we're using SASL
@@ -259,7 +263,7 @@
 
   @BeforeClass
   public static void setupMiniCluster() throws Exception {
-    SharedMiniClusterIT.startMiniClusterWithConfig(new ShellServerITConfigCallback());
+    SharedMiniClusterBase.startMiniClusterWithConfig(new ShellServerITConfigCallback());
     rootPath = getMiniClusterDir().getAbsolutePath();
 
     // history file is updated in $HOME
@@ -273,7 +277,7 @@
 
     // give the tracer some time to start
     while (!tops.exists("trace")) {
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
   }
 
@@ -289,7 +293,7 @@
       traceProcess.destroy();
     }
 
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   @After
@@ -329,7 +333,7 @@
     String exportUri = "file://" + exportDir.toString();
     String localTmp = "file://" + new File(rootPath, "ShellServerIT.tmp").toString();
     ts.exec("exporttable -t " + table + " " + exportUri, true);
-    DistCp cp = newDistCp();
+    DistCp cp = newDistCp(new Configuration(false));
     String import_ = "file://" + new File(rootPath, "ShellServerIT.import").toString();
     if (getCluster().getClientConfig().getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
       // DistCp bugs out trying to get a fs delegation token to perform the cp. Just copy it ourselves by hand.
@@ -364,12 +368,12 @@
     ts.exec("config -t " + table2 + " -np", true, "345M", true);
     ts.exec("getsplits -t " + table2, true, "row5", true);
     ts.exec("constraint --list -t " + table2, true, "VisibilityConstraint=2", true);
-    ts.exec("onlinetable " + table, true);
+    ts.exec("online " + table, true);
     ts.exec("deletetable -f " + table, true);
     ts.exec("deletetable -f " + table2, true);
   }
 
-  private DistCp newDistCp() {
+  private DistCp newDistCp(Configuration conf) {
     try {
       @SuppressWarnings("unchecked")
       Constructor<DistCp>[] constructors = (Constructor<DistCp>[]) DistCp.class.getConstructors();
@@ -377,9 +381,9 @@
         Class<?>[] parameterTypes = constructor.getParameterTypes();
         if (parameterTypes.length > 0 && parameterTypes[0].equals(Configuration.class)) {
           if (parameterTypes.length == 1) {
-            return constructor.newInstance(new Configuration());
+            return constructor.newInstance(conf);
           } else if (parameterTypes.length == 2) {
-            return constructor.newInstance(new Configuration(), null);
+            return constructor.newInstance(conf, null);
           }
         }
       }
@@ -718,7 +722,7 @@
         ts.exec("getauths", true, "bar", true);
         passed = true;
       } catch (AssertionError | Exception e) {
-        UtilWaitThread.sleep(300);
+        sleepUninterruptibly(300, TimeUnit.MILLISECONDS);
       }
     }
     assertTrue("Could not successfully see updated authorizations", passed);
@@ -987,6 +991,26 @@
     ts.exec("compact -t " + clone + " -w --sf-ename F.* --sf-lt-esize 1K");
 
     assertEquals(3, countFiles(cloneId));
+
+    String clone2 = table + "_clone_2";
+    ts.exec("clonetable -s table.sampler.opt.hasher=murmur3_32,table.sampler.opt.modulus=7,table.sampler=" + RowSampler.class.getName() + " " + clone + " "
+        + clone2);
+    String clone2Id = getTableId(clone2);
+
+    assertEquals(3, countFiles(clone2Id));
+
+    ts.exec("table " + clone2);
+    ts.exec("insert v n l o");
+    ts.exec("flush -w");
+
+    ts.exec("insert x n l o");
+    ts.exec("flush -w");
+
+    assertEquals(5, countFiles(clone2Id));
+
+    ts.exec("compact -t " + clone2 + " -w --sf-no-sample");
+
+    assertEquals(3, countFiles(clone2Id));
   }
 
   @Test
@@ -1001,6 +1025,54 @@
   }
 
   @Test
+  public void testScanSample() throws Exception {
+    final String table = name.getMethodName();
+
+    // create table and insert test data
+    ts.exec("createtable " + table);
+
+    ts.exec("insert 9255 doc content 'abcde'");
+    ts.exec("insert 9255 doc url file://foo.txt");
+    ts.exec("insert 8934 doc content 'accumulo scales'");
+    ts.exec("insert 8934 doc url file://accumulo_notes.txt");
+    ts.exec("insert 2317 doc content 'milk, eggs, bread, parmigiano-reggiano'");
+    ts.exec("insert 2317 doc url file://groceries/9.txt");
+    ts.exec("insert 3900 doc content 'EC2 ate my homework'");
+    ts.exec("insert 3900 doc url file://final_project.txt");
+
+    String clone1 = table + "_clone_1";
+    ts.exec("clonetable -s table.sampler.opt.hasher=murmur3_32,table.sampler.opt.modulus=3,table.sampler=" + RowSampler.class.getName() + " " + table + " "
+        + clone1);
+
+    ts.exec("compact -t " + clone1 + " -w --sf-no-sample");
+
+    ts.exec("table " + clone1);
+    ts.exec("scan --sample", true, "parmigiano-reggiano", true);
+    ts.exec("grep --sample reg", true, "parmigiano-reggiano", true);
+    ts.exec("scan --sample", true, "accumulo", false);
+    ts.exec("grep --sample acc", true, "accumulo", false);
+
+    // create table where the table sample config differs from what's in the file
+    String clone2 = table + "_clone_2";
+    ts.exec("clonetable -s table.sampler.opt.hasher=murmur3_32,table.sampler.opt.modulus=2,table.sampler=" + RowSampler.class.getName() + " " + clone1 + " "
+        + clone2);
+
+    ts.exec("table " + clone2);
+    ts.exec("scan --sample", false, "SampleNotPresentException", true);
+    ts.exec("grep --sample reg", false, "SampleNotPresentException", true);
+
+    ts.exec("compact -t " + clone2 + " -w --sf-no-sample");
+
+    for (String expected : Arrays.asList("2317", "3900", "9255")) {
+      ts.exec("scan --sample", true, expected, true);
+      ts.exec("grep --sample " + expected.substring(0, 2), true, expected, true);
+    }
+
+    ts.exec("scan --sample", true, "8934", false);
+    ts.exec("grep --sample 89", true, "8934", false);
+  }
+
+  @Test
   public void constraint() throws Exception {
     final String table = name.getMethodName();
 
@@ -1014,7 +1086,7 @@
     ts.exec("constraint -l -t " + table, true, "VisibilityConstraint=2", true);
     ts.exec("constraint -t " + table + " -d 2", true, "Removed constraint 2 from table " + table);
     // wait for zookeeper updates to propagate
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
     ts.exec("constraint -l -t " + table, true, "VisibilityConstraint=2", false);
     ts.exec("deletetable -f " + table);
   }
@@ -1098,6 +1170,112 @@
   }
 
   @Test
+  public void formatter() throws Exception {
+    ts.exec("createtable formatter_test", true);
+    ts.exec("table formatter_test", true);
+    ts.exec("insert row cf cq 1234abcd", true);
+    ts.exec("insert row cf1 cq1 9876fedc", true);
+    ts.exec("insert row2 cf cq 13579bdf", true);
+    ts.exec("insert row2 cf1 cq 2468ace", true);
+
+    ArrayList<String> expectedDefault = new ArrayList<>(4);
+    expectedDefault.add("row cf:cq []    1234abcd");
+    expectedDefault.add("row cf1:cq1 []    9876fedc");
+    expectedDefault.add("row2 cf:cq []    13579bdf");
+    expectedDefault.add("row2 cf1:cq []    2468ace");
+    ArrayList<String> actualDefault = new ArrayList<>(4);
+    boolean isFirst = true;
+    for (String s : ts.exec("scan -np", true).split("[\n\r]+")) {
+      if (isFirst) {
+        isFirst = false;
+      } else {
+        actualDefault.add(s);
+      }
+    }
+
+    ArrayList<String> expectedFormatted = new ArrayList<>(4);
+    expectedFormatted.add("row cf:cq []    0x31 0x32 0x33 0x34 0x61 0x62 0x63 0x64");
+    expectedFormatted.add("row cf1:cq1 []    0x39 0x38 0x37 0x36 0x66 0x65 0x64 0x63");
+    expectedFormatted.add("row2 cf:cq []    0x31 0x33 0x35 0x37 0x39 0x62 0x64 0x66");
+    expectedFormatted.add("row2 cf1:cq []    0x32 0x34 0x36 0x38 0x61 0x63 0x65");
+    ts.exec("formatter -t formatter_test -f " + HexFormatter.class.getName(), true);
+    ArrayList<String> actualFormatted = new ArrayList<>(4);
+    isFirst = true;
+    for (String s : ts.exec("scan -np", true).split("[\n\r]+")) {
+      if (isFirst) {
+        isFirst = false;
+      } else {
+        actualFormatted.add(s);
+      }
+    }
+
+    ts.exec("deletetable -f formatter_test", true);
+
+    assertTrue(Iterables.elementsEqual(expectedDefault, actualDefault));
+    assertTrue(Iterables.elementsEqual(expectedFormatted, actualFormatted));
+  }
+
+  /**
+   * Simple <code>Formatter</code> that converts each character in the Value to its hexadecimal ASCII code. Characters in the
+   * value which do not fall within the [0-9,a-f] range are automatically skipped.
+   *
+   * <p>
+   * Example: <code>'0'</code> will be displayed as <code>'0x30'</code>
+   */
+  public static class HexFormatter implements Formatter {
+    private Iterator<Entry<Key,Value>> iter = null;
+    private FormatterConfig config;
+
+    private static final String tab = "\t";
+    private static final String newline = "\n";
+
+    public HexFormatter() {}
+
+    @Override
+    public boolean hasNext() {
+      return this.iter.hasNext();
+    }
+
+    @Override
+    public String next() {
+      final Entry<Key,Value> entry = iter.next();
+
+      String key;
+
+      // Include the timestamp in the key if configured to print it
+      if (config.willPrintTimestamps()) {
+        key = entry.getKey().toString();
+      } else {
+        key = entry.getKey().toStringNoTime();
+      }
+
+      final Value v = entry.getValue();
+
+      // Approximate how much space we'll need
+      final StringBuilder sb = new StringBuilder(key.length() + v.getSize() * 5);
+
+      sb.append(key).append(tab);
+
+      for (byte b : v.get()) {
+        if ((b >= 48 && b <= 57) || (b >= 97 && b <= 102)) {
+          sb.append(String.format("0x%x ", Integer.valueOf(b)));
+        }
+      }
+
+      return sb.toString().trim() + newline;
+    }
+
+    @Override
+    public void remove() {}
+
+    @Override
+    public void initialize(final Iterable<Entry<Key,Value>> scanner, final FormatterConfig config) {
+      this.iter = scanner.iterator();
+      this.config = new FormatterConfig(config);
+    }
+  }
+
+  @Test
   public void extensions() throws Exception {
     String extName = "ExampleShellExtension";
 
@@ -1171,9 +1349,9 @@
     assertTrue(errorsDir.mkdir());
     fs.mkdirs(new Path(errorsDir.toString()));
     AccumuloConfiguration aconf = AccumuloConfiguration.getDefaultConfiguration();
-    FileSKVWriter evenWriter = FileOperations.getInstance().openWriter(even, fs, conf, aconf);
+    FileSKVWriter evenWriter = FileOperations.getInstance().newWriterBuilder().forFile(even, fs, conf).withTableConfiguration(aconf).build();
     evenWriter.startDefaultLocalityGroup();
-    FileSKVWriter oddWriter = FileOperations.getInstance().openWriter(odd, fs, conf, aconf);
+    FileSKVWriter oddWriter = FileOperations.getInstance().newWriterBuilder().forFile(odd, fs, conf).withTableConfiguration(aconf).build();
     oddWriter.startDefaultLocalityGroup();
     long timestamp = System.currentTimeMillis();
     Text cf = new Text("cf");
@@ -1210,7 +1388,7 @@
     ts.exec("scan -b 02", true, "value", false);
     ts.exec("interpreter -i org.apache.accumulo.core.util.interpret.HexScanInterpreter", true);
     // Need to allow time for this to propagate through zoocache/zookeeper
-    UtilWaitThread.sleep(3000);
+    sleepUninterruptibly(3, TimeUnit.SECONDS);
 
     ts.exec("interpreter -l", true, "HexScan", true);
     ts.exec("scan -b 02", true, "value", true);
@@ -1279,7 +1457,7 @@
       // wait for both tservers to start up
       if (ts.output.get().split("\n").length == 3)
         break;
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     }
     assertEquals(2, ts.output.get().split("\n").length);
@@ -1346,7 +1524,7 @@
     };
     thread.start();
 
-    List<String> scans = new ArrayList<String>();
+    List<String> scans = new ArrayList<>();
     // Try to find the active scan for about 15seconds
     for (int i = 0; i < 50 && scans.isEmpty(); i++) {
       String currentScans = ts.exec("listscans", true);
@@ -1361,7 +1539,7 @@
           log.info("Ignoring scan because of wrong table: " + currentScan);
         }
       }
-      UtilWaitThread.sleep(300);
+      sleepUninterruptibly(300, TimeUnit.MILLISECONDS);
     }
     thread.join();
 
@@ -1380,7 +1558,7 @@
       assertTrue(tserver.matches(hostPortPattern));
       assertTrue(getConnector().instanceOperations().getTabletServers().contains(tserver));
       String client = parts[1].trim();
-      assertTrue(client.matches(hostPortPattern));
+      assertTrue(client + " does not match " + hostPortPattern, client.matches(hostPortPattern));
       // Scan ID should be a long (throwing an exception if it fails to parse)
       Long.parseLong(parts[11].trim());
     }
@@ -1394,11 +1572,11 @@
 
     File fooFilterJar = File.createTempFile("FooFilter", ".jar", new File(rootPath));
 
-    FileUtils.copyURLToFile(this.getClass().getResource("/FooFilter.jar"), fooFilterJar);
+    FileUtils.copyInputStreamToFile(this.getClass().getResourceAsStream("/FooFilter.jar"), fooFilterJar);
     fooFilterJar.deleteOnExit();
 
     File fooConstraintJar = File.createTempFile("FooConstraint", ".jar", new File(rootPath));
-    FileUtils.copyURLToFile(this.getClass().getResource("/FooConstraint.jar"), fooConstraintJar);
+    FileUtils.copyInputStreamToFile(this.getClass().getResourceAsStream("/FooConstraint.jar"), fooConstraintJar);
     fooConstraintJar.deleteOnExit();
 
     ts.exec("config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "cx1=" + fooFilterJar.toURI().toString() + ","
@@ -1407,7 +1585,7 @@
     ts.exec("createtable " + table, true);
     ts.exec("config -t " + table + " -s " + Property.TABLE_CLASSPATH.getKey() + "=cx1", true);
 
-    UtilWaitThread.sleep(200);
+    sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
 
     // We can't use the setiter command as Filter implements OptionDescriber which
     // forces us to enter more input that I don't know how to input
@@ -1416,7 +1594,7 @@
 
     ts.exec("insert foo f q v", true);
 
-    UtilWaitThread.sleep(100);
+    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
 
     ts.exec("scan -np", true, "foo", false);
 
@@ -1557,6 +1735,97 @@
   }
 
   @Test
+  public void scansWithClassLoaderContext() throws Exception {
+    try {
+      Class.forName("org.apache.accumulo.test.functional.ValueReversingIterator");
+      fail("ValueReversingIterator already on the classpath");
+    } catch (Exception e) {
+      // Do nothing here; this is success. The following line is here
+      // so that findbugs doesn't flag an empty catch block.
+      assertTrue(true);
+    }
+    ts.exec("createtable t");
+    make10();
+    setupFakeContextPath();
+    // Add the context to the table so that setscaniter works. After setscaniter succeeds, then
+    // remove the property from the table.
+    ts.exec("config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + FAKE_CONTEXT + "=" + FAKE_CONTEXT_CLASSPATH);
+    ts.exec("config -t t -s table.classpath.context=" + FAKE_CONTEXT);
+    ts.exec("setscaniter -n reverse -t t -p 21 -class org.apache.accumulo.test.functional.ValueReversingIterator");
+    String result = ts.exec("scan -np -b row1 -e row1");
+    assertEquals(2, result.split("\n").length);
+    log.error(result);
+    assertTrue(result.contains("value"));
+    result = ts.exec("scan -np -b row3 -e row5");
+    assertEquals(4, result.split("\n").length);
+    assertTrue(result.contains("value"));
+    result = ts.exec("scan -np -r row3");
+    assertEquals(2, result.split("\n").length);
+    assertTrue(result.contains("value"));
+    result = ts.exec("scan -np -b row:");
+    assertEquals(1, result.split("\n").length);
+    result = ts.exec("scan -np -b row");
+    assertEquals(11, result.split("\n").length);
+    assertTrue(result.contains("value"));
+    result = ts.exec("scan -np -e row:");
+    assertEquals(11, result.split("\n").length);
+    assertTrue(result.contains("value"));
+
+    setupRealContextPath();
+    ts.exec("config -s " + Property.VFS_CONTEXT_CLASSPATH_PROPERTY + REAL_CONTEXT + "=" + REAL_CONTEXT_CLASSPATH);
+    result = ts.exec("scan -np -b row1 -e row1 -cc " + REAL_CONTEXT);
+    log.error(result);
+    assertEquals(2, result.split("\n").length);
+    assertTrue(result.contains("eulav"));
+    assertFalse(result.contains("value"));
+    result = ts.exec("scan -np -b row3 -e row5 -cc " + REAL_CONTEXT);
+    assertEquals(4, result.split("\n").length);
+    assertTrue(result.contains("eulav"));
+    assertFalse(result.contains("value"));
+    result = ts.exec("scan -np -r row3 -cc " + REAL_CONTEXT);
+    assertEquals(2, result.split("\n").length);
+    assertTrue(result.contains("eulav"));
+    assertFalse(result.contains("value"));
+    result = ts.exec("scan -np -b row: -cc " + REAL_CONTEXT);
+    assertEquals(1, result.split("\n").length);
+    result = ts.exec("scan -np -b row -cc " + REAL_CONTEXT);
+    assertEquals(11, result.split("\n").length);
+    assertTrue(result.contains("eulav"));
+    assertFalse(result.contains("value"));
+    result = ts.exec("scan -np -e row: -cc " + REAL_CONTEXT);
+    assertEquals(11, result.split("\n").length);
+    assertTrue(result.contains("eulav"));
+    assertFalse(result.contains("value"));
+    ts.exec("deletetable -f t");
+  }
+
+  private static final String FAKE_CONTEXT = "FAKE";
+  private static final String FAKE_CONTEXT_CLASSPATH = "file:///tmp/ShellServerIT-iterators.jar";
+  private static final String REAL_CONTEXT = "REAL";
+  private static final String REAL_CONTEXT_CLASSPATH = "file:///tmp/TestIterators-tests.jar";
+
+  private void setupRealContextPath() throws Exception {
+    // Copy the TestIterators jar to tmp
+    Path baseDir = new Path(System.getProperty("user.dir"));
+    Path targetDir = new Path(baseDir, "target");
+    Path jarPath = new Path(targetDir, "TestIterators-tests.jar");
+    Path dstPath = new Path(REAL_CONTEXT_CLASSPATH);
+    FileSystem fs = SharedMiniClusterBase.getCluster().getFileSystem();
+    fs.copyFromLocalFile(jarPath, dstPath);
+  }
+
+  private void setupFakeContextPath() throws Exception {
+    // Copy the ShellServerIT-iterators jar to tmp
+    Path baseDir = new Path(System.getProperty("user.dir"));
+    Path targetDir = new Path(baseDir, "target");
+    Path classesDir = new Path(targetDir, "classes");
+    Path jarPath = new Path(classesDir, "ShellServerIT-iterators.jar");
+    Path dstPath = new Path(FAKE_CONTEXT_CLASSPATH);
+    FileSystem fs = SharedMiniClusterBase.getCluster().getFileSystem();
+    fs.copyFromLocalFile(jarPath, dstPath);
+  }
+
+  @Test
   public void whoami() throws Exception {
     AuthenticationToken token = getToken();
     assertTrue(ts.exec("whoami", true).contains(getPrincipal()));
diff --git a/test/src/test/java/org/apache/accumulo/test/SplitCancelsMajCIT.java b/test/src/main/java/org/apache/accumulo/test/SplitCancelsMajCIT.java
similarity index 85%
rename from test/src/test/java/org/apache/accumulo/test/SplitCancelsMajCIT.java
rename to test/src/main/java/org/apache/accumulo/test/SplitCancelsMajCIT.java
index 431c85d..93640c8 100644
--- a/test/src/test/java/org/apache/accumulo/test/SplitCancelsMajCIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/SplitCancelsMajCIT.java
@@ -21,6 +21,7 @@
 import java.util.EnumSet;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.accumulo.core.client.BatchWriter;
@@ -30,16 +31,17 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.accumulo.test.functional.SlowIterator;
 import org.apache.hadoop.io.Text;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 // ACCUMULO-2862
-public class SplitCancelsMajCIT extends SharedMiniClusterIT {
+public class SplitCancelsMajCIT extends SharedMiniClusterBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -48,12 +50,12 @@
 
   @BeforeClass
   public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
   }
 
   @AfterClass
   public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   @Test
@@ -73,7 +75,7 @@
     }
     bw.flush();
     // start majc
-    final AtomicReference<Exception> ex = new AtomicReference<Exception>();
+    final AtomicReference<Exception> ex = new AtomicReference<>();
     Thread thread = new Thread() {
       @Override
       public void run() {
@@ -87,9 +89,9 @@
     thread.start();
 
     long now = System.currentTimeMillis();
-    UtilWaitThread.sleep(10 * 1000);
+    sleepUninterruptibly(10, TimeUnit.SECONDS);
     // split the table, interrupts the compaction
-    SortedSet<Text> partitionKeys = new TreeSet<Text>();
+    SortedSet<Text> partitionKeys = new TreeSet<>();
     partitionKeys.add(new Text("10"));
     c.tableOperations().addSplits(tableName, partitionKeys);
     thread.join();
diff --git a/test/src/test/java/org/apache/accumulo/test/SplitRecoveryIT.java b/test/src/main/java/org/apache/accumulo/test/SplitRecoveryIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/SplitRecoveryIT.java
rename to test/src/main/java/org/apache/accumulo/test/SplitRecoveryIT.java
index 8fe8471..4f5d8e9 100644
--- a/test/src/test/java/org/apache/accumulo/test/SplitRecoveryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/SplitRecoveryIT.java
@@ -19,6 +19,7 @@
 import static org.junit.Assert.assertEquals;
 
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -35,14 +36,14 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.DataFileColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class SplitRecoveryIT extends AccumuloClusterIT {
+public class SplitRecoveryIT extends AccumuloClusterHarness {
 
   private Mutation m(String row) {
     Mutation result = new Mutation(row);
@@ -81,13 +82,13 @@
       // take the table offline
       connector.tableOperations().offline(tableName);
       while (!isOffline(tableName, connector))
-        UtilWaitThread.sleep(200);
+        sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
 
       // poke a partial split into the metadata table
       connector.securityOperations().grantTablePermission(getAdminPrincipal(), MetadataTable.NAME, TablePermission.WRITE);
       String tableId = connector.tableOperations().tableIdMap().get(tableName);
 
-      KeyExtent extent = new KeyExtent(new Text(tableId), null, new Text("b"));
+      KeyExtent extent = new KeyExtent(tableId, null, new Text("b"));
       Mutation m = extent.getPrevRowUpdateMutation();
 
       TabletsSection.TabletColumnFamily.SPLIT_RATIO_COLUMN.put(m, new Value(Double.toString(0.5).getBytes()));
@@ -103,7 +104,7 @@
         scanner.setRange(extent.toMetadataRange());
         scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
 
-        KeyExtent extent2 = new KeyExtent(new Text(tableId), new Text("b"), null);
+        KeyExtent extent2 = new KeyExtent(tableId, new Text("b"), null);
         m = extent2.getPrevRowUpdateMutation();
         TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(m, new Value("/t2".getBytes()));
         TabletsSection.ServerColumnFamily.TIME_COLUMN.put(m, new Value("M0".getBytes()));
diff --git a/test/src/test/java/org/apache/accumulo/test/TableConfigurationUpdateIT.java b/test/src/main/java/org/apache/accumulo/test/TableConfigurationUpdateIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/TableConfigurationUpdateIT.java
rename to test/src/main/java/org/apache/accumulo/test/TableConfigurationUpdateIT.java
index c02daea..d91d76e 100644
--- a/test/src/test/java/org/apache/accumulo/test/TableConfigurationUpdateIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TableConfigurationUpdateIT.java
@@ -30,7 +30,7 @@
 import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.server.conf.NamespaceConfiguration;
 import org.apache.accumulo.server.conf.TableConfiguration;
 import org.junit.Assert;
@@ -38,7 +38,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class TableConfigurationUpdateIT extends AccumuloClusterIT {
+public class TableConfigurationUpdateIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(TableConfigurationUpdateIT.class);
 
   @Override
@@ -68,7 +68,7 @@
     long start = System.currentTimeMillis();
     ExecutorService svc = Executors.newFixedThreadPool(numThreads);
     CountDownLatch countDown = new CountDownLatch(numThreads);
-    ArrayList<Future<Exception>> futures = new ArrayList<Future<Exception>>(numThreads);
+    ArrayList<Future<Exception>> futures = new ArrayList<>(numThreads);
 
     for (int i = 0; i < numThreads; i++) {
       futures.add(svc.submit(new TableConfRunner(randomMax, iterations, tableConf, countDown)));
diff --git a/test/src/test/java/org/apache/accumulo/test/TableOperationsIT.java b/test/src/main/java/org/apache/accumulo/test/TableOperationsIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/TableOperationsIT.java
rename to test/src/main/java/org/apache/accumulo/test/TableOperationsIT.java
index 852fd608..d1a52fb 100644
--- a/test/src/test/java/org/apache/accumulo/test/TableOperationsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TableOperationsIT.java
@@ -34,6 +34,7 @@
 import java.util.SortedSet;
 import java.util.TreeMap;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -55,8 +56,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.test.functional.BadIterator;
 import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
@@ -65,8 +65,9 @@
 import org.junit.Test;
 
 import com.google.common.collect.Sets;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class TableOperationsIT extends AccumuloClusterIT {
+public class TableOperationsIT extends AccumuloClusterHarness {
 
   static TabletClientService.Client client;
 
@@ -140,7 +141,7 @@
     connector.tableOperations().clone(tableName, newTable, false, null, null);
 
     // verify tables are exactly the same
-    Set<String> tables = new HashSet<String>();
+    Set<String> tables = new HashSet<>();
     tables.add(tableName);
     tables.add(newTable);
     diskUsages = connector.tableOperations().getDiskUsage(tables);
@@ -229,7 +230,7 @@
   }
 
   private Map<String,String> propsToMap(Iterable<Map.Entry<String,String>> props) {
-    Map<String,String> map = new HashMap<String,String>();
+    Map<String,String> map = new HashMap<>();
     for (Map.Entry<String,String> prop : props) {
       map.put(prop.getKey(), prop.getValue());
     }
@@ -360,7 +361,7 @@
     List<IteratorSetting> list = new ArrayList<>();
     list.add(new IteratorSetting(15, BadIterator.class));
     connector.tableOperations().compact(tableName, null, null, list, true, false); // don't block
-    UtilWaitThread.sleep(2000); // start compaction
+    sleepUninterruptibly(2, TimeUnit.SECONDS); // start compaction
     connector.tableOperations().cancelCompaction(tableName);
 
     Scanner scanner = connector.createScanner(tableName, Authorizations.EMPTY);
diff --git a/test/src/test/java/org/apache/accumulo/test/TabletServerGivesUpIT.java b/test/src/main/java/org/apache/accumulo/test/TabletServerGivesUpIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/TabletServerGivesUpIT.java
rename to test/src/main/java/org/apache/accumulo/test/TabletServerGivesUpIT.java
index 68bd07b..33c1798 100644
--- a/test/src/test/java/org/apache/accumulo/test/TabletServerGivesUpIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TabletServerGivesUpIT.java
@@ -19,19 +19,21 @@
 import static org.junit.Assert.assertEquals;
 
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 // ACCUMULO-2480
-public class TabletServerGivesUpIT extends ConfigurableMacIT {
+public class TabletServerGivesUpIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -68,7 +70,7 @@
     splitter.start();
     // wait for the tserver to give up on writing to the WAL
     while (conn.instanceOperations().getTabletServers().size() == 1) {
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
   }
 
diff --git a/test/src/main/java/org/apache/accumulo/test/TabletServerHdfsRestartIT.java b/test/src/main/java/org/apache/accumulo/test/TabletServerHdfsRestartIT.java
new file mode 100644
index 0000000..1e063f0
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/TabletServerHdfsRestartIT.java
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertEquals;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+import com.google.common.collect.Iterators;
+
+// ACCUMULO-3914
+public class TabletServerHdfsRestartIT extends ConfigurableMacBase {
+
+  private static final int N = 1000;
+
+  @Override
+  public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.useMiniDFS(true);
+    cfg.setNumTservers(1);
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
+  }
+
+  @Test(timeout = 2 * 60 * 1000)
+  public void test() throws Exception {
+    final Connector conn = this.getConnector();
+    // Sanity check: a tablet server is running
+    assertEquals(1, conn.instanceOperations().getTabletServers().size());
+    final String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, null);
+    for (int i = 0; i < N; i++) {
+      Mutation m = new Mutation("" + i);
+      m.put("", "", "");
+      bw.addMutation(m);
+    }
+    bw.close();
+    conn.tableOperations().flush(tableName, null, null, true);
+
+    // Restart the HDFS namenode to simulate an outage
+    cluster.getMiniDfs().restartNameNode(false);
+
+    assertEquals(N, Iterators.size(conn.createScanner(tableName, Authorizations.EMPTY).iterator()));
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/TestBinaryRows.java b/test/src/main/java/org/apache/accumulo/test/TestBinaryRows.java
index 2615ae2..a7f151d 100644
--- a/test/src/main/java/org/apache/accumulo/test/TestBinaryRows.java
+++ b/test/src/main/java/org/apache/accumulo/test/TestBinaryRows.java
@@ -185,7 +185,7 @@
       System.out.printf("rate    : %9.2f lookups/sec%n", numLookups / ((t2 - t1) / 1000.0));
 
     } else if (opts.mode.equals("split")) {
-      TreeSet<Text> splits = new TreeSet<Text>();
+      TreeSet<Text> splits = new TreeSet<>();
       int shift = (int) opts.start;
       int count = (int) opts.num;
 
diff --git a/test/src/main/java/org/apache/accumulo/test/TestIngest.java b/test/src/main/java/org/apache/accumulo/test/TestIngest.java
index 33e03ef..de092e0 100644
--- a/test/src/main/java/org/apache/accumulo/test/TestIngest.java
+++ b/test/src/main/java/org/apache/accumulo/test/TestIngest.java
@@ -129,7 +129,7 @@
 
     long pos = start + splitSize;
 
-    TreeSet<Text> splits = new TreeSet<Text>();
+    TreeSet<Text> splits = new TreeSet<>();
 
     while (pos < end) {
       splits.add(new Text(String.format("row_%010d", pos)));
@@ -200,8 +200,8 @@
     }
   }
 
-  public static void ingest(Connector connector, Opts opts, BatchWriterOpts bwOpts) throws IOException, AccumuloException, AccumuloSecurityException,
-      TableNotFoundException, MutationsRejectedException, TableExistsException {
+  public static void ingest(Connector connector, FileSystem fs, Opts opts, BatchWriterOpts bwOpts) throws IOException, AccumuloException,
+      AccumuloSecurityException, TableNotFoundException, MutationsRejectedException, TableExistsException {
     long stopTime;
 
     byte[][] bytevals = generateValues(opts.dataSize);
@@ -218,8 +218,8 @@
 
     if (opts.outputFile != null) {
       Configuration conf = CachedConfiguration.getInstance();
-      FileSystem fs = FileSystem.get(conf);
-      writer = FileOperations.getInstance().openWriter(opts.outputFile + "." + RFile.EXTENSION, fs, conf, AccumuloConfiguration.getDefaultConfiguration());
+      writer = FileOperations.getInstance().newWriterBuilder().forFile(opts.outputFile + "." + RFile.EXTENSION, fs, conf)
+          .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
       writer.startDefaultLocalityGroup();
     } else {
       bw = connector.createBatchWriter(opts.getTableName(), bwOpts.getBatchWriterConfig());
@@ -336,4 +336,9 @@
     System.out.printf("%,12d records written | %,8d records/sec | %,12d bytes written | %,8d bytes/sec | %6.3f secs   %n", totalValues,
         (int) (totalValues / elapsed), bytesWritten, (int) (bytesWritten / elapsed), elapsed);
   }
+
+  public static void ingest(Connector c, Opts opts, BatchWriterOpts batchWriterOpts) throws MutationsRejectedException, IOException, AccumuloException,
+      AccumuloSecurityException, TableNotFoundException, TableExistsException {
+    ingest(c, FileSystem.get(CachedConfiguration.getInstance()), opts, batchWriterOpts);
+  }
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/TestMultiTableIngest.java b/test/src/main/java/org/apache/accumulo/test/TestMultiTableIngest.java
index 9eb42ec..ae37430 100644
--- a/test/src/main/java/org/apache/accumulo/test/TestMultiTableIngest.java
+++ b/test/src/main/java/org/apache/accumulo/test/TestMultiTableIngest.java
@@ -72,7 +72,7 @@
   }
 
   public static void main(String[] args) throws Exception {
-    ArrayList<String> tableNames = new ArrayList<String>();
+    ArrayList<String> tableNames = new ArrayList<>();
 
     Opts opts = new Opts();
     ScannerOpts scanOpts = new ScannerOpts();
diff --git a/test/src/main/java/org/apache/accumulo/test/TestRandomDeletes.java b/test/src/main/java/org/apache/accumulo/test/TestRandomDeletes.java
index 3b09b67..5292b87 100644
--- a/test/src/main/java/org/apache/accumulo/test/TestRandomDeletes.java
+++ b/test/src/main/java/org/apache/accumulo/test/TestRandomDeletes.java
@@ -79,7 +79,7 @@
   }
 
   private static TreeSet<RowColumn> scanAll(ClientOnDefaultTable opts, ScannerOpts scanOpts, String tableName) throws Exception {
-    TreeSet<RowColumn> result = new TreeSet<RowColumn>();
+    TreeSet<RowColumn> result = new TreeSet<>();
     Connector conn = opts.getConnector();
     Scanner scanner = conn.createScanner(tableName, auths);
     scanner.setBatchSize(scanOpts.scanBatchSize);
@@ -95,7 +95,7 @@
   private static long scrambleDeleteHalfAndCheck(ClientOnDefaultTable opts, ScannerOpts scanOpts, BatchWriterOpts bwOpts, String tableName, Set<RowColumn> rows)
       throws Exception {
     int result = 0;
-    ArrayList<RowColumn> entries = new ArrayList<RowColumn>(rows);
+    ArrayList<RowColumn> entries = new ArrayList<>(rows);
     java.util.Collections.shuffle(entries);
 
     Connector connector = opts.getConnector();
diff --git a/test/src/main/java/org/apache/accumulo/test/TextMemoryUsageTest.java b/test/src/main/java/org/apache/accumulo/test/TextMemoryUsageTest.java
index b9bc37a..b7664cb 100644
--- a/test/src/main/java/org/apache/accumulo/test/TextMemoryUsageTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/TextMemoryUsageTest.java
@@ -41,7 +41,7 @@
 
   @Override
   void init() {
-    map = new TreeMap<Text,Value>();
+    map = new TreeMap<>();
   }
 
   @Override
diff --git a/test/src/test/java/org/apache/accumulo/test/TotalQueuedIT.java b/test/src/main/java/org/apache/accumulo/test/TotalQueuedIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/TotalQueuedIT.java
rename to test/src/main/java/org/apache/accumulo/test/TotalQueuedIT.java
index 708d2d4..be800ad 100644
--- a/test/src/test/java/org/apache/accumulo/test/TotalQueuedIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TotalQueuedIT.java
@@ -29,19 +29,19 @@
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 import org.apache.accumulo.core.rpc.ThriftUtil;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.conf.ServerConfigurationFactory;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
 import com.google.common.net.HostAndPort;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 // see ACCUMULO-1950
-public class TotalQueuedIT extends ConfigurableMacIT {
+public class TotalQueuedIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -63,7 +63,7 @@
     c.tableOperations().create(tableName);
     c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "9999");
     c.tableOperations().setProperty(tableName, Property.TABLE_FILE_MAX.getKey(), "999");
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
     // get an idea of how fast the syncs occur
     byte row[] = new byte[250];
     BatchWriterConfig cfg = new BatchWriterConfig();
@@ -94,7 +94,7 @@
     // Now with a much bigger total queue
     c.instanceOperations().setProperty(Property.TSERV_TOTAL_MUTATION_QUEUE_MAX.getKey(), "" + LARGE_QUEUE_SIZE);
     c.tableOperations().flush(tableName, null, null, true);
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
     bw = c.createBatchWriter(tableName, cfg);
     now = System.currentTimeMillis();
     bytesSent = 0;
diff --git a/test/src/test/java/org/apache/accumulo/test/TracerRecoversAfterOfflineTableIT.java b/test/src/main/java/org/apache/accumulo/test/TracerRecoversAfterOfflineTableIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/TracerRecoversAfterOfflineTableIT.java
rename to test/src/main/java/org/apache/accumulo/test/TracerRecoversAfterOfflineTableIT.java
index fe1b836..15609f6 100644
--- a/test/src/test/java/org/apache/accumulo/test/TracerRecoversAfterOfflineTableIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TracerRecoversAfterOfflineTableIT.java
@@ -18,6 +18,8 @@
 
 import static org.junit.Assert.assertTrue;
 
+import java.util.concurrent.TimeUnit;
+
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
@@ -27,10 +29,9 @@
 import org.apache.accumulo.core.trace.DistributedTrace;
 import org.apache.accumulo.core.trace.Span;
 import org.apache.accumulo.core.trace.Trace;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.tracer.TraceDump;
 import org.apache.accumulo.tracer.TraceDump.Printer;
 import org.apache.accumulo.tracer.TraceServer;
@@ -38,10 +39,12 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  *
  */
-public class TracerRecoversAfterOfflineTableIT extends ConfigurableMacIT {
+public class TracerRecoversAfterOfflineTableIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
@@ -61,9 +64,9 @@
       MiniAccumuloClusterImpl mac = cluster;
       tracer = mac.exec(TraceServer.class);
       while (!conn.tableOperations().exists("trace")) {
-        UtilWaitThread.sleep(1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
-      UtilWaitThread.sleep(5000);
+      sleepUninterruptibly(5, TimeUnit.SECONDS);
     }
 
     log.info("Taking table offline");
diff --git a/test/src/test/java/org/apache/accumulo/test/TransportCachingIT.java b/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/TransportCachingIT.java
rename to test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
index 8bfd5fc..10cde74 100644
--- a/test/src/test/java/org/apache/accumulo/test/TransportCachingIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
@@ -38,7 +38,7 @@
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.thrift.transport.TTransport;
 import org.apache.thrift.transport.TTransportException;
 import org.junit.Test;
@@ -48,7 +48,7 @@
 /**
  * Test that {@link ThriftTransportPool} actually adheres to the cachedConnection argument
  */
-public class TransportCachingIT extends AccumuloClusterIT {
+public class TransportCachingIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(TransportCachingIT.class);
 
   @Test
@@ -60,7 +60,7 @@
     long rpcTimeout = DefaultConfiguration.getTimeInMillis(Property.GENERAL_RPC_TIMEOUT.getDefaultValue());
 
     // create list of servers
-    ArrayList<ThriftTransportKey> servers = new ArrayList<ThriftTransportKey>();
+    ArrayList<ThriftTransportKey> servers = new ArrayList<>();
 
     // add tservers
     ZooCache zc = new ZooCacheFactory().getZooCache(instance.getZooKeepers(), instance.getZooKeepersSessionTimeOut());
diff --git a/test/src/main/java/org/apache/accumulo/test/UnusedWALIT.java b/test/src/main/java/org/apache/accumulo/test/UnusedWALIT.java
new file mode 100644
index 0000000..281c358
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/UnusedWALIT.java
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test;
+
+import static org.junit.Assert.assertEquals;
+
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.UUID;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.master.state.TServerInstance;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.junit.Test;
+
+import com.google.common.collect.Iterators;
+
+// When reviewing the changes for ACCUMULO-3423, kturner suggested:
+// "tablets will now have log references that contain no data,
+// so it may be marked with 3 WALs, the first with data, the 2nd without, a 3rd with data.
+// It would be useful to have an IT that will test this situation."
+public class UnusedWALIT extends ConfigurableMacBase {
+
+  private ZooReaderWriter zk;
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    final long logSize = 1024 * 1024 * 10;
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "5s");
+    cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, Long.toString(logSize));
+    cfg.setNumTservers(1);
+    // use raw local file system so walogs sync and flush will work
+    hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
+    hadoopCoreSite.set("fs.namenode.fs-limits.min-block-size", Long.toString(logSize));
+  }
+
+  @Test(timeout = 2 * 60 * 1000)
+  public void test() throws Exception {
+    // stop the garbage collector so it cannot clean up walog entries during the test
+    getCluster().getClusterControl().stop(ServerType.GARBAGE_COLLECTOR);
+
+    // make two tables
+    String[] tableNames = getUniqueNames(2);
+    String bigTable = tableNames[0];
+    String lilTable = tableNames[1];
+    Connector c = getConnector();
+    c.tableOperations().create(bigTable);
+    c.tableOperations().create(lilTable);
+
+    Instance i = c.getInstance();
+    zk = new ZooReaderWriter(i.getZooKeepers(), i.getZooKeepersSessionTimeOut(), "");
+
+    // put some data in a log that should be replayed for both tables
+    writeSomeData(c, bigTable, 0, 10, 0, 10);
+    scanSomeData(c, bigTable, 0, 10, 0, 10);
+    writeSomeData(c, lilTable, 0, 1, 0, 1);
+    scanSomeData(c, lilTable, 0, 1, 0, 1);
+    assertEquals(2, getWALCount(i, zk));
+
+    // roll the logs by pushing data into bigTable
+    writeSomeData(c, bigTable, 0, 3000, 0, 1000);
+    assertEquals(3, getWALCount(i, zk));
+
+    // put some data in the latest log
+    writeSomeData(c, lilTable, 1, 10, 0, 10);
+    scanSomeData(c, lilTable, 1, 10, 0, 10);
+
+    // bounce the tserver
+    getCluster().getClusterControl().stop(ServerType.TABLET_SERVER);
+    getCluster().getClusterControl().start(ServerType.TABLET_SERVER);
+
+    // wait for the metadata table to be online
+    Iterators.size(c.createScanner(MetadataTable.NAME, Authorizations.EMPTY).iterator());
+
+    // check our two sets of data in different logs
+    scanSomeData(c, lilTable, 0, 1, 0, 1);
+    scanSomeData(c, lilTable, 1, 10, 0, 10);
+  }
+
+  private void scanSomeData(Connector c, String table, int startRow, int rowCount, int startCol, int colCount) throws Exception {
+    Scanner s = c.createScanner(table, Authorizations.EMPTY);
+    s.setRange(new Range(Integer.toHexString(startRow), Integer.toHexString(startRow + rowCount)));
+    int row = startRow;
+    int col = startCol;
+    for (Entry<Key,Value> entry : s) {
+      assertEquals(row, Integer.parseInt(entry.getKey().getRow().toString(), 16));
+      assertEquals(col++, Integer.parseInt(entry.getKey().getColumnQualifier().toString(), 16));
+      if (col == startCol + colCount) {
+        col = startCol;
+        row++;
+        if (row == startRow + rowCount) {
+          break;
+        }
+      }
+    }
+    assertEquals(startRow + rowCount, row);
+  }
+
+  private int getWALCount(Instance i, ZooReaderWriter zk) throws Exception {
+    WalStateManager wals = new WalStateManager(i, zk);
+    int result = 0;
+    for (Entry<TServerInstance,List<UUID>> entry : wals.getAllMarkers().entrySet()) {
+      result += entry.getValue().size();
+    }
+    return result;
+  }
+
+  private void writeSomeData(Connector conn, String table, int startRow, int rowCount, int startCol, int colCount) throws Exception {
+    BatchWriterConfig config = new BatchWriterConfig();
+    config.setMaxMemory(10 * 1024 * 1024);
+    BatchWriter bw = conn.createBatchWriter(table, config);
+    for (int r = startRow; r < startRow + rowCount; r++) {
+      Mutation m = new Mutation(Integer.toHexString(r));
+      for (int c = startCol; c < startCol + colCount; c++) {
+        m.put("", Integer.toHexString(c), "");
+      }
+      bw.addMutation(m);
+    }
+    bw.close();
+  }
+
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/UserCompactionStrategyIT.java b/test/src/main/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
rename to test/src/main/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
index fd21cd3..4451987 100644
--- a/test/src/test/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/UserCompactionStrategyIT.java
@@ -17,6 +17,8 @@
 
 package org.apache.accumulo.test;
 
+import java.io.File;
+import java.io.IOException;
 import java.util.Arrays;
 import java.util.HashSet;
 import java.util.Map;
@@ -40,9 +42,11 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.test.functional.ConfigurableCompactionIT;
 import org.apache.accumulo.test.functional.FunctionalTestUtils;
 import org.apache.accumulo.test.functional.SlowIterator;
+import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Assume;
@@ -51,7 +55,7 @@
 import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.ImmutableSet;
 
-public class UserCompactionStrategyIT extends AccumuloClusterIT {
+public class UserCompactionStrategyIT extends AccumuloClusterHarness {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -124,16 +128,18 @@
     // Can't assume that a test-resource will be on the server's classpath
     Assume.assumeTrue(ClusterType.MINI == getClusterType());
 
-    // test pertable classpath + user specified compaction strat
+    // test per-table classpath + user specified compaction strategy
 
     final Connector c = getConnector();
     final String tableName = getUniqueNames(1)[0];
+    File target = new File(System.getProperty("user.dir"), "target");
+    Assert.assertTrue(target.mkdirs() || target.isDirectory());
+    File destFile = installJar(target, "/TestCompactionStrat.jar");
     c.tableOperations().create(tableName);
-    c.instanceOperations().setProperty(Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "context1",
-        System.getProperty("user.dir") + "/src/test/resources/TestCompactionStrat.jar");
+    c.instanceOperations().setProperty(Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "context1", destFile.toString());
     c.tableOperations().setProperty(tableName, Property.TABLE_CLASSPATH.getKey(), "context1");
 
-    c.tableOperations().addSplits(tableName, new TreeSet<Text>(Arrays.asList(new Text("efg"))));
+    c.tableOperations().addSplits(tableName, new TreeSet<>(Arrays.asList(new Text("efg"))));
 
     writeFlush(c, tableName, "a");
     writeFlush(c, tableName, "b");
@@ -154,6 +160,12 @@
     Assert.assertEquals(2, FunctionalTestUtils.countRFiles(c, tableName));
   }
 
+  private static File installJar(File destDir, String jarFile) throws IOException {
+    File destName = new File(destDir, new File(jarFile).getName());
+    FileUtils.copyInputStreamToFile(ConfigurableCompactionIT.class.getResourceAsStream(jarFile), destName);
+    return destName;
+  }
+
   @Test
   public void testIterators() throws Exception {
     // test compaction strategy + iterators
@@ -276,7 +288,7 @@
   }
 
   private Set<String> getRows(Connector c, String tableName) throws TableNotFoundException {
-    Set<String> rows = new HashSet<String>();
+    Set<String> rows = new HashSet<>();
     Scanner scanner = c.createScanner(tableName, Authorizations.EMPTY);
 
     for (Entry<Key,Value> entry : scanner)
diff --git a/test/src/test/java/org/apache/accumulo/test/UsersIT.java b/test/src/main/java/org/apache/accumulo/test/UsersIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/UsersIT.java
rename to test/src/main/java/org/apache/accumulo/test/UsersIT.java
index 579daee..131f042 100644
--- a/test/src/test/java/org/apache/accumulo/test/UsersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/UsersIT.java
@@ -27,10 +27,10 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.security.SecurityErrorCode;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Test;
 
-public class UsersIT extends AccumuloClusterIT {
+public class UsersIT extends AccumuloClusterHarness {
 
   @Test
   public void testCreateExistingUser() throws Exception {
diff --git a/test/src/test/java/org/apache/accumulo/test/VerifySerialRecoveryIT.java b/test/src/main/java/org/apache/accumulo/test/VerifySerialRecoveryIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/VerifySerialRecoveryIT.java
rename to test/src/main/java/org/apache/accumulo/test/VerifySerialRecoveryIT.java
index c318075..3ec0e30 100644
--- a/test/src/test/java/org/apache/accumulo/test/VerifySerialRecoveryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/VerifySerialRecoveryIT.java
@@ -32,7 +32,7 @@
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
 import org.apache.accumulo.server.util.Admin;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.test.functional.FunctionalTestUtils;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.hadoop.conf.Configuration;
@@ -42,7 +42,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class VerifySerialRecoveryIT extends ConfigurableMacIT {
+public class VerifySerialRecoveryIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -58,7 +58,7 @@
     String tableName = getUniqueNames(1)[0];
     Connector c = getConnector();
     c.tableOperations().create(tableName);
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 200; i++) {
       splits.add(new Text(AssignmentThreadsIT.randomHex(8)));
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/VolumeChooserIT.java b/test/src/main/java/org/apache/accumulo/test/VolumeChooserIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/VolumeChooserIT.java
rename to test/src/main/java/org/apache/accumulo/test/VolumeChooserIT.java
index ce07373..6f575c7 100644
--- a/test/src/test/java/org/apache/accumulo/test/VolumeChooserIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/VolumeChooserIT.java
@@ -47,7 +47,7 @@
 import org.apache.accumulo.server.fs.PerTableVolumeChooser;
 import org.apache.accumulo.server.fs.PreferredVolumeChooser;
 import org.apache.accumulo.server.fs.RandomVolumeChooser;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RawLocalFileSystem;
@@ -57,7 +57,7 @@
 /**
  *
  */
-public class VolumeChooserIT extends ConfigurableMacIT {
+public class VolumeChooserIT extends ConfigurableMacBase {
 
   private static final Text EMPTY = new Text();
   private static final Value EMPTY_VALUE = new Value(new byte[] {});
@@ -80,7 +80,7 @@
     namespace2 = "ns_" + getUniqueNames(2)[1];
 
     // Set the general volume chooser to the PerTableVolumeChooser so that different choosers can be specified
-    Map<String,String> siteConfig = new HashMap<String,String>();
+    Map<String,String> siteConfig = new HashMap<>();
     siteConfig.put(Property.GENERAL_VOLUME_CHOOSER.getKey(), PerTableVolumeChooser.class.getName());
     cfg.setSiteConfig(siteConfig);
 
@@ -108,7 +108,7 @@
 
   public void addSplits(Connector connector, String tableName) throws TableNotFoundException, AccumuloException, AccumuloSecurityException {
     // Add 10 splits to the table
-    SortedSet<Text> partitions = new TreeSet<Text>();
+    SortedSet<Text> partitions = new TreeSet<>();
     for (String s : "b,e,g,j,l,o,q,t,v,y".split(","))
       partitions.add(new Text(s));
     connector.tableOperations().addSplits(tableName, partitions);
@@ -135,7 +135,7 @@
 
   public void verifyVolumes(Connector connector, String tableName, Range tableRange, String vol) throws TableNotFoundException {
     // Verify the new files are written to the Volumes specified
-    ArrayList<String> volumes = new ArrayList<String>();
+    ArrayList<String> volumes = new ArrayList<>();
     for (String s : vol.split(","))
       volumes.add(s);
 
diff --git a/test/src/test/java/org/apache/accumulo/test/VolumeIT.java b/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/VolumeIT.java
rename to test/src/main/java/org/apache/accumulo/test/VolumeIT.java
index d4b3b61..f9a6a326 100644
--- a/test/src/test/java/org/apache/accumulo/test/VolumeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/VolumeIT.java
@@ -39,6 +39,7 @@
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableExistsException;
@@ -65,8 +66,11 @@
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.ServerConstants;
 import org.apache.accumulo.server.init.Initialize;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalState;
 import org.apache.accumulo.server.util.Admin;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -76,7 +80,7 @@
 import org.junit.Assert;
 import org.junit.Test;
 
-public class VolumeIT extends ConfigurableMacIT {
+public class VolumeIT extends ConfigurableMacBase {
 
   private static final Text EMPTY = new Text();
   private static final Value EMPTY_VALUE = new Value(new byte[] {});
@@ -103,6 +107,7 @@
     cfg.setProperty(Property.INSTANCE_DFS_DIR, v1Uri.getPath());
     cfg.setProperty(Property.INSTANCE_DFS_URI, v1Uri.getScheme() + v1Uri.getHost());
     cfg.setProperty(Property.INSTANCE_VOLUMES, v1.toString() + "," + v2.toString());
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
 
     // use raw local file system so walogs sync and flush will work
     hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
@@ -116,7 +121,7 @@
     Connector connector = getConnector();
     String tableName = getUniqueNames(1)[0];
     connector.tableOperations().create(tableName);
-    SortedSet<Text> partitions = new TreeSet<Text>();
+    SortedSet<Text> partitions = new TreeSet<>();
     // with some splits
     for (String s : "d,m,t".split(","))
       partitions.add(new Text(s));
@@ -153,13 +158,13 @@
     List<DiskUsage> diskUsage = connector.tableOperations().getDiskUsage(Collections.singleton(tableName));
     assertEquals(1, diskUsage.size());
     long usage = diskUsage.get(0).getUsage().longValue();
-    System.out.println("usage " + usage);
+    log.debug("usage {}", usage);
     assertTrue(usage > 700 && usage < 800);
   }
 
   private void verifyData(List<String> expected, Scanner createScanner) {
 
-    List<String> actual = new ArrayList<String>();
+    List<String> actual = new ArrayList<>();
 
     for (Entry<Key,Value> entry : createScanner) {
       Key k = entry.getKey();
@@ -175,7 +180,7 @@
   @Test
   public void testRelativePaths() throws Exception {
 
-    List<String> expected = new ArrayList<String>();
+    List<String> expected = new ArrayList<>();
 
     Connector connector = getConnector();
     String tableName = getUniqueNames(1)[0];
@@ -183,7 +188,7 @@
 
     String tableId = connector.tableOperations().tableIdMap().get(tableName);
 
-    SortedSet<Text> partitions = new TreeSet<Text>();
+    SortedSet<Text> partitions = new TreeSet<>();
     // with some splits
     for (String s : "c,g,k,p,s,v".split(","))
       partitions.add(new Text(s));
@@ -223,7 +228,7 @@
 
     Scanner metaScanner = connector.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     metaScanner.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
-    metaScanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
+    metaScanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
 
     BatchWriter mbw = connector.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
 
@@ -341,7 +346,7 @@
     cluster.start();
 
     // Make sure we can still read the tables (tableNames[0] is very likely to have a file still on v1)
-    List<String> expected = new ArrayList<String>();
+    List<String> expected = new ArrayList<>();
     for (int i = 0; i < 100; i++) {
       String row = String.format("%06d", i * 100 + 3);
       expected.add(row + ":cf1:cq1:1");
@@ -355,7 +360,7 @@
 
   private void writeData(String tableName, Connector conn) throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException,
       MutationsRejectedException {
-    TreeSet<Text> splits = new TreeSet<Text>();
+    TreeSet<Text> splits = new TreeSet<>();
     for (int i = 1; i < 100; i++) {
       splits.add(new Text(String.format("%06d", i * 100)));
     }
@@ -374,12 +379,11 @@
     bw.close();
   }
 
-  private void verifyVolumesUsed(String tableName, boolean shouldExist, Path... paths) throws AccumuloException, AccumuloSecurityException,
-      TableExistsException, TableNotFoundException, MutationsRejectedException {
+  private void verifyVolumesUsed(String tableName, boolean shouldExist, Path... paths) throws Exception {
 
     Connector conn = getConnector();
 
-    List<String> expected = new ArrayList<String>();
+    List<String> expected = new ArrayList<>();
     for (int i = 0; i < 100; i++) {
       String row = String.format("%06d", i * 100 + 3);
       expected.add(row + ":cf1:cq1:1");
@@ -401,7 +405,7 @@
     Scanner metaScanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     MetadataSchema.TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.fetch(metaScanner);
     metaScanner.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
-    metaScanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
+    metaScanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
 
     int counts[] = new int[paths.length];
 
@@ -425,6 +429,18 @@
       Assert.fail("Unexpected volume " + path);
     }
 
+    Instance i = conn.getInstance();
+    ZooReaderWriter zk = new ZooReaderWriter(i.getZooKeepers(), i.getZooKeepersSessionTimeOut(), "");
+    WalStateManager wals = new WalStateManager(i, zk);
+    outer: for (Entry<Path,WalState> entry : wals.getAllState().entrySet()) {
+      for (Path path : paths) {
+        if (entry.getKey().toString().startsWith(path.toString())) {
+          continue outer;
+        }
+      }
+      Assert.fail("Unexpected volume " + entry.getKey());
+    }
+
     // if a volume is chosen randomly for each tablet, then the probability that a volume will not be chosen for any tablet is ((num_volumes -
     // 1)/num_volumes)^num_tablets. For 100 tablets and 3 volumes the probability that only 2 volumes would be chosen is 2.46e-18
 
@@ -435,6 +451,7 @@
     }
 
     Assert.assertEquals(200, sum);
+
   }
 
   @Test
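The WAL check added to `verifyVolumesUsed` in the `VolumeIT` hunk above uses a labeled `continue` so that an entry is accepted as soon as its path starts with any expected volume, and flagged only when no prefix matches. A minimal standalone sketch of that pattern (hypothetical `findUnexpected` helper using plain `String` paths instead of Hadoop `Path`/`WalState`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class VolumePrefixCheck {

  // Return every path that does not start with one of the allowed volume prefixes.
  static List<String> findUnexpected(List<String> paths, String... volumes) {
    List<String> unexpected = new ArrayList<>();
    outer: for (String path : paths) {
      for (String vol : volumes) {
        if (path.startsWith(vol)) {
          continue outer; // path lives on an expected volume; check the next path
        }
      }
      unexpected.add(path); // no volume prefix matched
    }
    return unexpected;
  }

  public static void main(String[] args) {
    List<String> wals = Arrays.asList("file:/v1/wal/abc", "file:/v2/wal/def", "file:/v3/wal/ghi");
    System.out.println(findUnexpected(wals, "file:/v1", "file:/v2")); // only the v3 entry remains
  }
}
```

The labeled loop avoids a boolean `found` flag; the test itself calls `Assert.fail` in place of collecting the unexpected entries.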
diff --git a/test/src/test/java/org/apache/accumulo/test/WaitForBalanceIT.java b/test/src/main/java/org/apache/accumulo/test/WaitForBalanceIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/WaitForBalanceIT.java
rename to test/src/main/java/org/apache/accumulo/test/WaitForBalanceIT.java
index 0df236a..c6abbce 100644
--- a/test/src/test/java/org/apache/accumulo/test/WaitForBalanceIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/WaitForBalanceIT.java
@@ -33,13 +33,13 @@
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
 
-public class WaitForBalanceIT extends ConfigurableMacIT {
+public class WaitForBalanceIT extends ConfigurableMacBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -56,7 +56,7 @@
     final String tableName = getUniqueNames(1)[0];
     c.tableOperations().create(tableName);
     c.instanceOperations().waitForBalance();
-    final SortedSet<Text> partitionKeys = new TreeSet<Text>();
+    final SortedSet<Text> partitionKeys = new TreeSet<>();
     for (int i = 0; i < 1000; i++) {
       partitionKeys.add(new Text("" + i));
     }
@@ -67,7 +67,7 @@
   }
 
   private boolean isBalanced() throws Exception {
-    final Map<String,Integer> counts = new HashMap<String,Integer>();
+    final Map<String,Integer> counts = new HashMap<>();
     int offline = 0;
     final Connector c = getConnector();
     for (String tableName : new String[] {MetadataTable.NAME, RootTable.NAME}) {
diff --git a/test/src/main/java/org/apache/accumulo/test/WrongTabletTest.java b/test/src/main/java/org/apache/accumulo/test/WrongTabletTest.java
index 6a01e8c..a8fc439 100644
--- a/test/src/main/java/org/apache/accumulo/test/WrongTabletTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/WrongTabletTest.java
@@ -63,7 +63,7 @@
 
       Mutation mutation = new Mutation(new Text("row_0003750001"));
       mutation.putDelete(new Text("colf"), new Text("colq"));
-      client.update(Tracer.traceInfo(), context.rpcCreds(), new KeyExtent(new Text("!!"), null, new Text("row_0003750000")).toThrift(), mutation.toThrift(),
+      client.update(Tracer.traceInfo(), context.rpcCreds(), new KeyExtent("!!", null, new Text("row_0003750000")).toThrift(), mutation.toThrift(),
           TDurability.DEFAULT);
     } catch (Exception e) {
       throw new RuntimeException(e);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java b/test/src/main/java/org/apache/accumulo/test/ZooKeeperPropertiesIT.java
similarity index 60%
copy from test/src/test/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java
copy to test/src/main/java/org/apache/accumulo/test/ZooKeeperPropertiesIT.java
index ffa527f..d1e4882 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ZooKeeperPropertiesIT.java
@@ -14,28 +14,21 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.test.functional;
+package org.apache.accumulo.test;
 
+import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Test;
 
-public class CreateManyScannersIT extends AccumuloClusterIT {
+public class ZooKeeperPropertiesIT extends AccumuloClusterHarness {
 
-  @Override
-  protected int defaultTimeoutSeconds() {
-    return 60;
-  }
-
-  @Test
-  public void run() throws Exception {
-    Connector c = getConnector();
-    String tableName = getUniqueNames(1)[0];
-    c.tableOperations().create(tableName);
-    for (int i = 0; i < 100000; i++) {
-      c.createScanner(tableName, Authorizations.EMPTY);
-    }
+  @Test(expected = AccumuloException.class)
+  public void testNoFiles() throws Exception {
+    Connector conn = getConnector();
+    // Should throw an error as this property can't be changed in ZooKeeper
+    conn.instanceOperations().setProperty(Property.GENERAL_RPC_TIMEOUT.getKey(), "60s");
   }
 
 }
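The rewritten test above relies on the fact that some Accumulo properties are fixed at startup and cannot be overridden at runtime through ZooKeeper, so `instanceOperations().setProperty` fails with an `AccumuloException`. A standalone sketch of that validation pattern (a hypothetical `PropertyStore`, not the real Accumulo API; the property names and exception type are stand-ins):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class PropertyStore {

  // Properties that may only be set in the site configuration, never at runtime.
  private static final Set<String> FIXED = Set.of("general.rpc.timeout", "instance.volumes");

  private final Map<String,String> live = new HashMap<>();

  // Mirrors the shape of setProperty: reject properties that are not ZooKeeper-mutable.
  public void setProperty(String key, String value) {
    if (FIXED.contains(key)) {
      throw new IllegalArgumentException("Property " + key + " cannot be changed in ZooKeeper");
    }
    live.put(key, value);
  }

  public static void main(String[] args) {
    PropertyStore store = new PropertyStore();
    store.setProperty("table.scan.max.memory", "512K"); // mutable at runtime, accepted
    try {
      store.setProperty("general.rpc.timeout", "60s"); // fixed, rejected like the test above
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

In JUnit 4 the same expectation is expressed declaratively with `@Test(expected = AccumuloException.class)`, as the renamed test does.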
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousBatchWalker.java b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousBatchWalker.java
index 2c32176..e08be10 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousBatchWalker.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousBatchWalker.java
@@ -35,11 +35,11 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 
 import com.beust.jcommander.Parameter;
 import com.beust.jcommander.validators.PositiveInteger;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 public class ContinuousBatchWalker {
 
@@ -68,7 +68,7 @@
       bs.setTimeout(bsOpts.scanTimeout, TimeUnit.MILLISECONDS);
 
       Set<Text> batch = getBatch(scanner, opts.min, opts.max, scanOpts.scanBatchSize, r);
-      List<Range> ranges = new ArrayList<Range>(batch.size());
+      List<Range> ranges = new ArrayList<>(batch.size());
 
       for (Text row : batch) {
         ranges.add(new Range(row));
@@ -76,7 +76,7 @@
 
       runBatchScan(scanOpts.scanBatchSize, bs, batch, ranges);
 
-      UtilWaitThread.sleep(opts.sleepTime);
+      sleepUninterruptibly(opts.sleepTime, TimeUnit.MILLISECONDS);
     }
 
   }
@@ -84,7 +84,7 @@
   private static void runBatchScan(int batchSize, BatchScanner bs, Set<Text> batch, List<Range> ranges) {
     bs.setRanges(ranges);
 
-    Set<Text> rowsSeen = new HashSet<Text>();
+    Set<Text> rowsSeen = new HashSet<>();
 
     int count = 0;
 
@@ -104,8 +104,8 @@
     long t2 = System.currentTimeMillis();
 
     if (!rowsSeen.equals(batch)) {
-      HashSet<Text> copy1 = new HashSet<Text>(rowsSeen);
-      HashSet<Text> copy2 = new HashSet<Text>(batch);
+      HashSet<Text> copy1 = new HashSet<>(rowsSeen);
+      HashSet<Text> copy2 = new HashSet<>(batch);
 
       copy1.removeAll(batch);
       copy2.removeAll(rowsSeen);
@@ -133,7 +133,7 @@
     }
   }
 
-  private static HashSet<Text> rowsToQuery = new HashSet<Text>();
+  private static HashSet<Text> rowsToQuery = new HashSet<>();
 
   private static Set<Text> getBatch(Scanner scanner, long min, long max, int batchSize, Random r) {
 
@@ -157,10 +157,10 @@
 
       System.out.println("FSB " + t1 + " " + (t2 - t1) + " " + count);
 
-      UtilWaitThread.sleep(100);
+      sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     }
 
-    HashSet<Text> ret = new HashSet<Text>();
+    HashSet<Text> ret = new HashSet<>();
 
     Iterator<Text> iter = rowsToQuery.iterator();
 
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousIngest.java b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousIngest.java
index ddc36aa..b59cf04 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousIngest.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousIngest.java
@@ -57,7 +57,7 @@
       return;
     }
 
-    visibilities = new ArrayList<ColumnVisibility>();
+    visibilities = new ArrayList<>();
 
     FileSystem fs = FileSystem.get(new Configuration());
     BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(new Path(opts.visFile)), UTF_8));
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousScanner.java b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousScanner.java
index a77de3d..63709df 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousScanner.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousScanner.java
@@ -21,6 +21,7 @@
 import java.util.Iterator;
 import java.util.Map.Entry;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.cli.ClientOnDefaultTable;
 import org.apache.accumulo.core.cli.ScannerOpts;
@@ -30,11 +31,11 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.hadoop.io.Text;
 
 import com.beust.jcommander.Parameter;
 import com.beust.jcommander.validators.PositiveInteger;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 public class ContinuousScanner {
 
@@ -100,7 +101,7 @@
       System.out.printf("SCN %d %s %d %d%n", t1, new String(scanStart, UTF_8), (t2 - t1), count);
 
       if (opts.sleepTime > 0)
-        UtilWaitThread.sleep(opts.sleepTime);
+        sleepUninterruptibly(opts.sleepTime, TimeUnit.MILLISECONDS);
     }
 
   }
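Several of these hunks replace `UtilWaitThread.sleep` with Guava's `sleepUninterruptibly`, which keeps sleeping for the full requested duration even if the thread is interrupted, then restores the interrupt flag before returning so callers can still observe it. A stdlib-only sketch of that behavior (my own re-implementation for illustration, not Guava's code):

```java
import java.util.concurrent.TimeUnit;

public class UninterruptibleSleep {

  // Sleep for the full duration even across interrupts, then restore the interrupt flag.
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    long remaining = unit.toNanos(duration);
    long end = System.nanoTime() + remaining;
    try {
      while (remaining > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remaining);
          remaining = 0;
        } catch (InterruptedException e) {
          interrupted = true;                  // remember the interrupt...
          remaining = end - System.nanoTime(); // ...and sleep out the remainder
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // re-set the flag for the caller
      }
    }
  }

  public static void main(String[] args) {
    long t0 = System.nanoTime();
    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
    System.out.println("slept ~" + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t0) + " ms");
  }
}
```

The design choice matters in these long-running continuous tests: a sleep that silently swallows interrupts can mask shutdown requests, while this pattern defers the interrupt rather than dropping it.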
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousStatsCollector.java b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousStatsCollector.java
index cf4e3c1..b880085 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousStatsCollector.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousStatsCollector.java
@@ -52,7 +52,6 @@
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.ClusterStatus;
 import org.apache.hadoop.mapred.JobClient;
 import org.slf4j.Logger;
@@ -99,7 +98,7 @@
       scanner.setBatchSize(scanBatchSize);
       scanner.fetchColumnFamily(DataFileColumnFamily.NAME);
       scanner.addScanIterator(new IteratorSetting(1000, "cfc", ColumnFamilyCounter.class.getName()));
-      scanner.setRange(new KeyExtent(new Text(tableId), null, null).toMetadataRange());
+      scanner.setRange(new KeyExtent(tableId, null, null).toMetadataRange());
 
       Stat s = new Stat();
 
@@ -144,7 +143,7 @@
         MasterMonitorInfo stats = client.getMasterStats(Tracer.traceInfo(), context.rpcCreds());
 
         TableInfo all = new TableInfo();
-        Map<String,TableInfo> tableSummaries = new HashMap<String,TableInfo>();
+        Map<String,TableInfo> tableSummaries = new HashMap<>();
 
         for (TabletServerStatus server : stats.tServerInfo) {
           for (Entry<String,TableInfo> info : server.tableMap.entrySet()) {
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousVerify.java b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousVerify.java
index 8a8a9a2..4222005 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousVerify.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousVerify.java
@@ -105,7 +105,7 @@
   }
 
   public static class CReducer extends Reducer<LongWritable,VLongWritable,Text,Text> {
-    private ArrayList<Long> refs = new ArrayList<Long>();
+    private ArrayList<Long> refs = new ArrayList<>();
 
     @Override
     public void reduce(LongWritable key, Iterable<VLongWritable> values, Context context) throws IOException, InterruptedException {
@@ -143,14 +143,14 @@
   }
 
   static class Opts extends MapReduceClientOnDefaultTable {
-    @Parameter(names = "--output", description = "location in HDFS to store the results; must not exist", required = true)
+    @Parameter(names = "--output", description = "location in HDFS to store the results; must not exist")
     String outputDir = "/tmp/continuousVerify";
 
-    @Parameter(names = "--maxMappers", description = "the maximum number of mappers to use", required = true, validateWith = PositiveInteger.class)
-    int maxMaps = 0;
+    @Parameter(names = "--maxMappers", description = "the maximum number of mappers to use", validateWith = PositiveInteger.class)
+    int maxMaps = 1;
 
-    @Parameter(names = "--reducers", description = "the number of reducers to use", required = true, validateWith = PositiveInteger.class)
-    int reducers = 0;
+    @Parameter(names = "--reducers", description = "the number of reducers to use", validateWith = PositiveInteger.class)
+    int reducers = 1;
 
     @Parameter(names = "--offline", description = "perform the verification directly on the files while the table is offline")
     boolean scanOffline = false;
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousWalk.java b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousWalk.java
index f2e4805..3f75d5a 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousWalk.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/ContinuousWalk.java
@@ -85,7 +85,7 @@
         return;
       }
 
-      auths = new ArrayList<Authorizations>();
+      auths = new ArrayList<>();
 
       FileSystem fs = FileSystem.get(new Configuration());
       BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(new Path(file)), UTF_8));
@@ -113,7 +113,7 @@
 
     Random r = new Random();
 
-    ArrayList<Value> values = new ArrayList<Value>();
+    ArrayList<Value> values = new ArrayList<>();
 
     while (true) {
       Scanner scanner = ContinuousUtil.createScanner(conn, clientOpts.getTableName(), opts.randomAuths.getAuths(r));
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/Histogram.java b/test/src/main/java/org/apache/accumulo/test/continuous/Histogram.java
index 8dd3c9d..f4b21df 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/Histogram.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/Histogram.java
@@ -41,7 +41,7 @@
 
   public Histogram() {
     sum = 0;
-    counts = new HashMap<T,HistData<T>>();
+    counts = new HashMap<>();
   }
 
   public void addPoint(T x) {
@@ -52,7 +52,7 @@
 
     HistData<T> hd = counts.get(x);
     if (hd == null) {
-      hd = new HistData<T>(x);
+      hd = new HistData<>(x);
       counts.put(x, hd);
     }
 
@@ -80,7 +80,7 @@
 
   public List<T> getKeysInCountSortedOrder() {
 
-    ArrayList<HistData<T>> sortedCounts = new ArrayList<HistData<T>>(counts.values());
+    ArrayList<HistData<T>> sortedCounts = new ArrayList<>(counts.values());
 
     Collections.sort(sortedCounts, new Comparator<HistData<T>>() {
       @Override
@@ -93,7 +93,7 @@
       }
     });
 
-    ArrayList<T> sortedKeys = new ArrayList<T>();
+    ArrayList<T> sortedKeys = new ArrayList<>();
 
     for (Iterator<HistData<T>> iter = sortedCounts.iterator(); iter.hasNext();) {
       HistData<T> hd = iter.next();
@@ -104,7 +104,7 @@
   }
 
   public void print(StringBuilder out) {
-    TreeSet<HistData<T>> sortedCounts = new TreeSet<HistData<T>>(counts.values());
+    TreeSet<HistData<T>> sortedCounts = new TreeSet<>(counts.values());
 
     int maxValueLen = 0;
 
@@ -133,7 +133,7 @@
     BufferedOutputStream bos = new BufferedOutputStream(fos);
     PrintStream ps = new PrintStream(bos, false, UTF_8.name());
 
-    TreeSet<HistData<T>> sortedCounts = new TreeSet<HistData<T>>(counts.values());
+    TreeSet<HistData<T>> sortedCounts = new TreeSet<>(counts.values());
     for (Iterator<HistData<T>> iter = sortedCounts.iterator(); iter.hasNext();) {
       HistData<T> hd = iter.next();
       ps.println(" " + hd.bin + " " + hd.count);
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/PrintScanTimeHistogram.java b/test/src/main/java/org/apache/accumulo/test/continuous/PrintScanTimeHistogram.java
index a43c2ba..1a25bab 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/PrintScanTimeHistogram.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/PrintScanTimeHistogram.java
@@ -32,8 +32,8 @@
   private static final Logger log = LoggerFactory.getLogger(PrintScanTimeHistogram.class);
 
   public static void main(String[] args) throws Exception {
-    Histogram<String> srqHist = new Histogram<String>();
-    Histogram<String> fsrHist = new Histogram<String>();
+    Histogram<String> srqHist = new Histogram<>();
+    Histogram<String> fsrHist = new Histogram<>();
 
     processFile(System.in, srqHist, fsrHist);
 
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/TimeBinner.java b/test/src/main/java/org/apache/accumulo/test/continuous/TimeBinner.java
index e40bc8e..6ecb14e 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/TimeBinner.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/TimeBinner.java
@@ -77,10 +77,10 @@
 
     String line = null;
 
-    HashMap<Long,DoubleWrapper> aggregation1 = new HashMap<Long,DoubleWrapper>();
-    HashMap<Long,DoubleWrapper> aggregation2 = new HashMap<Long,DoubleWrapper>();
-    HashMap<Long,DoubleWrapper> aggregation3 = new HashMap<Long,DoubleWrapper>();
-    HashMap<Long,DoubleWrapper> aggregation4 = new HashMap<Long,DoubleWrapper>();
+    HashMap<Long,DoubleWrapper> aggregation1 = new HashMap<>();
+    HashMap<Long,DoubleWrapper> aggregation2 = new HashMap<>();
+    HashMap<Long,DoubleWrapper> aggregation3 = new HashMap<>();
+    HashMap<Long,DoubleWrapper> aggregation4 = new HashMap<>();
 
     while ((line = in.readLine()) != null) {
 
@@ -144,7 +144,7 @@
       }
     }
 
-    TreeMap<Long,DoubleWrapper> sorted = new TreeMap<Long,DoubleWrapper>(aggregation1);
+    TreeMap<Long,DoubleWrapper> sorted = new TreeMap<>(aggregation1);
 
     Set<Entry<Long,DoubleWrapper>> es = sorted.entrySet();
 
diff --git a/test/src/main/java/org/apache/accumulo/test/continuous/UndefinedAnalyzer.java b/test/src/main/java/org/apache/accumulo/test/continuous/UndefinedAnalyzer.java
index 00c7eb0..bcd35d8 100644
--- a/test/src/main/java/org/apache/accumulo/test/continuous/UndefinedAnalyzer.java
+++ b/test/src/main/java/org/apache/accumulo/test/continuous/UndefinedAnalyzer.java
@@ -68,7 +68,7 @@
 
   static class IngestInfo {
 
-    Map<String,TreeMap<Long,Long>> flushes = new HashMap<String,TreeMap<Long,Long>>();
+    Map<String,TreeMap<Long,Long>> flushes = new HashMap<>();
 
     public IngestInfo(String logDir) throws Exception {
       File dir = new File(logDir);
@@ -103,7 +103,7 @@
             return;
           }
 
-          tm = new TreeMap<Long,Long>(Collections.reverseOrder());
+          tm = new TreeMap<>(Collections.reverseOrder());
           tm.put(0l, Long.parseLong(time));
           flushes.put(uuid, tm);
           break;
@@ -162,7 +162,7 @@
 
   static class TabletHistory {
 
-    List<TabletAssignment> assignments = new ArrayList<TabletAssignment>();
+    List<TabletAssignment> assignments = new ArrayList<>();
 
     TabletHistory(String tableId, String acuLogDir) throws Exception {
       File dir = new File(acuLogDir);
@@ -263,7 +263,7 @@
     BatchScannerOpts bsOpts = new BatchScannerOpts();
     opts.parseArgs(UndefinedAnalyzer.class.getName(), args, bsOpts);
 
-    List<UndefinedNode> undefs = new ArrayList<UndefinedNode>();
+    List<UndefinedNode> undefs = new ArrayList<>();
 
     BufferedReader reader = new BufferedReader(new InputStreamReader(System.in, UTF_8));
     String line;
@@ -278,20 +278,20 @@
     Connector conn = opts.getConnector();
     BatchScanner bscanner = conn.createBatchScanner(opts.getTableName(), opts.auths, bsOpts.scanThreads);
     bscanner.setTimeout(bsOpts.scanTimeout, TimeUnit.MILLISECONDS);
-    List<Range> refs = new ArrayList<Range>();
+    List<Range> refs = new ArrayList<>();
 
     for (UndefinedNode undefinedNode : undefs)
       refs.add(new Range(new Text(undefinedNode.ref)));
 
     bscanner.setRanges(refs);
 
-    HashMap<String,List<String>> refInfo = new HashMap<String,List<String>>();
+    HashMap<String,List<String>> refInfo = new HashMap<>();
 
     for (Entry<Key,Value> entry : bscanner) {
       String ref = entry.getKey().getRow().toString();
       List<String> vals = refInfo.get(ref);
       if (vals == null) {
-        vals = new ArrayList<String>();
+        vals = new ArrayList<>();
         refInfo.put(ref, vals);
       }
 
diff --git a/test/src/main/java/org/apache/accumulo/test/examples/simple/dirlist/CountIT.java b/test/src/main/java/org/apache/accumulo/test/examples/simple/dirlist/CountIT.java
new file mode 100644
index 0000000..93708a6
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/examples/simple/dirlist/CountIT.java
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.examples.simple.dirlist;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+
+import java.util.ArrayList;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.cli.BatchWriterOpts;
+import org.apache.accumulo.core.cli.ScannerOpts;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.examples.simple.dirlist.FileCount;
+import org.apache.accumulo.examples.simple.dirlist.FileCount.Opts;
+import org.apache.accumulo.examples.simple.dirlist.Ingest;
+import org.apache.accumulo.examples.simple.dirlist.QueryUtil;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.io.Text;
+import org.junit.Before;
+import org.junit.Test;
+
+public class CountIT extends ConfigurableMacBase {
+
+  private Connector conn;
+  private String tableName;
+
+  @Before
+  public void setupInstance() throws Exception {
+    tableName = getUniqueNames(1)[0];
+    conn = getConnector();
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+    ColumnVisibility cv = new ColumnVisibility();
+    // "/" has 1 dir
+    // "/local" has 2 dirs and 1 file
+    // "/local/user1" has 2 files
+    bw.addMutation(Ingest.buildMutation(cv, "/local", true, false, true, 272, 12345, null));
+    bw.addMutation(Ingest.buildMutation(cv, "/local/user1", true, false, true, 272, 12345, null));
+    bw.addMutation(Ingest.buildMutation(cv, "/local/user2", true, false, true, 272, 12345, null));
+    bw.addMutation(Ingest.buildMutation(cv, "/local/file", false, false, false, 1024, 12345, null));
+    bw.addMutation(Ingest.buildMutation(cv, "/local/file", false, false, false, 1024, 23456, null));
+    bw.addMutation(Ingest.buildMutation(cv, "/local/user1/file1", false, false, false, 2024, 12345, null));
+    bw.addMutation(Ingest.buildMutation(cv, "/local/user1/file2", false, false, false, 1028, 23456, null));
+    bw.close();
+  }
+
+  @Test
+  public void test() throws Exception {
+    Scanner scanner = conn.createScanner(tableName, new Authorizations());
+    scanner.fetchColumn(new Text("dir"), new Text("counts"));
+    assertFalse(scanner.iterator().hasNext());
+
+    Opts opts = new Opts();
+    ScannerOpts scanOpts = new ScannerOpts();
+    BatchWriterOpts bwOpts = new BatchWriterOpts();
+    opts.instance = conn.getInstance().getInstanceName();
+    opts.zookeepers = conn.getInstance().getZooKeepers();
+    opts.setTableName(tableName);
+    opts.setPrincipal(conn.whoami());
+    opts.setPassword(new Opts.Password(ROOT_PASSWORD));
+    FileCount fc = new FileCount(opts, scanOpts, bwOpts);
+    fc.run();
+
+    ArrayList<Pair<String,String>> expected = new ArrayList<>();
+    expected.add(new Pair<>(QueryUtil.getRow("").toString(), "1,0,3,3"));
+    expected.add(new Pair<>(QueryUtil.getRow("/local").toString(), "2,1,2,3"));
+    expected.add(new Pair<>(QueryUtil.getRow("/local/user1").toString(), "0,2,0,2"));
+    expected.add(new Pair<>(QueryUtil.getRow("/local/user2").toString(), "0,0,0,0"));
+
+    int i = 0;
+    for (Entry<Key,Value> e : scanner) {
+      assertEquals(e.getKey().getRow().toString(), expected.get(i).getFirst());
+      assertEquals(e.getValue().toString(), expected.get(i).getSecond());
+      i++;
+    }
+    assertEquals(i, expected.size());
+  }
+}
diff --git a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkInputFormatTest.java b/test/src/main/java/org/apache/accumulo/test/examples/simple/filedata/ChunkInputFormatIT.java
similarity index 63%
rename from examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkInputFormatTest.java
rename to test/src/main/java/org/apache/accumulo/test/examples/simple/filedata/ChunkInputFormatIT.java
index b95c00c..cb53ec0 100644
--- a/examples/simple/src/test/java/org/apache/accumulo/examples/simple/filedata/ChunkInputFormatTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/examples/simple/filedata/ChunkInputFormatIT.java
@@ -14,13 +14,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.examples.simple.filedata;
+
+package org.apache.accumulo.test.examples.simple.filedata;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.io.File;
@@ -33,13 +32,13 @@
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.examples.simple.filedata.ChunkInputFormat;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.mapreduce.Job;
@@ -47,40 +46,53 @@
 import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
+import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-public class ChunkInputFormatTest {
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Multimap;
 
-  private static AssertionError e0 = null;
-  private static AssertionError e1 = null;
-  private static AssertionError e2 = null;
-  private static IOException e3 = null;
+public class ChunkInputFormatIT extends AccumuloClusterHarness {
+
+  // Track errors in the MapReduce job; jobs insert a dummy error for the map and cleanup tasks (to ensure test correctness),
+  // so error tests should check for at least one error (could be more, depending on the test) rather than zero.
+  private static Multimap<String,AssertionError> assertionErrors = ArrayListMultimap.create();
 
   private static final Authorizations AUTHS = new Authorizations("A", "B", "C", "D");
 
   private static List<Entry<Key,Value>> data;
   private static List<Entry<Key,Value>> baddata;
 
+  private Connector conn;
+  private String tableName;
+
+  @Before
+  public void setupInstance() throws Exception {
+    conn = getConnector();
+    tableName = getUniqueNames(1)[0];
+    conn.securityOperations().changeUserAuthorizations(conn.whoami(), AUTHS);
+  }
+
   @BeforeClass
   public static void setupClass() {
     System.setProperty("hadoop.tmp.dir", System.getProperty("user.dir") + "/target/hadoop-tmp");
 
-    data = new ArrayList<Entry<Key,Value>>();
-    ChunkInputStreamTest.addData(data, "a", "refs", "ida\0ext", "A&B", "ext");
-    ChunkInputStreamTest.addData(data, "a", "refs", "ida\0name", "A&B", "name");
-    ChunkInputStreamTest.addData(data, "a", "~chunk", 100, 0, "A&B", "asdfjkl;");
-    ChunkInputStreamTest.addData(data, "a", "~chunk", 100, 1, "A&B", "");
-    ChunkInputStreamTest.addData(data, "b", "refs", "ida\0ext", "A&B", "ext");
-    ChunkInputStreamTest.addData(data, "b", "refs", "ida\0name", "A&B", "name");
-    ChunkInputStreamTest.addData(data, "b", "~chunk", 100, 0, "A&B", "qwertyuiop");
-    ChunkInputStreamTest.addData(data, "b", "~chunk", 100, 0, "B&C", "qwertyuiop");
-    ChunkInputStreamTest.addData(data, "b", "~chunk", 100, 1, "A&B", "");
-    ChunkInputStreamTest.addData(data, "b", "~chunk", 100, 1, "B&C", "");
-    ChunkInputStreamTest.addData(data, "b", "~chunk", 100, 1, "D", "");
-    baddata = new ArrayList<Entry<Key,Value>>();
-    ChunkInputStreamTest.addData(baddata, "c", "refs", "ida\0ext", "A&B", "ext");
-    ChunkInputStreamTest.addData(baddata, "c", "refs", "ida\0name", "A&B", "name");
+    data = new ArrayList<>();
+    ChunkInputStreamIT.addData(data, "a", "refs", "ida\0ext", "A&B", "ext");
+    ChunkInputStreamIT.addData(data, "a", "refs", "ida\0name", "A&B", "name");
+    ChunkInputStreamIT.addData(data, "a", "~chunk", 100, 0, "A&B", "asdfjkl;");
+    ChunkInputStreamIT.addData(data, "a", "~chunk", 100, 1, "A&B", "");
+    ChunkInputStreamIT.addData(data, "b", "refs", "ida\0ext", "A&B", "ext");
+    ChunkInputStreamIT.addData(data, "b", "refs", "ida\0name", "A&B", "name");
+    ChunkInputStreamIT.addData(data, "b", "~chunk", 100, 0, "A&B", "qwertyuiop");
+    ChunkInputStreamIT.addData(data, "b", "~chunk", 100, 0, "B&C", "qwertyuiop");
+    ChunkInputStreamIT.addData(data, "b", "~chunk", 100, 1, "A&B", "");
+    ChunkInputStreamIT.addData(data, "b", "~chunk", 100, 1, "B&C", "");
+    ChunkInputStreamIT.addData(data, "b", "~chunk", 100, 1, "D", "");
+    baddata = new ArrayList<>();
+    ChunkInputStreamIT.addData(baddata, "c", "refs", "ida\0ext", "A&B", "ext");
+    ChunkInputStreamIT.addData(baddata, "c", "refs", "ida\0name", "A&B", "name");
   }
 
   public static void entryEquals(Entry<Key,Value> e1, Entry<Key,Value> e2) {
@@ -94,6 +106,9 @@
 
       @Override
       protected void map(List<Entry<Key,Value>> key, InputStream value, Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+
         byte[] b = new byte[20];
         int read;
         try {
@@ -115,10 +130,10 @@
               assertEquals(read = value.read(b), -1);
               break;
             default:
-              assertTrue(false);
+              fail();
           }
         } catch (AssertionError e) {
-          e1 = e;
+          assertionErrors.put(table, e);
         } finally {
           value.close();
         }
@@ -127,10 +142,13 @@
 
       @Override
       protected void cleanup(Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+
         try {
           assertEquals(2, count);
         } catch (AssertionError e) {
-          e2 = e;
+          assertionErrors.put(table, e);
         }
       }
     }
@@ -140,6 +158,9 @@
 
       @Override
       protected void map(List<Entry<Key,Value>> key, InputStream value, Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+
         byte[] b = new byte[5];
         int read;
         try {
@@ -149,17 +170,17 @@
               assertEquals(new String(b, 0, read), "asdfj");
               break;
             default:
-              assertTrue(false);
+              fail();
           }
         } catch (AssertionError e) {
-          e1 = e;
+          assertionErrors.put(table, e);
         }
         count++;
         try {
           context.nextKeyValue();
-          assertTrue(false);
+          fail();
         } catch (IOException ioe) {
-          e3 = ioe;
+          assertionErrors.put(table + "_map_ioexception", new AssertionError(toString(), ioe));
         }
       }
     }
@@ -167,20 +188,23 @@
     public static class TestBadData extends Mapper<List<Entry<Key,Value>>,InputStream,List<Entry<Key,Value>>,InputStream> {
       @Override
       protected void map(List<Entry<Key,Value>> key, InputStream value, Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+
         byte[] b = new byte[20];
         try {
           assertEquals(key.size(), 2);
           entryEquals(key.get(0), baddata.get(0));
           entryEquals(key.get(1), baddata.get(1));
         } catch (AssertionError e) {
-          e0 = e;
+          assertionErrors.put(table, e);
         }
         try {
           assertFalse(value.read(b) > 0);
           try {
             fail();
           } catch (AssertionError e) {
-            e1 = e;
+            assertionErrors.put(table, e);
           }
         } catch (Exception e) {
           // expected, ignore
@@ -190,7 +214,7 @@
           try {
             fail();
           } catch (AssertionError e) {
-            e2 = e;
+            assertionErrors.put(table, e);
           }
         } catch (Exception e) {
           // expected, ignore
@@ -200,14 +224,14 @@
 
     @Override
     public int run(String[] args) throws Exception {
-      if (args.length != 5) {
-        throw new IllegalArgumentException("Usage : " + CIFTester.class.getName() + " <instance name> <user> <pass> <table> <mapperClass>");
+      if (args.length != 2) {
+        throw new IllegalArgumentException("Usage : " + CIFTester.class.getName() + " <table> <mapperClass>");
       }
 
-      String instance = args[0];
-      String user = args[1];
-      String pass = args[2];
-      String table = args[3];
+      String table = args[0];
+      assertionErrors.put(table, new AssertionError("Dummy"));
+      assertionErrors.put(table + "_map_ioexception", new AssertionError("Dummy_ioexception"));
+      getConf().set("MRTester_tableName", table);
 
       Job job = Job.getInstance(getConf());
       job.setJobName(this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
@@ -215,13 +239,13 @@
 
       job.setInputFormatClass(ChunkInputFormat.class);
 
-      ChunkInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
+      ChunkInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
+      ChunkInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
       ChunkInputFormat.setInputTableName(job, table);
       ChunkInputFormat.setScanAuthorizations(job, AUTHS);
-      ChunkInputFormat.setMockInstance(job, instance);
 
       @SuppressWarnings("unchecked")
-      Class<? extends Mapper<?,?,?,?>> forName = (Class<? extends Mapper<?,?,?,?>>) Class.forName(args[4]);
+      Class<? extends Mapper<?,?,?,?>> forName = (Class<? extends Mapper<?,?,?,?>>) Class.forName(args[1]);
       job.setMapperClass(forName);
       job.setMapOutputKeyClass(Key.class);
       job.setMapOutputValueClass(Value.class);
@@ -236,6 +260,7 @@
 
     public static int main(String... args) throws Exception {
       Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
       conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
       return ToolRunner.run(conf, new CIFTester(), args);
     }
@@ -243,10 +268,8 @@
 
   @Test
   public void test() throws Exception {
-    MockInstance instance = new MockInstance("instance1");
-    Connector conn = instance.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("test");
-    BatchWriter bw = conn.createBatchWriter("test", new BatchWriterConfig());
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     for (Entry<Key,Value> e : data) {
       Key k = e.getKey();
@@ -256,17 +279,14 @@
     }
     bw.close();
 
-    assertEquals(0, CIFTester.main("instance1", "root", "", "test", CIFTester.TestMapper.class.getName()));
-    assertNull(e1);
-    assertNull(e2);
+    assertEquals(0, CIFTester.main(tableName, CIFTester.TestMapper.class.getName()));
+    assertEquals(1, assertionErrors.get(tableName).size());
   }
 
   @Test
   public void testErrorOnNextWithoutClose() throws Exception {
-    MockInstance instance = new MockInstance("instance2");
-    Connector conn = instance.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("test");
-    BatchWriter bw = conn.createBatchWriter("test", new BatchWriterConfig());
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
 
     for (Entry<Key,Value> e : data) {
       Key k = e.getKey();
@@ -276,18 +296,16 @@
     }
     bw.close();
 
-    assertEquals(1, CIFTester.main("instance2", "root", "", "test", CIFTester.TestNoClose.class.getName()));
-    assertNull(e1);
-    assertNull(e2);
-    assertNotNull(e3);
+    assertEquals(1, CIFTester.main(tableName, CIFTester.TestNoClose.class.getName()));
+    assertEquals(1, assertionErrors.get(tableName).size());
+    // the IOException entry should actually exist, in addition to the dummy entry
+    assertEquals(2, assertionErrors.get(tableName + "_map_ioexception").size());
   }
 
   @Test
   public void testInfoWithoutChunks() throws Exception {
-    MockInstance instance = new MockInstance("instance3");
-    Connector conn = instance.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create("test");
-    BatchWriter bw = conn.createBatchWriter("test", new BatchWriterConfig());
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
     for (Entry<Key,Value> e : baddata) {
       Key k = e.getKey();
       Mutation m = new Mutation(k.getRow());
@@ -296,9 +314,7 @@
     }
     bw.close();
 
-    assertEquals(0, CIFTester.main("instance3", "root", "", "test", CIFTester.TestBadData.class.getName()));
-    assertNull(e0);
-    assertNull(e1);
-    assertNull(e2);
+    assertEquals(0, CIFTester.main(tableName, CIFTester.TestBadData.class.getName()));
+    assertEquals(1, assertionErrors.get(tableName).size());
   }
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/examples/simple/filedata/ChunkInputStreamIT.java b/test/src/main/java/org/apache/accumulo/test/examples/simple/filedata/ChunkInputStreamIT.java
new file mode 100644
index 0000000..5b956d7
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/examples/simple/filedata/ChunkInputStreamIT.java
@@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.test.examples.simple.filedata;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.TableExistsException;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.KeyValue;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.util.PeekingIterator;
+import org.apache.accumulo.examples.simple.filedata.ChunkInputStream;
+import org.apache.accumulo.examples.simple.filedata.FileDataIngest;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.io.Text;
+import org.junit.Before;
+import org.junit.Test;
+
+public class ChunkInputStreamIT extends AccumuloClusterHarness {
+
+  private static final Authorizations AUTHS = new Authorizations("A", "B", "C", "D");
+
+  private Connector conn;
+  private String tableName;
+  private List<Entry<Key,Value>> data;
+  private List<Entry<Key,Value>> baddata;
+  private List<Entry<Key,Value>> multidata;
+
+  @Before
+  public void setupInstance() throws Exception {
+    conn = getConnector();
+    tableName = getUniqueNames(1)[0];
+    conn.securityOperations().changeUserAuthorizations(conn.whoami(), AUTHS);
+  }
+
+  @Before
+  public void setupData() {
+    data = new ArrayList<>();
+    addData(data, "a", "refs", "id\0ext", "A&B", "ext");
+    addData(data, "a", "refs", "id\0name", "A&B", "name");
+    addData(data, "a", "~chunk", 100, 0, "A&B", "asdfjkl;");
+    addData(data, "a", "~chunk", 100, 1, "A&B", "");
+    addData(data, "b", "refs", "id\0ext", "A&B", "ext");
+    addData(data, "b", "refs", "id\0name", "A&B", "name");
+    addData(data, "b", "~chunk", 100, 0, "A&B", "qwertyuiop");
+    addData(data, "b", "~chunk", 100, 0, "B&C", "qwertyuiop");
+    addData(data, "b", "~chunk", 100, 1, "A&B", "");
+    addData(data, "b", "~chunk", 100, 1, "B&C", "");
+    addData(data, "b", "~chunk", 100, 1, "D", "");
+    addData(data, "c", "~chunk", 100, 0, "A&B", "asdfjkl;");
+    addData(data, "c", "~chunk", 100, 1, "A&B", "asdfjkl;");
+    addData(data, "c", "~chunk", 100, 2, "A&B", "");
+    addData(data, "d", "~chunk", 100, 0, "A&B", "");
+    addData(data, "e", "~chunk", 100, 0, "A&B", "asdfjkl;");
+    addData(data, "e", "~chunk", 100, 1, "A&B", "");
+    baddata = new ArrayList<>();
+    addData(baddata, "a", "~chunk", 100, 0, "A", "asdfjkl;");
+    addData(baddata, "b", "~chunk", 100, 0, "B", "asdfjkl;");
+    addData(baddata, "b", "~chunk", 100, 2, "C", "");
+    addData(baddata, "c", "~chunk", 100, 0, "D", "asdfjkl;");
+    addData(baddata, "c", "~chunk", 100, 2, "E", "");
+    addData(baddata, "d", "~chunk", 100, 0, "F", "asdfjkl;");
+    addData(baddata, "d", "~chunk", 100, 1, "G", "");
+    addData(baddata, "d", "~zzzzz", "colq", "H", "");
+    addData(baddata, "e", "~chunk", 100, 0, "I", "asdfjkl;");
+    addData(baddata, "e", "~chunk", 100, 1, "J", "");
+    addData(baddata, "e", "~chunk", 100, 2, "I", "asdfjkl;");
+    addData(baddata, "f", "~chunk", 100, 2, "K", "asdfjkl;");
+    addData(baddata, "g", "~chunk", 100, 0, "L", "");
+    multidata = new ArrayList<>();
+    addData(multidata, "a", "~chunk", 100, 0, "A&B", "asdfjkl;");
+    addData(multidata, "a", "~chunk", 100, 1, "A&B", "");
+    addData(multidata, "a", "~chunk", 200, 0, "B&C", "asdfjkl;");
+    addData(multidata, "b", "~chunk", 100, 0, "A&B", "asdfjkl;");
+    addData(multidata, "b", "~chunk", 200, 0, "B&C", "asdfjkl;");
+    addData(multidata, "b", "~chunk", 200, 1, "B&C", "asdfjkl;");
+    addData(multidata, "c", "~chunk", 100, 0, "A&B", "asdfjkl;");
+    addData(multidata, "c", "~chunk", 100, 1, "B&C", "");
+  }
+
+  static void addData(List<Entry<Key,Value>> data, String row, String cf, String cq, String vis, String value) {
+    data.add(new KeyValue(new Key(new Text(row), new Text(cf), new Text(cq), new Text(vis)), value.getBytes()));
+  }
+
+  static void addData(List<Entry<Key,Value>> data, String row, String cf, int chunkSize, int chunkCount, String vis, String value) {
+    Text chunkCQ = new Text(FileDataIngest.intToBytes(chunkSize));
+    chunkCQ.append(FileDataIngest.intToBytes(chunkCount), 0, 4);
+    data.add(new KeyValue(new Key(new Text(row), new Text(cf), chunkCQ, new Text(vis)), value.getBytes()));
+  }
+
+  @Test
+  public void testWithAccumulo() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException, IOException {
+    conn.tableOperations().create(tableName);
+    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
+
+    for (Entry<Key,Value> e : data) {
+      Key k = e.getKey();
+      Mutation m = new Mutation(k.getRow());
+      m.put(k.getColumnFamily(), k.getColumnQualifier(), new ColumnVisibility(k.getColumnVisibility()), e.getValue());
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    Scanner scan = conn.createScanner(tableName, AUTHS);
+
+    ChunkInputStream cis = new ChunkInputStream();
+    byte[] b = new byte[20];
+    int read;
+    PeekingIterator<Entry<Key,Value>> pi = new PeekingIterator<>(scan.iterator());
+
+    cis.setSource(pi);
+    assertEquals(read = cis.read(b), 8);
+    assertEquals(new String(b, 0, read), "asdfjkl;");
+    assertEquals(read = cis.read(b), -1);
+
+    cis.setSource(pi);
+    assertEquals(read = cis.read(b), 10);
+    assertEquals(new String(b, 0, read), "qwertyuiop");
+    assertEquals(read = cis.read(b), -1);
+    assertEquals(cis.getVisibilities().toString(), "[A&B, B&C, D]");
+    cis.close();
+
+    cis.setSource(pi);
+    assertEquals(read = cis.read(b), 16);
+    assertEquals(new String(b, 0, read), "asdfjkl;asdfjkl;");
+    assertEquals(read = cis.read(b), -1);
+    assertEquals(cis.getVisibilities().toString(), "[A&B]");
+    cis.close();
+
+    cis.setSource(pi);
+    assertEquals(read = cis.read(b), -1);
+    cis.close();
+
+    cis.setSource(pi);
+    assertEquals(read = cis.read(b), 8);
+    assertEquals(new String(b, 0, read), "asdfjkl;");
+    assertEquals(read = cis.read(b), -1);
+    cis.close();
+
+    assertFalse(pi.hasNext());
+  }
+
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/AddSplitIT.java b/test/src/main/java/org/apache/accumulo/test/functional/AddSplitIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/functional/AddSplitIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/AddSplitIT.java
index 5b32b94..0558c7f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/AddSplitIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/AddSplitIT.java
@@ -22,6 +22,7 @@
 import java.util.Iterator;
 import java.util.Map.Entry;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -34,12 +35,13 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class AddSplitIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class AddSplitIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -55,17 +57,17 @@
 
     insertData(tableName, 1l);
 
-    TreeSet<Text> splits = new TreeSet<Text>();
+    TreeSet<Text> splits = new TreeSet<>();
     splits.add(new Text(String.format("%09d", 333)));
     splits.add(new Text(String.format("%09d", 666)));
 
     c.tableOperations().addSplits(tableName, splits);
 
-    UtilWaitThread.sleep(100);
+    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
 
     Collection<Text> actualSplits = c.tableOperations().listSplits(tableName);
 
-    if (!splits.equals(new TreeSet<Text>(actualSplits))) {
+    if (!splits.equals(new TreeSet<>(actualSplits))) {
       throw new Exception(splits + " != " + actualSplits);
     }
 
@@ -81,11 +83,11 @@
 
     c.tableOperations().addSplits(tableName, splits);
 
-    UtilWaitThread.sleep(100);
+    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
 
     actualSplits = c.tableOperations().listSplits(tableName);
 
-    if (!splits.equals(new TreeSet<Text>(actualSplits))) {
+    if (!splits.equals(new TreeSet<>(actualSplits))) {
       throw new Exception(splits + " != " + actualSplits);
     }
 
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BackupMasterIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BackupMasterIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/functional/BackupMasterIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BackupMasterIT.java
index efed7a4..d8979db 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BackupMasterIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BackupMasterIT.java
@@ -25,7 +25,7 @@
 import org.apache.accumulo.master.Master;
 import org.junit.Test;
 
-public class BackupMasterIT extends ConfigurableMacIT {
+public class BackupMasterIT extends ConfigurableMacBase {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BadIteratorMincIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BadIteratorMincIT.java
similarity index 88%
rename from test/src/test/java/org/apache/accumulo/test/functional/BadIteratorMincIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BadIteratorMincIT.java
index 14561c2..c730f9b 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BadIteratorMincIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BadIteratorMincIT.java
@@ -20,6 +20,7 @@
 import static org.junit.Assert.assertEquals;
 
 import java.util.EnumSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -30,14 +31,14 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class BadIteratorMincIT extends AccumuloClusterIT {
+public class BadIteratorMincIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -61,7 +62,7 @@
     bw.close();
 
     c.tableOperations().flush(tableName, null, null, false);
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     // minc should fail, so there should be no files
     FunctionalTestUtils.checkRFiles(c, tableName, 1, 1, 0, 0);
@@ -74,7 +75,7 @@
     // remove the bad iterator
     c.tableOperations().removeIterator(tableName, BadIterator.class.getSimpleName(), EnumSet.of(IteratorScope.minc));
 
-    UtilWaitThread.sleep(5000);
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
 
     // minc should complete
     FunctionalTestUtils.checkRFiles(c, tableName, 1, 1, 1, 1);
@@ -93,12 +94,12 @@
     bw.close();
 
     // make sure property is given time to propagate
-    UtilWaitThread.sleep(500);
+    sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
 
     c.tableOperations().flush(tableName, null, null, false);
 
     // make sure the flush has time to start
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     // this should not hang
     c.tableOperations().delete(tableName);
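The hunks above (and many below) replace Accumulo's `UtilWaitThread.sleep(ms)` with Guava's `Uninterruptibles.sleepUninterruptibly(duration, unit)`. As a rough stdlib-only sketch of the semantics that Guava call provides — sleep the full duration even if the thread is interrupted mid-sleep, then restore the interrupt flag — with the class name `SleepSketch` being ours, not Accumulo's or Guava's:

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
  // Sleeps the full requested duration, re-sleeping across interrupts,
  // and restores the thread's interrupt status before returning.
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, but keep sleeping
          remainingNanos = end - System.nanoTime();
          if (remainingNanos <= 0) {
            return;
          }
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // let callers still observe it
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    System.out.println("slept ~" + (System.nanoTime() - start) / 1_000_000 + " ms");
  }
}
```

The practical difference from `Thread.sleep` in these tests: an interrupt can no longer cut a wait short and make a timing-sensitive assertion race, and the `InterruptedException` checked exception disappears from the call sites.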
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BalanceAfterCommsFailureIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BalanceAfterCommsFailureIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/BalanceAfterCommsFailureIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BalanceAfterCommsFailureIT.java
index 7b35db4..525d9f9 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BalanceAfterCommsFailureIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BalanceAfterCommsFailureIT.java
@@ -48,7 +48,7 @@
 
 import com.google.common.collect.Iterables;
 
-public class BalanceAfterCommsFailureIT extends ConfigurableMacIT {
+public class BalanceAfterCommsFailureIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -65,7 +65,7 @@
     Connector c = this.getConnector();
     c.tableOperations().create("test");
     Collection<ProcessReference> tservers = getCluster().getProcesses().get(ServerType.TABLET_SERVER);
-    ArrayList<Integer> tserverPids = new ArrayList<Integer>(tservers.size());
+    ArrayList<Integer> tserverPids = new ArrayList<>(tservers.size());
     for (ProcessReference tserver : tservers) {
       Process p = tserver.getProcess();
       if (!p.getClass().getName().equals("java.lang.UNIXProcess")) {
@@ -85,7 +85,7 @@
     for (int pid : tserverPids) {
       assertEquals(0, Runtime.getRuntime().exec(new String[] {"kill", "-SIGCONT", Integer.toString(pid)}).waitFor());
     }
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (String split : "a b c d e f g h i j k l m n o p q r s t u v w x y z".split(" ")) {
       splits.add(new Text(split));
     }
@@ -120,7 +120,7 @@
 
     assertEquals("Unassigned tablets were not assigned within 30 seconds", 0, unassignedTablets);
 
-    List<Integer> counts = new ArrayList<Integer>();
+    List<Integer> counts = new ArrayList<>();
     for (TabletServerStatus server : stats.tServerInfo) {
       int count = 0;
       for (TableInfo table : server.tableMap.values()) {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BalanceInPresenceOfOfflineTableIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BalanceInPresenceOfOfflineTableIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/BalanceInPresenceOfOfflineTableIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BalanceInPresenceOfOfflineTableIT.java
index b77ce1c..2fe5602 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BalanceInPresenceOfOfflineTableIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BalanceInPresenceOfOfflineTableIT.java
@@ -41,7 +41,7 @@
 import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
 import org.apache.accumulo.core.master.thrift.TableInfo;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
@@ -59,7 +59,7 @@
 /**
  * Start a new table, create many splits, and offline before they can rebalance. Then try to have a different table balance
  */
-public class BalanceInPresenceOfOfflineTableIT extends AccumuloClusterIT {
+public class BalanceInPresenceOfOfflineTableIT extends AccumuloClusterHarness {
 
   private static Logger log = LoggerFactory.getLogger(BalanceInPresenceOfOfflineTableIT.class);
 
@@ -93,7 +93,7 @@
     Assume.assumeTrue("Not enough tservers to run test", conn.instanceOperations().getTabletServers().size() >= 2);
 
     // set up splits
-    final SortedSet<Text> splits = new TreeSet<Text>();
+    final SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < NUM_SPLITS; i++) {
       splits.add(new Text(String.format("%08x", i * 1000)));
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BatchScanSplitIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BatchScanSplitIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/functional/BatchScanSplitIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BatchScanSplitIT.java
index fb52c05..48ce3fe 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BatchScanSplitIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BatchScanSplitIT.java
@@ -23,6 +23,7 @@
 import java.util.HashMap;
 import java.util.Map.Entry;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
@@ -34,8 +35,7 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
@@ -43,7 +43,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class BatchScanSplitIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class BatchScanSplitIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(BatchScanSplitIT.class);
 
   @Override
@@ -80,15 +82,15 @@
 
     Collection<Text> splits = getConnector().tableOperations().listSplits(tableName);
     while (splits.size() < 2) {
-      UtilWaitThread.sleep(1);
+      sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
       splits = getConnector().tableOperations().listSplits(tableName);
     }
 
     System.out.println("splits : " + splits);
 
     Random random = new Random(19011230);
-    HashMap<Text,Value> expected = new HashMap<Text,Value>();
-    ArrayList<Range> ranges = new ArrayList<Range>();
+    HashMap<Text,Value> expected = new HashMap<>();
+    ArrayList<Range> ranges = new ArrayList<>();
     for (int i = 0; i < 100; i++) {
       int r = random.nextInt(numRows);
       Text row = new Text(String.format("%09x", r));
@@ -98,7 +100,7 @@
 
     // logger.setLevel(Level.TRACE);
 
-    HashMap<Text,Value> found = new HashMap<Text,Value>();
+    HashMap<Text,Value> found = new HashMap<>();
 
     for (int i = 0; i < 20; i++) {
       BatchScanner bs = getConnector().createBatchScanner(tableName, Authorizations.EMPTY, 4);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BatchWriterFlushIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BatchWriterFlushIT.java
similarity index 85%
rename from test/src/test/java/org/apache/accumulo/test/functional/BatchWriterFlushIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BatchWriterFlushIT.java
index 63bee16..9b50306 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BatchWriterFlushIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BatchWriterFlushIT.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.util.ArrayList;
@@ -44,15 +45,14 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.SimpleThreadPool;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
 
-public class BatchWriterFlushIT extends AccumuloClusterIT {
+public class BatchWriterFlushIT extends AccumuloClusterHarness {
 
   private static final int NUM_TO_FLUSH = 100000;
   private static final int NUM_THREADS = 3;
@@ -76,30 +76,29 @@
 
   private void runLatencyTest(String tableName) throws Exception {
    // should automatically flush after 1 second
-    BatchWriter bw = getConnector().createBatchWriter(tableName, new BatchWriterConfig().setMaxLatency(1000, TimeUnit.MILLISECONDS));
-    Scanner scanner = getConnector().createScanner(tableName, Authorizations.EMPTY);
+    try (BatchWriter bw = getConnector().createBatchWriter(tableName, new BatchWriterConfig().setMaxLatency(1000, TimeUnit.MILLISECONDS))) {
+      Scanner scanner = getConnector().createScanner(tableName, Authorizations.EMPTY);
 
-    Mutation m = new Mutation(new Text(String.format("r_%10d", 1)));
-    m.put(new Text("cf"), new Text("cq"), new Value("1".getBytes(UTF_8)));
-    bw.addMutation(m);
+      Mutation m = new Mutation(new Text(String.format("r_%10d", 1)));
+      m.put(new Text("cf"), new Text("cq"), new Value("1".getBytes(UTF_8)));
+      bw.addMutation(m);
 
-    UtilWaitThread.sleep(500);
+      sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
 
-    int count = Iterators.size(scanner.iterator());
+      int count = Iterators.size(scanner.iterator());
 
-    if (count != 0) {
-      throw new Exception("Flushed too soon");
+      if (count != 0) {
+        throw new Exception("Flushed too soon");
+      }
+
+      sleepUninterruptibly(1500, TimeUnit.MILLISECONDS);
+
+      count = Iterators.size(scanner.iterator());
+
+      if (count != 1) {
+        throw new Exception("Did not flush");
+      }
     }
-
-    UtilWaitThread.sleep(1500);
-
-    count = Iterators.size(scanner.iterator());
-
-    if (count != 1) {
-      throw new Exception("Did not flush");
-    }
-
-    bw.close();
   }
 
   private void runFlushTest(String tableName) throws AccumuloException, AccumuloSecurityException, TableNotFoundException, MutationsRejectedException,
@@ -181,12 +180,13 @@
     String tableName = tableNames[0];
     c.tableOperations().create(tableName);
     for (int x = 0; x < NUM_THREADS; x++) {
-      c.tableOperations().addSplits(tableName, new TreeSet<Text>(Collections.singleton(new Text(Integer.toString(x * NUM_TO_FLUSH)))));
+      c.tableOperations().addSplits(tableName, new TreeSet<>(Collections.singleton(new Text(Integer.toString(x * NUM_TO_FLUSH)))));
     }
+    c.instanceOperations().waitForBalance();
 
     // Logger.getLogger(TabletServerBatchWriter.class).setLevel(Level.TRACE);
-    final List<Set<Mutation>> allMuts = new LinkedList<Set<Mutation>>();
-    List<Mutation> data = new ArrayList<Mutation>();
+    final List<Set<Mutation>> allMuts = new LinkedList<>();
+    List<Mutation> data = new ArrayList<>();
     for (int i = 0; i < NUM_THREADS; i++) {
       final int thread = i;
       for (int j = 0; j < NUM_TO_FLUSH; j++) {
@@ -199,7 +199,7 @@
     Assert.assertEquals(NUM_THREADS * NUM_TO_FLUSH, data.size());
     Collections.shuffle(data);
     for (int n = 0; n < (NUM_THREADS * NUM_TO_FLUSH); n += NUM_TO_FLUSH) {
-      Set<Mutation> muts = new HashSet<Mutation>(data.subList(n, n + NUM_TO_FLUSH));
+      Set<Mutation> muts = new HashSet<>(data.subList(n, n + NUM_TO_FLUSH));
       allMuts.add(muts);
     }
 
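The `runLatencyTest` hunk above wraps the `BatchWriter` in try-with-resources instead of a trailing `bw.close()`. A minimal sketch of why that matters, using a hypothetical `Recorder` resource of ours rather than a real `BatchWriter`: `close()` now runs even when the body throws, whereas the old pattern skipped the trailing close on any earlier exception (such as the test's own `throw new Exception("Flushed too soon")`).

```java
public class TryWithResourcesDemo {
  // Hypothetical stand-in for an AutoCloseable resource like BatchWriter.
  static class Recorder implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
      closed = true;
    }
  }

  public static Recorder useAndFail() {
    Recorder r = new Recorder();
    try (Recorder held = r) {
      // Simulate a test assertion failing partway through the body.
      throw new IllegalStateException("body failed");
    } catch (IllegalStateException e) {
      // held.close() has already run by the time this catch executes
    }
    return r;
  }

  public static void main(String[] args) {
    System.out.println("closed after failure: " + useAndFail().closed); // prints "closed after failure: true"
  }
}
```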
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BigRootTabletIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BigRootTabletIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/BigRootTabletIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BigRootTabletIT.java
index f08ea00..11dcb66 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BigRootTabletIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BigRootTabletIT.java
@@ -25,14 +25,14 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
 
-public class BigRootTabletIT extends AccumuloClusterIT {
+public class BigRootTabletIT extends AccumuloClusterHarness {
   // ACCUMULO-542: A large root tablet will fail to load if it doesn't fit in the tserver scan buffers
 
   @Override
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BinaryIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BinaryIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/BinaryIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BinaryIT.java
index e524fa8..f8732d5 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BinaryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BinaryIT.java
@@ -22,12 +22,12 @@
 import org.apache.accumulo.core.cli.BatchWriterOpts;
 import org.apache.accumulo.core.cli.ScannerOpts;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.test.TestBinaryRows;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class BinaryIT extends AccumuloClusterIT {
+public class BinaryIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -47,7 +47,7 @@
     String tableName = getUniqueNames(1)[0];
     Connector c = getConnector();
     c.tableOperations().create(tableName);
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     splits.add(new Text("8"));
     splits.add(new Text("256"));
     c.tableOperations().addSplits(tableName, splits);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BinaryStressIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BinaryStressIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/BinaryStressIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BinaryStressIT.java
index f1bca1b..9ce221a 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BinaryStressIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BinaryStressIT.java
@@ -32,7 +32,7 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
@@ -41,7 +41,7 @@
 import org.junit.Before;
 import org.junit.Test;
 
-public class BinaryStressIT extends AccumuloClusterIT {
+public class BinaryStressIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BloomFilterIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/functional/BloomFilterIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
index 67a556c..1c6fc9f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BloomFilterIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
@@ -41,7 +41,7 @@
 import org.apache.accumulo.core.file.keyfunctor.RowFunctor;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.fate.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
@@ -50,7 +50,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class BloomFilterIT extends AccumuloClusterIT {
+public class BloomFilterIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(BloomFilterIT.class);
 
   @Override
@@ -169,8 +169,8 @@
   private long query(Connector c, String table, int depth, long start, long end, int num, int step) throws Exception {
     Random r = new Random(42);
 
-    HashSet<Long> expected = new HashSet<Long>();
-    List<Range> ranges = new ArrayList<Range>(num);
+    HashSet<Long> expected = new HashSet<>();
+    List<Range> ranges = new ArrayList<>(num);
     Text key = new Text();
     Text row = new Text("row"), cq = new Text("cq"), cf = new Text("cf");
 
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BulkFileIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BulkFileIT.java
similarity index 86%
rename from test/src/test/java/org/apache/accumulo/test/functional/BulkFileIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BulkFileIT.java
index 6683d73..ec8ce2d 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BulkFileIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BulkFileIT.java
@@ -32,7 +32,7 @@
 import org.apache.accumulo.core.file.FileSKVWriter;
 import org.apache.accumulo.core.file.rfile.RFile;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
@@ -43,7 +43,7 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class BulkFileIT extends AccumuloClusterIT {
+public class BulkFileIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration conf) {
@@ -60,7 +60,7 @@
     Connector c = getConnector();
     String tableName = getUniqueNames(1)[0];
     c.tableOperations().create(tableName);
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (String split : "0333 0666 0999 1333 1666".split(" "))
       splits.add(new Text(split));
     c.tableOperations().addSplits(tableName, splits);
@@ -74,17 +74,20 @@
 
     fs.delete(new Path(dir), true);
 
-    FileSKVWriter writer1 = FileOperations.getInstance().openWriter(dir + "/f1." + RFile.EXTENSION, fs, conf, aconf);
+    FileSKVWriter writer1 = FileOperations.getInstance().newWriterBuilder().forFile(dir + "/f1." + RFile.EXTENSION, fs, conf).withTableConfiguration(aconf)
+        .build();
     writer1.startDefaultLocalityGroup();
     writeData(writer1, 0, 333);
     writer1.close();
 
-    FileSKVWriter writer2 = FileOperations.getInstance().openWriter(dir + "/f2." + RFile.EXTENSION, fs, conf, aconf);
+    FileSKVWriter writer2 = FileOperations.getInstance().newWriterBuilder().forFile(dir + "/f2." + RFile.EXTENSION, fs, conf).withTableConfiguration(aconf)
+        .build();
     writer2.startDefaultLocalityGroup();
     writeData(writer2, 334, 999);
     writer2.close();
 
-    FileSKVWriter writer3 = FileOperations.getInstance().openWriter(dir + "/f3." + RFile.EXTENSION, fs, conf, aconf);
+    FileSKVWriter writer3 = FileOperations.getInstance().newWriterBuilder().forFile(dir + "/f3." + RFile.EXTENSION, fs, conf).withTableConfiguration(aconf)
+        .build();
     writer3.startDefaultLocalityGroup();
     writeData(writer3, 1000, 1999);
     writer3.close();
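The `BulkFileIT` hunk above migrates from the positional `openWriter(file, fs, conf, aconf)` call to a fluent builder (`newWriterBuilder().forFile(...).withTableConfiguration(...).build()`). A hedged sketch of that builder shape — names and string "writer" result are ours for illustration, not Accumulo's API — showing how required inputs become named chained steps instead of an ever-growing positional argument list:

```java
public class WriterBuilderSketch {
  private String file;
  private String tableConfig;

  public WriterBuilderSketch forFile(String file) {
    this.file = file;
    return this; // each step returns the builder so calls chain
  }

  public WriterBuilderSketch withTableConfiguration(String tableConfig) {
    this.tableConfig = tableConfig;
    return this;
  }

  public String build() {
    // build() is the single place to validate that required steps ran
    if (file == null) {
      throw new IllegalStateException("forFile(...) is required");
    }
    return "writer(" + file + ", " + tableConfig + ")";
  }

  public static void main(String[] args) {
    String w = new WriterBuilderSketch().forFile("f1.rf").withTableConfiguration("aconf").build();
    System.out.println(w); // prints "writer(f1.rf, aconf)"
  }
}
```

The design payoff visible in the diff: new optional parameters can be added as builder steps without breaking every existing call site the way inserting a positional argument would.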
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BulkIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BulkIT.java
similarity index 87%
rename from test/src/test/java/org/apache/accumulo/test/functional/BulkIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BulkIT.java
index 1ed9bdf..f752ed5 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BulkIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BulkIT.java
@@ -20,20 +20,18 @@
 import org.apache.accumulo.core.cli.ScannerOpts;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.util.CachedConfiguration;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.TestIngest.Opts;
 import org.apache.accumulo.test.VerifyIngest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FsShell;
 import org.apache.hadoop.fs.Path;
 import org.junit.After;
-import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
-public class BulkIT extends AccumuloClusterIT {
+public class BulkIT extends AccumuloClusterHarness {
 
   private static final int N = 100000;
   private static final int COUNT = 5;
@@ -67,7 +65,6 @@
 
   static void runTest(Connector c, FileSystem fs, Path basePath, String principal, String tableName, String filePrefix, String dirSuffix) throws Exception {
     c.tableOperations().create(tableName);
-    CachedConfiguration.setInstance(fs.getConf());
 
     Path base = new Path(basePath, "testBulkFail_" + dirSuffix);
     fs.delete(base, true);
@@ -84,24 +81,21 @@
     opts.instance = c.getInstance().getInstanceName();
     opts.cols = 1;
     opts.setTableName(tableName);
-    opts.conf = CachedConfiguration.getInstance();
+    opts.conf = new Configuration(false);
     opts.fs = fs;
     String fileFormat = filePrefix + "rf%02d";
     for (int i = 0; i < COUNT; i++) {
       opts.outputFile = new Path(files, String.format(fileFormat, i)).toString();
       opts.startRow = N * i;
-      TestIngest.ingest(c, opts, BWOPTS);
+      TestIngest.ingest(c, fs, opts, BWOPTS);
     }
     opts.outputFile = new Path(files, String.format(fileFormat, N)).toString();
     opts.startRow = N;
     opts.rows = 1;
     // create an rfile with one entry, there was a bug with this:
-    TestIngest.ingest(c, opts, BWOPTS);
+    TestIngest.ingest(c, fs, opts, BWOPTS);
 
     // Make sure the server can modify the files
-    FsShell fsShell = new FsShell(fs.getConf());
-    Assert.assertEquals("Failed to chmod " + base.toString(), 0, fsShell.run(new String[] {"-chmod", "-R", "777", base.toString()}));
-
     c.tableOperations().importDirectory(tableName, files.toString(), bulkFailures.toString(), false);
     VerifyIngest.Opts vopts = new VerifyIngest.Opts();
     vopts.setTableName(tableName);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/BulkSplitOptimizationIT.java b/test/src/main/java/org/apache/accumulo/test/functional/BulkSplitOptimizationIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/BulkSplitOptimizationIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/BulkSplitOptimizationIT.java
index 7606083..f243562 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/BulkSplitOptimizationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BulkSplitOptimizationIT.java
@@ -18,6 +18,8 @@
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
+import java.util.concurrent.TimeUnit;
+
 import org.apache.accumulo.core.cli.ClientOpts.Password;
 import org.apache.accumulo.core.cli.ScannerOpts;
 import org.apache.accumulo.core.client.ClientConfiguration;
@@ -26,8 +28,7 @@
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.VerifyIngest;
@@ -40,11 +41,13 @@
 import org.junit.Before;
 import org.junit.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
 * This test verifies that when a lot of files are bulk imported into a table with one tablet, and the tablet then splits, not all map files go to the children tablets.
  */
 
-public class BulkSplitOptimizationIT extends AccumuloClusterIT {
+public class BulkSplitOptimizationIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -90,8 +93,7 @@
     c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "1000");
     c.tableOperations().setProperty(tableName, Property.TABLE_FILE_MAX.getKey(), "1000");
     c.tableOperations().setProperty(tableName, Property.TABLE_SPLIT_THRESHOLD.getKey(), "1G");
-
-    FileSystem fs = getFileSystem();
+    FileSystem fs = cluster.getFileSystem();
     Path testDir = new Path(getUsableDir(), "testmf");
     FunctionalTestUtils.createRFiles(c, fs, testDir.toString(), ROWS, SPLITS, 8);
     FileStatus[] stats = fs.listStatus(testDir);
@@ -104,11 +106,11 @@
     // initiate splits
     getConnector().tableOperations().setProperty(tableName, Property.TABLE_SPLIT_THRESHOLD.getKey(), "100K");
 
-    UtilWaitThread.sleep(2000);
+    sleepUninterruptibly(2, TimeUnit.SECONDS);
 
     // wait until over split threshold -- should be 78 splits
     while (getConnector().tableOperations().listSplits(tableName).size() < 75) {
-      UtilWaitThread.sleep(500);
+      sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
     }
 
     FunctionalTestUtils.checkSplits(c, tableName, 50, 100);
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestReader.java b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestReader.java
index fd8ba2c..0988795 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestReader.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestReader.java
@@ -25,11 +25,13 @@
 import java.util.Map;
 import java.util.TreeMap;
 import java.util.UUID;
+import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class CacheTestReader {
   public static void main(String[] args) throws Exception {
     String rootDir = args[0];
@@ -51,7 +53,7 @@
         return;
       }
 
-      Map<String,String> readData = new TreeMap<String,String>();
+      Map<String,String> readData = new TreeMap<>();
 
       for (int i = 0; i < numData; i++) {
         byte[] v = zc.get(rootDir + "/data" + i);
@@ -77,7 +79,7 @@
       fos.close();
       oos.close();
 
-      UtilWaitThread.sleep(20);
+      sleepUninterruptibly(20, TimeUnit.MILLISECONDS);
     }
 
   }
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java
index 76e8168..50a0b0e 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CacheTestWriter.java
@@ -27,13 +27,15 @@
 import java.util.Random;
 import java.util.TreeMap;
 import java.util.UUID;
+import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeExistsPolicy;
 import org.apache.accumulo.fate.zookeeper.ZooUtil.NodeMissingPolicy;
 import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class CacheTestWriter {
 
   static final int NUM_DATA = 3;
@@ -57,7 +59,7 @@
 
     zk.putPersistentData(rootDir + "/dir", new byte[0], NodeExistsPolicy.FAIL);
 
-    ArrayList<String> children = new ArrayList<String>();
+    ArrayList<String> children = new ArrayList<>();
 
     Random r = new Random();
 
@@ -67,7 +69,7 @@
       // change children in dir
 
       for (int u = 0; u < r.nextInt(4) + 1; u++) {
-        expectedData = new TreeMap<String,String>();
+        expectedData = new TreeMap<>();
 
         if (r.nextFloat() < .5) {
           String child = UUID.randomUUID().toString();
@@ -156,7 +158,7 @@
             break;
         }
 
-        UtilWaitThread.sleep(5);
+        sleepUninterruptibly(5, TimeUnit.MILLISECONDS);
       }
     }
 
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ChaoticBalancerIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ChaoticBalancerIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/ChaoticBalancerIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ChaoticBalancerIT.java
index a2d5971..21d6351 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ChaoticBalancerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ChaoticBalancerIT.java
@@ -26,7 +26,7 @@
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.master.balancer.ChaoticLoadBalancer;
 import org.apache.accumulo.test.TestIngest;
@@ -35,7 +35,7 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class ChaoticBalancerIT extends AccumuloClusterIT {
+public class ChaoticBalancerIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -58,7 +58,7 @@
     c.tableOperations().create(tableName);
     c.tableOperations().setProperty(tableName, Property.TABLE_LOAD_BALANCER.getKey(), ChaoticLoadBalancer.class.getName());
     c.tableOperations().setProperty(tableName, Property.TABLE_SPLIT_THRESHOLD.getKey(), "10K");
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 100; i++) {
       splits.add(new Text(String.format("%03d", i)));
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
similarity index 75%
rename from test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
index 4b51bd2..29f2780 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
@@ -20,10 +20,13 @@
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.io.IOException;
+import java.io.InputStream;
 import java.util.Collections;
 import java.util.EnumSet;
 import java.util.Iterator;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -36,10 +39,9 @@
 import org.apache.accumulo.core.iterators.Combiner;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.CachedConfiguration;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.hamcrest.CoreMatchers;
@@ -47,7 +49,9 @@
 import org.junit.Before;
 import org.junit.Test;
 
-public class ClassLoaderIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class ClassLoaderIT extends AccumuloClusterHarness {
 
   private static final long ZOOKEEPER_PROPAGATION_TIME = 10 * 1000;
 
@@ -65,6 +69,19 @@
     rootPath = mac.getConfig().getDir().getAbsolutePath();
   }
 
+  private static void copyStreamToFileSystem(FileSystem fs, String jarName, Path path) throws IOException {
+    byte[] buffer = new byte[10 * 1024];
+    try (FSDataOutputStream dest = fs.create(path); InputStream stream = ClassLoaderIT.class.getResourceAsStream(jarName)) {
+      while (true) {
+        int n = stream.read(buffer, 0, buffer.length);
+        if (n <= 0) {
+          break;
+        }
+        dest.write(buffer, 0, n);
+      }
+    }
+  }
+
   @Test
   public void test() throws Exception {
     Connector c = getConnector();
@@ -76,18 +93,18 @@
     bw.addMutation(m);
     bw.close();
     scanCheck(c, tableName, "Test");
-    FileSystem fs = FileSystem.get(CachedConfiguration.getInstance());
+    FileSystem fs = getCluster().getFileSystem();
     Path jarPath = new Path(rootPath + "/lib/ext/Test.jar");
-    fs.copyFromLocalFile(new Path(System.getProperty("user.dir") + "/src/test/resources/TestCombinerX.jar"), jarPath);
-    UtilWaitThread.sleep(1000);
+    copyStreamToFileSystem(fs, "/TestCombinerX.jar", jarPath);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
     IteratorSetting is = new IteratorSetting(10, "TestCombiner", "org.apache.accumulo.test.functional.TestCombiner");
     Combiner.setColumns(is, Collections.singletonList(new IteratorSetting.Column("cf")));
     c.tableOperations().attachIterator(tableName, is, EnumSet.of(IteratorScope.scan));
-    UtilWaitThread.sleep(ZOOKEEPER_PROPAGATION_TIME);
+    sleepUninterruptibly(ZOOKEEPER_PROPAGATION_TIME, TimeUnit.MILLISECONDS);
     scanCheck(c, tableName, "TestX");
     fs.delete(jarPath, true);
-    fs.copyFromLocalFile(new Path(System.getProperty("user.dir") + "/src/test/resources/TestCombinerY.jar"), jarPath);
-    UtilWaitThread.sleep(5000);
+    copyStreamToFileSystem(fs, "/TestCombinerY.jar", jarPath);
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
     scanCheck(c, tableName, "TestY");
     fs.delete(jarPath, true);
   }
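The ClassLoaderIT change above replaces `fs.copyFromLocalFile` with a helper that streams a jar bundled on the test classpath into the cluster's filesystem. The buffer-copy loop at its core can be sketched in plain Java (hypothetical `StreamCopy` class; in-memory streams stand in for the HDFS `FSDataOutputStream`):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;

public class StreamCopy {
  // Copy all bytes from in to out through a fixed-size buffer,
  // returning the number of bytes copied.
  static long copy(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[10 * 1024];
    long total = 0;
    while (true) {
      int n = in.read(buffer, 0, buffer.length);
      if (n <= 0) {
        break; // end of stream
      }
      out.write(buffer, 0, n);
      total += n;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    byte[] data = new byte[25000];
    for (int i = 0; i < data.length; i++) {
      data[i] = (byte) i;
    }
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    long copied = copy(new ByteArrayInputStream(data), out);
    System.out.println(copied == data.length && Arrays.equals(data, out.toByteArray()));
  }
}
```

Reading the jar via `getResourceAsStream` rather than a `src/test/resources` path is what lets these ITs run from a packaged jar instead of only from the source tree.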
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CleanTmpIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CleanTmpIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/CleanTmpIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CleanTmpIT.java
index d03007e..751e827 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CleanTmpIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CleanTmpIT.java
@@ -47,7 +47,7 @@
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Iterators;
 
-public class CleanTmpIT extends ConfigurableMacIT {
+public class CleanTmpIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(CleanTmpIT.class);
 
   @Override
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CleanUpIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CleanUpIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/CleanUpIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CleanUpIT.java
index adc48c4..2ff55e8 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CleanUpIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CleanUpIT.java
@@ -28,7 +28,7 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.CleanUp;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -40,9 +40,9 @@
  * Ensures that all threads spawned for ZooKeeper and Thrift connectivity are reaped after calling CleanUp.shutdown().
  *
  * Because this is destructive across the current context classloader, the normal teardown methods will fail (because they attempt to create a Connector). Until
- * the ZooKeeperInstance and Connector are self-contained WRT resource management, we can't leverage the AccumuloClusterIT.
+ * the ZooKeeperInstance and Connector are self-contained WRT resource management, we can't leverage the AccumuloClusterHarness.
  */
-public class CleanUpIT extends SharedMiniClusterIT {
+public class CleanUpIT extends SharedMiniClusterBase {
   private static final Logger log = LoggerFactory.getLogger(CleanUpIT.class);
 
   @Override
@@ -52,12 +52,12 @@
 
   @BeforeClass
   public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
   }
 
   @AfterClass
   public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   @Test
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CloneTestIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/CloneTestIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
index 4fad30b..64cdc34 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CloneTestIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
@@ -46,10 +46,9 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.server.ServerConstants;
-import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -61,7 +60,7 @@
 /**
  *
  */
-public class CloneTestIT extends AccumuloClusterIT {
+public class CloneTestIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -84,10 +83,10 @@
 
     BatchWriter bw = writeData(table1, c);
 
-    Map<String,String> props = new HashMap<String,String>();
+    Map<String,String> props = new HashMap<>();
     props.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "500K");
 
-    Set<String> exclude = new HashSet<String>();
+    Set<String> exclude = new HashSet<>();
     exclude.add(Property.TABLE_FILE_MAX.getKey());
 
     c.tableOperations().clone(table1, table2, true, props, exclude);
@@ -102,7 +101,7 @@
 
     checkMetadata(table2, c);
 
-    HashMap<String,String> tableProps = new HashMap<String,String>();
+    HashMap<String,String> tableProps = new HashMap<>();
     for (Entry<String,String> prop : c.tableOperations().getProperties(table2)) {
       tableProps.put(prop.getKey(), prop.getValue());
     }
@@ -119,13 +118,13 @@
   private void checkData(String table2, Connector c) throws TableNotFoundException {
     Scanner scanner = c.createScanner(table2, Authorizations.EMPTY);
 
-    HashMap<String,String> expected = new HashMap<String,String>();
+    HashMap<String,String> expected = new HashMap<>();
     expected.put("001:x", "9");
     expected.put("001:y", "7");
     expected.put("008:x", "3");
     expected.put("008:y", "4");
 
-    HashMap<String,String> actual = new HashMap<String,String>();
+    HashMap<String,String> actual = new HashMap<>();
 
     for (Entry<Key,Value> entry : scanner)
       actual.put(entry.getKey().getRowData().toString() + ":" + entry.getKey().getColumnQualifierData().toString(), entry.getValue().toString());
@@ -209,7 +208,7 @@
     writeData(table3, c).close();
     c.tableOperations().flush(table3, null, null, true);
     // check for files
-    FileSystem fs = FileSystem.get(new Configuration());
+    FileSystem fs = getCluster().getFileSystem();
     String id = c.tableOperations().tableIdMap().get(table3);
     FileStatus[] status = fs.listStatus(new Path(rootPath + "/accumulo/tables/" + id));
     assertTrue(status.length > 0);
@@ -230,10 +229,10 @@
 
     BatchWriter bw = writeData(table1, c);
 
-    Map<String,String> props = new HashMap<String,String>();
+    Map<String,String> props = new HashMap<>();
     props.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "500K");
 
-    Set<String> exclude = new HashSet<String>();
+    Set<String> exclude = new HashSet<>();
     exclude.add(Property.TABLE_FILE_MAX.getKey());
 
     c.tableOperations().clone(table1, table2, true, props, exclude);
@@ -261,8 +260,8 @@
   public void testCloneWithSplits() throws Exception {
     Connector conn = getConnector();
 
-    List<Mutation> mutations = new ArrayList<Mutation>();
-    TreeSet<Text> splits = new TreeSet<Text>();
+    List<Mutation> mutations = new ArrayList<>();
+    TreeSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 10; i++) {
       splits.add(new Text(Integer.toString(i)));
       Mutation m = new Mutation(Integer.toString(i));
@@ -285,7 +284,7 @@
     conn.tableOperations().deleteRows(tables[1], new Text("4"), new Text("8"));
 
     List<String> rows = Arrays.asList("0", "1", "2", "3", "4", "9");
-    List<String> actualRows = new ArrayList<String>();
+    List<String> actualRows = new ArrayList<>();
     for (Entry<Key,Value> entry : conn.createScanner(tables[1], Authorizations.EMPTY)) {
       actualRows.add(entry.getKey().getRow().toString());
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CombinerIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CombinerIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/CombinerIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CombinerIT.java
index 5538797..d4ef18e 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CombinerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CombinerIT.java
@@ -35,10 +35,10 @@
 import org.apache.accumulo.core.iterators.LongCombiner.Type;
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Test;
 
-public class CombinerIT extends AccumuloClusterIT {
+public class CombinerIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CompactionIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CompactionIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/functional/CompactionIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CompactionIT.java
index 818dbc4..e5fecf8 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CompactionIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CompactionIT.java
@@ -20,9 +20,10 @@
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
-import java.util.ArrayList;
-import java.util.List;
 import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.accumulo.core.cli.ClientOpts.Password;
@@ -37,7 +38,7 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.VerifyIngest;
@@ -45,6 +46,7 @@
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -53,7 +55,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class CompactionIT extends AccumuloClusterIT {
+public class CompactionIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(CompactionIT.class);
 
   @Override
@@ -122,12 +124,13 @@
 
     final AtomicBoolean fail = new AtomicBoolean(false);
     final ClientConfiguration clientConf = cluster.getClientConfig();
-    for (int count = 0; count < 5; count++) {
-      List<Thread> threads = new ArrayList<Thread>();
+    final int THREADS = 5;
+    for (int count = 0; count < THREADS; count++) {
+      ExecutorService executor = Executors.newFixedThreadPool(THREADS);
       final int span = 500000 / 59;
       for (int i = 0; i < 500000; i += 500000 / 59) {
         final int finalI = i;
-        Thread t = new Thread() {
+        Runnable r = new Runnable() {
           @Override
           public void run() {
             try {
@@ -152,11 +155,10 @@
             }
           }
         };
-        t.start();
-        threads.add(t);
+        executor.execute(r);
       }
-      for (Thread t : threads)
-        t.join();
+      executor.shutdown();
+      executor.awaitTermination(defaultTimeoutSeconds(), TimeUnit.SECONDS);
       assertFalse("Failed to successfully run all threads, Check the test output for error", fail.get());
     }
 
@@ -176,8 +178,8 @@
 
   private int countFiles(Connector c) throws Exception {
     Scanner s = c.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    s.fetchColumnFamily(MetadataSchema.TabletsSection.TabletColumnFamily.NAME);
-    s.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
+    s.fetchColumnFamily(new Text(MetadataSchema.TabletsSection.TabletColumnFamily.NAME));
+    s.fetchColumnFamily(new Text(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME));
     return Iterators.size(s.iterator());
   }
 
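The CompactionIT change above swaps hand-managed `Thread` objects (`start()`/`join()`) for an `ExecutorService`, using `shutdown()` followed by `awaitTermination()` to wait for all submitted work. A minimal sketch of that pattern (hypothetical `PoolDemo` class, with a counter standing in for the ingest work):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
  // Run nTasks tasks on a fixed-size pool, then wait (with a timeout)
  // for all of them to finish. Returns how many tasks completed.
  static int runTasks(int nTasks, int poolSize) throws InterruptedException {
    final AtomicInteger done = new AtomicInteger();
    ExecutorService executor = Executors.newFixedThreadPool(poolSize);
    for (int i = 0; i < nTasks; i++) {
      executor.execute(done::incrementAndGet);
    }
    executor.shutdown(); // reject new tasks; queued tasks still run
    if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
      throw new IllegalStateException("tasks did not finish in time");
    }
    return done.get();
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(runTasks(59, 5)); // prints 59
  }
}
```

Compared with a list of raw threads, the pool bounds concurrency at `poolSize` and the timeout on `awaitTermination` keeps a hung task from blocking the test forever.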
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ConcurrencyIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
index 859eafd..d462b53 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
@@ -20,6 +20,7 @@
 
 import java.util.EnumSet;
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -36,16 +37,16 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class ConcurrencyIT extends AccumuloClusterIT {
+public class ConcurrencyIT extends AccumuloClusterHarness {
 
   static class ScanTask extends Thread {
 
@@ -117,7 +118,7 @@
     ScanTask st1 = new ScanTask(c, tableName, 100);
     st1.start();
 
-    UtilWaitThread.sleep(50);
+    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
     c.tableOperations().flush(tableName, null, null, true);
 
     for (int i = 0; i < 50; i++) {
@@ -142,7 +143,7 @@
     ScanTask st3 = new ScanTask(c, tableName, 150);
     st3.start();
 
-    UtilWaitThread.sleep(50);
+    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
     c.tableOperations().flush(tableName, null, null, false);
 
     st3.join();
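Several of these diffs replace `UtilWaitThread.sleep` with Guava's `sleepUninterruptibly`. Unlike a bare `Thread.sleep`, it keeps sleeping for the full requested duration even if the thread is interrupted, and restores the interrupt flag afterwards so callers still see it. A sketch of that behavior (illustrative, not Guava's exact source):

```java
import java.util.concurrent.TimeUnit;

public class Uninterruptible {
  // Sleep for the full duration even across interrupts,
  // re-setting the thread's interrupt flag before returning.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true; // remember, then sleep the remainder
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore the flag for callers
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    System.out.println(System.nanoTime() - start >= TimeUnit.MILLISECONDS.toNanos(50));
  }
}
```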
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableCompactionIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableCompactionIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/functional/ConfigurableCompactionIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ConfigurableCompactionIT.java
index a33322c..d812914 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableCompactionIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableCompactionIT.java
@@ -18,6 +18,7 @@
 
 import static org.junit.Assert.assertTrue;
 
+import java.io.File;
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.Collections;
@@ -39,6 +40,7 @@
 import org.apache.accumulo.tserver.compaction.CompactionPlan;
 import org.apache.accumulo.tserver.compaction.CompactionStrategy;
 import org.apache.accumulo.tserver.compaction.MajorCompactionRequest;
+import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
@@ -46,7 +48,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class ConfigurableCompactionIT extends ConfigurableMacIT {
+public class ConfigurableCompactionIT extends ConfigurableMacBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -99,15 +101,15 @@
   public void testPerTableClasspath() throws Exception {
     final Connector c = getConnector();
     final String tableName = getUniqueNames(1)[0];
+    File destFile = installJar(getCluster().getConfig().getAccumuloDir(), "/TestCompactionStrat.jar");
     c.tableOperations().create(tableName);
-    c.instanceOperations().setProperty(Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "context1",
-        System.getProperty("user.dir") + "/src/test/resources/TestCompactionStrat.jar");
+    c.instanceOperations().setProperty(Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "context1", destFile.toString());
     c.tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "10");
     c.tableOperations().setProperty(tableName, Property.TABLE_CLASSPATH.getKey(), "context1");
     // EfgCompactionStrat will only compact a tablet w/ end row of 'efg'. No other tablets are compacted.
     c.tableOperations().setProperty(tableName, Property.TABLE_COMPACTION_STRATEGY.getKey(), "org.apache.accumulo.test.EfgCompactionStrat");
 
-    c.tableOperations().addSplits(tableName, new TreeSet<Text>(Arrays.asList(new Text("efg"))));
+    c.tableOperations().addSplits(tableName, new TreeSet<>(Arrays.asList(new Text("efg"))));
 
     for (char ch = 'a'; ch < 'l'; ch++)
       writeFlush(c, tableName, ch + "");
@@ -117,6 +119,12 @@
     }
   }
 
+  private static File installJar(File destDir, String jarFile) throws IOException {
+    File destName = new File(destDir, new File(jarFile).getName());
+    FileUtils.copyInputStreamToFile(ConfigurableCompactionIT.class.getResourceAsStream(jarFile), destName);
+    return destName;
+  }
+
   private void writeFlush(Connector conn, String tablename, String row) throws Exception {
     BatchWriter bw = conn.createBatchWriter(tablename, new BatchWriterConfig());
     Mutation m = new Mutation(row);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
index 53eb8e4..85246bf 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
@@ -34,8 +34,8 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.util.MonitorUtil;
-import org.apache.accumulo.harness.AccumuloClusterIT;
-import org.apache.accumulo.harness.AccumuloIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.harness.AccumuloITBase;
 import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
@@ -43,6 +43,7 @@
 import org.apache.accumulo.test.util.CertUtils;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
 import org.apache.zookeeper.KeeperException;
 import org.junit.After;
 import org.junit.Before;
@@ -51,10 +52,10 @@
 
 /**
  * General Integration-Test base class that provides access to a {@link MiniAccumuloCluster} for testing. Tests using these typically do very disruptive things
- * to the instance, and require specific configuration. Most tests don't need this level of control and should extend {@link AccumuloClusterIT} instead.
+ * to the instance, and require specific configuration. Most tests don't need this level of control and should extend {@link AccumuloClusterHarness} instead.
  */
-public class ConfigurableMacIT extends AccumuloIT {
-  public static final Logger log = LoggerFactory.getLogger(ConfigurableMacIT.class);
+public class ConfigurableMacBase extends AccumuloITBase {
+  public static final Logger log = LoggerFactory.getLogger(ConfigurableMacBase.class);
 
   protected MiniAccumuloClusterImpl cluster;
 
@@ -127,7 +128,9 @@
     // createTestDir will give us an empty directory, we don't need to clean it up ourselves
     File baseDir = createTestDir(this.getClass().getName() + "_" + this.testName.getMethodName());
     MiniAccumuloConfigImpl cfg = new MiniAccumuloConfigImpl(baseDir, ROOT_PASSWORD);
-    cfg.setNativeLibPaths(NativeMapIT.nativeMapLocation().getAbsolutePath());
+    String nativePathInDevTree = NativeMapIT.nativeMapLocation().getAbsolutePath();
+    String nativePathInMapReduce = new File(System.getProperty("user.dir")).toString();
+    cfg.setNativeLibPaths(nativePathInDevTree, nativePathInMapReduce);
     cfg.setProperty(Property.GC_FILE_ARCHIVE, Boolean.TRUE.toString());
     Configuration coreSite = new Configuration(false);
     configure(cfg, coreSite);
@@ -136,12 +139,14 @@
     cluster = new MiniAccumuloClusterImpl(cfg);
     if (coreSite.size() > 0) {
       File csFile = new File(cluster.getConfig().getConfDir(), "core-site.xml");
-      if (csFile.exists())
-        throw new RuntimeException(csFile + " already exist");
-
-      OutputStream out = new BufferedOutputStream(new FileOutputStream(new File(cluster.getConfig().getConfDir(), "core-site.xml")));
+      if (csFile.exists()) {
+        coreSite.addResource(new Path(csFile.getAbsolutePath()));
+      }
+      File tmp = new File(csFile.getAbsolutePath() + ".tmp");
+      OutputStream out = new BufferedOutputStream(new FileOutputStream(tmp));
       coreSite.writeXml(out);
       out.close();
+      assertTrue(tmp.renameTo(csFile));
     }
     beforeClusterStart(cfg);
   }
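The ConfigurableMacBase change above stops treating an existing `core-site.xml` as fatal: it merges the existing file into the configuration, writes the result to a `.tmp` sibling, and renames it over the target. The write-then-rename step in isolation (hypothetical `AtomicWrite` helper):

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class AtomicWrite {
  // Write content to target's ".tmp" sibling, then rename over target,
  // so readers never observe a partially written file.
  static void writeAtomically(File target, byte[] content) throws IOException {
    File tmp = new File(target.getAbsolutePath() + ".tmp");
    try (OutputStream out = new BufferedOutputStream(new FileOutputStream(tmp))) {
      out.write(content);
    }
    if (!tmp.renameTo(target)) {
      throw new IOException("rename of " + tmp + " to " + target + " failed");
    }
  }

  public static void main(String[] args) throws IOException {
    File f = File.createTempFile("core-site", ".xml");
    writeAtomically(f, "<configuration/>".getBytes("UTF-8"));
    System.out.println(f.length()); // prints 16
  }
}
```

On a local filesystem the rename is atomic within one directory, which is what makes the replacement safe against concurrent readers of the config file.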
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ConstraintIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ConstraintIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/ConstraintIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ConstraintIT.java
index c694143..27d84de 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ConstraintIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ConstraintIT.java
@@ -23,6 +23,7 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -36,16 +37,17 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint;
 import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class ConstraintIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class ConstraintIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(ConstraintIT.class);
 
   @Override
@@ -155,7 +157,7 @@
 
     // remove the numeric value constraint
     getConnector().tableOperations().removeConstraint(tableName, 2);
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     // now should be able to add a non numeric value
     bw = getConnector().createBatchWriter(tableName, new BatchWriterConfig());
@@ -178,7 +180,7 @@
 
     // add a constraint that references a non-existent class
     getConnector().tableOperations().setProperty(tableName, Property.TABLE_CONSTRAINT_PREFIX + "1", "com.foobar.nonExistantClass");
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     // add a mutation
     bw = getConnector().createBatchWriter(tableName, new BatchWriterConfig());
@@ -218,7 +220,7 @@
 
     // remove the bad constraint
     getConnector().tableOperations().removeConstraint(tableName, 1);
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     // try the mutation again
     bw = getConnector().createBatchWriter(tableName, new BatchWriterConfig());
@@ -291,7 +293,7 @@
         throw new Exception("Unexpected constraints");
       }
 
-      HashMap<String,Integer> expected = new HashMap<String,Integer>();
+      HashMap<String,Integer> expected = new HashMap<>();
 
       expected.put("org.apache.accumulo.examples.simple.constraints.NumericValueConstraint", numericErrors);
       expected.put("org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint", 1);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CreateAndUseIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CreateAndUseIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/CreateAndUseIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CreateAndUseIT.java
index 6ad3d4d..919ab30 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CreateAndUseIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CreateAndUseIT.java
@@ -33,7 +33,7 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -41,7 +41,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class CreateAndUseIT extends AccumuloClusterIT {
+public class CreateAndUseIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -52,7 +52,7 @@
 
   @BeforeClass
   public static void createData() throws Exception {
-    splits = new TreeSet<Text>();
+    splits = new TreeSet<>();
 
     for (int i = 1; i < 256; i++) {
       splits.add(new Text(String.format("%08x", i << 8)));
@@ -110,7 +110,7 @@
 
   @Test
   public void createTableAndBatchScan() throws Exception {
-    ArrayList<Range> ranges = new ArrayList<Range>();
+    ArrayList<Range> ranges = new ArrayList<>();
     for (int i = 1; i < 257; i++) {
       ranges.add(new Range(new Text(String.format("%08x", (i << 8) - 16))));
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java
index ffa527f..79151ee 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CreateManyScannersIT.java
@@ -18,10 +18,10 @@
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Test;
 
-public class CreateManyScannersIT extends AccumuloClusterIT {
+public class CreateManyScannersIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java b/test/src/main/java/org/apache/accumulo/test/functional/CredentialsIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/CredentialsIT.java
index ba2bae3..b383d0a 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/CredentialsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CredentialsIT.java
@@ -39,13 +39,13 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
-public class CredentialsIT extends AccumuloClusterIT {
+public class CredentialsIT extends AccumuloClusterHarness {
 
   private boolean saslEnabled;
   private String username;
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DeleteEverythingIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DeleteEverythingIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/functional/DeleteEverythingIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DeleteEverythingIT.java
index d51de6e..4577813 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DeleteEverythingIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DeleteEverythingIT.java
@@ -20,6 +20,7 @@
 import static org.junit.Assert.assertEquals;
 
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -31,8 +32,7 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
@@ -42,8 +42,9 @@
 
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class DeleteEverythingIT extends AccumuloClusterIT {
+public class DeleteEverythingIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -103,7 +104,7 @@
     getConnector().tableOperations().flush(tableName, null, null, true);
 
     getConnector().tableOperations().setProperty(tableName, Property.TABLE_MAJC_RATIO.getKey(), "1.0");
-    UtilWaitThread.sleep(4000);
+    sleepUninterruptibly(4, TimeUnit.SECONDS);
 
     FunctionalTestUtils.checkRFiles(c, tableName, 1, 1, 0, 0);
 
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DeleteIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DeleteIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/functional/DeleteIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DeleteIT.java
index 3200d96..189f68f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DeleteIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DeleteIT.java
@@ -30,13 +30,13 @@
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.TestRandomDeletes;
 import org.apache.accumulo.test.VerifyIngest;
 import org.junit.Test;
 
-public class DeleteIT extends AccumuloClusterIT {
+public class DeleteIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DeleteRowsIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DeleteRowsIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/DeleteRowsIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DeleteRowsIT.java
index 6e67f9b..fd89caa 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DeleteRowsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DeleteRowsIT.java
@@ -34,7 +34,7 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 import org.slf4j.Logger;
@@ -42,7 +42,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class DeleteRowsIT extends AccumuloClusterIT {
+public class DeleteRowsIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -54,13 +54,13 @@
   private static final int ROWS_PER_TABLET = 10;
   private static final String[] LETTERS = new String[] {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t",
       "u", "v", "w", "x", "y", "z"};
-  static final SortedSet<Text> SPLITS = new TreeSet<Text>();
+  static final SortedSet<Text> SPLITS = new TreeSet<>();
   static {
     for (String alpha : LETTERS) {
       SPLITS.add(new Text(alpha));
     }
   }
-  static final List<String> ROWS = new ArrayList<String>(Arrays.asList(LETTERS));
+  static final List<String> ROWS = new ArrayList<>(Arrays.asList(LETTERS));
   static {
     // put data on first and last tablet
     ROWS.add("A");
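A recurring change in these hunks is `new TreeSet<Text>()` → `new TreeSet<>()`: the Java 7 diamond operator lets the compiler infer the type argument from the declaration, with no change in behavior. A small self-contained illustration (using `String` instead of Hadoop's `Text`, which is an assumption made only so the snippet needs no Hadoop dependency):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class DiamondDemo {
  public static void main(String[] args) {
    // Type argument inferred from the left-hand side; identical to new TreeSet<String>()
    SortedSet<String> splits = new TreeSet<>();
    splits.addAll(Arrays.asList("b", "a", "c"));

    // Same inference works when the constructor takes arguments
    List<String> rows = new ArrayList<>(Arrays.asList("a", "b", "c"));

    System.out.println(splits.first() + " " + rows.size()); // prints "a 3"
  }
}
```

The compiled bytecode is identical either way; the change is purely a readability cleanup enabled by the 1.8 branch's Java 7 baseline.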
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DeleteRowsSplitIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DeleteRowsSplitIT.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/functional/DeleteRowsSplitIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DeleteRowsSplitIT.java
index 1330779..2a6653d 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DeleteRowsSplitIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DeleteRowsSplitIT.java
@@ -25,6 +25,7 @@
 import java.util.Map.Entry;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -34,15 +35,16 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 // attempt to reproduce ACCUMULO-315
-public class DeleteRowsSplitIT extends AccumuloClusterIT {
+public class DeleteRowsSplitIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -52,8 +54,8 @@
   private static final Logger log = LoggerFactory.getLogger(DeleteRowsSplitIT.class);
 
   private static final String LETTERS = "abcdefghijklmnopqrstuvwxyz";
-  static final SortedSet<Text> SPLITS = new TreeSet<Text>();
-  static final List<String> ROWS = new ArrayList<String>();
+  static final SortedSet<Text> SPLITS = new TreeSet<>();
+  static final List<String> ROWS = new ArrayList<>();
   static {
     for (byte b : LETTERS.getBytes(UTF_8)) {
       SPLITS.add(new Text(new byte[] {b}));
@@ -101,7 +103,7 @@
       };
       t.start();
 
-      UtilWaitThread.sleep(test * 2);
+      sleepUninterruptibly(test * 2, TimeUnit.MILLISECONDS);
 
       conn.tableOperations().deleteRows(tableName, start, end);
 
@@ -123,7 +125,7 @@
   }
 
   private void generateRandomRange(Text start, Text end) {
-    List<String> bunch = new ArrayList<String>(ROWS);
+    List<String> bunch = new ArrayList<>(ROWS);
     Collections.shuffle(bunch);
     if (bunch.get(0).compareTo((bunch.get(1))) < 0) {
       start.set(bunch.get(0));
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java
index e645c03..2c34537 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DeleteTableDuringSplitIT.java
@@ -28,13 +28,16 @@
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.util.SimpleThreadPool;
 import org.apache.accumulo.fate.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.test.PerformanceTest;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 // ACCUMULO-2361
-public class DeleteTableDuringSplitIT extends AccumuloClusterIT {
+@Category(PerformanceTest.class)
+public class DeleteTableDuringSplitIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -50,13 +53,13 @@
     for (String tableName : tableNames) {
       getConnector().tableOperations().create(tableName);
     }
-    final SortedSet<Text> splits = new TreeSet<Text>();
+    final SortedSet<Text> splits = new TreeSet<>();
     for (byte i = 0; i < 100; i++) {
       splits.add(new Text(new byte[] {0, 0, i}));
     }
 
-    List<Future<?>> results = new ArrayList<Future<?>>();
-    List<Runnable> tasks = new ArrayList<Runnable>();
+    List<Future<?>> results = new ArrayList<>();
+    List<Runnable> tasks = new ArrayList<>();
     SimpleThreadPool es = new SimpleThreadPool(batchSize * 2, "concurrent-api-requests");
     for (String tableName : tableNames) {
       final String finalName = tableName;
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DeletedTablesDontFlushIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DeletedTablesDontFlushIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/functional/DeletedTablesDontFlushIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DeletedTablesDontFlushIT.java
index d3599b0..a508f60 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DeletedTablesDontFlushIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DeletedTablesDontFlushIT.java
@@ -26,13 +26,13 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.fate.util.UtilWaitThread;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 // ACCUMULO-2880
-public class DeletedTablesDontFlushIT extends SharedMiniClusterIT {
+public class DeletedTablesDontFlushIT extends SharedMiniClusterBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -41,12 +41,12 @@
 
   @BeforeClass
   public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
   }
 
   @AfterClass
   public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   @Test
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DurabilityIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DurabilityIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/DurabilityIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DurabilityIT.java
index 819347e..00d715f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DurabilityIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DurabilityIT.java
@@ -18,6 +18,7 @@
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeFalse;
 
 import java.util.Arrays;
 import java.util.HashMap;
@@ -34,15 +35,20 @@
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
+import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
+import org.apache.accumulo.test.PerformanceTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.Iterators;
 
-public class DurabilityIT extends ConfigurableMacIT {
+@Category(PerformanceTest.class)
+public class DurabilityIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(DurabilityIT.class);
 
   @Override
@@ -52,6 +58,11 @@
     cfg.setNumTservers(1);
   }
 
+  @BeforeClass
+  static public void checkMR() {
+    assumeFalse(IntegrationTestMapReduce.isMapReduce());
+  }
+
   static final long N = 100000;
 
   private String[] init() throws Exception {
@@ -163,7 +174,7 @@
   }
 
   private static Map<String,String> map(Iterable<Entry<String,String>> entries) {
-    Map<String,String> result = new HashMap<String,String>();
+    Map<String,String> result = new HashMap<>();
     for (Entry<String,String> entry : entries) {
       result.put(entry.getKey(), entry.getValue());
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/DynamicThreadPoolsIT.java b/test/src/main/java/org/apache/accumulo/test/functional/DynamicThreadPoolsIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/DynamicThreadPoolsIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/DynamicThreadPoolsIT.java
index 2425d20..62bac85 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/DynamicThreadPoolsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/DynamicThreadPoolsIT.java
@@ -19,6 +19,7 @@
 import static org.junit.Assert.fail;
 
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.cli.BatchWriterOpts;
 import org.apache.accumulo.core.client.ClientConfiguration;
@@ -34,8 +35,7 @@
 import org.apache.accumulo.core.master.thrift.TableInfo;
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.hadoop.conf.Configuration;
@@ -43,7 +43,9 @@
 import org.junit.Before;
 import org.junit.Test;
 
-public class DynamicThreadPoolsIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class DynamicThreadPoolsIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -96,7 +98,7 @@
     c.tableOperations().flush(firstTable, null, null, true);
     for (int i = 1; i < tables.length; i++)
       c.tableOperations().clone(firstTable, tables[i], true, null, null);
-    UtilWaitThread.sleep(11 * 1000); // time between checks of the thread pool sizes
+    sleepUninterruptibly(11, TimeUnit.SECONDS); // time between checks of the thread pool sizes
     Credentials creds = new Credentials(getAdminPrincipal(), getAdminToken());
     for (int i = 1; i < tables.length; i++)
       c.tableOperations().compact(tables[i], null, null, true, false);
@@ -119,7 +121,7 @@
       System.out.println("count " + count);
       if (count > 3)
         return;
-      UtilWaitThread.sleep(500);
+      sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
     }
     fail("Could not observe higher number of threads after changing the config");
   }
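Many hunks above replace Accumulo's `UtilWaitThread.sleep(millis)` with Guava's `Uninterruptibles.sleepUninterruptibly(duration, unit)`. A minimal sketch of what that Guava utility does (an approximation for illustration, not the actual Guava source): it sleeps for the full requested duration even if the thread is interrupted partway through, then restores the interrupt flag so callers can still observe the interruption.

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
  // Sketch of sleepUninterruptibly: keeps sleeping through interrupts until the
  // full duration has elapsed, then re-asserts the thread's interrupt status.
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true;
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    System.out.println("slept ~" + (System.nanoTime() - start) / 1_000_000 + " ms");
  }
}
```

This also explains why the callers no longer declare or swallow `InterruptedException`: the uninterruptible variant handles it internally while keeping the interrupt visible to the test harness.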
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ExamplesIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ExamplesIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/ExamplesIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ExamplesIT.java
index edc6aed..279e517 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ExamplesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ExamplesIT.java
@@ -16,10 +16,12 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeTrue;
 
 import java.io.File;
 import java.io.IOException;
@@ -29,6 +31,7 @@
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -51,7 +54,6 @@
 import org.apache.accumulo.core.iterators.user.AgeOffFilter;
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.examples.simple.client.Flush;
 import org.apache.accumulo.examples.simple.client.RandomBatchScanner;
 import org.apache.accumulo.examples.simple.client.RandomBatchWriter;
@@ -80,7 +82,7 @@
 import org.apache.accumulo.examples.simple.shard.Index;
 import org.apache.accumulo.examples.simple.shard.Query;
 import org.apache.accumulo.examples.simple.shard.Reverse;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl.LogWriter;
@@ -95,7 +97,6 @@
 import org.apache.hadoop.util.Tool;
 import org.junit.After;
 import org.junit.Assert;
-import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
 import org.slf4j.Logger;
@@ -103,7 +104,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class ExamplesIT extends AccumuloClusterIT {
+public class ExamplesIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(ExamplesIT.class);
   private static final BatchWriterOpts bwOpts = new BatchWriterOpts();
   private static final BatchWriterConfig bwc = new BatchWriterConfig();
@@ -171,7 +172,7 @@
       MiniAccumuloClusterImpl impl = (MiniAccumuloClusterImpl) cluster;
       trace = impl.exec(TraceServer.class);
       while (!c.tableOperations().exists("trace"))
-        UtilWaitThread.sleep(500);
+        sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
     }
     String[] args;
     if (saslEnabled) {
@@ -235,6 +236,7 @@
       default:
         throw new RuntimeException("Unknown cluster type");
     }
+    assumeTrue(new File(dirListDirectory).exists());
     // Index a directory listing on /tmp. If this is running against a standalone cluster, we can't guarantee Accumulo source will be there.
     if (saslEnabled) {
       args = new String[] {"-i", instance, "-z", keepers, "-u", user, "--keytab", keytab, "--dirTable", dirTable, "--indexTable", indexTable, "--dataTable",
@@ -253,7 +255,7 @@
         expectedFile = "accumulo-site.xml";
         break;
       case STANDALONE:
-        // Should be in place on standalone installs (not having ot follow symlinks)
+        // Should be in place on standalone installs (not having to follow symlinks)
         expectedFile = "LICENSE";
         break;
       default:
@@ -285,13 +287,13 @@
     is = new IteratorSetting(10, AgeOffFilter.class);
     AgeOffFilter.setTTL(is, 1000L);
     c.tableOperations().attachIterator(tableName, is);
-    UtilWaitThread.sleep(500); // let zookeeper updates propagate.
+    sleepUninterruptibly(500, TimeUnit.MILLISECONDS); // let zookeeper updates propagate.
     bw = c.createBatchWriter(tableName, bwc);
     Mutation m = new Mutation("foo");
     m.put("a", "b", "c");
     bw.addMutation(m);
     bw.close();
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
     assertEquals(0, Iterators.size(c.createScanner(tableName, Authorizations.EMPTY).iterator()));
   }
 
@@ -378,15 +380,17 @@
 
   @Test
   public void testShardedIndex() throws Exception {
+    File src = new File(System.getProperty("user.dir") + "/src");
+    assumeTrue(src.exists());
     String[] names = getUniqueNames(3);
     final String shard = names[0], index = names[1];
     c.tableOperations().create(shard);
     c.tableOperations().create(index);
     bw = c.createBatchWriter(shard, bwc);
-    Index.index(30, new File(System.getProperty("user.dir") + "/src"), "\\W+", bw);
+    Index.index(30, src, "\\W+", bw);
     bw.close();
     BatchScanner bs = c.createBatchScanner(shard, Authorizations.EMPTY, 4);
-    List<String> found = Query.query(bs, Arrays.asList("foo", "bar"));
+    List<String> found = Query.query(bs, Arrays.asList("foo", "bar"), null);
     bs.close();
     // should find ourselves
     boolean thisFile = false;
@@ -440,7 +444,7 @@
   @Test
   public void testBulkIngest() throws Exception {
     // TODO Figure out a way to run M/R with Kerberos
-    Assume.assumeTrue(getAdminToken() instanceof PasswordToken);
+    assumeTrue(getAdminToken() instanceof PasswordToken);
     String tableName = getUniqueNames(1)[0];
     FileSystem fs = getFileSystem();
     Path p = new Path(dir, "tmp");
@@ -473,7 +477,7 @@
   @Test
   public void testTeraSortAndRead() throws Exception {
     // TODO Figure out a way to run M/R with Kerberos
-    Assume.assumeTrue(getAdminToken() instanceof PasswordToken);
+    assumeTrue(getAdminToken() instanceof PasswordToken);
     String tableName = getUniqueNames(1)[0];
     String[] args;
     if (saslEnabled) {
@@ -516,14 +520,19 @@
   @Test
   public void testWordCount() throws Exception {
     // TODO Figure out a way to run M/R with Kerberos
-    Assume.assumeTrue(getAdminToken() instanceof PasswordToken);
+    assumeTrue(getAdminToken() instanceof PasswordToken);
     String tableName = getUniqueNames(1)[0];
     c.tableOperations().create(tableName);
     is = new IteratorSetting(10, SummingCombiner.class);
     SummingCombiner.setColumns(is, Collections.singletonList(new IteratorSetting.Column(new Text("count"))));
     SummingCombiner.setEncodingType(is, SummingCombiner.Type.STRING);
     c.tableOperations().attachIterator(tableName, is);
-    fs.copyFromLocalFile(new Path(new Path(System.getProperty("user.dir")).getParent(), "README.md"), new Path(dir + "/tmp/wc/README.md"));
+    Path readme = new Path(new Path(System.getProperty("user.dir")).getParent(), "README.md");
+    if (!new File(readme.toString()).exists()) {
+      log.info("Not running test: README.md does not exist");
+      return;
+    }
+    fs.copyFromLocalFile(readme, new Path(dir + "/tmp/wc/README.md"));
     String[] args;
     if (saslEnabled) {
       args = new String[] {"-i", instance, "-u", user, "--keytab", keytab, "-z", keepers, "--input", dir + "/tmp/wc", "-t", tableName};
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/FateStarvationIT.java b/test/src/main/java/org/apache/accumulo/test/functional/FateStarvationIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/FateStarvationIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/FateStarvationIT.java
index ebbef7c..30f4476 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/FateStarvationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/FateStarvationIT.java
@@ -24,7 +24,7 @@
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
@@ -32,7 +32,7 @@
 /**
  * See ACCUMULO-779
  */
-public class FateStarvationIT extends AccumuloClusterIT {
+public class FateStarvationIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -64,7 +64,7 @@
 
     c.tableOperations().flush(tableName, null, null, true);
 
-    List<Text> splits = new ArrayList<Text>(TestIngest.getSplitPoints(0, 100000, 67));
+    List<Text> splits = new ArrayList<>(TestIngest.getSplitPoints(0, 100000, 67));
     Random rand = new Random();
 
     for (int i = 0; i < 100; i++) {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/FunctionalTestUtils.java b/test/src/main/java/org/apache/accumulo/test/functional/FunctionalTestUtils.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/functional/FunctionalTestUtils.java
rename to test/src/main/java/org/apache/accumulo/test/functional/FunctionalTestUtils.java
index efdc1b0..829293e 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/FunctionalTestUtils.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/FunctionalTestUtils.java
@@ -46,10 +46,8 @@
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl.LogWriter;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FsShell;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
-import org.junit.Assert;
 
 import com.google.common.collect.Iterators;
 
@@ -71,7 +69,7 @@
     scanner.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
     MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(scanner);
 
-    HashMap<Text,Integer> tabletFileCounts = new HashMap<Text,Integer>();
+    HashMap<Text,Integer> tabletFileCounts = new HashMap<>();
 
     for (Entry<Key,Value> entry : scanner) {
 
@@ -106,10 +104,6 @@
     fs.mkdirs(failPath);
 
     // Ensure server can read/modify files
-    FsShell fsShell = new FsShell(fs.getConf());
-    Assert.assertEquals("Failed to chmod " + dir, 0, fsShell.run(new String[] {"-chmod", "-R", "777", dir}));
-    Assert.assertEquals("Failed to chmod " + failDir, 0, fsShell.run(new String[] {"-chmod", "-R", "777", failDir}));
-
     c.tableOperations().importDirectory(table, dir, failDir, false);
 
     if (fs.listStatus(failPath).length > 0) {
@@ -125,7 +119,7 @@
     }
   }
 
-  static public void createRFiles(final Connector c, FileSystem fs, String path, int rows, int splits, int threads) throws Exception {
+  static public void createRFiles(final Connector c, final FileSystem fs, String path, int rows, int splits, int threads) throws Exception {
     fs.delete(new Path(path), true);
     ExecutorService threadPool = Executors.newFixedThreadPool(threads);
     final AtomicBoolean fail = new AtomicBoolean(false);
@@ -142,7 +136,7 @@
         @Override
         public void run() {
           try {
-            TestIngest.ingest(c, opts, new BatchWriterOpts());
+            TestIngest.ingest(c, fs, opts, new BatchWriterOpts());
           } catch (Exception e) {
             fail.set(true);
           }
@@ -183,7 +177,7 @@
   }
 
   public static SortedSet<Text> splits(String[] splits) {
-    SortedSet<Text> result = new TreeSet<Text>();
+    SortedSet<Text> result = new TreeSet<>();
     for (String split : splits)
       result.add(new Text(split));
     return result;
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java b/test/src/main/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
index 202bfac..12607a5 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/GarbageCollectorIT.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
@@ -25,6 +26,7 @@
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.cli.BatchWriterOpts;
@@ -43,10 +45,8 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.ServerServices;
 import org.apache.accumulo.core.util.ServerServices.Service;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
 import org.apache.accumulo.gc.SimpleGarbageCollector;
@@ -59,7 +59,6 @@
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.apache.hadoop.io.Text;
@@ -69,8 +68,9 @@
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class GarbageCollectorIT extends ConfigurableMacIT {
+public class GarbageCollectorIT extends ConfigurableMacBase {
   private static final String OUR_SECRET = "itsreallysecret";
 
   @Override
@@ -119,11 +119,11 @@
     vopts.cols = opts.cols = 1;
     opts.setPrincipal("root");
     vopts.setPrincipal("root");
-    TestIngest.ingest(c, opts, new BatchWriterOpts());
+    TestIngest.ingest(c, cluster.getFileSystem(), opts, new BatchWriterOpts());
     c.tableOperations().compact("test_ingest", null, null, true, true);
     int before = countFiles();
     while (true) {
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
       int more = countFiles();
       if (more <= before)
         break;
@@ -132,7 +132,7 @@
 
     // restart GC
     getCluster().start();
-    UtilWaitThread.sleep(15 * 1000);
+    sleepUninterruptibly(15, TimeUnit.SECONDS);
     int after = countFiles();
     VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
     assertTrue(after < before);
@@ -147,8 +147,17 @@
     addEntries(c, new BatchWriterOpts());
     cluster.getConfig().setDefaultMemory(10, MemoryUnit.MEGABYTE);
     Process gc = cluster.exec(SimpleGarbageCollector.class);
-    UtilWaitThread.sleep(20 * 1000);
-    String output = FunctionalTestUtils.readAll(cluster, SimpleGarbageCollector.class, gc);
+    sleepUninterruptibly(20, TimeUnit.SECONDS);
+    String output = "";
+    while (!output.contains("delete candidates has exceeded")) {
+      byte buffer[] = new byte[10 * 1024];
+      try {
+        int n = gc.getInputStream().read(buffer);
+        output = new String(buffer, 0, n, UTF_8);
+      } catch (IOException ex) {
+        break;
+      }
+    }
     gc.destroy();
     assertTrue(output.contains("delete candidates has exceeded"));
   }
@@ -162,7 +171,7 @@
     c.tableOperations().create(table);
     // let gc run for a bit
     cluster.start();
-    UtilWaitThread.sleep(20 * 1000);
+    sleepUninterruptibly(20, TimeUnit.SECONDS);
     killMacGc();
     // kill tservers
     for (ProcessReference ref : cluster.getProcesses().get(ServerType.TABLET_SERVER)) {
@@ -210,7 +219,7 @@
     try {
       String output = "";
       while (!output.contains("Ignoring invalid deletion candidate")) {
-        UtilWaitThread.sleep(250);
+        sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
         try {
           output = FunctionalTestUtils.readAll(cluster, SimpleGarbageCollector.class, gc);
         } catch (IOException ioe) {
@@ -279,9 +288,8 @@
   }
 
   private int countFiles() throws Exception {
-    FileSystem fs = FileSystem.get(CachedConfiguration.getInstance());
     Path path = new Path(cluster.getConfig().getDir() + "/accumulo/tables/1/*/*.rf");
-    return Iterators.size(Arrays.asList(fs.globStatus(path)).iterator());
+    return Iterators.size(Arrays.asList(cluster.getFileSystem().globStatus(path)).iterator());
   }
 
   public static void addEntries(Connector conn, BatchWriterOpts bwOpts) throws Exception {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/HalfDeadTServerIT.java b/test/src/main/java/org/apache/accumulo/test/functional/HalfDeadTServerIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/functional/HalfDeadTServerIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/HalfDeadTServerIT.java
index a29defd..76a8c5d 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/HalfDeadTServerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/HalfDeadTServerIT.java
@@ -28,12 +28,12 @@
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.cli.ScannerOpts;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.util.Daemon;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.start.Main;
@@ -43,7 +43,9 @@
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
-public class HalfDeadTServerIT extends ConfigurableMacIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class HalfDeadTServerIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -121,7 +123,7 @@
     String classpath = System.getProperty("java.class.path");
     classpath = new File(cluster.getConfig().getDir(), "conf") + File.pathSeparator + classpath;
     String className = TabletServer.class.getName();
-    ArrayList<String> argList = new ArrayList<String>();
+    ArrayList<String> argList = new ArrayList<>();
     argList.addAll(Arrays.asList(javaBin, "-cp", classpath));
     argList.addAll(Arrays.asList(Main.class.getName(), className));
     ProcessBuilder builder = new ProcessBuilder(argList);
@@ -139,22 +141,22 @@
     DumpOutput t = new DumpOutput(tserver.getInputStream());
     try {
       t.start();
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
       // don't need the regular tablet server
       cluster.killProcess(ServerType.TABLET_SERVER, cluster.getProcesses().get(ServerType.TABLET_SERVER).iterator().next());
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
       c.tableOperations().create("test_ingest");
       assertEquals(1, c.instanceOperations().getTabletServers().size());
       int rows = 100 * 1000;
       ingest = cluster.exec(TestIngest.class, "-u", "root", "-i", cluster.getInstanceName(), "-z", cluster.getZooKeepers(), "-p", ROOT_PASSWORD, "--rows", rows
           + "");
-      UtilWaitThread.sleep(500);
+      sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
 
       // block I/O with some side-channel trickiness
       File trickFile = new File(trickFilename);
       try {
         assertTrue(trickFile.createNewFile());
-        UtilWaitThread.sleep(seconds * 1000);
+        sleepUninterruptibly(seconds, TimeUnit.SECONDS);
       } finally {
         if (!trickFile.delete()) {
           log.error("Couldn't delete " + trickFile);
@@ -168,7 +170,7 @@
         vopts.setPrincipal("root");
         VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
       } else {
-        UtilWaitThread.sleep(5 * 1000);
+        sleepUninterruptibly(5, TimeUnit.SECONDS);
         tserver.waitFor();
         t.join();
         tserver = null;
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java b/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
similarity index 64%
rename from test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
index 612718d..e636daa 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
@@ -61,7 +61,7 @@
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.harness.AccumuloIT;
+import org.apache.accumulo.harness.AccumuloITBase;
 import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 import org.apache.accumulo.harness.MiniClusterHarness;
 import org.apache.accumulo.harness.TestingKdc;
@@ -86,7 +86,7 @@
 /**
  * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
-public class KerberosIT extends AccumuloIT {
+public class KerberosIT extends AccumuloITBase {
   private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 
   private static TestingKdc kdc;
@@ -112,6 +112,7 @@
     if (null != krbEnabledForITs) {
       System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
     }
+    UserGroupInformation.setConfiguration(new Configuration(false));
   }
 
   @Override
@@ -129,7 +130,7 @@
       @Override
       public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
         Map<String,String> site = cfg.getSiteConfig();
-        site.put(Property.INSTANCE_ZK_TIMEOUT.getKey(), "10s");
+        site.put(Property.INSTANCE_ZK_TIMEOUT.getKey(), "15s");
         cfg.setSiteConfig(site);
       }
 
@@ -153,19 +154,24 @@
   @Test
   public void testAdminUser() throws Exception {
     // Login as the client (provided to `accumulo init` as the "root" user)
-    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        final Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 
-    final Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
+        // The "root" user should have all system permissions
+        for (SystemPermission perm : SystemPermission.values()) {
+          assertTrue("Expected user to have permission: " + perm, conn.securityOperations().hasSystemPermission(conn.whoami(), perm));
+        }
 
-    // The "root" user should have all system permissions
-    for (SystemPermission perm : SystemPermission.values()) {
-      assertTrue("Expected user to have permission: " + perm, conn.securityOperations().hasSystemPermission(conn.whoami(), perm));
-    }
-
-    // and the ability to modify the root and metadata tables
-    for (String table : Arrays.asList(RootTable.NAME, MetadataTable.NAME)) {
-      assertTrue(conn.securityOperations().hasTablePermission(conn.whoami(), table, TablePermission.ALTER_TABLE));
-    }
+        // and the ability to modify the root and metadata tables
+        for (String table : Arrays.asList(RootTable.NAME, MetadataTable.NAME)) {
+          assertTrue(conn.securityOperations().hasTablePermission(conn.whoami(), table, TablePermission.ALTER_TABLE));
+        }
+        return null;
+      }
+    });
   }
 
   @Test
@@ -179,39 +185,51 @@
     // Create a new user
     kdc.createPrincipal(newUserKeytab, newUser);
 
-    newUser = kdc.qualifyUser(newUser);
+    final String newQualifiedUser = kdc.qualifyUser(newUser);
+    final HashSet<String> users = Sets.newHashSet(rootUser.getPrincipal());
 
     // Login as the "root" user
-    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
     log.info("Logged in as {}", rootUser.getPrincipal());
 
-    Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
-    log.info("Created connector as {}", rootUser.getPrincipal());
-    assertEquals(rootUser.getPrincipal(), conn.whoami());
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
+        log.info("Created connector as {}", rootUser.getPrincipal());
+        assertEquals(rootUser.getPrincipal(), conn.whoami());
 
-    // Make sure the system user doesn't exist -- this will force some RPC to happen server-side
-    createTableWithDataAndCompact(conn);
+        // Make sure the system user doesn't exist -- this will force some RPC to happen server-side
+        createTableWithDataAndCompact(conn);
 
-    HashSet<String> users = Sets.newHashSet(rootUser.getPrincipal());
-    assertEquals(users, conn.securityOperations().listLocalUsers());
+        assertEquals(users, conn.securityOperations().listLocalUsers());
 
+        return null;
+      }
+    });
     // Switch to a new user
-    UserGroupInformation.loginUserFromKeytab(newUser, newUserKeytab.getAbsolutePath());
-    log.info("Logged in as {}", newUser);
+    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(newQualifiedUser, newUserKeytab.getAbsolutePath());
+    log.info("Logged in as {}", newQualifiedUser);
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        Connector conn = mac.getConnector(newQualifiedUser, new KerberosToken());
+        log.info("Created connector as {}", newQualifiedUser);
+        assertEquals(newQualifiedUser, conn.whoami());
 
-    conn = mac.getConnector(newUser, new KerberosToken());
-    log.info("Created connector as {}", newUser);
-    assertEquals(newUser, conn.whoami());
+        // The new user should have no system permissions
+        for (SystemPermission perm : SystemPermission.values()) {
+          assertFalse(conn.securityOperations().hasSystemPermission(newQualifiedUser, perm));
+        }
 
-    // The new user should have no system permissions
-    for (SystemPermission perm : SystemPermission.values()) {
-      assertFalse(conn.securityOperations().hasSystemPermission(newUser, perm));
-    }
+        users.add(newQualifiedUser);
 
-    users.add(newUser);
+        // Same users as before, plus the new user we just created
+        assertEquals(users, conn.securityOperations().listLocalUsers());
+        return null;
+      }
 
-    // Same users as before, plus the new user we just created
-    assertEquals(users, conn.securityOperations().listLocalUsers());
+    });
   }
 
   @Test
@@ -225,42 +243,59 @@
     // Create some new users
     kdc.createPrincipal(user1Keytab, user1);
 
-    user1 = kdc.qualifyUser(user1);
+    final String qualifiedUser1 = kdc.qualifyUser(user1);
 
     // Log in as user1
-    UserGroupInformation.loginUserFromKeytab(user1, user1Keytab.getAbsolutePath());
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user1, user1Keytab.getAbsolutePath());
     log.info("Logged in as {}", user1);
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        // Indirectly creates this user when we use it
+        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
+        log.info("Created connector as {}", qualifiedUser1);
 
-    // Indirectly creates this user when we use it
-    Connector conn = mac.getConnector(user1, new KerberosToken());
-    log.info("Created connector as {}", user1);
+        // The new user should have no system permissions
+        for (SystemPermission perm : SystemPermission.values()) {
+          assertFalse(conn.securityOperations().hasSystemPermission(qualifiedUser1, perm));
+        }
 
-    // The new user should have no system permissions
-    for (SystemPermission perm : SystemPermission.values()) {
-      assertFalse(conn.securityOperations().hasSystemPermission(user1, perm));
-    }
+        return null;
+      }
+    });
 
-    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
-    conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
-
-    conn.securityOperations().grantSystemPermission(user1, SystemPermission.CREATE_TABLE);
+    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
+        conn.securityOperations().grantSystemPermission(qualifiedUser1, SystemPermission.CREATE_TABLE);
+        return null;
+      }
+    });
 
     // Switch back to the original user
-    UserGroupInformation.loginUserFromKeytab(user1, user1Keytab.getAbsolutePath());
-    conn = mac.getConnector(user1, new KerberosToken());
+    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user1, user1Keytab.getAbsolutePath());
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 
-    // Shouldn't throw an exception since we granted the create table permission
-    final String table = testName.getMethodName() + "_user_table";
-    conn.tableOperations().create(table);
+        // Shouldn't throw an exception since we granted the create table permission
+        final String table = testName.getMethodName() + "_user_table";
+        conn.tableOperations().create(table);
 
-    // Make sure we can actually use the table we made
-    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
-    Mutation m = new Mutation("a");
-    m.put("b", "c", "d");
-    bw.addMutation(m);
-    bw.close();
+        // Make sure we can actually use the table we made
+        BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
+        Mutation m = new Mutation("a");
+        m.put("b", "c", "d");
+        bw.addMutation(m);
+        bw.close();
 
-    conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
+        conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
+        return null;
+      }
+    });
   }
 
   @Test
@@ -274,64 +309,81 @@
     // Create some new users -- cannot contain realm
     kdc.createPrincipal(user1Keytab, user1);
 
-    user1 = kdc.qualifyUser(user1);
+    final String qualifiedUser1 = kdc.qualifyUser(user1);
 
     // Log in as user1
-    UserGroupInformation.loginUserFromKeytab(user1, user1Keytab.getAbsolutePath());
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedUser1, user1Keytab.getAbsolutePath());
     log.info("Logged in as {}", user1);
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        // Indirectly creates this user when we use it
+        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
+        log.info("Created connector as {}", qualifiedUser1);
 
-    // Indirectly creates this user when we use it
-    Connector conn = mac.getConnector(user1, new KerberosToken());
-    log.info("Created connector as {}", user1);
+        // The new user should have no system permissions
+        for (SystemPermission perm : SystemPermission.values()) {
+          assertFalse(conn.securityOperations().hasSystemPermission(qualifiedUser1, perm));
+        }
+        return null;
+      }
 
-    // The new user should have no system permissions
-    for (SystemPermission perm : SystemPermission.values()) {
-      assertFalse(conn.securityOperations().hasSystemPermission(user1, perm));
-    }
-
-    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
-    conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
+    });
 
     final String table = testName.getMethodName() + "_user_table";
-    conn.tableOperations().create(table);
-
     final String viz = "viz";
 
-    // Give our unprivileged user permission on the table we made for them
-    conn.securityOperations().grantTablePermission(user1, table, TablePermission.READ);
-    conn.securityOperations().grantTablePermission(user1, table, TablePermission.WRITE);
-    conn.securityOperations().grantTablePermission(user1, table, TablePermission.ALTER_TABLE);
-    conn.securityOperations().grantTablePermission(user1, table, TablePermission.DROP_TABLE);
-    conn.securityOperations().changeUserAuthorizations(user1, new Authorizations(viz));
+    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
+        conn.tableOperations().create(table);
+        // Give our unprivileged user permission on the table we made for them
+        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.READ);
+        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.WRITE);
+        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.ALTER_TABLE);
+        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.DROP_TABLE);
+        conn.securityOperations().changeUserAuthorizations(qualifiedUser1, new Authorizations(viz));
+        return null;
+      }
+    });
 
     // Switch back to the original user
-    UserGroupInformation.loginUserFromKeytab(user1, user1Keytab.getAbsolutePath());
-    conn = mac.getConnector(user1, new KerberosToken());
+    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedUser1, user1Keytab.getAbsolutePath());
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 
-    // Make sure we can actually use the table we made
+        // Make sure we can actually use the table we made
 
-    // Write data
-    final long ts = 1000l;
-    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
-    Mutation m = new Mutation("a");
-    m.put("b", "c", new ColumnVisibility(viz.getBytes()), ts, "d");
-    bw.addMutation(m);
-    bw.close();
+        // Write data
+        final long ts = 1000L;
+        BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
+        Mutation m = new Mutation("a");
+        m.put("b", "c", new ColumnVisibility(viz.getBytes()), ts, "d");
+        bw.addMutation(m);
+        bw.close();
 
-    // Compact
-    conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
+        // Compact
+        conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
 
-    // Alter
-    conn.tableOperations().setProperty(table, Property.TABLE_BLOOM_ENABLED.getKey(), "true");
+        // Alter
+        conn.tableOperations().setProperty(table, Property.TABLE_BLOOM_ENABLED.getKey(), "true");
 
-    // Read (and proper authorizations)
-    Scanner s = conn.createScanner(table, new Authorizations(viz));
-    Iterator<Entry<Key,Value>> iter = s.iterator();
-    assertTrue("No results from iterator", iter.hasNext());
-    Entry<Key,Value> entry = iter.next();
-    assertEquals(new Key("a", "b", "c", viz, ts), entry.getKey());
-    assertEquals(new Value("d".getBytes()), entry.getValue());
-    assertFalse("Had more results from iterator", iter.hasNext());
+        // Read (and proper authorizations)
+        Scanner s = conn.createScanner(table, new Authorizations(viz));
+        Iterator<Entry<Key,Value>> iter = s.iterator();
+        assertTrue("No results from iterator", iter.hasNext());
+        Entry<Key,Value> entry = iter.next();
+        assertEquals(new Key("a", "b", "c", viz, ts), entry.getKey());
+        assertEquals(new Value("d".getBytes()), entry.getValue());
+        assertFalse("Had more results from iterator", iter.hasNext());
+        return null;
+      }
+    });
   }
 
   @Test
@@ -389,16 +441,26 @@
   @Test
   public void testDelegationTokenAsDifferentUser() throws Exception {
     // Login as the "root" user
-    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
     log.info("Logged in as {}", rootUser.getPrincipal());
 
-    // As the "root" user, open up the connection and get a delegation token
-    Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
-    log.info("Created connector as {}", rootUser.getPrincipal());
-    assertEquals(rootUser.getPrincipal(), conn.whoami());
-    final AuthenticationToken delegationToken = conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
+    final AuthenticationToken delegationToken;
+    try {
+      delegationToken = ugi.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
+        @Override
+        public AuthenticationToken run() throws Exception {
+          // As the "root" user, open up the connection and get a delegation token
+          Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
+          log.info("Created connector as {}", rootUser.getPrincipal());
+          assertEquals(rootUser.getPrincipal(), conn.whoami());
+          return conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
+        }
+      });
+    } catch (UndeclaredThrowableException ex) {
+      throw ex;
+    }
 
-    // The above login with keytab doesn't have a way to logout, so make a fake user that won't have krb credentials
+    // Make a fake user that won't have Kerberos credentials
     UserGroupInformation userWithoutPrivs = UserGroupInformation.createUserForTesting("fake_user", new String[0]);
     try {
       // Use the delegation token to try to log in as a different user
@@ -418,7 +480,7 @@
     }
   }
 
-  @Test(expected = AccumuloSecurityException.class)
+  @Test
   public void testGetDelegationTokenDenied() throws Exception {
     String newUser = testName.getMethodName();
     final File newUserKeytab = new File(kdc.getKeytabDir(), newUser + ".keytab");
@@ -429,17 +491,26 @@
     // Create a new user
     kdc.createPrincipal(newUserKeytab, newUser);
 
-    newUser = kdc.qualifyUser(newUser);
+    final String qualifiedNewUser = kdc.qualifyUser(newUser);
 
     // Login as a normal user
-    UserGroupInformation.loginUserFromKeytab(newUser, newUserKeytab.getAbsolutePath());
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedNewUser, newUserKeytab.getAbsolutePath());
+    try {
+      ugi.doAs(new PrivilegedExceptionAction<Void>() {
+        @Override
+        public Void run() throws Exception {
+          // As the unprivileged new user, try to get a delegation token
+          Connector conn = mac.getConnector(qualifiedNewUser, new KerberosToken());
+          log.info("Created connector as {}", qualifiedNewUser);
+          assertEquals(qualifiedNewUser, conn.whoami());
 
-    // As the "root" user, open up the connection and get a delegation token
-    Connector conn = mac.getConnector(newUser, new KerberosToken());
-    log.info("Created connector as {}", newUser);
-    assertEquals(newUser, conn.whoami());
-
-    conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
+          conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
+          return null;
+        }
+      });
+    } catch (UndeclaredThrowableException ex) {
+      assertTrue(ex.getCause() instanceof AccumuloSecurityException);
+    }
   }
 
   @Test
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java b/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
index b9274e0..8001a48 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
@@ -37,7 +37,7 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.rpc.UGIAssumingTransport;
-import org.apache.accumulo.harness.AccumuloIT;
+import org.apache.accumulo.harness.AccumuloITBase;
 import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 import org.apache.accumulo.harness.MiniClusterHarness;
 import org.apache.accumulo.harness.TestingKdc;
@@ -78,7 +78,7 @@
 /**
  * Tests impersonation of clients by the proxy over SASL
  */
-public class KerberosProxyIT extends AccumuloIT {
+public class KerberosProxyIT extends AccumuloITBase {
   private static final Logger log = LoggerFactory.getLogger(KerberosProxyIT.class);
 
   @Rule
@@ -123,6 +123,7 @@
     if (null != krbEnabledForITs) {
       System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
     }
+    UserGroupInformation.setConfiguration(new Configuration(false));
   }
 
   private MiniAccumuloClusterImpl mac;
@@ -183,8 +184,7 @@
 
       UserGroupInformation ugi;
       try {
-        UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
-        ugi = UserGroupInformation.getCurrentUser();
+        ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
       } catch (IOException ex) {
         log.info("Login as root is failing", ex);
         Thread.sleep(3000);
@@ -236,8 +236,7 @@
   @Test
   public void testProxyClient() throws Exception {
     ClusterUser rootUser = kdc.getRootUser();
-    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
-    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 
     TSocket socket = new TSocket(hostname, proxyPort);
     log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
@@ -317,8 +316,7 @@
     kdc.createPrincipal(keytab, user);
 
     // Login as the new user
-    UserGroupInformation.loginUserFromKeytab(user, keytab.getAbsolutePath());
-    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, keytab.getAbsolutePath());
 
     log.info("Logged in as " + ugi);
 
@@ -370,8 +368,7 @@
     kdc.createPrincipal(keytab, user);
 
     // Login as the new user
-    UserGroupInformation.loginUserFromKeytab(user, keytab.getAbsolutePath());
-    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, keytab.getAbsolutePath());
 
     log.info("Logged in as " + ugi);
 
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java b/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
index 28c1dfc..142a8bb 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
@@ -39,7 +39,7 @@
 import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloIT;
+import org.apache.accumulo.harness.AccumuloITBase;
 import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 import org.apache.accumulo.harness.MiniClusterHarness;
 import org.apache.accumulo.harness.TestingKdc;
@@ -62,7 +62,7 @@
 /**
  * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
-public class KerberosRenewalIT extends AccumuloIT {
+public class KerberosRenewalIT extends AccumuloITBase {
   private static final Logger log = LoggerFactory.getLogger(KerberosRenewalIT.class);
 
   private static TestingKdc kdc;
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/LargeRowIT.java b/test/src/main/java/org/apache/accumulo/test/functional/LargeRowIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/LargeRowIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/LargeRowIT.java
index 027f1e6..160b164 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/LargeRowIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/LargeRowIT.java
@@ -22,6 +22,7 @@
 import java.util.Map.Entry;
 import java.util.Random;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -33,8 +34,7 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
@@ -48,7 +48,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class LargeRowIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class LargeRowIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(LargeRowIT.class);
 
   @Override
@@ -107,7 +109,7 @@
     Random r = new Random();
     byte rowData[] = new byte[ROW_SIZE];
     r.setSeed(SEED + 1);
-    TreeSet<Text> splitPoints = new TreeSet<Text>();
+    TreeSet<Text> splitPoints = new TreeSet<>();
     for (int i = 0; i < NUM_PRE_SPLITS; i++) {
       r.nextBytes(rowData);
       TestIngest.toPrintableChars(rowData);
@@ -117,7 +119,7 @@
     c.tableOperations().create(REG_TABLE_NAME);
     c.tableOperations().create(PRE_SPLIT_TABLE_NAME);
     c.tableOperations().setProperty(PRE_SPLIT_TABLE_NAME, Property.TABLE_MAX_END_ROW_SIZE.getKey(), "256K");
-    UtilWaitThread.sleep(3 * 1000);
+    sleepUninterruptibly(3, TimeUnit.SECONDS);
     c.tableOperations().addSplits(PRE_SPLIT_TABLE_NAME, splitPoints);
     test1(c);
     test2(c);
@@ -129,7 +131,7 @@
 
     c.tableOperations().setProperty(REG_TABLE_NAME, Property.TABLE_SPLIT_THRESHOLD.getKey(), "" + SPLIT_THRESH);
 
-    UtilWaitThread.sleep(timeoutFactor * 12000);
+    sleepUninterruptibly(timeoutFactor * 12, TimeUnit.SECONDS);
     log.info("checking splits");
     FunctionalTestUtils.checkSplits(c, REG_TABLE_NAME, NUM_PRE_SPLITS / 2, NUM_PRE_SPLITS * 4);
 
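Throughout this patch, `UtilWaitThread.sleep(ms)` is replaced with Guava's `Uninterruptibles.sleepUninterruptibly(duration, unit)`, which sleeps for the full duration even if the thread is interrupted, then restores the interrupt flag. A minimal self-contained sketch of that behavior (the class and method here are illustrative stand-ins, not the Guava source):

```java
import java.util.concurrent.TimeUnit;

// Sketch of what Guava's Uninterruptibles.sleepUninterruptibly does: keep
// sleeping until the requested duration has fully elapsed, remember any
// interrupt that arrived, and re-set the thread's interrupt flag on exit.
public class UninterruptibleSleep {
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          remainingNanos = 0; // slept the full duration
        } catch (InterruptedException e) {
          interrupted = true; // remember, keep sleeping the remainder
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
      }
    }
  }

  public static void main(String[] args) {
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    System.out.println("done");
  }
}
```

This is why the tests no longer need `UtilWaitThread`: the Guava call neither throws `InterruptedException` nor silently swallows the interrupt.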
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/LateLastContactIT.java b/test/src/main/java/org/apache/accumulo/test/functional/LateLastContactIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/LateLastContactIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/LateLastContactIT.java
index 7b8cb2b..9c310f0 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/LateLastContactIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/LateLastContactIT.java
@@ -28,7 +28,7 @@
 /**
  * Fake the "tablet stops talking but holds its lock" problem we see when hard drives and NFS fail. Start a ZombieTServer, and see that the master stops it.
  */
-public class LateLastContactIT extends ConfigurableMacIT {
+public class LateLastContactIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/LogicalTimeIT.java b/test/src/main/java/org/apache/accumulo/test/functional/LogicalTimeIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/LogicalTimeIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/LogicalTimeIT.java
index a20291b..b033dbf 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/LogicalTimeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/LogicalTimeIT.java
@@ -27,13 +27,13 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class LogicalTimeIT extends AccumuloClusterIT {
+public class LogicalTimeIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(LogicalTimeIT.class);
 
   @Override
@@ -74,7 +74,7 @@
       throws Exception {
     log.info("table " + table);
     conn.tableOperations().create(table, new NewTableConfiguration().setTimeType(TimeType.LOGICAL));
-    TreeSet<Text> splitSet = new TreeSet<Text>();
+    TreeSet<Text> splitSet = new TreeSet<>();
     for (String split : splits) {
       splitSet.add(new Text(split));
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MapReduceIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MapReduceIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/MapReduceIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MapReduceIT.java
index 3b34206..8c4666c 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MapReduceIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MapReduceIT.java
@@ -43,7 +43,7 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class MapReduceIT extends ConfigurableMacIT {
+public class MapReduceIT extends ConfigurableMacBase {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
index a7cdae5..7745a0f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MasterAssignmentIT.java
@@ -31,14 +31,13 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.fate.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.server.master.state.MetaDataTableScanner;
 import org.apache.accumulo.server.master.state.TabletLocationState;
 import org.apache.commons.configuration.ConfigurationException;
-import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class MasterAssignmentIT extends AccumuloClusterIT {
+public class MasterAssignmentIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -92,7 +91,7 @@
   private TabletLocationState getTabletLocationState(Connector c, String tableId) throws FileNotFoundException, ConfigurationException {
     Credentials creds = new Credentials(getAdminPrincipal(), getAdminToken());
     ClientContext context = new ClientContext(c.getInstance(), creds, getCluster().getClientConfig());
-    MetaDataTableScanner s = new MetaDataTableScanner(context, new Range(KeyExtent.getMetadataEntry(new Text(tableId), null)));
+    MetaDataTableScanner s = new MetaDataTableScanner(context, new Range(KeyExtent.getMetadataEntry(tableId, null)));
     TabletLocationState tlState = s.next();
     s.close();
     return tlState;
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MasterFailoverIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MasterFailoverIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/MasterFailoverIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MasterFailoverIT.java
index 49160aa..8ac67d9 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MasterFailoverIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MasterFailoverIT.java
@@ -25,7 +25,7 @@
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
@@ -33,7 +33,7 @@
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
-public class MasterFailoverIT extends AccumuloClusterIT {
+public class MasterFailoverIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MaxOpenIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MaxOpenIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/functional/MaxOpenIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MaxOpenIT.java
index 2b05947..102afaf 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MaxOpenIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MaxOpenIT.java
@@ -32,7 +32,7 @@
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
@@ -45,7 +45,7 @@
  * A functional test that exercises hitting the max open file limit on a tablet server. This test assumes there are one or two tablet servers.
  */
 
-public class MaxOpenIT extends AccumuloClusterIT {
+public class MaxOpenIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -118,7 +118,7 @@
       FunctionalTestUtils.checkRFiles(c, tableName, NUM_TABLETS, NUM_TABLETS, i + 1, i + 1);
     }
 
-    List<Range> ranges = new ArrayList<Range>(NUM_TO_INGEST);
+    List<Range> ranges = new ArrayList<>(NUM_TO_INGEST);
 
     for (int i = 0; i < NUM_TO_INGEST; i++) {
       ranges.add(new Range(TestIngest.generateRow(i, 0)));
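The recurring `new TreeSet<Text>()` → `new TreeSet<>()` changes in these hunks use the Java 7 diamond operator, which lets the compiler infer the constructor's type arguments from the declared variable type. A tiny standalone illustration (names here are arbitrary):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// The diamond operator: type arguments on the right-hand side are inferred
// from the left-hand side, removing the pre-Java-7 repetition.
public class DiamondExample {
  public static void main(String[] args) {
    List<String> ranges = new ArrayList<>(4);  // inferred as ArrayList<String>
    TreeSet<Integer> splits = new TreeSet<>(); // inferred as TreeSet<Integer>
    ranges.add("row1");
    splits.add(42);
    System.out.println(ranges.size() + " " + splits.first());
  }
}
```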
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MergeIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MergeIT.java
similarity index 70%
rename from test/src/test/java/org/apache/accumulo/test/functional/MergeIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MergeIT.java
index 998feaf..8c20673 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MergeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MergeIT.java
@@ -27,20 +27,29 @@
 import java.util.TreeSet;
 
 import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 import org.apache.accumulo.core.client.admin.TimeType;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.Merge;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.server.util.TabletIterator;
+import org.apache.accumulo.server.util.TabletIterator.TabletDeletedException;
 import org.apache.hadoop.io.Text;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.ExpectedException;
 
-public class MergeIT extends AccumuloClusterIT {
+public class MergeIT extends AccumuloClusterHarness {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -48,7 +57,7 @@
   }
 
   SortedSet<Text> splits(String[] points) {
-    SortedSet<Text> result = new TreeSet<Text>();
+    SortedSet<Text> result = new TreeSet<>();
     for (String point : points)
       result.add(new Text(point));
     return result;
@@ -146,14 +155,14 @@
     System.out.println("Running merge test " + table + " " + Arrays.asList(splits) + " " + start + " " + end);
 
     conn.tableOperations().create(table, new NewTableConfiguration().setTimeType(TimeType.LOGICAL));
-    TreeSet<Text> splitSet = new TreeSet<Text>();
+    TreeSet<Text> splitSet = new TreeSet<>();
     for (String split : splits) {
       splitSet.add(new Text(split));
     }
     conn.tableOperations().addSplits(table, splitSet);
 
     BatchWriter bw = conn.createBatchWriter(table, null);
-    HashSet<String> expected = new HashSet<String>();
+    HashSet<String> expected = new HashSet<>();
     for (String row : inserts) {
       Mutation m = new Mutation(row);
       m.put("cf", "cq", row);
@@ -167,7 +176,7 @@
 
     Scanner scanner = conn.createScanner(table, Authorizations.EMPTY);
 
-    HashSet<String> observed = new HashSet<String>();
+    HashSet<String> observed = new HashSet<>();
     for (Entry<Key,Value> entry : scanner) {
       String row = entry.getKey().getRowData().toString();
       if (!observed.add(row)) {
@@ -179,8 +188,8 @@
       throw new Exception("data inconsistency " + table + " " + observed + " != " + expected);
     }
 
-    HashSet<Text> currentSplits = new HashSet<Text>(conn.tableOperations().listSplits(table));
-    HashSet<Text> ess = new HashSet<Text>();
+    HashSet<Text> currentSplits = new HashSet<>(conn.tableOperations().listSplits(table));
+    HashSet<Text> ess = new HashSet<>();
     for (String es : expectedSplits) {
       ess.add(new Text(es));
     }
@@ -191,4 +200,73 @@
 
   }
 
+  @Rule
+  public ExpectedException exception = ExpectedException.none();
+
+  private static class TestTabletIterator extends TabletIterator {
+
+    private final Connector conn;
+    private final String metadataTableName;
+
+    public TestTabletIterator(Connector conn, String metadataTableName) throws Exception {
+      super(conn.createScanner(metadataTableName, Authorizations.EMPTY), MetadataSchema.TabletsSection.getRange(), true, true);
+      this.conn = conn;
+      this.metadataTableName = metadataTableName;
+    }
+
+    @Override
+    protected void resetScanner() {
+      try {
+        Scanner ds = conn.createScanner(metadataTableName, Authorizations.EMPTY);
+        Text tablet = new KeyExtent("0", new Text("m"), null).getMetadataEntry();
+        ds.setRange(new Range(tablet, true, tablet, true));
+
+        Mutation m = new Mutation(tablet);
+
+        BatchWriter bw = conn.createBatchWriter(metadataTableName, new BatchWriterConfig());
+        for (Entry<Key,Value> entry : ds) {
+          Key k = entry.getKey();
+          m.putDelete(k.getColumnFamily(), k.getColumnQualifier(), k.getTimestamp());
+        }
+
+        bw.addMutation(m);
+
+        bw.close();
+
+      } catch (Exception e) {
+        throw new RuntimeException(e);
+      }
+
+      super.resetScanner();
+    }
+
+  }
+
+  // simulate a merge happening while iterating over tablets
+  @Test
+  public void testMerge() throws Exception {
+    // create a fake metadata table
+    String metadataTableName = getUniqueNames(1)[0];
+    getConnector().tableOperations().create(metadataTableName);
+
+    KeyExtent ke1 = new KeyExtent("0", new Text("m"), null);
+    Mutation mut1 = ke1.getPrevRowUpdateMutation();
+    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mut1, new Value("/d1".getBytes()));
+
+    KeyExtent ke2 = new KeyExtent("0", null, null);
+    Mutation mut2 = ke2.getPrevRowUpdateMutation();
+    TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN.put(mut2, new Value("/d2".getBytes()));
+
+    BatchWriter bw1 = getConnector().createBatchWriter(metadataTableName, new BatchWriterConfig());
+    bw1.addMutation(mut1);
+    bw1.addMutation(mut2);
+    bw1.close();
+
+    TestTabletIterator tabIter = new TestTabletIterator(getConnector(), metadataTableName);
+
+    exception.expect(TabletDeletedException.class);
+    while (tabIter.hasNext()) {
+      tabIter.next();
+    }
+  }
 }
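The new `testMerge` case uses JUnit 4's `ExpectedException` rule: the test passes only if iterating the tablets throws `TabletDeletedException` after `resetScanner` deletes a tablet out from under the iterator. A minimal sketch of the assertion logic, with a stand-in exception and iterator (not the real Accumulo classes):

```java
// Plain-Java sketch of what exception.expect(TabletDeletedException.class)
// verifies in testMerge: the iteration must fail with exactly that exception.
public class ExpectedExceptionSketch {
  static class TabletDeletedException extends RuntimeException {
    TabletDeletedException(String msg) { super(msg); }
  }

  // stand-in for TestTabletIterator, whose resetScanner deletes a tablet
  static void iterateTablets() {
    throw new TabletDeletedException("tablet deleted while iterating");
  }

  public static void main(String[] args) {
    boolean sawExpected = false;
    try {
      iterateTablets();
    } catch (TabletDeletedException e) {
      sawExpected = true; // the outcome the ExpectedException rule asserts
    }
    System.out.println("caught expected exception: " + sawExpected);
  }
}
```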
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MetadataIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MetadataIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/functional/MetadataIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MetadataIT.java
index 09e6ecc..883b8dc 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MetadataIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MetadataIT.java
@@ -25,6 +25,7 @@
 import java.util.Set;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.Connector;
@@ -36,8 +37,7 @@
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
@@ -45,8 +45,9 @@
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class MetadataIT extends AccumuloClusterIT {
+public class MetadataIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -70,14 +71,14 @@
     rootScanner.setRange(MetadataSchema.TabletsSection.getRange());
     rootScanner.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
 
-    Set<String> files1 = new HashSet<String>();
+    Set<String> files1 = new HashSet<>();
     for (Entry<Key,Value> entry : rootScanner)
       files1.add(entry.getKey().getColumnQualifier().toString());
 
     c.tableOperations().create(tableNames[1]);
     c.tableOperations().flush(MetadataTable.NAME, null, null, true);
 
-    Set<String> files2 = new HashSet<String>();
+    Set<String> files2 = new HashSet<>();
     for (Entry<Key,Value> entry : rootScanner)
       files2.add(entry.getKey().getColumnQualifier().toString());
 
@@ -87,7 +88,7 @@
 
     c.tableOperations().compact(MetadataTable.NAME, null, null, false, true);
 
-    Set<String> files3 = new HashSet<String>();
+    Set<String> files3 = new HashSet<>();
     for (Entry<Key,Value> entry : rootScanner)
       files3.add(entry.getKey().getColumnQualifier().toString());
 
@@ -99,7 +100,7 @@
   public void mergeMeta() throws Exception {
     Connector c = getConnector();
     String[] names = getUniqueNames(5);
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (String id : "1 2 3 4 5".split(" ")) {
       splits.add(new Text(id));
     }
@@ -111,7 +112,7 @@
     Scanner s = c.createScanner(RootTable.NAME, Authorizations.EMPTY);
     s.setRange(MetadataSchema.DeletesSection.getRange());
     while (Iterators.size(s.iterator()) == 0) {
-      UtilWaitThread.sleep(100);
+      sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     }
     assertEquals(0, c.tableOperations().listSplits(MetadataTable.NAME).size());
   }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MetadataMaxFilesIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MetadataMaxFilesIT.java
similarity index 84%
rename from test/src/test/java/org/apache/accumulo/test/functional/MetadataMaxFilesIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MetadataMaxFilesIT.java
index e6c9a0e..6c4939f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MetadataMaxFilesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MetadataMaxFilesIT.java
@@ -18,11 +18,10 @@
 
 import static org.junit.Assert.assertEquals;
 
-import java.util.HashMap;
-import java.util.Map;
 import java.util.Map.Entry;
 import java.util.SortedSet;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.impl.ClientContext;
@@ -37,7 +36,6 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.util.Admin;
 import org.apache.hadoop.conf.Configuration;
@@ -45,14 +43,15 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class MetadataMaxFilesIT extends ConfigurableMacIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class MetadataMaxFilesIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
-    Map<String,String> siteConfig = new HashMap<String,String>();
-    siteConfig.put(Property.TSERV_MAJC_DELAY.getKey(), "1");
-    siteConfig.put(Property.TSERV_SCAN_MAX_OPENFILES.getKey(), "10");
-    cfg.setSiteConfig(siteConfig);
+    cfg.setProperty(Property.TSERV_MAJC_DELAY, "1");
+    cfg.setProperty(Property.TSERV_SCAN_MAX_OPENFILES, "10");
+    cfg.setProperty(Property.TSERV_ASSIGNMENT_MAXCONCURRENT, "100");
     hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
   }
 
@@ -64,11 +63,13 @@
   @Test
   public void test() throws Exception {
     Connector c = getConnector();
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 0; i < 1000; i++) {
       splits.add(new Text(String.format("%03d", i)));
     }
     c.tableOperations().setProperty(MetadataTable.NAME, Property.TABLE_SPLIT_THRESHOLD.getKey(), "10000");
+    // propagation time
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
     for (int i = 0; i < 5; i++) {
       String tableName = "table" + i;
       log.info("Creating " + tableName);
@@ -79,22 +80,21 @@
       c.tableOperations().flush(MetadataTable.NAME, null, null, true);
       c.tableOperations().flush(RootTable.NAME, null, null, true);
     }
-    UtilWaitThread.sleep(20 * 1000);
     log.info("shutting down");
     assertEquals(0, cluster.exec(Admin.class, "stopAll").waitFor());
     cluster.stop();
     log.info("starting up");
     cluster.start();
 
-    UtilWaitThread.sleep(30 * 1000);
+    Credentials creds = new Credentials("root", new PasswordToken(ROOT_PASSWORD));
 
     while (true) {
       MasterMonitorInfo stats = null;
-      Credentials creds = new Credentials("root", new PasswordToken(ROOT_PASSWORD));
       Client client = null;
       try {
         ClientContext context = new ClientContext(c.getInstance(), creds, getClientConfig());
         client = MasterClient.getConnectionWithRetry(context);
+        log.info("Fetching stats");
         stats = client.getMasterStats(Tracer.traceInfo(), context.rpcCreds());
       } finally {
         if (client != null)
@@ -108,9 +108,10 @@
           tablets += entry.getValue().onlineTablets;
         }
       }
+      log.info("Online tablets " + tablets);
       if (tablets == 5005)
         break;
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
   }
 }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MetadataSplitIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MetadataSplitIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/functional/MetadataSplitIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MetadataSplitIT.java
index 3930cda..58480bc 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MetadataSplitIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MetadataSplitIT.java
@@ -20,16 +20,18 @@
 import static org.junit.Assert.assertTrue;
 
 import java.util.Collections;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
-public class MetadataSplitIT extends ConfigurableMacIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class MetadataSplitIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -50,7 +52,7 @@
       c.tableOperations().create("table" + i);
       c.tableOperations().flush(MetadataTable.NAME, null, null, true);
     }
-    UtilWaitThread.sleep(10 * 1000);
+    sleepUninterruptibly(10, TimeUnit.SECONDS);
     assertTrue(c.tableOperations().listSplits(MetadataTable.NAME).size() > 2);
   }
 }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MonitorLoggingIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MonitorLoggingIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/MonitorLoggingIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MonitorLoggingIT.java
index bf892db..2cf9b84 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MonitorLoggingIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MonitorLoggingIT.java
@@ -35,7 +35,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class MonitorLoggingIT extends ConfigurableMacIT {
+public class MonitorLoggingIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(MonitorLoggingIT.class);
 
   @Override
@@ -43,8 +43,8 @@
     cfg.setNumTservers(1);
     File confDir = cfg.getConfDir();
     try {
-      FileUtils.copyFileToDirectory(new File(MonitorLoggingIT.class.getResource("/conf/generic_logger.xml").toURI()), confDir);
-      FileUtils.copyFileToDirectory(new File(MonitorLoggingIT.class.getResource("/conf/monitor_logger.xml").toURI()), confDir);
+      FileUtils.copyInputStreamToFile(MonitorLoggingIT.class.getResourceAsStream("/conf/generic_logger.xml"), new File(confDir, "generic_logger.xml"));
+      FileUtils.copyInputStreamToFile(MonitorLoggingIT.class.getResourceAsStream("/conf/monitor_logger.xml"), new File(confDir, "monitor_logger.xml"));
     } catch (Exception e) {
       log.error("Failed to copy Log4J XML files to conf dir", e);
     }
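The MonitorLoggingIT change swaps `copyFileToDirectory(new File(resource.toURI()), …)` for `copyInputStreamToFile(getResourceAsStream(…), …)`. Converting a resource URL to a `File` fails when the resource is packaged inside a jar; streaming it works in both cases. A hedged sketch using only the JDK (the resource chosen below is just one guaranteed to exist on every JVM):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Objects;

// Copy a classpath resource to a file by streaming it, which works whether
// the resource lives on disk or inside a jar (a File conversion would not).
public class ResourceCopy {
  public static void main(String[] args) throws IOException {
    Path dest = Files.createTempFile("copied_resource", ".bin");
    // .class resources are always resolvable; a real caller would name its own resource
    try (InputStream in = Objects.requireNonNull(
        ResourceCopy.class.getResourceAsStream("/java/lang/Object.class"), "resource not found")) {
      Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
    }
    System.out.println("copied " + Files.size(dest) + " bytes");
  }
}
```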
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/MonitorSslIT.java b/test/src/main/java/org/apache/accumulo/test/functional/MonitorSslIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/MonitorSslIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/MonitorSslIT.java
index 197de7e..7283c4d 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/MonitorSslIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/MonitorSslIT.java
@@ -47,7 +47,7 @@
  * Check SSL for the Monitor
  *
  */
-public class MonitorSslIT extends ConfigurableMacIT {
+public class MonitorSslIT extends ConfigurableMacBase {
   @BeforeClass
   public static void initHttps() throws NoSuchAlgorithmException, KeyManagementException {
     SSLContext ctx = SSLContext.getInstance("SSL");
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/NativeMapIT.java b/test/src/main/java/org/apache/accumulo/test/functional/NativeMapIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/NativeMapIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/NativeMapIT.java
index 9175379..2d594f5 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/NativeMapIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/NativeMapIT.java
@@ -313,7 +313,7 @@
 
     NativeMap nm = new NativeMap();
 
-    TreeMap<Key,Value> tm = new TreeMap<Key,Value>();
+    TreeMap<Key,Value> tm = new TreeMap<>();
 
     tm.put(new Key(new Text("fo")), new Value(new byte[] {'0'}));
     tm.put(new Key(new Text("foo")), new Value(new byte[] {'1'}));
@@ -451,14 +451,14 @@
     // generate random data
     Random r = new Random(75);
 
-    ArrayList<Pair<Key,Value>> testData = new ArrayList<Pair<Key,Value>>();
+    ArrayList<Pair<Key,Value>> testData = new ArrayList<>();
 
     for (int i = 0; i < 100000; i++) {
 
       Key k = new Key(rlrf(r, 97), rlrf(r, 13), rlrf(r, 31), rlrf(r, 11), (r.nextLong() & 0x7fffffffffffffffl), false, false);
       Value v = new Value(rlrf(r, 511));
 
-      testData.add(new Pair<Key,Value>(k, v));
+      testData.add(new Pair<>(k, v));
     }
 
     // insert unsorted data
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java b/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
index 4aea354..2fc256b 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
@@ -52,7 +52,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Assume;
 import org.junit.Before;
@@ -62,7 +62,7 @@
 
 // This test verifies the default permissions so a clean instance must be used. A shared instance might
 // not be representative of a fresh installation.
-public class PermissionsIT extends AccumuloClusterIT {
+public class PermissionsIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(PermissionsIT.class);
 
   @Override
@@ -126,7 +126,7 @@
   }
 
   static Map<String,String> map(Iterable<Entry<String,String>> i) {
-    Map<String,String> result = new HashMap<String,String>();
+    Map<String,String> result = new HashMap<>();
     for (Entry<String,String> e : i) {
       result.put(e.getKey(), e.getValue());
     }
@@ -609,8 +609,8 @@
         // test for bulk import permission would go here
         break;
       case ALTER_TABLE:
-        Map<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
-        groups.put("tgroup", new HashSet<Text>(Arrays.asList(new Text("t1"), new Text("t2"))));
+        Map<String,Set<Text>> groups = new HashMap<>();
+        groups.put("tgroup", new HashSet<>(Arrays.asList(new Text("t1"), new Text("t2"))));
         try {
           test_user_conn.tableOperations().setLocalityGroups(tableName, groups);
           throw new IllegalStateException("User should not be able to set locality groups");
@@ -668,8 +668,8 @@
         // test for bulk import permission would go here
         break;
       case ALTER_TABLE:
-        Map<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
-        groups.put("tgroup", new HashSet<Text>(Arrays.asList(new Text("t1"), new Text("t2"))));
+        Map<String,Set<Text>> groups = new HashMap<>();
+        groups.put("tgroup", new HashSet<>(Arrays.asList(new Text("t1"), new Text("t2"))));
         break;
       case DROP_TABLE:
         test_user_conn.tableOperations().delete(tableName);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ReadWriteIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ReadWriteIT.java
similarity index 89%
rename from test/src/test/java/org/apache/accumulo/test/functional/ReadWriteIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ReadWriteIT.java
index a49d43c..30200ec 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ReadWriteIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ReadWriteIT.java
@@ -84,8 +84,9 @@
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
 import org.apache.accumulo.fate.zookeeper.ZooReader;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.TestMultiTableIngest;
 import org.apache.accumulo.test.VerifyIngest;
@@ -98,10 +99,15 @@
 
 import com.google.common.collect.Iterators;
 
-public class ReadWriteIT extends AccumuloClusterIT {
+public class ReadWriteIT extends AccumuloClusterHarness {
+  @Override
+  public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
+  }
+
   private static final Logger log = LoggerFactory.getLogger(ReadWriteIT.class);
 
-  static final int ROWS = 200000;
+  static final int ROWS = 100000;
   static final int COLS = 1;
   static final String COLF = "colf";
 
@@ -393,51 +399,51 @@
     final Connector connector = getConnector();
     final String tableName = getUniqueNames(1)[0];
     connector.tableOperations().create(tableName);
-    Map<String,Set<Text>> groups = new TreeMap<String,Set<Text>>();
+    Map<String,Set<Text>> groups = new TreeMap<>();
     groups.put("g1", Collections.singleton(t("colf")));
     connector.tableOperations().setLocalityGroups(tableName, groups);
     ingest(connector, getCluster().getClientConfig(), getAdminPrincipal(), 2000, 1, 50, 0, tableName);
     verify(connector, getCluster().getClientConfig(), getAdminPrincipal(), 2000, 1, 50, 0, tableName);
     connector.tableOperations().flush(tableName, null, null, true);
-    BatchScanner bscanner = connector.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 1);
-    String tableId = connector.tableOperations().tableIdMap().get(tableName);
-    bscanner.setRanges(Collections.singletonList(new Range(new Text(tableId + ";"), new Text(tableId + "<"))));
-    bscanner.fetchColumnFamily(DataFileColumnFamily.NAME);
-    boolean foundFile = false;
-    for (Entry<Key,Value> entry : bscanner) {
-      foundFile = true;
-      ByteArrayOutputStream baos = new ByteArrayOutputStream();
-      PrintStream newOut = new PrintStream(baos);
-      PrintStream oldOut = System.out;
-      try {
-        System.setOut(newOut);
-        List<String> args = new ArrayList<>();
-        args.add(entry.getKey().getColumnQualifier().toString());
-        if (ClusterType.STANDALONE == getClusterType() && cluster.getClientConfig().getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
-          args.add("--config");
-          StandaloneAccumuloCluster sac = (StandaloneAccumuloCluster) cluster;
-          String hadoopConfDir = sac.getHadoopConfDir();
-          args.add(new Path(hadoopConfDir, "core-site.xml").toString());
-          args.add(new Path(hadoopConfDir, "hdfs-site.xml").toString());
+    try (BatchScanner bscanner = connector.createBatchScanner(MetadataTable.NAME, Authorizations.EMPTY, 1)) {
+      String tableId = connector.tableOperations().tableIdMap().get(tableName);
+      bscanner.setRanges(Collections.singletonList(new Range(new Text(tableId + ";"), new Text(tableId + "<"))));
+      bscanner.fetchColumnFamily(DataFileColumnFamily.NAME);
+      boolean foundFile = false;
+      for (Entry<Key,Value> entry : bscanner) {
+        foundFile = true;
+        ByteArrayOutputStream baos = new ByteArrayOutputStream();
+        PrintStream newOut = new PrintStream(baos);
+        PrintStream oldOut = System.out;
+        try {
+          System.setOut(newOut);
+          List<String> args = new ArrayList<>();
+          args.add(entry.getKey().getColumnQualifier().toString());
+          if (ClusterType.STANDALONE == getClusterType() && cluster.getClientConfig().getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
+            args.add("--config");
+            StandaloneAccumuloCluster sac = (StandaloneAccumuloCluster) cluster;
+            String hadoopConfDir = sac.getHadoopConfDir();
+            args.add(new Path(hadoopConfDir, "core-site.xml").toString());
+            args.add(new Path(hadoopConfDir, "hdfs-site.xml").toString());
+          }
+          log.info("Invoking PrintInfo with " + args);
+          PrintInfo.main(args.toArray(new String[args.size()]));
+          newOut.flush();
+          String stdout = baos.toString();
+          assertTrue(stdout.contains("Locality group           : g1"));
+          assertTrue(stdout.contains("families        : [colf]"));
+        } finally {
+          newOut.close();
+          System.setOut(oldOut);
         }
-        log.info("Invoking PrintInfo with " + args);
-        PrintInfo.main(args.toArray(new String[args.size()]));
-        newOut.flush();
-        String stdout = baos.toString();
-        assertTrue(stdout.contains("Locality group         : g1"));
-        assertTrue(stdout.contains("families      : [colf]"));
-      } finally {
-        newOut.close();
-        System.setOut(oldOut);
       }
+      assertTrue(foundFile);
     }
-    bscanner.close();
-    assertTrue(foundFile);
   }
 
   @Test
   public void localityGroupChange() throws Exception {
-    // Make changes to locality groups and ensure nothing is lostssh
+    // Make changes to locality groups and ensure nothing is lost
     final Connector connector = getConnector();
     String table = getUniqueNames(1)[0];
     TableOperations to = connector.tableOperations();
@@ -467,11 +473,11 @@
   }
 
   private Map<String,Set<Text>> getGroups(String cfg) {
-    Map<String,Set<Text>> groups = new TreeMap<String,Set<Text>>();
+    Map<String,Set<Text>> groups = new TreeMap<>();
     if (cfg != null) {
       for (String group : cfg.split(";")) {
         String[] parts = group.split(":");
-        Set<Text> cols = new HashSet<Text>();
+        Set<Text> cols = new HashSet<>();
         for (String col : parts[1].split(",")) {
           cols.add(t(col));
         }
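The ReadWriteIT hunk above wraps the metadata `BatchScanner` in try-with-resources so the scanner is closed even when an assertion fails mid-loop, replacing the manual `bscanner.close()` at the end. A minimal sketch of the same pattern, using a hypothetical `AutoCloseable` in place of `BatchScanner`:

```java
// Sketch of the try-with-resources refactor applied to BatchScanner in
// ReadWriteIT: close() runs automatically, even if the body throws.
public class TryWithResources {
  static boolean closed = false;

  // Hypothetical stand-in for BatchScanner, which implements AutoCloseable.
  static class FakeScanner implements AutoCloseable {
    @Override
    public void close() {
      closed = true;
    }
  }

  public static void main(String[] args) {
    try (FakeScanner scanner = new FakeScanner()) {
      // Simulate an assertion failing partway through the scan loop.
      throw new RuntimeException("assertion failed mid-scan");
    } catch (RuntimeException e) {
      // close() already ran before this handler was reached.
    }
    System.out.println(closed); // prints "true"
  }
}
```

With the old form, an `assertTrue` failure inside the loop would skip the trailing `close()` and leak the scanner's sessions.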
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/RecoveryWithEmptyRFileIT.java b/test/src/main/java/org/apache/accumulo/test/functional/RecoveryWithEmptyRFileIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/RecoveryWithEmptyRFileIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/RecoveryWithEmptyRFileIT.java
index 7793627..0408aa0 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/RecoveryWithEmptyRFileIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/RecoveryWithEmptyRFileIT.java
@@ -32,7 +32,6 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
@@ -46,7 +45,7 @@
  * This test should read the file location from the test harness and that file should be on the local filesystem. If you want to take a paranoid approach, just
  * make sure the test user doesn't have write access to the HDFS files of any colocated live Accumulo instance or any important local filesystem files.
  */
-public class RecoveryWithEmptyRFileIT extends ConfigurableMacIT {
+public class RecoveryWithEmptyRFileIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(RecoveryWithEmptyRFileIT.class);
 
   static final int ROWS = 200000;
@@ -105,7 +104,6 @@
     }
     scan.close();
     assertEquals(0l, cells);
-    FileSystem.closeAll();
   }
 
 }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/RegexGroupBalanceIT.java b/test/src/main/java/org/apache/accumulo/test/functional/RegexGroupBalanceIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/RegexGroupBalanceIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/RegexGroupBalanceIT.java
index e32d9b1..a8c5bca 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/RegexGroupBalanceIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/RegexGroupBalanceIT.java
@@ -43,7 +43,7 @@
 import com.google.common.collect.HashBasedTable;
 import com.google.common.collect.Table;
 
-public class RegexGroupBalanceIT extends ConfigurableMacIT {
+public class RegexGroupBalanceIT extends ConfigurableMacBase {
 
   @Override
   public void beforeClusterStart(MiniAccumuloConfigImpl cfg) throws Exception {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/RenameIT.java b/test/src/main/java/org/apache/accumulo/test/functional/RenameIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/RenameIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/RenameIT.java
index 6befd7e..0c22196 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/RenameIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/RenameIT.java
@@ -21,12 +21,12 @@
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
 import org.junit.Test;
 
-public class RenameIT extends AccumuloClusterIT {
+public class RenameIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/RestartIT.java b/test/src/main/java/org/apache/accumulo/test/functional/RestartIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/functional/RestartIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/RestartIT.java
index fba8b6d..38d388d 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/RestartIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/RestartIT.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
 
@@ -40,12 +41,11 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooLock;
 import org.apache.accumulo.fate.zookeeper.ZooReader;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
@@ -59,7 +59,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class RestartIT extends AccumuloClusterIT {
+public class RestartIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(RestartIT.class);
 
   @Override
@@ -189,7 +189,7 @@
     } while (null != masterLockData);
 
     cluster.start();
-    UtilWaitThread.sleep(5);
+    sleepUninterruptibly(5, TimeUnit.MILLISECONDS);
     control.stopAllServers(ServerType.MASTER);
 
     masterLockData = new byte[0];
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/RestartStressIT.java b/test/src/main/java/org/apache/accumulo/test/functional/RestartStressIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/RestartStressIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/RestartStressIT.java
index af2eee1..1fb56ef 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/RestartStressIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/RestartStressIT.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
 
@@ -34,8 +35,7 @@
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
@@ -48,7 +48,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class RestartStressIT extends AccumuloClusterIT {
+public class RestartStressIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(RestartStressIT.class);
 
   @Override
@@ -131,7 +131,7 @@
     });
 
     for (int i = 0; i < 2; i++) {
-      UtilWaitThread.sleep(10 * 1000);
+      sleepUninterruptibly(10, TimeUnit.SECONDS);
       control.stopAllServers(ServerType.TABLET_SERVER);
       control.startAllServers(ServerType.TABLET_SERVER);
     }
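The RestartIT and RestartStressIT hunks replace `UtilWaitThread.sleep` with Guava's `Uninterruptibles.sleepUninterruptibly`, which keeps sleeping for the remaining time across interrupts and restores the thread's interrupt flag afterward. A JDK-only sketch of that behavior (modeled on Guava's documented contract, not copied from its implementation):

```java
import java.util.concurrent.TimeUnit;

// Sketch of what Guava's sleepUninterruptibly does: retry the sleep for the
// remaining time across interrupts, then restore the interrupt flag.
public class SleepUninterruptiblySketch {

  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, sleep the remainder
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore the flag for callers
      }
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt(); // pending interrupt before sleeping
    long start = System.nanoTime();
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println(elapsedMs >= 50);      // slept the full duration
    System.out.println(Thread.interrupted()); // interrupt flag was restored
  }
}
```

This is why the swap is safe in these tests: the sleep durations are honored even if the test harness interrupts the thread, and interruption is not silently swallowed.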
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/RowDeleteIT.java b/test/src/main/java/org/apache/accumulo/test/functional/RowDeleteIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/RowDeleteIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/RowDeleteIT.java
index 06039af..74d76b8 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/RowDeleteIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/RowDeleteIT.java
@@ -35,7 +35,7 @@
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.user.RowDeletingIterator;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
@@ -43,7 +43,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class RowDeleteIT extends AccumuloClusterIT {
+public class RowDeleteIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -62,7 +62,7 @@
     Connector c = getConnector();
     String tableName = getUniqueNames(1)[0];
     c.tableOperations().create(tableName);
-    Map<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
+    Map<String,Set<Text>> groups = new HashMap<>();
     groups.put("lg1", Collections.singleton(new Text("foo")));
     groups.put("dg", Collections.<Text> emptySet());
     c.tableOperations().setLocalityGroups(tableName, groups);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ScanIdIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ScanIdIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/ScanIdIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ScanIdIT.java
index b54d41a..748423e 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ScanIdIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ScanIdIT.java
@@ -33,6 +33,7 @@
 import java.util.concurrent.CountDownLatch;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.accumulo.core.client.AccumuloException;
@@ -52,28 +53,32 @@
 import org.apache.accumulo.core.iterators.IteratorUtil;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  * ACCUMULO-2641 Integration test. ACCUMULO-2641 Adds scan id to thrift protocol so that {@code org.apache.accumulo.core.client.admin.ActiveScan.getScanid()}
  * returns a unique scan id.
+ *
  * <p>
- * <p/>
  * The test uses the Minicluster and the {@code org.apache.accumulo.test.functional.SlowIterator} to create multiple scan sessions. The test exercises multiple
  * tablet servers with splits and multiple ranges to force the scans to occur across multiple tablet servers for completeness.
- * <p/>
+ *
+ * <p>
  * This patch modified thrift; the TraceRepoDeserializationTest test seems to fail unless the following is added:
- * <p/>
+ *
+ * <p>
  * private static final long serialVersionUID = -4659975753252858243l;
- * <p/>
+ *
+ * <p>
  * back into org.apache.accumulo.trace.thrift.TInfo until that test signature is regenerated.
  */
-public class ScanIdIT extends AccumuloClusterIT {
+public class ScanIdIT extends AccumuloClusterHarness {
 
   private static final Logger log = LoggerFactory.getLogger(ScanIdIT.class);
 
@@ -87,7 +92,7 @@
 
   private static final AtomicBoolean testInProgress = new AtomicBoolean(true);
 
-  private static final Map<Integer,Value> resultsByWorker = new ConcurrentHashMap<Integer,Value>();
+  private static final Map<Integer,Value> resultsByWorker = new ConcurrentHashMap<>();
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -127,7 +132,7 @@
 
       if (resultsByWorker.size() < NUM_SCANNERS) {
         log.trace("Results reported {}", resultsByWorker.size());
-        UtilWaitThread.sleep(750);
+        sleepUninterruptibly(750, TimeUnit.MILLISECONDS);
       } else {
         // each worker has reported at least one result.
         testInProgress.set(false);
@@ -135,13 +140,13 @@
         log.debug("Final result count {}", resultsByWorker.size());
 
         // delay to allow scanners to react to end of test and cleanly close.
-        UtilWaitThread.sleep(1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
 
     }
 
      // all scanners have reported at least 1 result, so check for unique scan ids.
-    Set<Long> scanIds = new HashSet<Long>();
+    Set<Long> scanIds = new HashSet<>();
 
     List<String> tservers = conn.instanceOperations().getTabletServers();
 
@@ -180,7 +185,7 @@
 
   /**
    * Runs scanner in separate thread to allow multiple scanners to execute in parallel.
-   * <p/>
+   * <p>
    * The thread run method is terminated when the testInProgress flag is set to false.
    */
   private static class ScannerThread implements Runnable {
@@ -285,7 +290,7 @@
 
       conn.tableOperations().offline(tableName, true);
 
-      UtilWaitThread.sleep(2000);
+      sleepUninterruptibly(2, TimeUnit.SECONDS);
       conn.tableOperations().online(tableName, true);
 
       for (Text split : conn.tableOperations().listSplits(tableName)) {
@@ -309,7 +314,7 @@
    */
   private SortedSet<Text> createSplits() {
 
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
 
     for (int split = 0; split < 10; split++) {
       splits.add(new Text(Integer.toString(split)));
@@ -320,7 +325,7 @@
 
   /**
    * Generate some sample data using random row id to distribute across splits.
-   * <p/>
+   * <p>
    * The primary goal is to determine that each scanner is assigned a unique scan id. This test does check that the count value for fam1 increases if a scanner
   * reads multiple values, but this is a secondary consideration for this test, included for completeness.
    *
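The ScanIdIT changes above keep the coordination pattern where worker threads report into a shared `ConcurrentHashMap`, the driver polls until every worker has reported, and uniqueness is checked by collecting ids into a `Set`. A JDK-only sketch of that pattern (the worker ids and fake scan ids here are illustrative, not Accumulo values):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the coordination pattern in ScanIdIT: workers report into a
// ConcurrentHashMap, the driver polls until all have reported, then a Set
// is used to verify the reported ids are unique.
public class WorkerReportSketch {
  static final int NUM_WORKERS = 4;
  static final ConcurrentHashMap<Integer,Long> resultsByWorker = new ConcurrentHashMap<>();

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(NUM_WORKERS);
    for (int worker = 0; worker < NUM_WORKERS; worker++) {
      final int id = worker;
      // Each worker reports a (hypothetical) scan id; here just a distinct long.
      pool.submit(() -> resultsByWorker.put(id, 1000L + id));
    }
    // Poll until each worker has reported at least one result.
    while (resultsByWorker.size() < NUM_WORKERS) {
      TimeUnit.MILLISECONDS.sleep(10);
    }
    pool.shutdown();
    // A Set drops duplicates, so equal sizes means all ids were unique.
    Set<Long> ids = ConcurrentHashMap.newKeySet();
    ids.addAll(resultsByWorker.values());
    System.out.println(ids.size() == NUM_WORKERS); // prints "true"
  }
}
```

In the real test the polling loop is where `sleepUninterruptibly(750, TimeUnit.MILLISECONDS)` replaces the old `UtilWaitThread.sleep(750)` call.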
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ScanIteratorIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ScanIteratorIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/ScanIteratorIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ScanIteratorIT.java
index 74c4fd4..a274dec 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ScanIteratorIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ScanIteratorIT.java
@@ -45,8 +45,8 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.harness.AccumuloClusterIT;
 import org.apache.accumulo.test.functional.AuthsIterator;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -56,7 +56,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class ScanIteratorIT extends AccumuloClusterIT {
+public class ScanIteratorIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(ScanIteratorIT.class);
 
   @Override
@@ -135,7 +135,7 @@
     setupIter(bscanner);
     verify(bscanner, 1, 999);
 
-    ArrayList<Range> ranges = new ArrayList<Range>();
+    ArrayList<Range> ranges = new ArrayList<>();
     ranges.add(new Range(new Text(String.format("%06d", 1))));
     ranges.add(new Range(new Text(String.format("%06d", 6)), new Text(String.format("%06d", 16))));
     ranges.add(new Range(new Text(String.format("%06d", 20))));
@@ -144,8 +144,8 @@
     ranges.add(new Range(new Text(String.format("%06d", 501)), new Text(String.format("%06d", 504))));
     ranges.add(new Range(new Text(String.format("%06d", 998)), new Text(String.format("%06d", 1000))));
 
-    HashSet<Integer> got = new HashSet<Integer>();
-    HashSet<Integer> expected = new HashSet<Integer>();
+    HashSet<Integer> got = new HashSet<>();
+    HashSet<Integer> expected = new HashSet<>();
     for (int i : new int[] {1, 7, 9, 11, 13, 15, 23, 57, 59, 61, 501, 503, 999}) {
       expected.add(i);
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ScanRangeIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ScanRangeIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/functional/ScanRangeIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ScanRangeIT.java
index 3ce1eb1..7ab96c4 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ScanRangeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ScanRangeIT.java
@@ -30,11 +30,11 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class ScanRangeIT extends AccumuloClusterIT {
+public class ScanRangeIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -54,7 +54,7 @@
     c.tableOperations().create(table1);
     String table2 = tableNames[1];
     c.tableOperations().create(table2);
-    TreeSet<Text> splitRows = new TreeSet<Text>();
+    TreeSet<Text> splitRows = new TreeSet<>();
     int splits = 3;
     for (int i = (ROW_LIMIT / splits); i < ROW_LIMIT; i += (ROW_LIMIT / splits))
       splitRows.add(createRow(i));
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ScanSessionTimeOutIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ScanSessionTimeOutIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/ScanSessionTimeOutIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ScanSessionTimeOutIT.java
index cb5bc18..0074eac 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ScanSessionTimeOutIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ScanSessionTimeOutIT.java
@@ -21,6 +21,7 @@
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -33,8 +34,7 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
@@ -44,7 +44,9 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class ScanSessionTimeOutIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class ScanSessionTimeOutIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(ScanSessionTimeOutIT.class);
 
   @Override
@@ -113,7 +115,7 @@
     verify(iter, 0, 200);
 
     // sleep three times the session timeout
-    UtilWaitThread.sleep(9000);
+    sleepUninterruptibly(9, TimeUnit.SECONDS);
 
     verify(iter, 200, 100000);
 
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/ScannerContextIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ScannerContextIT.java
new file mode 100644
index 0000000..91c066f
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ScannerContextIT.java
@@ -0,0 +1,341 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.functional;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.Map.Entry;
+
+import org.apache.accumulo.core.client.BatchScanner;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.fate.util.UtilWaitThread;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.hamcrest.CoreMatchers;
+import org.junit.Assume;
+import org.junit.Before;
+import org.junit.Test;
+
+public class ScannerContextIT extends AccumuloClusterHarness {
+
+  private static final String CONTEXT = ScannerContextIT.class.getSimpleName();
+  private static final String CONTEXT_PROPERTY = Property.VFS_CONTEXT_CLASSPATH_PROPERTY + CONTEXT;
+  private static final String CONTEXT_DIR = "file:///tmp";
+  private static final String CONTEXT_CLASSPATH = CONTEXT_DIR + "/Test.jar";
+  private static final int ITERATIONS = 10;
+  private static final long WAIT = 7000;
+
+  private FileSystem fs;
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 2 * 60;
+  }
+
+  @Before
+  public void checkCluster() throws Exception {
+    Assume.assumeThat(getClusterType(), CoreMatchers.is(ClusterType.MINI));
+    MiniAccumuloClusterImpl.class.cast(getCluster());
+    fs = FileSystem.get(CachedConfiguration.getInstance());
+  }
+
+  @Test
+  public void test() throws Exception {
+    // Copy the TestIterators jar to tmp
+    Path baseDir = new Path(System.getProperty("user.dir"));
+    Path targetDir = new Path(baseDir, "target");
+    Path jarPath = new Path(targetDir, "TestIterators-tests.jar");
+    Path dstPath = new Path(CONTEXT_DIR + "/Test.jar");
+    fs.copyFromLocalFile(jarPath, dstPath);
+    // Sleep to ensure jar change gets picked up
+    UtilWaitThread.sleep(WAIT);
+
+    try {
+      Connector c = getConnector();
+      // Set the classloader context property on the table to point to the TestIterators jar file.
+      c.instanceOperations().setProperty(CONTEXT_PROPERTY, CONTEXT_CLASSPATH);
+
+      // Insert rows with the word "Test" in the value.
+      String tableName = getUniqueNames(1)[0];
+      c.tableOperations().create(tableName);
+      BatchWriter bw = c.createBatchWriter(tableName, new BatchWriterConfig());
+      for (int i = 0; i < ITERATIONS; i++) {
+        Mutation m = new Mutation("row" + i);
+        m.put("cf", "col1", "Test");
+        bw.addMutation(m);
+      }
+      bw.close();
+      // Ensure that we can get the data back
+      scanCheck(c, tableName, null, null, "Test");
+      batchCheck(c, tableName, null, null, "Test");
+
+      // This iterator is in the TestIterators jar file
+      IteratorSetting cfg = new IteratorSetting(21, "reverse", "org.apache.accumulo.test.functional.ValueReversingIterator");
+
+      // Check that ValueReversingIterator is not already on the classpath by not setting the context. This should fail.
+      try {
+        scanCheck(c, tableName, cfg, null, "tseT");
+        fail("This should have failed because context was not set");
+      } catch (Exception e) {
+        // Do nothing, this should fail as the classloader context is not set.
+      }
+      try {
+        batchCheck(c, tableName, cfg, null, "tseT");
+        fail("This should have failed because context was not set");
+      } catch (Exception e) {
+        // Expected: the iterator class cannot be loaded because no classloader context is set.
+      }
+
+      // Ensure that the value is reversed using the iterator config and classloader context
+      scanCheck(c, tableName, cfg, CONTEXT, "tseT");
+      batchCheck(c, tableName, cfg, CONTEXT, "tseT");
+    } finally {
+      // Delete file in tmp
+      fs.delete(dstPath, true);
+    }
+  }
+
+  @Test
+  public void testScanContextOverridesTableContext() throws Exception {
+    // Copy the TestIterators jar to tmp
+    Path baseDir = new Path(System.getProperty("user.dir"));
+    Path targetDir = new Path(baseDir, "target");
+    Path jarPath = new Path(targetDir, "TestIterators-tests.jar");
+    Path dstPath = new Path(CONTEXT_DIR + "/Test.jar");
+    fs.copyFromLocalFile(jarPath, dstPath);
+    // Sleep to ensure jar change gets picked up
+    UtilWaitThread.sleep(WAIT);
+
+    try {
+      Connector c = getConnector();
+      // Create two contexts, FOO and ScanContextIT. The FOO context points to a jar that
+      // does not exist; the ScanContextIT context points to the TestIterators jar.
+      String tableContext = "FOO";
+      String tableContextProperty = Property.VFS_CONTEXT_CLASSPATH_PROPERTY + tableContext;
+      String tableContextDir = "file:///tmp";
+      String tableContextClasspath = tableContextDir + "/TestFoo.jar";
+      // Define both contexts
+      c.instanceOperations().setProperty(tableContextProperty, tableContextClasspath);
+      c.instanceOperations().setProperty(CONTEXT_PROPERTY, CONTEXT_CLASSPATH);
+
+      String tableName = getUniqueNames(1)[0];
+      c.tableOperations().create(tableName);
+      // Set the FOO context on the table
+      c.tableOperations().setProperty(tableName, Property.TABLE_CLASSPATH.getKey(), tableContext);
+      BatchWriter bw = c.createBatchWriter(tableName, new BatchWriterConfig());
+      for (int i = 0; i < ITERATIONS; i++) {
+        Mutation m = new Mutation("row" + i);
+        m.put("cf", "col1", "Test");
+        bw.addMutation(m);
+      }
+      bw.close();
+      scanCheck(c, tableName, null, null, "Test");
+      batchCheck(c, tableName, null, null, "Test");
+      // This iterator is in the TestIterators jar file
+      IteratorSetting cfg = new IteratorSetting(21, "reverse", "org.apache.accumulo.test.functional.ValueReversingIterator");
+
+      // Verify that ValueReversingIterator is not already on the classpath by scanning without the context; the scan should fail.
+      try {
+        scanCheck(c, tableName, cfg, null, "tseT");
+        fail("This should have failed because context was not set");
+      } catch (Exception e) {
+        // Expected: the iterator class cannot be loaded because no classloader context is set.
+      }
+      try {
+        batchCheck(c, tableName, cfg, null, "tseT");
+        fail("This should have failed because context was not set");
+      } catch (Exception e) {
+        // Expected: the iterator class cannot be loaded because no classloader context is set.
+      }
+
+      // Ensure that the value is reversed using the iterator config and classloader context
+      scanCheck(c, tableName, cfg, CONTEXT, "tseT");
+      batchCheck(c, tableName, cfg, CONTEXT, "tseT");
+    } finally {
+      // Delete file in tmp
+      fs.delete(dstPath, true);
+    }
+
+  }
+
+  @Test
+  public void testOneScannerDoesntInterfereWithAnother() throws Exception {
+    // Copy the TestIterators jar to tmp
+    Path baseDir = new Path(System.getProperty("user.dir"));
+    Path targetDir = new Path(baseDir, "target");
+    Path jarPath = new Path(targetDir, "TestIterators-tests.jar");
+    Path dstPath = new Path(CONTEXT_DIR + "/Test.jar");
+    fs.copyFromLocalFile(jarPath, dstPath);
+    // Sleep to ensure jar change gets picked up
+    UtilWaitThread.sleep(WAIT);
+
+    try {
+      Connector c = getConnector();
+      // Define the classloader context at the instance level, pointing at the TestIterators jar file.
+      c.instanceOperations().setProperty(CONTEXT_PROPERTY, CONTEXT_CLASSPATH);
+
+      // Insert rows with the word "Test" in the value.
+      String tableName = getUniqueNames(1)[0];
+      c.tableOperations().create(tableName);
+      BatchWriter bw = c.createBatchWriter(tableName, new BatchWriterConfig());
+      for (int i = 0; i < ITERATIONS; i++) {
+        Mutation m = new Mutation("row" + i);
+        m.put("cf", "col1", "Test");
+        bw.addMutation(m);
+      }
+      bw.close();
+
+      Scanner one = c.createScanner(tableName, Authorizations.EMPTY);
+
+      Scanner two = c.createScanner(tableName, Authorizations.EMPTY);
+
+      IteratorSetting cfg = new IteratorSetting(21, "reverse", "org.apache.accumulo.test.functional.ValueReversingIterator");
+      one.addScanIterator(cfg);
+      one.setClassLoaderContext(CONTEXT);
+
+      Iterator<Entry<Key,Value>> iterator = one.iterator();
+      for (int i = 0; i < ITERATIONS; i++) {
+        assertTrue(iterator.hasNext());
+        Entry<Key,Value> next = iterator.next();
+        assertEquals("tseT", next.getValue().toString());
+      }
+
+      Iterator<Entry<Key,Value>> iterator2 = two.iterator();
+      for (int i = 0; i < ITERATIONS; i++) {
+        assertTrue(iterator2.hasNext());
+        Entry<Key,Value> next = iterator2.next();
+        assertEquals("Test", next.getValue().toString());
+      }
+
+    } finally {
+      // Delete file in tmp
+      fs.delete(dstPath, true);
+    }
+  }
+
+  @Test
+  public void testClearContext() throws Exception {
+    // Copy the TestIterators jar to tmp
+    Path baseDir = new Path(System.getProperty("user.dir"));
+    Path targetDir = new Path(baseDir, "target");
+    Path jarPath = new Path(targetDir, "TestIterators-tests.jar");
+    Path dstPath = new Path(CONTEXT_DIR + "/Test.jar");
+    fs.copyFromLocalFile(jarPath, dstPath);
+    // Sleep to ensure jar change gets picked up
+    UtilWaitThread.sleep(WAIT);
+
+    try {
+      Connector c = getConnector();
+      // Define the classloader context at the instance level, pointing at the TestIterators jar file.
+      c.instanceOperations().setProperty(CONTEXT_PROPERTY, CONTEXT_CLASSPATH);
+
+      // Insert rows with the word "Test" in the value.
+      String tableName = getUniqueNames(1)[0];
+      c.tableOperations().create(tableName);
+      BatchWriter bw = c.createBatchWriter(tableName, new BatchWriterConfig());
+      for (int i = 0; i < ITERATIONS; i++) {
+        Mutation m = new Mutation("row" + i);
+        m.put("cf", "col1", "Test");
+        bw.addMutation(m);
+      }
+      bw.close();
+
+      Scanner one = c.createScanner(tableName, Authorizations.EMPTY);
+      IteratorSetting cfg = new IteratorSetting(21, "reverse", "org.apache.accumulo.test.functional.ValueReversingIterator");
+      one.addScanIterator(cfg);
+      one.setClassLoaderContext(CONTEXT);
+
+      Iterator<Entry<Key,Value>> iterator = one.iterator();
+      for (int i = 0; i < ITERATIONS; i++) {
+        assertTrue(iterator.hasNext());
+        Entry<Key,Value> next = iterator.next();
+        assertEquals("tseT", next.getValue().toString());
+      }
+
+      one.removeScanIterator("reverse");
+      one.clearClassLoaderContext();
+      iterator = one.iterator();
+      for (int i = 0; i < ITERATIONS; i++) {
+        assertTrue(iterator.hasNext());
+        Entry<Key,Value> next = iterator.next();
+        assertEquals("Test", next.getValue().toString());
+      }
+
+    } finally {
+      // Delete file in tmp
+      fs.delete(dstPath, true);
+    }
+  }
+
+  private void scanCheck(Connector c, String tableName, IteratorSetting cfg, String context, String expected) throws Exception {
+    Scanner bs = c.createScanner(tableName, Authorizations.EMPTY);
+    if (null != context) {
+      bs.setClassLoaderContext(context);
+    }
+    if (null != cfg) {
+      bs.addScanIterator(cfg);
+    }
+    Iterator<Entry<Key,Value>> iterator = bs.iterator();
+    for (int i = 0; i < ITERATIONS; i++) {
+      assertTrue(iterator.hasNext());
+      Entry<Key,Value> next = iterator.next();
+      assertEquals(expected, next.getValue().toString());
+    }
+    assertFalse(iterator.hasNext());
+  }
+
+  private void batchCheck(Connector c, String tableName, IteratorSetting cfg, String context, String expected) throws Exception {
+    BatchScanner bs = c.createBatchScanner(tableName, Authorizations.EMPTY, 1);
+    bs.setRanges(Collections.singleton(new Range()));
+    try {
+      if (null != context) {
+        bs.setClassLoaderContext(context);
+      }
+      if (null != cfg) {
+        bs.addScanIterator(cfg);
+      }
+      Iterator<Entry<Key,Value>> iterator = bs.iterator();
+      for (int i = 0; i < ITERATIONS; i++) {
+        assertTrue(iterator.hasNext());
+        Entry<Key,Value> next = iterator.next();
+        assertEquals(expected, next.getValue().toString());
+      }
+      assertFalse(iterator.hasNext());
+    } finally {
+      bs.close();
+    }
+  }
+
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ScannerIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ScannerIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/ScannerIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ScannerIT.java
index 9e90468..340a58e 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ScannerIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ScannerIT.java
@@ -31,7 +31,7 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.fate.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -40,7 +40,7 @@
 /**
  *
  */
-public class ScannerIT extends AccumuloClusterIT {
+public class ScannerIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ServerSideErrorIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ServerSideErrorIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/ServerSideErrorIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ServerSideErrorIT.java
index d99d33a..334cf1c 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ServerSideErrorIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ServerSideErrorIT.java
@@ -18,6 +18,7 @@
 
 import java.util.Collections;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
@@ -32,12 +33,13 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.iterators.Combiner;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class ServerSideErrorIT extends AccumuloClusterIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class ServerSideErrorIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -101,7 +103,7 @@
       to.removeProperty(tableName, e.getKey());
     }
 
-    UtilWaitThread.sleep(500);
+    sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
 
     // should be able to scan now
     scanner = c.createScanner(tableName, Authorizations.EMPTY);
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SessionBlockVerifyIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SessionBlockVerifyIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/SessionBlockVerifyIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SessionBlockVerifyIT.java
index 05f304b..ca59041 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/SessionBlockVerifyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SessionBlockVerifyIT.java
@@ -110,7 +110,7 @@
 
     final Iterator<Entry<Key,Value>> slow = scanner.iterator();
 
-    final List<Future<Boolean>> callables = new ArrayList<Future<Boolean>>();
+    final List<Future<Boolean>> callables = new ArrayList<>();
     final CountDownLatch latch = new CountDownLatch(10);
     for (int i = 0; i < 10; i++) {
       Future<Boolean> callable = service.submit(new Callable<Boolean>() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SessionDurabilityIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SessionDurabilityIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/functional/SessionDurabilityIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SessionDurabilityIT.java
index ca45382..5f7ca88 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/SessionDurabilityIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SessionDurabilityIT.java
@@ -40,7 +40,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class SessionDurabilityIT extends ConfigurableMacIT {
+public class SessionDurabilityIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ShutdownIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ShutdownIT.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/functional/ShutdownIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ShutdownIT.java
index ac38f19..98e1031 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ShutdownIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ShutdownIT.java
@@ -21,10 +21,10 @@
 
 import java.io.IOException;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.server.util.Admin;
 import org.apache.accumulo.test.TestIngest;
@@ -32,7 +32,9 @@
 import org.apache.accumulo.test.VerifyIngest;
 import org.junit.Test;
 
-public class ShutdownIT extends ConfigurableMacIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class ShutdownIT extends ConfigurableMacBase {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -43,7 +45,7 @@
   public void shutdownDuringIngest() throws Exception {
     Process ingest = cluster.exec(TestIngest.class, "-i", cluster.getInstanceName(), "-z", cluster.getZooKeepers(), "-u", "root", "-p", ROOT_PASSWORD,
         "--createTable");
-    UtilWaitThread.sleep(100);
+    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     assertEquals(0, cluster.exec(Admin.class, "stopAll").waitFor());
     ingest.destroy();
   }
@@ -54,7 +56,7 @@
         cluster.exec(TestIngest.class, "-i", cluster.getInstanceName(), "-z", cluster.getZooKeepers(), "-u", "root", "-p", ROOT_PASSWORD, "--createTable")
             .waitFor());
     Process verify = cluster.exec(VerifyIngest.class, "-i", cluster.getInstanceName(), "-z", cluster.getZooKeepers(), "-u", "root", "-p", ROOT_PASSWORD);
-    UtilWaitThread.sleep(100);
+    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     assertEquals(0, cluster.exec(Admin.class, "stopAll").waitFor());
     verify.destroy();
   }
@@ -65,7 +67,7 @@
         cluster.exec(TestIngest.class, "-i", cluster.getInstanceName(), "-z", cluster.getZooKeepers(), "-u", "root", "-p", ROOT_PASSWORD, "--createTable")
             .waitFor());
     Process deleter = cluster.exec(TestRandomDeletes.class, "-i", cluster.getInstanceName(), "-z", cluster.getZooKeepers(), "-u", "root", "-p", ROOT_PASSWORD);
-    UtilWaitThread.sleep(100);
+    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     assertEquals(0, cluster.exec(Admin.class, "stopAll").waitFor());
     deleter.destroy();
   }
@@ -76,7 +78,7 @@
     for (int i = 0; i < 10; i++) {
       c.tableOperations().create("table" + i);
     }
-    final AtomicReference<Exception> ref = new AtomicReference<Exception>();
+    final AtomicReference<Exception> ref = new AtomicReference<>();
     Thread async = new Thread() {
       @Override
       public void run() {
@@ -89,7 +91,7 @@
       }
     };
     async.start();
-    UtilWaitThread.sleep(100);
+    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     assertEquals(0, cluster.exec(Admin.class, "stopAll").waitFor());
     if (ref.get() != null)
       throw ref.get();
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SimpleBalancerFairnessIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SimpleBalancerFairnessIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/SimpleBalancerFairnessIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SimpleBalancerFairnessIT.java
index 1ad363b..864ba06 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/SimpleBalancerFairnessIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SimpleBalancerFairnessIT.java
@@ -16,12 +16,14 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 import java.util.ArrayList;
 import java.util.List;
 import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.cli.BatchWriterOpts;
 import org.apache.accumulo.core.client.Connector;
@@ -35,7 +37,6 @@
 import org.apache.accumulo.core.master.thrift.TableInfo;
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
@@ -44,7 +45,7 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class SimpleBalancerFairnessIT extends ConfigurableMacIT {
+public class SimpleBalancerFairnessIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
@@ -73,7 +74,7 @@
     opts.setPrincipal("root");
     TestIngest.ingest(c, opts, new BatchWriterOpts());
     c.tableOperations().flush("test_ingest", null, null, false);
-    UtilWaitThread.sleep(45 * 1000);
+    sleepUninterruptibly(45, TimeUnit.SECONDS);
     Credentials creds = new Credentials("root", new PasswordToken(ROOT_PASSWORD));
     ClientContext context = new ClientContext(c.getInstance(), creds, getClientConfig());
 
@@ -98,7 +99,7 @@
     assertEquals("Unassigned tablets were not assigned within 30 seconds", 0, unassignedTablets);
 
     // Compute online tablets per tserver
-    List<Integer> counts = new ArrayList<Integer>();
+    List<Integer> counts = new ArrayList<>();
     for (TabletServerStatus server : stats.tServerInfo) {
       int count = 0;
       for (TableInfo table : server.tableMap.values()) {
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/SlowConstraint.java b/test/src/main/java/org/apache/accumulo/test/functional/SlowConstraint.java
index 187da35..3f8cc27 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/SlowConstraint.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SlowConstraint.java
@@ -17,10 +17,12 @@
 package org.apache.accumulo.test.functional;
 
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.constraints.Constraint;
 import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.util.UtilWaitThread;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 /**
  *
@@ -34,7 +36,7 @@
 
   @Override
   public List<Short> check(Environment env, Mutation mutation) {
-    UtilWaitThread.sleep(20000);
+    sleepUninterruptibly(20, TimeUnit.SECONDS);
     return null;
   }
 
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/SlowIterator.java b/test/src/main/java/org/apache/accumulo/test/functional/SlowIterator.java
index f84a4d9..aeb0dff 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/SlowIterator.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SlowIterator.java
@@ -19,6 +19,7 @@
 import java.io.IOException;
 import java.util.Collection;
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.data.ByteSequence;
@@ -28,7 +29,8 @@
 import org.apache.accumulo.core.iterators.IteratorEnvironment;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.WrappingIterator;
-import org.apache.accumulo.core.util.UtilWaitThread;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 public class SlowIterator extends WrappingIterator {
 
@@ -53,13 +55,13 @@
 
   @Override
   public void next() throws IOException {
-    UtilWaitThread.sleep(sleepTime);
+    sleepUninterruptibly(sleepTime, TimeUnit.MILLISECONDS);
     super.next();
   }
 
   @Override
   public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
-    UtilWaitThread.sleep(seekSleepTime);
+    sleepUninterruptibly(seekSleepTime, TimeUnit.MILLISECONDS);
     super.seek(range, columnFamilies, inclusive);
   }
 
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SparseColumnFamilyIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SparseColumnFamilyIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/SparseColumnFamilyIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SparseColumnFamilyIT.java
index 25c45f9..8cece0b 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/SparseColumnFamilyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SparseColumnFamilyIT.java
@@ -28,14 +28,14 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
 /**
  * This test recreates issue ACCUMULO-516. Until that issue is fixed this test should time out.
  */
-public class SparseColumnFamilyIT extends AccumuloClusterIT {
+public class SparseColumnFamilyIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SplitIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SplitIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/SplitIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SplitIT.java
index a6f0d00..d8eff6f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/SplitIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SplitIT.java
@@ -16,12 +16,14 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.cluster.ClusterUser;
 import org.apache.accumulo.core.cli.BatchWriterOpts;
@@ -40,15 +42,13 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.server.util.CheckForMetadataProblems;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Assume;
 import org.junit.Before;
@@ -56,7 +56,7 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class SplitIT extends AccumuloClusterIT {
+public class SplitIT extends AccumuloClusterHarness {
   private static final Logger log = LoggerFactory.getLogger(SplitIT.class);
 
   @Override
@@ -145,11 +145,11 @@
     vopts.setTableName(table);
     VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
     while (c.tableOperations().listSplits(table).size() < 10) {
-      UtilWaitThread.sleep(15 * 1000);
+      sleepUninterruptibly(15, TimeUnit.SECONDS);
     }
     String id = c.tableOperations().tableIdMap().get(table);
     Scanner s = c.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    KeyExtent extent = new KeyExtent(new Text(id), null, null);
+    KeyExtent extent = new KeyExtent(id, null, null);
     s.setRange(extent.toMetadataRange());
     MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(s);
     int count = 0;
@@ -184,9 +184,9 @@
     c.tableOperations().create(tableName);
     c.tableOperations().setProperty(tableName, Property.TABLE_SPLIT_THRESHOLD.getKey(), "10K");
     c.tableOperations().setProperty(tableName, Property.TABLE_FILE_COMPRESSION_TYPE.getKey(), "none");
-    UtilWaitThread.sleep(5 * 1000);
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
     ReadWriteIT.interleaveTest(c, tableName);
-    UtilWaitThread.sleep(5 * 1000);
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
     int numSplits = c.tableOperations().listSplits(tableName).size();
     while (numSplits <= 20) {
       log.info("Waiting for splits to happen");
@@ -212,7 +212,7 @@
     DeleteIT.deleteTest(c, getCluster(), getAdminPrincipal(), password, tableName, keytab);
     c.tableOperations().flush(tableName, null, null, true);
     for (int i = 0; i < 5; i++) {
-      UtilWaitThread.sleep(10 * 1000);
+      sleepUninterruptibly(10, TimeUnit.SECONDS);
       if (c.tableOperations().listSplits(tableName).size() > 20)
         break;
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
similarity index 79%
rename from test/src/test/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
index 0b0e330..7010dc9 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SplitRecoveryIT.java
@@ -20,6 +20,7 @@
 import static org.junit.Assert.assertEquals;
 
 import java.util.ArrayList;
+import java.util.Collection;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
@@ -68,7 +69,7 @@
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class SplitRecoveryIT extends ConfigurableMacIT {
+public class SplitRecoveryIT extends ConfigurableMacBase {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -76,7 +77,7 @@
   }
 
   private KeyExtent nke(String table, String endRow, String prevEndRow) {
-    return new KeyExtent(new Text(table), endRow == null ? null : new Text(endRow), prevEndRow == null ? null : new Text(prevEndRow));
+    return new KeyExtent(table, endRow == null ? null : new Text(endRow), prevEndRow == null ? null : new Text(prevEndRow));
   }
 
   private void run() throws Exception {
@@ -137,9 +138,9 @@
     for (int i = 0; i < extents.length; i++) {
       KeyExtent extent = extents[i];
 
-      String tdir = ServerConstants.getTablesDirs()[0] + "/" + extent.getTableId().toString() + "/dir_" + i;
+      String tdir = ServerConstants.getTablesDirs()[0] + "/" + extent.getTableId() + "/dir_" + i;
       MetadataTableUtil.addTablet(extent, tdir, context, TabletTime.LOGICAL_TIME_ID, zl);
-      SortedMap<FileRef,DataFileValue> mapFiles = new TreeMap<FileRef,DataFileValue>();
+      SortedMap<FileRef,DataFileValue> mapFiles = new TreeMap<>();
       mapFiles.put(new FileRef(tdir + "/" + RFile.EXTENSION + "_000_000"), new DataFileValue(1000017 + i, 10000 + i));
 
       if (i == extentToSplit) {
@@ -161,9 +162,9 @@
   private void splitPartiallyAndRecover(AccumuloServerContext context, KeyExtent extent, KeyExtent high, KeyExtent low, double splitRatio,
       SortedMap<FileRef,DataFileValue> mapFiles, Text midRow, String location, int steps, ZooLock zl) throws Exception {
 
-    SortedMap<FileRef,DataFileValue> lowDatafileSizes = new TreeMap<FileRef,DataFileValue>();
-    SortedMap<FileRef,DataFileValue> highDatafileSizes = new TreeMap<FileRef,DataFileValue>();
-    List<FileRef> highDatafilesToRemove = new ArrayList<FileRef>();
+    SortedMap<FileRef,DataFileValue> lowDatafileSizes = new TreeMap<>();
+    SortedMap<FileRef,DataFileValue> highDatafileSizes = new TreeMap<>();
+    List<FileRef> highDatafilesToRemove = new ArrayList<>();
 
     MetadataTableUtil.splitDatafiles(extent.getTableId(), midRow, splitRatio, new HashMap<FileRef,FileUtil.FileInfo>(), mapFiles, lowDatafileSizes,
         highDatafileSizes, highDatafilesToRemove);
@@ -177,11 +178,12 @@
     writer.update(m);
 
     if (steps >= 1) {
-      Map<FileRef,Long> bulkFiles = MetadataTableUtil.getBulkFilesLoaded(context, extent);
+      Map<Long,? extends Collection<FileRef>> bulkFiles = MetadataTableUtil.getBulkFilesLoaded(context, extent);
       MasterMetadataUtil.addNewTablet(context, low, "/lowDir", instance, lowDatafileSizes, bulkFiles, TabletTime.LOGICAL_TIME_ID + "0", -1l, -1l, zl);
     }
-    if (steps >= 2)
+    if (steps >= 2) {
       MetadataTableUtil.finishSplit(high, highDatafileSizes, highDatafilesToRemove, context, zl);
+    }
 
     TabletServer.verifyTabletInformation(context, high, instance, null, "127.0.0.1:0", zl);
 
@@ -189,8 +191,8 @@
       ensureTabletHasNoUnexpectedMetadataEntries(context, low, lowDatafileSizes);
       ensureTabletHasNoUnexpectedMetadataEntries(context, high, highDatafileSizes);
 
-      Map<FileRef,Long> lowBulkFiles = MetadataTableUtil.getBulkFilesLoaded(context, low);
-      Map<FileRef,Long> highBulkFiles = MetadataTableUtil.getBulkFilesLoaded(context, high);
+      Map<Long,? extends Collection<FileRef>> lowBulkFiles = MetadataTableUtil.getBulkFilesLoaded(context, low);
+      Map<Long,? extends Collection<FileRef>> highBulkFiles = MetadataTableUtil.getBulkFilesLoaded(context, high);
 
       if (!lowBulkFiles.equals(highBulkFiles)) {
         throw new Exception(" " + lowBulkFiles + " != " + highBulkFiles + " " + low + " " + high);
@@ -206,47 +208,49 @@
 
   private void ensureTabletHasNoUnexpectedMetadataEntries(AccumuloServerContext context, KeyExtent extent, SortedMap<FileRef,DataFileValue> expectedMapFiles)
       throws Exception {
-    Scanner scanner = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY);
-    scanner.setRange(extent.toMetadataRange());
+    try (Scanner scanner = new ScannerImpl(context, MetadataTable.ID, Authorizations.EMPTY)) {
+      scanner.setRange(extent.toMetadataRange());
 
-    HashSet<ColumnFQ> expectedColumns = new HashSet<ColumnFQ>();
-    expectedColumns.add(TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN);
-    expectedColumns.add(TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN);
-    expectedColumns.add(TabletsSection.ServerColumnFamily.TIME_COLUMN);
-    expectedColumns.add(TabletsSection.ServerColumnFamily.LOCK_COLUMN);
+      HashSet<ColumnFQ> expectedColumns = new HashSet<>();
+      expectedColumns.add(TabletsSection.ServerColumnFamily.DIRECTORY_COLUMN);
+      expectedColumns.add(TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN);
+      expectedColumns.add(TabletsSection.ServerColumnFamily.TIME_COLUMN);
+      expectedColumns.add(TabletsSection.ServerColumnFamily.LOCK_COLUMN);
 
-    HashSet<Text> expectedColumnFamilies = new HashSet<Text>();
-    expectedColumnFamilies.add(DataFileColumnFamily.NAME);
-    expectedColumnFamilies.add(TabletsSection.FutureLocationColumnFamily.NAME);
-    expectedColumnFamilies.add(TabletsSection.CurrentLocationColumnFamily.NAME);
-    expectedColumnFamilies.add(TabletsSection.LastLocationColumnFamily.NAME);
-    expectedColumnFamilies.add(TabletsSection.BulkFileColumnFamily.NAME);
+      HashSet<Text> expectedColumnFamilies = new HashSet<>();
+      expectedColumnFamilies.add(DataFileColumnFamily.NAME);
+      expectedColumnFamilies.add(TabletsSection.FutureLocationColumnFamily.NAME);
+      expectedColumnFamilies.add(TabletsSection.CurrentLocationColumnFamily.NAME);
+      expectedColumnFamilies.add(TabletsSection.LastLocationColumnFamily.NAME);
+      expectedColumnFamilies.add(TabletsSection.BulkFileColumnFamily.NAME);
 
-    Iterator<Entry<Key,Value>> iter = scanner.iterator();
-    while (iter.hasNext()) {
-      Key key = iter.next().getKey();
+      Iterator<Entry<Key,Value>> iter = scanner.iterator();
+      while (iter.hasNext()) {
+        Key key = iter.next().getKey();
 
-      if (!key.getRow().equals(extent.getMetadataEntry())) {
+        if (!key.getRow().equals(extent.getMetadataEntry())) {
+          throw new Exception("Tablet " + extent + " contained unexpected " + MetadataTable.NAME + " entry " + key);
+        }
+
+        if (expectedColumnFamilies.contains(key.getColumnFamily())) {
+          continue;
+        }
+
+        if (expectedColumns.remove(new ColumnFQ(key))) {
+          continue;
+        }
+
         throw new Exception("Tablet " + extent + " contained unexpected " + MetadataTable.NAME + " entry " + key);
       }
 
-      if (expectedColumnFamilies.contains(key.getColumnFamily())) {
-        continue;
+      System.out.println("expectedColumns " + expectedColumns);
+      if (!expectedColumns.isEmpty()) {
+        throw new Exception("Not all expected columns seen " + extent + " " + expectedColumns);
       }
 
-      if (expectedColumns.remove(new ColumnFQ(key))) {
-        continue;
-      }
-
-      throw new Exception("Tablet " + extent + " contained unexpected " + MetadataTable.NAME + " entry " + key);
+      SortedMap<FileRef,DataFileValue> fixedMapFiles = MetadataTableUtil.getDataFileSizes(extent, context);
+      verifySame(expectedMapFiles, fixedMapFiles);
     }
-    System.out.println("expectedColumns " + expectedColumns);
-    if (expectedColumns.size() > 1 || (expectedColumns.size() == 1)) {
-      throw new Exception("Not all expected columns seen " + extent + " " + expectedColumns);
-    }
-
-    SortedMap<FileRef,DataFileValue> fixedMapFiles = MetadataTableUtil.getDataFileSizes(extent, context);
-    verifySame(expectedMapFiles, fixedMapFiles);
   }
 
   private void verifySame(SortedMap<FileRef,DataFileValue> datafileSizes, SortedMap<FileRef,DataFileValue> fixedDatafileSizes) throws Exception {
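The hunk above wraps the metadata `ScannerImpl` in try-with-resources so the scanner is closed even when one of the `throw new Exception(...)` paths fires. A minimal sketch of that pattern, using a hypothetical stand-in resource (the real `ScannerImpl` needs a live Accumulo context):

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
  // Stand-in for ScannerImpl: any AutoCloseable is eligible for try-with-resources.
  static class FakeScanner implements AutoCloseable {
    static final List<String> events = new ArrayList<>();

    void scan() {
      events.add("scan");
    }

    @Override
    public void close() {
      events.add("close");
    }
  }

  public static void main(String[] args) {
    try (FakeScanner scanner = new FakeScanner()) {
      scanner.scan();
    } // close() runs here, even if scan() had thrown
    System.out.println(FakeScanner.events); // prints [scan, close]
  }
}
```

This is why the refactored test can throw mid-scan without leaking the scanner's server-side session.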
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SslIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SslIT.java
similarity index 87%
rename from test/src/test/java/org/apache/accumulo/test/functional/SslIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SslIT.java
index 2d157b8..b81b409 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/SslIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/SslIT.java
@@ -20,7 +20,6 @@
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.junit.Test;
 
@@ -29,7 +28,7 @@
  * clusters with `mvn verify -DuseSslForIT`
  *
  */
-public class SslIT extends ConfigurableMacIT {
+public class SslIT extends ConfigurableMacBase {
   @Override
   public int defaultTimeoutSeconds() {
     return 6 * 60;
@@ -60,8 +59,8 @@
 
   @Test
   public void bulk() throws Exception {
-    BulkIT.runTest(getConnector(), FileSystem.getLocal(new Configuration(false)), new Path(getCluster().getConfig().getDir().getAbsolutePath(), "tmp"), "root",
-        getUniqueNames(1)[0], this.getClass().getName(), testName.getMethodName());
+    BulkIT.runTest(getConnector(), cluster.getFileSystem(), new Path(getCluster().getConfig().getDir().getAbsolutePath(), "tmp"), "root", getUniqueNames(1)[0],
+        this.getClass().getName(), testName.getMethodName());
   }
 
   @Test
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SslWithClientAuthIT.java b/test/src/main/java/org/apache/accumulo/test/functional/SslWithClientAuthIT.java
similarity index 100%
rename from test/src/test/java/org/apache/accumulo/test/functional/SslWithClientAuthIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/SslWithClientAuthIT.java
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/StartIT.java b/test/src/main/java/org/apache/accumulo/test/functional/StartIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/StartIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/StartIT.java
index 06faaa7..57a8a6f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/StartIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/StartIT.java
@@ -20,11 +20,11 @@
 import static org.junit.Assert.assertNotEquals;
 
 import org.apache.accumulo.cluster.ClusterControl;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.start.TestMain;
 import org.junit.Test;
 
-public class StartIT extends AccumuloClusterIT {
+public class StartIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java b/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
index 3061b87..504a5d9 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
@@ -34,21 +34,19 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.CachedConfiguration;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.io.Text;
 import org.hamcrest.CoreMatchers;
 import org.junit.Assume;
 import org.junit.Test;
 
 import com.google.common.collect.Iterators;
 
-public class TableIT extends AccumuloClusterIT {
+public class TableIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -86,11 +84,11 @@
     VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
     String id = to.tableIdMap().get(tableName);
     Scanner s = c.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    s.setRange(new KeyExtent(new Text(id), null, null).toMetadataRange());
+    s.setRange(new KeyExtent(id, null, null).toMetadataRange());
     s.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
     assertTrue(Iterators.size(s.iterator()) > 0);
 
-    FileSystem fs = FileSystem.get(CachedConfiguration.getInstance());
+    FileSystem fs = getCluster().getFileSystem();
     assertTrue(fs.listStatus(new Path(rootPath + "/accumulo/tables/" + id)).length > 0);
     to.delete(tableName);
     assertEquals(0, Iterators.size(s.iterator()));
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/TabletIT.java b/test/src/main/java/org/apache/accumulo/test/functional/TabletIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/functional/TabletIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/TabletIT.java
index 8aa6cf2..8d52058 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/TabletIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/TabletIT.java
@@ -32,14 +32,14 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.Text;
 import org.junit.Test;
 
-public class TabletIT extends AccumuloClusterIT {
+public class TabletIT extends AccumuloClusterHarness {
 
   private static final int N = 1000;
 
@@ -68,7 +68,7 @@
     Connector connector = getConnector();
 
     if (!readOnly) {
-      TreeSet<Text> keys = new TreeSet<Text>();
+      TreeSet<Text> keys = new TreeSet<>();
       for (int i = N / 100; i < N; i += N / 100) {
         keys.add(new Text(String.format("%05d", i)));
       }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java b/test/src/main/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
similarity index 63%
rename from test/src/test/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
index 02eb419..0cc0b94 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/TabletStateChangeIteratorIT.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
 
 import java.util.Collection;
@@ -23,11 +24,14 @@
 import java.util.HashSet;
 import java.util.Map.Entry;
 import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.BatchDeleter;
+import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.MutationsRejectedException;
@@ -36,6 +40,7 @@
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.impl.Tables;
 import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.master.state.tables.TableState;
@@ -45,7 +50,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.accumulo.server.master.state.CurrentState;
 import org.apache.accumulo.server.master.state.MergeInfo;
 import org.apache.accumulo.server.master.state.MetaDataTableScanner;
@@ -64,7 +69,7 @@
  * Test to ensure that the {@link TabletStateChangeIterator} properly skips over tablet information in the metadata table when there is no work to be done on
  * the tablet (see ACCUMULO-3580)
  */
-public class TabletStateChangeIteratorIT extends SharedMiniClusterIT {
+public class TabletStateChangeIteratorIT extends SharedMiniClusterBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -73,12 +78,12 @@
 
   @BeforeClass
   public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
   }
 
   @AfterClass
   public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   @Test
@@ -97,33 +102,76 @@
     // examine a clone of the metadata table, so we can manipulate it
     cloneMetadataTable(cloned);
 
-    assertEquals("No tables should need attention", 0, findTabletsNeedingAttention(cloned));
+    State state = new State();
+    assertEquals("No tablets should need attention", 0, findTabletsNeedingAttention(cloned, state));
 
     // test the assigned case (no location)
     removeLocation(cloned, t3);
-    assertEquals("Should have one tablet without a loc", 1, findTabletsNeedingAttention(cloned));
+    assertEquals("Should have two tablets without a loc", 2, findTabletsNeedingAttention(cloned, state));
 
-    // TODO test the cases where the assignment is to a dead tserver
-    // TODO test the cases where there is ongoing merges
-    // TODO test the bad tablet location state case (active split, inconsistent metadata)
+    // test the case where the assignment is to a dead tserver
+    getConnector().tableOperations().delete(cloned);
+    cloneMetadataTable(cloned);
+    reassignLocation(cloned, t3);
+    assertEquals("Should have one tablet that needs to be unassigned", 1, findTabletsNeedingAttention(cloned, state));
+
+    // test the case where there is an ongoing merge
+    state = new State() {
+      @Override
+      public Collection<MergeInfo> merges() {
+        String tableIdToModify = getConnector().tableOperations().tableIdMap().get(t3);
+        return Collections.singletonList(new MergeInfo(new KeyExtent(tableIdToModify, null, null), MergeInfo.Operation.MERGE));
+      }
+    };
+    assertEquals("Should have 1 tablet that needs to be chopped or unassigned", 1, findTabletsNeedingAttention(cloned, state));
+
+    // test the bad tablet location state case (inconsistent metadata)
+    state = new State();
+    cloneMetadataTable(cloned);
+    addDuplicateLocation(cloned, t3);
+    assertEquals("Should have 1 tablet that needs a metadata repair", 1, findTabletsNeedingAttention(cloned, state));
 
     // clean up
-    dropTables(t1, t2, t3);
+    dropTables(t1, t2, t3, cloned);
+  }
+
+  private void addDuplicateLocation(String table, String tableNameToModify) throws TableNotFoundException, MutationsRejectedException {
+    String tableIdToModify = getConnector().tableOperations().tableIdMap().get(tableNameToModify);
+    Mutation m = new Mutation(new KeyExtent(tableIdToModify, null, null).getMetadataEntry());
+    m.put(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME, new Text("1234567"), new Value("fake:9005".getBytes(UTF_8)));
+    BatchWriter bw = getConnector().createBatchWriter(table, null);
+    bw.addMutation(m);
+    bw.close();
+  }
+
+  private void reassignLocation(String table, String tableNameToModify) throws TableNotFoundException, MutationsRejectedException {
+    String tableIdToModify = getConnector().tableOperations().tableIdMap().get(tableNameToModify);
+    Scanner scanner = getConnector().createScanner(table, Authorizations.EMPTY);
+    scanner.setRange(new KeyExtent(tableIdToModify, null, null).toMetadataRange());
+    scanner.fetchColumnFamily(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME);
+    Entry<Key,Value> entry = scanner.iterator().next();
+    Mutation m = new Mutation(entry.getKey().getRow());
+    m.putDelete(entry.getKey().getColumnFamily(), entry.getKey().getColumnQualifier(), entry.getKey().getTimestamp());
+    m.put(entry.getKey().getColumnFamily(), new Text("1234567"), entry.getKey().getTimestamp() + 1, new Value("fake:9005".getBytes(UTF_8)));
+    scanner.close();
+    BatchWriter bw = getConnector().createBatchWriter(table, null);
+    bw.addMutation(m);
+    bw.close();
   }
 
   private void removeLocation(String table, String tableNameToModify) throws TableNotFoundException, MutationsRejectedException {
     String tableIdToModify = getConnector().tableOperations().tableIdMap().get(tableNameToModify);
     BatchDeleter deleter = getConnector().createBatchDeleter(table, Authorizations.EMPTY, 1, new BatchWriterConfig());
-    deleter.setRanges(Collections.singleton(new KeyExtent(new Text(tableIdToModify), null, null).toMetadataRange()));
+    deleter.setRanges(Collections.singleton(new KeyExtent(tableIdToModify, null, null).toMetadataRange()));
     deleter.fetchColumnFamily(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME);
     deleter.delete();
     deleter.close();
   }
 
-  private int findTabletsNeedingAttention(String table) throws TableNotFoundException {
+  private int findTabletsNeedingAttention(String table, State state) throws TableNotFoundException {
     int results = 0;
     Scanner scanner = getConnector().createScanner(table, Authorizations.EMPTY);
-    MetaDataTableScanner.configureScanner(scanner, new State());
+    MetaDataTableScanner.configureScanner(scanner, state);
     scanner.updateScanIteratorOption("tabletChange", "debug", "1");
     for (Entry<Key,Value> e : scanner) {
       if (e != null)
@@ -136,12 +184,20 @@
     Connector conn = getConnector();
     conn.tableOperations().create(t);
     conn.tableOperations().online(t, true);
+    SortedSet<Text> partitionKeys = new TreeSet<>();
+    partitionKeys.add(new Text("some split"));
+    conn.tableOperations().addSplits(t, partitionKeys);
     if (!online) {
       conn.tableOperations().offline(t, true);
     }
   }
 
   private void cloneMetadataTable(String cloned) throws AccumuloException, AccumuloSecurityException, TableNotFoundException, TableExistsException {
+    try {
+      dropTables(cloned);
+    } catch (TableNotFoundException ex) {
+      // ignored
+    }
     getConnector().tableOperations().clone(MetadataTable.NAME, cloned, true, null, null);
   }
 
@@ -151,11 +207,11 @@
     }
   }
 
-  private final class State implements CurrentState {
+  private class State implements CurrentState {
 
     @Override
     public Set<TServerInstance> onlineTabletServers() {
-      HashSet<TServerInstance> tservers = new HashSet<TServerInstance>();
+      HashSet<TServerInstance> tservers = new HashSet<>();
       for (String tserver : getConnector().instanceOperations().getTabletServers()) {
         try {
           String zPath = ZooUtil.getRoot(getConnector().getInstance()) + Constants.ZTSERVERS + "/" + tserver;
@@ -170,7 +226,7 @@
 
     @Override
     public Set<String> onlineTables() {
-      HashSet<String> onlineTables = new HashSet<String>(getConnector().tableOperations().tableIdMap().values());
+      HashSet<String> onlineTables = new HashSet<>(getConnector().tableOperations().tableIdMap().values());
       return Sets.filter(onlineTables, new Predicate<String>() {
         @Override
         public boolean apply(String tableId) {
@@ -190,15 +246,14 @@
     }
 
     @Override
-    public MasterState getMasterState() {
-      return MasterState.NORMAL;
-    }
-
-    @Override
     public Set<TServerInstance> shutdownServers() {
       return Collections.emptySet();
     }
 
+    @Override
+    public MasterState getMasterState() {
+      return MasterState.NORMAL;
+    }
   }
 
 }
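The test above stubs the merge case by instantiating `new State() { @Override ... merges() ... }`, replacing a single method while keeping the rest of the mock's behavior; this also explains why the hunk drops `final` from the `State` class. A simplified sketch of that anonymous-subclass pattern (the `State` here is a hypothetical stand-in, not the test's `CurrentState` implementation):

```java
import java.util.Collection;
import java.util.Collections;

public class AnonymousOverrideDemo {
  // Base mock: by default, no merges are in progress.
  static class State {
    Collection<String> merges() {
      return Collections.emptyList();
    }
  }

  public static void main(String[] args) {
    State idle = new State();

    // Per-case override: only the merge list changes; everything else
    // keeps the base behavior. The class must not be final for this to compile.
    State merging = new State() {
      @Override
      Collection<String> merges() {
        return Collections.singletonList("tableId:MERGE");
      }
    };

    System.out.println(idle.merges().size() + " vs " + merging.merges().size()); // prints 0 vs 1
  }
}
```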
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/TimeoutIT.java b/test/src/main/java/org/apache/accumulo/test/functional/TimeoutIT.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/functional/TimeoutIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/TimeoutIT.java
index 092ae8b..d13c4af 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/TimeoutIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/TimeoutIT.java
@@ -34,14 +34,15 @@
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.junit.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 /**
  *
  */
-public class TimeoutIT extends AccumuloClusterIT {
+public class TimeoutIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -61,7 +62,7 @@
     conn.tableOperations().addConstraint(tableName, SlowConstraint.class.getName());
 
     // give constraint time to propagate through zookeeper
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig().setTimeout(3, TimeUnit.SECONDS));
 
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/VisibilityIT.java b/test/src/main/java/org/apache/accumulo/test/functional/VisibilityIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/functional/VisibilityIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/VisibilityIT.java
index 2c4783a..8285461 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/VisibilityIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/VisibilityIT.java
@@ -43,7 +43,7 @@
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.util.ByteArraySet;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Before;
@@ -51,7 +51,7 @@
 
 import com.google.common.collect.Iterators;
 
-public class VisibilityIT extends AccumuloClusterIT {
+public class VisibilityIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -94,7 +94,7 @@
   }
 
   private static SortedSet<String> nss(String... labels) {
-    TreeSet<String> ts = new TreeSet<String>();
+    TreeSet<String> ts = new TreeSet<>();
 
     for (String s : labels) {
       ts.add(s);
@@ -153,7 +153,7 @@
     bw.addMutation(m1);
     bw.close();
 
-    Map<Set<String>,Set<String>> expected = new HashMap<Set<String>,Set<String>>();
+    Map<Set<String>,Set<String>> expected = new HashMap<>();
 
     expected.put(nss("A", "L"), nss("v5"));
     expected.put(nss("A", "M"), nss("v5"));
@@ -184,10 +184,10 @@
 
     all.add(prefix);
 
-    TreeSet<String> ss = new TreeSet<String>(suffix);
+    TreeSet<String> ss = new TreeSet<>(suffix);
 
     for (String s : suffix) {
-      TreeSet<String> ps = new TreeSet<String>(prefix);
+      TreeSet<String> ps = new TreeSet<>(prefix);
       ps.add(s);
       ss.remove(s);
 
@@ -196,7 +196,7 @@
   }
 
   private void queryData(Connector c, String tableName) throws Exception {
-    Map<Set<String>,Set<String>> expected = new HashMap<Set<String>,Set<String>>();
+    Map<Set<String>,Set<String>> expected = new HashMap<>();
     expected.put(nss(), nss("v1"));
     expected.put(nss("A"), nss("v2"));
     expected.put(nss("A", "L"), nss("v5"));
@@ -227,14 +227,14 @@
 
     c.securityOperations().changeUserAuthorizations(getAdminPrincipal(), new Authorizations(nbas(userAuths)));
 
-    ArrayList<Set<String>> combos = new ArrayList<Set<String>>();
+    ArrayList<Set<String>> combos = new ArrayList<>();
     uniqueCombos(combos, nss(), allAuths);
 
     for (Set<String> set1 : combos) {
-      Set<String> e = new TreeSet<String>();
+      Set<String> e = new TreeSet<>();
       for (Set<String> set2 : combos) {
 
-        set2 = new HashSet<String>(set2);
+        set2 = new HashSet<>(set2);
         set2.retainAll(userAuths);
 
         if (set1.containsAll(set2) && expected.containsKey(set2)) {
@@ -300,7 +300,7 @@
   }
 
   private void verify(Iterator<Entry<Key,Value>> iter, String... expected) throws Exception {
-    HashSet<String> valuesSeen = new HashSet<String>();
+    HashSet<String> valuesSeen = new HashSet<>();
 
     while (iter.hasNext()) {
       Entry<Key,Value> entry = iter.next();
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/WALSunnyDayIT.java b/test/src/main/java/org/apache/accumulo/test/functional/WALSunnyDayIT.java
new file mode 100644
index 0000000..bbd2b44
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/functional/WALSunnyDayIT.java
@@ -0,0 +1,235 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.functional;
+
+import static org.apache.accumulo.core.conf.Property.GC_CYCLE_DELAY;
+import static org.apache.accumulo.core.conf.Property.GC_CYCLE_START;
+import static org.apache.accumulo.core.conf.Property.INSTANCE_ZK_TIMEOUT;
+import static org.apache.accumulo.core.conf.Property.TSERV_WALOG_MAX_SIZE;
+import static org.apache.accumulo.core.conf.Property.TSERV_WAL_REPLICATION;
+import static org.apache.accumulo.core.security.Authorizations.EMPTY;
+import static org.apache.accumulo.minicluster.ServerType.GARBAGE_COLLECTOR;
+import static org.apache.accumulo.minicluster.ServerType.TABLET_SERVER;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Random;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.TabletColumnFamily;
+import org.apache.accumulo.master.state.SetGoalState;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterControl;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalState;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.io.Text;
+import org.junit.Assert;
+import org.junit.Test;
+
+import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class WALSunnyDayIT extends ConfigurableMacBase {
+
+  private static final Text CF = new Text(new byte[0]);
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setProperty(GC_CYCLE_DELAY, "1s");
+    cfg.setProperty(GC_CYCLE_START, "0s");
+    cfg.setProperty(TSERV_WALOG_MAX_SIZE, "1M");
+    cfg.setProperty(TSERV_WAL_REPLICATION, "1");
+    cfg.setProperty(INSTANCE_ZK_TIMEOUT, "15s");
+    cfg.setNumTservers(1);
+    hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
+  }
+
+  int countTrue(Collection<Boolean> bools) {
+    int result = 0;
+    for (Boolean b : bools) {
+      if (b.booleanValue())
+        result++;
+    }
+    return result;
+  }
+
+  @Test
+  public void test() throws Exception {
+    MiniAccumuloClusterImpl mac = getCluster();
+    MiniAccumuloClusterControl control = mac.getClusterControl();
+    control.stop(GARBAGE_COLLECTOR);
+    Connector c = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    c.tableOperations().create(tableName);
+    writeSomeData(c, tableName, 1, 1);
+
+    // wal markers are added lazily
+    Map<String,Boolean> wals = getWals(c);
+    assertEquals(wals.toString(), 2, wals.size());
+    for (Boolean b : wals.values()) {
+      assertTrue("logs should be in use", b.booleanValue());
+    }
+
+    // roll the log, which creates a new WAL
+    writeSomeData(c, tableName, 1001, 50);
+    Map<String,Boolean> walsAfterRoll = getWals(c);
+    assertEquals("should have 3 WALs after roll", 3, walsAfterRoll.size());
+    assertTrue("new WALs should be a superset of the old WALs", walsAfterRoll.keySet().containsAll(wals.keySet()));
+    assertEquals("all WALs should be in use", 3, countTrue(walsAfterRoll.values()));
+
+    // flush the tables
+    for (String table : new String[] {tableName, MetadataTable.NAME, RootTable.NAME}) {
+      c.tableOperations().flush(table, null, null, true);
+    }
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
+    // rolled WAL is no longer in use, but needs to be GC'd
+    Map<String,Boolean> walsAfterflush = getWals(c);
+    assertEquals(walsAfterflush.toString(), 3, walsAfterflush.size());
+    assertEquals("inUse should be 2", 2, countTrue(walsAfterflush.values()));
+
+    // let the GC run for a little bit
+    control.start(GARBAGE_COLLECTOR);
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
+    // make sure the unused WAL goes away
+    Map<String,Boolean> walsAfterGC = getWals(c);
+    assertEquals(walsAfterGC.toString(), 2, walsAfterGC.size());
+    control.stop(GARBAGE_COLLECTOR);
+    // restart the tserver, but don't run recovery on all tablets
+    control.stop(TABLET_SERVER);
+    // this delays recovery on the normal tables
+    assertEquals(0, cluster.exec(SetGoalState.class, "SAFE_MODE").waitFor());
+    control.start(TABLET_SERVER);
+
+    // wait for the metadata table to go back online
+    getRecoveryMarkers(c);
+    // allow a little time for the master to notice ASSIGNED_TO_DEAD_SERVER tablets
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
+    Map<KeyExtent,List<String>> markers = getRecoveryMarkers(c);
+    // log.debug("markers " + markers);
+    assertEquals("one tablet should have markers", 1, markers.keySet().size());
+    assertEquals("tableId of the keyExtent should be 1", "1", markers.keySet().iterator().next().getTableId());
+
+    // put some data in the WAL
+    assertEquals(0, cluster.exec(SetGoalState.class, "NORMAL").waitFor());
+    verifySomeData(c, tableName, 1001 * 50 + 1);
+    writeSomeData(c, tableName, 100, 100);
+
+    Map<String,Boolean> walsAfterRestart = getWals(c);
+    // log.debug("wals after " + walsAfterRestart);
+    assertEquals("used WALs after restart should be 4", 4, countTrue(walsAfterRestart.values()));
+    control.start(GARBAGE_COLLECTOR);
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
+    Map<String,Boolean> walsAfterRestartAndGC = getWals(c);
+    assertEquals("wals left should be 2", 2, walsAfterRestartAndGC.size());
+    assertEquals("logs in use should be 2", 2, countTrue(walsAfterRestartAndGC.values()));
+  }
+
+  private void verifySomeData(Connector c, String tableName, int expected) throws Exception {
+    Scanner scan = c.createScanner(tableName, EMPTY);
+    int result = Iterators.size(scan.iterator());
+    scan.close();
+    Assert.assertEquals(expected, result);
+  }
+
+  private void writeSomeData(Connector conn, String tableName, int row, int col) throws Exception {
+    Random rand = new Random();
+    BatchWriter bw = conn.createBatchWriter(tableName, null);
+    byte[] rowData = new byte[10];
+    byte[] cq = new byte[10];
+    byte[] value = new byte[10];
+
+    for (int r = 0; r < row; r++) {
+      rand.nextBytes(rowData);
+      Mutation m = new Mutation(rowData);
+      for (int c = 0; c < col; c++) {
+        rand.nextBytes(cq);
+        rand.nextBytes(value);
+        m.put(CF, new Text(cq), new Value(value));
+      }
+      bw.addMutation(m);
+      if (r % 100 == 0) {
+        bw.flush();
+      }
+    }
+    bw.close();
+  }
+
+  private Map<String,Boolean> getWals(Connector c) throws Exception {
+    Map<String,Boolean> result = new HashMap<>();
+    Instance i = c.getInstance();
+    ZooReaderWriter zk = new ZooReaderWriter(i.getZooKeepers(), i.getZooKeepersSessionTimeOut(), "");
+    WalStateManager wals = new WalStateManager(c.getInstance(), zk);
+    for (Entry<Path,WalState> entry : wals.getAllState().entrySet()) {
+      // WALs are in use if they are not unreferenced
+      result.put(entry.getKey().toString(), entry.getValue() != WalState.UNREFERENCED);
+    }
+    return result;
+  }
+
+  private Map<KeyExtent,List<String>> getRecoveryMarkers(Connector c) throws Exception {
+    Map<KeyExtent,List<String>> result = new HashMap<>();
+    Scanner root = c.createScanner(RootTable.NAME, EMPTY);
+    root.setRange(TabletsSection.getRange());
+    root.fetchColumnFamily(TabletsSection.LogColumnFamily.NAME);
+    TabletColumnFamily.PREV_ROW_COLUMN.fetch(root);
+
+    Scanner meta = c.createScanner(MetadataTable.NAME, EMPTY);
+    meta.setRange(TabletsSection.getRange());
+    meta.fetchColumnFamily(TabletsSection.LogColumnFamily.NAME);
+    TabletColumnFamily.PREV_ROW_COLUMN.fetch(meta);
+
+    List<String> logs = new ArrayList<>();
+    Iterator<Entry<Key,Value>> both = Iterators.concat(root.iterator(), meta.iterator());
+    while (both.hasNext()) {
+      Entry<Key,Value> entry = both.next();
+      Key key = entry.getKey();
+      if (key.getColumnFamily().equals(TabletsSection.LogColumnFamily.NAME)) {
+        logs.add(key.getColumnQualifier().toString());
+      }
+      if (TabletColumnFamily.PREV_ROW_COLUMN.hasColumns(key) && !logs.isEmpty()) {
+        KeyExtent extent = new KeyExtent(key.getRow(), entry.getValue());
+        result.put(extent, logs);
+        logs = new ArrayList<>();
+      }
+    }
+    return result;
+  }
+
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/WatchTheWatchCountIT.java b/test/src/main/java/org/apache/accumulo/test/functional/WatchTheWatchCountIT.java
similarity index 84%
rename from test/src/test/java/org/apache/accumulo/test/functional/WatchTheWatchCountIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/WatchTheWatchCountIT.java
index fff5b16..84b944b 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/WatchTheWatchCountIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/WatchTheWatchCountIT.java
@@ -27,11 +27,10 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.collect.Range;
 import com.google.common.net.HostAndPort;
 
 // ACCUMULO-2757 - make sure we don't make too many more watchers
-public class WatchTheWatchCountIT extends ConfigurableMacIT {
+public class WatchTheWatchCountIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(WatchTheWatchCountIT.class);
 
   public int defaultOverrideSeconds() {
@@ -52,7 +51,8 @@
     }
     c.tableOperations().list();
     String zooKeepers = c.getInstance().getZooKeepers();
-    final Range<Long> expectedWatcherRange = Range.open(475l, 700l);
+    final long MIN = 475L;
+    final long MAX = 700L;
     long total = 0;
     final HostAndPort hostAndPort = HostAndPort.fromString(zooKeepers);
     for (int i = 0; i < 5; i++) {
@@ -64,17 +64,18 @@
         String response = new String(buffer, 0, n);
         total = Long.parseLong(response.split(":")[1].trim());
         log.info("Total: {}", total);
-        if (expectedWatcherRange.contains(total)) {
+        if (total > MIN && total < MAX) {
           break;
         }
-        log.debug("Expected number of watchers to be contained in {}, but actually was {}. Sleeping and retrying", expectedWatcherRange, total);
+        log.debug("Expected number of watchers to be contained in ({}, {}), but actually was {}. Sleeping and retrying", MIN, MAX, total);
         Thread.sleep(5000);
       } finally {
         socket.close();
       }
     }
 
-    assertTrue("Expected number of watchers to be contained in " + expectedWatcherRange + ", but actually was " + total, expectedWatcherRange.contains(total));
+    assertTrue("Expected number of watchers to be contained in (" + MIN + ", " + MAX + "), but actually was " + total, total > MIN && total < MAX);
   }
 
 }
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/WriteAheadLogIT.java b/test/src/main/java/org/apache/accumulo/test/functional/WriteAheadLogIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/functional/WriteAheadLogIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/WriteAheadLogIT.java
index bc36257..0074f5f 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/WriteAheadLogIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/WriteAheadLogIT.java
@@ -22,7 +22,7 @@
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
@@ -31,7 +31,7 @@
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.junit.Test;
 
-public class WriteAheadLogIT extends AccumuloClusterIT {
+public class WriteAheadLogIT extends AccumuloClusterHarness {
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/WriteLotsIT.java b/test/src/main/java/org/apache/accumulo/test/functional/WriteLotsIT.java
similarity index 76%
rename from test/src/test/java/org/apache/accumulo/test/functional/WriteLotsIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/WriteLotsIT.java
index d8dba87..719dbdb 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/WriteLotsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/WriteLotsIT.java
@@ -16,8 +16,9 @@
  */
 package org.apache.accumulo.test.functional;
 
-import java.util.ArrayList;
-import java.util.List;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.accumulo.core.cli.BatchWriterOpts;
@@ -25,12 +26,12 @@
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
 import org.junit.Test;
 
-public class WriteLotsIT extends AccumuloClusterIT {
+public class WriteLotsIT extends AccumuloClusterHarness {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -42,12 +43,13 @@
     final Connector c = getConnector();
     final String tableName = getUniqueNames(1)[0];
     c.tableOperations().create(tableName);
-    final AtomicReference<Exception> ref = new AtomicReference<Exception>();
-    List<Thread> threads = new ArrayList<Thread>();
+    final AtomicReference<Exception> ref = new AtomicReference<>();
     final ClientConfiguration clientConfig = getCluster().getClientConfig();
-    for (int i = 0; i < 10; i++) {
+    final int THREADS = 5;
+    ThreadPoolExecutor tpe = new ThreadPoolExecutor(0, THREADS, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(THREADS));
+    for (int i = 0; i < THREADS; i++) {
       final int index = i;
-      Thread t = new Thread() {
+      Runnable r = new Runnable() {
         @Override
         public void run() {
           try {
@@ -60,23 +62,24 @@
             } else {
               opts.setPrincipal(getAdminPrincipal());
             }
+            BatchWriterOpts bwOpts = new BatchWriterOpts();
+            bwOpts.batchMemory = 1024L * 1024;
+            bwOpts.batchThreads = 2;
-            TestIngest.ingest(c, opts, new BatchWriterOpts());
+            TestIngest.ingest(c, opts, bwOpts);
           } catch (Exception ex) {
             ref.set(ex);
           }
         }
       };
-      t.start();
-      threads.add(t);
+      tpe.execute(r);
     }
-    for (Thread thread : threads) {
-      thread.join();
-    }
+    tpe.shutdown();
+    tpe.awaitTermination(90, TimeUnit.SECONDS);
     if (ref.get() != null) {
       throw ref.get();
     }
     VerifyIngest.Opts vopts = new VerifyIngest.Opts();
-    vopts.rows = 10000 * 10;
+    vopts.rows = 10000 * THREADS;
     vopts.setTableName(tableName);
     if (clientConfig.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
       vopts.updateKerberosCredentials(clientConfig);
diff --git a/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java b/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
index da9f1d6..6c20cda 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ZombieTServer.java
@@ -16,14 +16,15 @@
  */
 package org.apache.accumulo.test.functional;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 import static java.nio.charset.StandardCharsets.UTF_8;
 
 import java.util.HashMap;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.Constants;
 import org.apache.accumulo.core.client.impl.thrift.ThriftSecurityException;
-import org.apache.accumulo.core.master.thrift.TableInfo;
 import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 import org.apache.accumulo.core.security.thrift.TCredentials;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Iface;
@@ -32,7 +33,6 @@
 import org.apache.accumulo.core.trace.thrift.TInfo;
 import org.apache.accumulo.core.util.ServerServices;
 import org.apache.accumulo.core.util.ServerServices.Service;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooLock.LockLossReason;
 import org.apache.accumulo.fate.zookeeper.ZooLock.LockWatcher;
@@ -78,11 +78,11 @@
       synchronized (this) {
         if (statusCount++ < 1) {
           TabletServerStatus result = new TabletServerStatus();
-          result.tableMap = new HashMap<String,TableInfo>();
+          result.tableMap = new HashMap<>();
           return result;
         }
       }
-      UtilWaitThread.sleep(Integer.MAX_VALUE);
+      sleepUninterruptibly(Integer.MAX_VALUE, TimeUnit.DAYS);
       return null;
     }
 
@@ -104,8 +104,8 @@
     TransactionWatcher watcher = new TransactionWatcher();
     final ThriftClientHandler tch = new ThriftClientHandler(context, watcher);
     Processor<Iface> processor = new Processor<Iface>(tch);
-    ServerAddress serverPort = TServerUtils.startTServer(context.getConfiguration(), HostAndPort.fromParts("0.0.0.0", port), ThriftServerType.CUSTOM_HS_HA,
-        processor, "ZombieTServer", "walking dead", 2, 1, 1000, 10 * 1024 * 1024, null, null, -1);
+    ServerAddress serverPort = TServerUtils.startTServer(context.getConfiguration(), ThriftServerType.CUSTOM_HS_HA, processor, "ZombieTServer", "walking dead",
+        2, 1, 1000, 10 * 1024 * 1024, null, null, -1, HostAndPort.fromParts("0.0.0.0", port));
 
     String addressString = serverPort.address.toString();
     String zPath = ZooUtil.getRoot(context.getInstance()) + Constants.ZTSERVERS + "/" + addressString;
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ZooCacheIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ZooCacheIT.java
similarity index 92%
rename from test/src/test/java/org/apache/accumulo/test/functional/ZooCacheIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ZooCacheIT.java
index 1f424c4..aa6e7a1 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ZooCacheIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ZooCacheIT.java
@@ -28,7 +28,7 @@
 import org.junit.BeforeClass;
 import org.junit.Test;
 
-public class ZooCacheIT extends ConfigurableMacIT {
+public class ZooCacheIT extends ConfigurableMacBase {
 
   @Override
   protected int defaultTimeoutSeconds() {
@@ -48,8 +48,8 @@
   @Test
   public void test() throws Exception {
     assertEquals(0, exec(CacheTestClean.class, pathName, testDir.getAbsolutePath()).waitFor());
-    final AtomicReference<Exception> ref = new AtomicReference<Exception>();
-    List<Thread> threads = new ArrayList<Thread>();
+    final AtomicReference<Exception> ref = new AtomicReference<>();
+    List<Thread> threads = new ArrayList<>();
     for (int i = 0; i < 3; i++) {
       Thread reader = new Thread() {
         @Override
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ZookeeperRestartIT.java b/test/src/main/java/org/apache/accumulo/test/functional/ZookeeperRestartIT.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/functional/ZookeeperRestartIT.java
rename to test/src/main/java/org/apache/accumulo/test/functional/ZookeeperRestartIT.java
index fefb9a6..013edb0 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ZookeeperRestartIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ZookeeperRestartIT.java
@@ -24,6 +24,7 @@
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
@@ -33,18 +34,19 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
-public class ZookeeperRestartIT extends ConfigurableMacIT {
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
+public class ZookeeperRestartIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
-    Map<String,String> siteConfig = new HashMap<String,String>();
+    Map<String,String> siteConfig = new HashMap<>();
     siteConfig.put(Property.INSTANCE_ZK_TIMEOUT.getKey(), "15s");
     cfg.setSiteConfig(siteConfig);
   }
@@ -69,7 +71,7 @@
       cluster.killProcess(ServerType.ZOOKEEPER, proc);
 
     // give the servers time to react
-    UtilWaitThread.sleep(1000);
+    sleepUninterruptibly(1, TimeUnit.SECONDS);
 
     // start zookeeper back up
     cluster.start();
diff --git a/test/src/main/java/org/apache/accumulo/test/gc/replication/CloseWriteAheadLogReferencesIT.java b/test/src/main/java/org/apache/accumulo/test/gc/replication/CloseWriteAheadLogReferencesIT.java
new file mode 100644
index 0000000..5af9ebe
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/gc/replication/CloseWriteAheadLogReferencesIT.java
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.test.gc.replication;
+
+import static org.easymock.EasyMock.createMock;
+import static org.easymock.EasyMock.expect;
+import static org.easymock.EasyMock.replay;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map.Entry;
+import java.util.Set;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.conf.SiteConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
+import org.apache.accumulo.core.protobuf.ProtobufUtil;
+import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
+import org.apache.accumulo.core.replication.ReplicationTable;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.gc.replication.CloseWriteAheadLogReferences;
+import org.apache.accumulo.server.AccumuloServerContext;
+import org.apache.accumulo.server.conf.ServerConfigurationFactory;
+import org.apache.accumulo.server.replication.StatusUtil;
+import org.apache.accumulo.server.replication.proto.Replication.Status;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.io.Text;
+import org.easymock.EasyMock;
+import org.easymock.IAnswer;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import com.google.common.collect.Iterables;
+
+public class CloseWriteAheadLogReferencesIT extends ConfigurableMacBase {
+
+  private WrappedCloseWriteAheadLogReferences refs;
+  private Connector conn;
+
+  private static class WrappedCloseWriteAheadLogReferences extends CloseWriteAheadLogReferences {
+    public WrappedCloseWriteAheadLogReferences(AccumuloServerContext context) {
+      super(context);
+    }
+
+    @Override
+    protected long updateReplicationEntries(Connector conn, Set<String> closedWals) {
+      return super.updateReplicationEntries(conn, closedWals);
+    }
+  }
+
+  @Before
+  public void setupInstance() throws Exception {
+    conn = getConnector();
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    conn.securityOperations().grantTablePermission(conn.whoami(), MetadataTable.NAME, TablePermission.WRITE);
+    ReplicationTable.setOnline(conn);
+  }
+
+  @Before
+  public void setupEasyMockStuff() {
+    Instance mockInst = createMock(Instance.class);
+    SiteConfiguration siteConfig = EasyMock.createMock(SiteConfiguration.class);
+    expect(mockInst.getInstanceID()).andReturn(testName.getMethodName()).anyTimes();
+    expect(mockInst.getZooKeepers()).andReturn("localhost").anyTimes();
+    expect(mockInst.getZooKeepersSessionTimeOut()).andReturn(30000).anyTimes();
+    final AccumuloConfiguration systemConf = new ConfigurationCopy(new HashMap<String,String>());
+    ServerConfigurationFactory factory = createMock(ServerConfigurationFactory.class);
+    expect(factory.getConfiguration()).andReturn(systemConf).anyTimes();
+    expect(factory.getInstance()).andReturn(mockInst).anyTimes();
+    expect(factory.getSiteConfiguration()).andReturn(siteConfig).anyTimes();
+
+    // Just make the SiteConfiguration delegate to our AccumuloConfiguration
+    // Presently, we only need get(Property) and iterator().
+    EasyMock.expect(siteConfig.get(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<String>() {
+      @Override
+      public String answer() {
+        Object[] args = EasyMock.getCurrentArguments();
+        return systemConf.get((Property) args[0]);
+      }
+    }).anyTimes();
+    EasyMock.expect(siteConfig.getBoolean(EasyMock.anyObject(Property.class))).andAnswer(new IAnswer<Boolean>() {
+      @Override
+      public Boolean answer() {
+        Object[] args = EasyMock.getCurrentArguments();
+        return systemConf.getBoolean((Property) args[0]);
+      }
+    }).anyTimes();
+
+    EasyMock.expect(siteConfig.iterator()).andAnswer(new IAnswer<Iterator<Entry<String,String>>>() {
+      @Override
+      public Iterator<Entry<String,String>> answer() {
+        return systemConf.iterator();
+      }
+    }).anyTimes();
+
+    replay(mockInst, factory, siteConfig);
+    refs = new WrappedCloseWriteAheadLogReferences(new AccumuloServerContext(factory));
+  }
+
+  @Test
+  public void unclosedWalsLeaveStatusOpen() throws Exception {
+    Set<String> wals = Collections.emptySet();
+    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    Mutation m = new Mutation(ReplicationSection.getRowPrefix() + "file:/accumulo/wal/tserver+port/12345");
+    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
+    bw.addMutation(m);
+    bw.close();
+
+    refs.updateReplicationEntries(conn, wals);
+
+    Scanner s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
+    s.fetchColumnFamily(ReplicationSection.COLF);
+    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
+    Status status = Status.parseFrom(entry.getValue().get());
+    Assert.assertFalse(status.getClosed());
+  }
+
+  @Test
+  public void closedWalsUpdateStatus() throws Exception {
+    String file = "file:/accumulo/wal/tserver+port/12345";
+    Set<String> wals = Collections.singleton(file);
+    BatchWriter bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
+    Mutation m = new Mutation(ReplicationSection.getRowPrefix() + file);
+    m.put(ReplicationSection.COLF, new Text("1"), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
+    bw.addMutation(m);
+    bw.close();
+
+    refs.updateReplicationEntries(conn, wals);
+
+    Scanner s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
+    s.fetchColumnFamily(ReplicationSection.COLF);
+    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
+    Status status = Status.parseFrom(entry.getValue().get());
+    Assert.assertTrue(status.getClosed());
+  }
+
+  @Test
+  public void partiallyReplicatedReferencedWalsAreNotClosed() throws Exception {
+    String file = "file:/accumulo/wal/tserver+port/12345";
+    Set<String> wals = Collections.singleton(file);
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    Mutation m = new Mutation(file);
+    StatusSection.add(m, "1", ProtobufUtil.toValue(StatusUtil.ingestedUntil(1000)));
+    bw.addMutation(m);
+    bw.close();
+
+    refs.updateReplicationEntries(conn, wals);
+
+    Scanner s = ReplicationTable.getScanner(conn);
+    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
+    Status status = Status.parseFrom(entry.getValue().get());
+    Assert.assertFalse(status.getClosed());
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloFileOutputFormatIT.java b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloFileOutputFormatIT.java
new file mode 100644
index 0000000..aa19250
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloFileOutputFormatIT.java
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.accumulo.test.mapred;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.File;
+import java.io.FileFilter;
+import java.io.IOException;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.mapred.AccumuloFileOutputFormat;
+import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapreduce.lib.impl.ConfiguratorBase;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.DefaultConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileSKVIterator;
+import org.apache.accumulo.core.file.rfile.RFileOperations;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.Mapper;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.lib.IdentityMapper;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class AccumuloFileOutputFormatIT extends AccumuloClusterHarness {
+  private static final Logger log = LoggerFactory.getLogger(AccumuloFileOutputFormatIT.class);
+  private static final int JOB_VISIBILITY_CACHE_SIZE = 3000;
+  private static final String PREFIX = AccumuloFileOutputFormatIT.class.getSimpleName();
+  private static final String BAD_TABLE = PREFIX + "_mapred_bad_table";
+  private static final String TEST_TABLE = PREFIX + "_mapred_test_table";
+  private static final String EMPTY_TABLE = PREFIX + "_mapred_empty_table";
+
+  private static AssertionError e1 = null;
+  private static AssertionError e2 = null;
+
+  private static final SamplerConfiguration SAMPLER_CONFIG = new SamplerConfiguration(RowSampler.class.getName()).addOption("hasher", "murmur3_32").addOption(
+      "modulus", "3");
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
+
+  @Test
+  public void testEmptyWrite() throws Exception {
+    getConnector().tableOperations().create(EMPTY_TABLE);
+    handleWriteTests(false);
+  }
+
+  @Test
+  public void testRealWrite() throws Exception {
+    Connector c = getConnector();
+    c.tableOperations().create(TEST_TABLE);
+    BatchWriter bw = c.createBatchWriter(TEST_TABLE, new BatchWriterConfig());
+    Mutation m = new Mutation("Key");
+    m.put("", "", "");
+    bw.addMutation(m);
+    bw.close();
+    handleWriteTests(true);
+  }
+
+  private static class MRTester extends Configured implements Tool {
+    private static class BadKeyMapper implements Mapper<Key,Value,Key,Value> {
+
+      int index = 0;
+
+      @Override
+      public void map(Key key, Value value, OutputCollector<Key,Value> output, Reporter reporter) throws IOException {
+        try {
+          try {
+            output.collect(key, value);
+            if (index == 2)
+              fail();
+          } catch (Exception e) {
+            log.error(e.toString(), e);
+            assertEquals(2, index);
+          }
+        } catch (AssertionError e) {
+          e1 = e;
+        }
+        index++;
+      }
+
+      @Override
+      public void configure(JobConf job) {}
+
+      @Override
+      public void close() throws IOException {
+        try {
+          assertEquals(2, index);
+        } catch (AssertionError e) {
+          e2 = e;
+        }
+      }
+
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+
+      if (args.length != 2) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table> <outputfile>");
+      }
+
+      String table = args[0];
+
+      JobConf job = new JobConf(getConf());
+      job.setJarByClass(this.getClass());
+      ConfiguratorBase.setVisibilityCacheSize(job, JOB_VISIBILITY_CACHE_SIZE);
+
+      job.setInputFormat(AccumuloInputFormat.class);
+
+      AccumuloInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
+      AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
+      AccumuloInputFormat.setInputTableName(job, table);
+      AccumuloFileOutputFormat.setOutputPath(job, new Path(args[1]));
+      AccumuloFileOutputFormat.setSampler(job, SAMPLER_CONFIG);
+
+      job.setMapperClass(BAD_TABLE.equals(table) ? BadKeyMapper.class : IdentityMapper.class);
+      job.setMapOutputKeyClass(Key.class);
+      job.setMapOutputValueClass(Value.class);
+      job.setOutputFormat(AccumuloFileOutputFormat.class);
+
+      job.setNumReduceTasks(0);
+
+      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
+    }
+
+    public static void main(String[] args) throws Exception {
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
+      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
+      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
+    }
+  }
+
+  private void handleWriteTests(boolean content) throws Exception {
+    File f = folder.newFile(testName.getMethodName());
+    if (f.delete()) {
+      log.debug("Deleted {}", f);
+    }
+    MRTester.main(new String[] {content ? TEST_TABLE : EMPTY_TABLE, f.getAbsolutePath()});
+
+    assertTrue(f.exists());
+    File[] files = f.listFiles(new FileFilter() {
+      @Override
+      public boolean accept(File file) {
+        return file.getName().startsWith("part-m-");
+      }
+    });
+    assertNotNull(files);
+    if (content) {
+      assertEquals(1, files.length);
+      assertTrue(files[0].exists());
+
+      Configuration conf = CachedConfiguration.getInstance();
+      DefaultConfiguration acuconf = DefaultConfiguration.getInstance();
+      FileSKVIterator sample = RFileOperations.getInstance().newReaderBuilder().forFile(files[0].toString(), FileSystem.get(conf), conf)
+          .withTableConfiguration(acuconf).build().getSample(new SamplerConfigurationImpl(SAMPLER_CONFIG));
+      assertNotNull(sample);
+    } else {
+      assertEquals(0, files.length);
+    }
+  }
+
+  @Test
+  public void writeBadVisibility() throws Exception {
+    Connector c = getConnector();
+    c.tableOperations().create(BAD_TABLE);
+    BatchWriter bw = c.createBatchWriter(BAD_TABLE, new BatchWriterConfig());
+    Mutation m = new Mutation("r1");
+    m.put("cf1", "cq1", "A&B");
+    m.put("cf1", "cq1", "A&B");
+    m.put("cf1", "cq2", "A&");
+    bw.addMutation(m);
+    bw.close();
+    File f = folder.newFile(testName.getMethodName());
+    if (f.delete()) {
+      log.debug("Deleted {}", f);
+    }
+    MRTester.main(new String[] {BAD_TABLE, f.getAbsolutePath()});
+    assertNull(e1);
+    assertNull(e2);
+  }
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloInputFormatIT.java b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloInputFormatIT.java
new file mode 100644
index 0000000..cf002dd
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloInputFormatIT.java
@@ -0,0 +1,249 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.mapred;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.admin.NewTableConfiguration;
+import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapred.RangeInputSplit;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.InputSplit;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.Mapper;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.lib.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.Level;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class AccumuloInputFormatIT extends AccumuloClusterHarness {
+
+  @BeforeClass
+  public static void setupClass() {
+    System.setProperty("hadoop.tmp.dir", System.getProperty("user.dir") + "/target/hadoop-tmp");
+  }
+
+  private static AssertionError e1 = null;
+  private static int e1Count = 0;
+  private static AssertionError e2 = null;
+  private static int e2Count = 0;
+
+  private static class MRTester extends Configured implements Tool {
+    private static class TestMapper implements Mapper<Key,Value,Key,Value> {
+      Key key = null;
+      int count = 0;
+
+      @Override
+      public void map(Key k, Value v, OutputCollector<Key,Value> output, Reporter reporter) throws IOException {
+        try {
+          if (key != null)
+            assertEquals(key.getRow().toString(), new String(v.get()));
+          assertEquals(new Text(String.format("%09x", count + 1)), k.getRow());
+          assertEquals(String.format("%09x", count), new String(v.get()));
+        } catch (AssertionError e) {
+          e1 = e;
+          e1Count++;
+        }
+        key = new Key(k);
+        count++;
+      }
+
+      @Override
+      public void configure(JobConf job) {}
+
+      @Override
+      public void close() throws IOException {
+        try {
+          assertEquals(100, count);
+        } catch (AssertionError e) {
+          e2 = e;
+          e2Count++;
+        }
+      }
+
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+
+      if (args.length != 1 && args.length != 3) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table> [<batchScan> <scan sample>]");
+      }
+
+      String table = args[0];
+      boolean batchScan = false;
+      boolean sample = false;
+      if (args.length == 3) {
+        batchScan = Boolean.parseBoolean(args[1]);
+        sample = Boolean.parseBoolean(args[2]);
+      }
+
+      JobConf job = new JobConf(getConf());
+      job.setJarByClass(this.getClass());
+
+      job.setInputFormat(AccumuloInputFormat.class);
+
+      AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
+      AccumuloInputFormat.setInputTableName(job, table);
+      AccumuloInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
+      AccumuloInputFormat.setBatchScan(job, batchScan);
+      if (sample) {
+        AccumuloInputFormat.setSamplerConfiguration(job, SAMPLER_CONFIG);
+      }
+
+      job.setMapperClass(TestMapper.class);
+      job.setMapOutputKeyClass(Key.class);
+      job.setMapOutputValueClass(Value.class);
+      job.setOutputFormat(NullOutputFormat.class);
+
+      job.setNumReduceTasks(0);
+
+      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
+    }
+
+    public static void main(String... args) throws Exception {
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
+      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
+      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
+    }
+  }
+
+  @Test
+  public void testMap() throws Exception {
+    String table = getUniqueNames(1)[0];
+    Connector c = getConnector();
+    c.tableOperations().create(table);
+    BatchWriter bw = c.createBatchWriter(table, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    e1 = null;
+    e2 = null;
+
+    MRTester.main(table);
+    assertNull(e1);
+    assertNull(e2);
+  }
+
+  private static final SamplerConfiguration SAMPLER_CONFIG = new SamplerConfiguration(RowSampler.class.getName())
+      .addOption("hasher", "murmur3_32").addOption("modulus", "3");
+
+  @Test
+  public void testSample() throws Exception {
+    final String TEST_TABLE_3 = getUniqueNames(1)[0];
+
+    Connector c = getConnector();
+    c.tableOperations().create(TEST_TABLE_3, new NewTableConfiguration().enableSampling(SAMPLER_CONFIG));
+    BatchWriter bw = c.createBatchWriter(TEST_TABLE_3, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    MRTester.main(TEST_TABLE_3, "False", "True");
+    Assert.assertEquals(38, e1Count);
+    Assert.assertEquals(1, e2Count);
+
+    e2Count = e1Count = 0;
+    MRTester.main(TEST_TABLE_3, "False", "False");
+    Assert.assertEquals(0, e1Count);
+    Assert.assertEquals(0, e2Count);
+
+    e2Count = e1Count = 0;
+    MRTester.main(TEST_TABLE_3, "True", "True");
+    Assert.assertEquals(38, e1Count);
+    Assert.assertEquals(1, e2Count);
+
+  }
+
+  @Test
+  public void testCorrectRangeInputSplits() throws Exception {
+    JobConf job = new JobConf();
+
+    String table = getUniqueNames(1)[0];
+    Authorizations auths = new Authorizations("foo");
+    Collection<Pair<Text,Text>> fetchColumns = Collections.singleton(new Pair<>(new Text("foo"), new Text("bar")));
+    boolean isolated = true, localIters = true;
+    Level level = Level.WARN;
+
+    Connector connector = getConnector();
+    connector.tableOperations().create(table);
+
+    AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
+    AccumuloInputFormat.setInputTableName(job, table);
+    AccumuloInputFormat.setScanAuthorizations(job, auths);
+    AccumuloInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
+    AccumuloInputFormat.setScanIsolation(job, isolated);
+    AccumuloInputFormat.setLocalIterators(job, localIters);
+    AccumuloInputFormat.fetchColumns(job, fetchColumns);
+    AccumuloInputFormat.setLogLevel(job, level);
+
+    AccumuloInputFormat aif = new AccumuloInputFormat();
+
+    InputSplit[] splits = aif.getSplits(job, 1);
+
+    Assert.assertEquals(1, splits.length);
+
+    InputSplit split = splits[0];
+
+    Assert.assertEquals(RangeInputSplit.class, split.getClass());
+
+    RangeInputSplit risplit = (RangeInputSplit) split;
+
+    Assert.assertEquals(getAdminPrincipal(), risplit.getPrincipal());
+    Assert.assertEquals(table, risplit.getTableName());
+    Assert.assertEquals(getAdminToken(), risplit.getToken());
+    Assert.assertEquals(auths, risplit.getAuths());
+    Assert.assertEquals(getConnector().getInstance().getInstanceName(), risplit.getInstanceName());
+    Assert.assertEquals(isolated, risplit.isIsolatedScan());
+    Assert.assertEquals(localIters, risplit.usesLocalIterators());
+    Assert.assertEquals(fetchColumns, risplit.getFetchedColumns());
+    Assert.assertEquals(level, risplit.getLogLevel());
+  }
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloMultiTableInputFormatIT.java b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloMultiTableInputFormatIT.java
new file mode 100644
index 0000000..44ef7d1
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloMultiTableInputFormatIT.java
@@ -0,0 +1,164 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.mapred;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapred.AccumuloMultiTableInputFormat;
+import org.apache.accumulo.core.client.mapred.RangeInputSplit;
+import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.Mapper;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.lib.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.Test;
+
+public class AccumuloMultiTableInputFormatIT extends AccumuloClusterHarness {
+
+  private static AssertionError e1 = null;
+  private static AssertionError e2 = null;
+
+  private static class MRTester extends Configured implements Tool {
+    private static class TestMapper implements Mapper<Key,Value,Key,Value> {
+      Key key = null;
+      int count = 0;
+
+      @Override
+      public void map(Key k, Value v, OutputCollector<Key,Value> output, Reporter reporter) throws IOException {
+        try {
+          String tableName = ((RangeInputSplit) reporter.getInputSplit()).getTableName();
+          if (key != null)
+            assertEquals(key.getRow().toString(), new String(v.get()));
+          assertEquals(new Text(String.format("%s_%09x", tableName, count + 1)), k.getRow());
+          assertEquals(String.format("%s_%09x", tableName, count), new String(v.get()));
+        } catch (AssertionError e) {
+          e1 = e;
+        }
+        key = new Key(k);
+        count++;
+      }
+
+      @Override
+      public void configure(JobConf job) {}
+
+      @Override
+      public void close() throws IOException {
+        try {
+          assertEquals(100, count);
+        } catch (AssertionError e) {
+          e2 = e;
+        }
+      }
+
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+
+      if (args.length != 2) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table1> <table2>");
+      }
+
+      String user = getAdminPrincipal();
+      AuthenticationToken pass = getAdminToken();
+      String table1 = args[0];
+      String table2 = args[1];
+
+      JobConf job = new JobConf(getConf());
+      job.setJarByClass(this.getClass());
+
+      job.setInputFormat(AccumuloMultiTableInputFormat.class);
+
+      AccumuloMultiTableInputFormat.setConnectorInfo(job, user, pass);
+      AccumuloMultiTableInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
+
+      InputTableConfig tableConfig1 = new InputTableConfig();
+      InputTableConfig tableConfig2 = new InputTableConfig();
+
+      Map<String,InputTableConfig> configMap = new HashMap<>();
+      configMap.put(table1, tableConfig1);
+      configMap.put(table2, tableConfig2);
+
+      AccumuloMultiTableInputFormat.setInputTableConfigs(job, configMap);
+
+      job.setMapperClass(TestMapper.class);
+      job.setMapOutputKeyClass(Key.class);
+      job.setMapOutputValueClass(Value.class);
+      job.setOutputFormat(NullOutputFormat.class);
+
+      job.setNumReduceTasks(0);
+
+      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
+    }
+
+    public static void main(String[] args) throws Exception {
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
+      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
+      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
+    }
+  }
+
+  @Test
+  public void testMap() throws Exception {
+    String[] tableNames = getUniqueNames(2);
+    String table1 = tableNames[0];
+    String table2 = tableNames[1];
+    Connector c = getConnector();
+    c.tableOperations().create(table1);
+    c.tableOperations().create(table2);
+    BatchWriter bw = c.createBatchWriter(table1, new BatchWriterConfig());
+    BatchWriter bw2 = c.createBatchWriter(table2, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation t1m = new Mutation(new Text(String.format("%s_%09x", table1, i + 1)));
+      t1m.put(new Text(), new Text(), new Value(String.format("%s_%09x", table1, i).getBytes()));
+      bw.addMutation(t1m);
+      Mutation t2m = new Mutation(new Text(String.format("%s_%09x", table2, i + 1)));
+      t2m.put(new Text(), new Text(), new Value(String.format("%s_%09x", table2, i).getBytes()));
+      bw2.addMutation(t2m);
+    }
+    bw.close();
+    bw2.close();
+
+    MRTester.main(new String[] {table1, table2});
+    assertNull(e1);
+    assertNull(e2);
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloOutputFormatIT.java b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloOutputFormatIT.java
new file mode 100644
index 0000000..049a5da
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloOutputFormatIT.java
@@ -0,0 +1,228 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.mapred;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.ClientConfiguration;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.MutationsRejectedException;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapred.AccumuloOutputFormat;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.Mapper;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.RecordWriter;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.Test;
+
+public class AccumuloOutputFormatIT extends ConfigurableMacBase {
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setProperty(Property.TSERV_SESSION_MAXIDLE, "1");
+    cfg.setNumTservers(1);
+  }
+
+  // Prevent regression of ACCUMULO-3709.
+  @Test
+  public void testMapred() throws Exception {
+    Connector connector = getConnector();
+    // create a table and put some data in it
+    connector.tableOperations().create(testName.getMethodName());
+
+    JobConf job = new JobConf();
+    BatchWriterConfig batchConfig = new BatchWriterConfig();
+    // a max latency of 0 disables time-based flushes
+    batchConfig.setMaxLatency(0, TimeUnit.MILLISECONDS);
+    // use a single thread to ensure our update session times out
+    batchConfig.setMaxWriteThreads(1);
+    // set max memory high enough that the writes never trigger a memory-based flush
+    batchConfig.setMaxMemory(Long.MAX_VALUE);
+    AccumuloOutputFormat outputFormat = new AccumuloOutputFormat();
+    AccumuloOutputFormat.setBatchWriterOptions(job, batchConfig);
+    AccumuloOutputFormat.setZooKeeperInstance(job, cluster.getClientConfig());
+    AccumuloOutputFormat.setConnectorInfo(job, "root", new PasswordToken(ROOT_PASSWORD));
+    RecordWriter<Text,Mutation> writer = outputFormat.getRecordWriter(null, job, "Test", null);
+
+    try {
+      for (int i = 0; i < 3; i++) {
+        Mutation m = new Mutation(new Text(String.format("%08d", i)));
+        for (int j = 0; j < 3; j++) {
+          m.put(new Text("cf1"), new Text("cq" + j), new Value((i + "_" + j).getBytes(UTF_8)));
+        }
+        writer.write(new Text(testName.getMethodName()), m);
+      }
+
+    } catch (Exception e) {
+      // the write itself should not throw; the expected failure surfaces at close()
+      e.printStackTrace();
+    }
+
+    connector.securityOperations().revokeTablePermission("root", testName.getMethodName(), TablePermission.WRITE);
+
+    try {
+      writer.close(null);
+      fail("Did not throw exception");
+    } catch (IOException ex) {
+      log.info(ex.getMessage(), ex);
+      assertTrue(ex.getCause() instanceof MutationsRejectedException);
+    }
+  }
+
+  private static AssertionError e1 = null;
+
+  private static class MRTester extends Configured implements Tool {
+    private static class TestMapper implements Mapper<Key,Value,Text,Mutation> {
+      Key key = null;
+      int count = 0;
+      OutputCollector<Text,Mutation> finalOutput;
+
+      @Override
+      public void map(Key k, Value v, OutputCollector<Text,Mutation> output, Reporter reporter) throws IOException {
+        finalOutput = output;
+        try {
+          if (key != null)
+            assertEquals(key.getRow().toString(), new String(v.get(), UTF_8));
+          assertEquals(new Text(String.format("%09x", count + 1)), k.getRow());
+          assertEquals(String.format("%09x", count), new String(v.get(), UTF_8));
+        } catch (AssertionError e) {
+          e1 = e;
+        }
+        key = new Key(k);
+        count++;
+      }
+
+      @Override
+      public void configure(JobConf job) {}
+
+      @Override
+      public void close() throws IOException {
+        Mutation m = new Mutation("total");
+        m.put("", "", Integer.toString(count));
+        finalOutput.collect(new Text(), m);
+      }
+
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+
+      if (args.length != 6) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <inputtable> <outputtable> <instanceName> <zooKeepers>");
+      }
+
+      String user = args[0];
+      String pass = args[1];
+      String table1 = args[2];
+      String table2 = args[3];
+      String instanceName = args[4];
+      String zooKeepers = args[5];
+
+      JobConf job = new JobConf(getConf());
+      job.setJarByClass(this.getClass());
+
+      job.setInputFormat(AccumuloInputFormat.class);
+
+      ClientConfiguration clientConfig = new ClientConfiguration().withInstance(instanceName).withZkHosts(zooKeepers);
+
+      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
+      AccumuloInputFormat.setInputTableName(job, table1);
+      AccumuloInputFormat.setZooKeeperInstance(job, clientConfig);
+
+      job.setMapperClass(TestMapper.class);
+      job.setMapOutputKeyClass(Key.class);
+      job.setMapOutputValueClass(Value.class);
+      job.setOutputFormat(AccumuloOutputFormat.class);
+      job.setOutputKeyClass(Text.class);
+      job.setOutputValueClass(Mutation.class);
+
+      AccumuloOutputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
+      AccumuloOutputFormat.setCreateTables(job, false);
+      AccumuloOutputFormat.setDefaultTableName(job, table2);
+      AccumuloOutputFormat.setZooKeeperInstance(job, clientConfig);
+
+      job.setNumReduceTasks(0);
+
+      return JobClient.runJob(job).isSuccessful() ? 0 : 1;
+    }
+
+    public static void main(String[] args) throws Exception {
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
+      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
+      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
+    }
+  }
+
+  @Test
+  public void testMR() throws Exception {
+    Connector c = getConnector();
+    String instanceName = getCluster().getInstanceName();
+    String table1 = instanceName + "_t1";
+    String table2 = instanceName + "_t2";
+    c.tableOperations().create(table1);
+    c.tableOperations().create(table2);
+    BatchWriter bw = c.createBatchWriter(table1, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes(UTF_8)));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    MRTester.main(new String[] {"root", ROOT_PASSWORD, table1, table2, instanceName, getCluster().getZooKeepers()});
+    assertNull(e1);
+
+    Scanner scanner = c.createScanner(table2, new Authorizations());
+    Iterator<Entry<Key,Value>> iter = scanner.iterator();
+    assertTrue(iter.hasNext());
+    Entry<Key,Value> entry = iter.next();
+    assertEquals(100, Integer.parseInt(new String(entry.getValue().get(), UTF_8)));
+    assertFalse(iter.hasNext());
+  }
+
+}
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloRowInputFormatTest.java b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloRowInputFormatIT.java
similarity index 73%
rename from core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloRowInputFormatTest.java
rename to test/src/main/java/org/apache/accumulo/test/mapred/AccumuloRowInputFormatIT.java
index 4a52c19..e81741a 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/AccumuloRowInputFormatTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/mapred/AccumuloRowInputFormatIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client.mapred;
+package org.apache.accumulo.test.mapred;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
@@ -31,14 +31,16 @@
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.MutationsRejectedException;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapred.AccumuloRowInputFormat;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.KeyValue;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.util.PeekingIterator;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
@@ -50,43 +52,34 @@
 import org.apache.hadoop.mapred.lib.NullOutputFormat;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
-public class AccumuloRowInputFormatTest {
-  private static final String PREFIX = AccumuloRowInputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapred_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapred_table_1";
+public class AccumuloRowInputFormatIT extends AccumuloClusterHarness {
 
   private static final String ROW1 = "row1";
   private static final String ROW2 = "row2";
   private static final String ROW3 = "row3";
   private static final String COLF1 = "colf1";
-  private static final List<Entry<Key,Value>> row1;
-  private static final List<Entry<Key,Value>> row2;
-  private static final List<Entry<Key,Value>> row3;
+  private static List<Entry<Key,Value>> row1;
+  private static List<Entry<Key,Value>> row2;
+  private static List<Entry<Key,Value>> row3;
   private static AssertionError e1 = null;
   private static AssertionError e2 = null;
 
-  static {
-    row1 = new ArrayList<Entry<Key,Value>>();
+  @BeforeClass
+  public static void prepareRows() {
+    row1 = new ArrayList<>();
     row1.add(new KeyValue(new Key(ROW1, COLF1, "colq1"), "v1".getBytes()));
     row1.add(new KeyValue(new Key(ROW1, COLF1, "colq2"), "v2".getBytes()));
     row1.add(new KeyValue(new Key(ROW1, "colf2", "colq3"), "v3".getBytes()));
-    row2 = new ArrayList<Entry<Key,Value>>();
+    row2 = new ArrayList<>();
     row2.add(new KeyValue(new Key(ROW2, COLF1, "colq4"), "v4".getBytes()));
-    row3 = new ArrayList<Entry<Key,Value>>();
+    row3 = new ArrayList<>();
     row3.add(new KeyValue(new Key(ROW3, COLF1, "colq5"), "v5".getBytes()));
   }
 
-  public static void checkLists(final List<Entry<Key,Value>> first, final List<Entry<Key,Value>> second) {
-    assertEquals("Sizes should be the same.", first.size(), second.size());
-    for (int i = 0; i < first.size(); i++) {
-      assertEquals("Keys should be equal.", first.get(i).getKey(), second.get(i).getKey());
-      assertEquals("Values should be equal.", first.get(i).getValue(), second.get(i).getValue());
-    }
-  }
-
-  public static void checkLists(final List<Entry<Key,Value>> first, final Iterator<Entry<Key,Value>> second) {
+  private static void checkLists(final List<Entry<Key,Value>> first, final Iterator<Entry<Key,Value>> second) {
     int entryIndex = 0;
     while (second.hasNext()) {
       final Entry<Key,Value> entry = second.next();
@@ -96,7 +89,7 @@
     }
   }
 
-  public static void insertList(final BatchWriter writer, final List<Entry<Key,Value>> list) throws MutationsRejectedException {
+  private static void insertList(final BatchWriter writer, final List<Entry<Key,Value>> list) throws MutationsRejectedException {
     for (Entry<Key,Value> e : list) {
       final Key key = e.getKey();
       final Mutation mutation = new Mutation(key.getRow());
@@ -152,22 +145,22 @@
     @Override
     public int run(String[] args) throws Exception {
 
-      if (args.length != 3) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table>");
+      if (args.length != 1) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table>");
       }
 
-      String user = args[0];
-      String pass = args[1];
-      String table = args[2];
+      String user = getAdminPrincipal();
+      AuthenticationToken pass = getAdminToken();
+      String table = args[0];
 
       JobConf job = new JobConf(getConf());
       job.setJarByClass(this.getClass());
 
       job.setInputFormat(AccumuloRowInputFormat.class);
 
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
+      AccumuloInputFormat.setConnectorInfo(job, user, pass);
       AccumuloInputFormat.setInputTableName(job, table);
-      AccumuloRowInputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloRowInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setMapperClass(TestMapper.class);
       job.setMapOutputKeyClass(Key.class);
@@ -181,6 +174,7 @@
 
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
       conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
       assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
     }
@@ -188,12 +182,12 @@
 
   @Test
   public void test() throws Exception {
-    final MockInstance instance = new MockInstance(INSTANCE_NAME);
-    final Connector conn = instance.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create(TEST_TABLE_1);
+    final Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
     BatchWriter writer = null;
     try {
-      writer = conn.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
+      writer = conn.createBatchWriter(tableName, new BatchWriterConfig());
       insertList(writer, row1);
       insertList(writer, row2);
       insertList(writer, row3);
@@ -202,7 +196,7 @@
         writer.close();
       }
     }
-    MRTester.main(new String[] {"root", "", TEST_TABLE_1});
+    MRTester.main(new String[] {tableName});
     assertNull(e1);
     assertNull(e2);
   }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapred/TokenFileTest.java b/test/src/main/java/org/apache/accumulo/test/mapred/TokenFileIT.java
similarity index 77%
rename from core/src/test/java/org/apache/accumulo/core/client/mapred/TokenFileTest.java
rename to test/src/main/java/org/apache/accumulo/test/mapred/TokenFileIT.java
index f025783..78fc76d 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapred/TokenFileTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/mapred/TokenFileIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client.mapred;
+package org.apache.accumulo.test.mapred;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -32,13 +32,14 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapred.AccumuloOutputFormat;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
@@ -53,15 +54,8 @@
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
 
-/**
- *
- */
-public class TokenFileTest {
+public class TokenFileIT extends AccumuloClusterHarness {
   private static AssertionError e1 = null;
-  private static final String PREFIX = TokenFileTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapred_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapred_table_1";
-  private static final String TEST_TABLE_2 = PREFIX + "_mapred_table_2";
 
   private static class MRTokenFileTester extends Configured implements Tool {
     private static class TestMapper implements Mapper<Key,Value,Text,Mutation> {
@@ -99,14 +93,14 @@
     @Override
     public int run(String[] args) throws Exception {
 
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTokenFileTester.class.getName() + " <user> <token file> <inputtable> <outputtable>");
+      if (args.length != 3) {
+        throw new IllegalArgumentException("Usage : " + MRTokenFileTester.class.getName() + " <token file> <inputtable> <outputtable>");
       }
 
-      String user = args[0];
-      String tokenFile = args[1];
-      String table1 = args[2];
-      String table2 = args[3];
+      String user = getAdminPrincipal();
+      String tokenFile = args[0];
+      String table1 = args[1];
+      String table2 = args[2];
 
       JobConf job = new JobConf(getConf());
       job.setJarByClass(this.getClass());
@@ -115,7 +109,7 @@
 
       AccumuloInputFormat.setConnectorInfo(job, user, tokenFile);
       AccumuloInputFormat.setInputTableName(job, table1);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setMapperClass(TestMapper.class);
       job.setMapOutputKeyClass(Key.class);
@@ -127,7 +121,7 @@
       AccumuloOutputFormat.setConnectorInfo(job, user, tokenFile);
       AccumuloOutputFormat.setCreateTables(job, false);
       AccumuloOutputFormat.setDefaultTableName(job, table2);
-      AccumuloOutputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloOutputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setNumReduceTasks(0);
 
@@ -136,7 +130,8 @@
 
     public static void main(String[] args) throws Exception {
       Configuration conf = CachedConfiguration.getInstance();
-      conf.set("hadoop.tmp.dir", new File(args[1]).getParent());
+      conf.set("hadoop.tmp.dir", new File(args[0]).getParent());
+      conf.set("mapreduce.framework.name", "local");
       conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
       assertEquals(0, ToolRunner.run(conf, new MRTokenFileTester(), args));
     }
@@ -147,11 +142,13 @@
 
   @Test
   public void testMR() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
+    String[] tableNames = getUniqueNames(2);
+    String table1 = tableNames[0];
+    String table2 = tableNames[1];
+    Connector c = getConnector();
+    c.tableOperations().create(table1);
+    c.tableOperations().create(table2);
+    BatchWriter bw = c.createBatchWriter(table1, new BatchWriterConfig());
     for (int i = 0; i < 100; i++) {
       Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
       m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
@@ -161,14 +158,14 @@
 
     File tf = folder.newFile("root_test.pw");
     PrintStream out = new PrintStream(tf);
-    String outString = new Credentials("root", new PasswordToken("")).serialize();
+    String outString = new Credentials(getAdminPrincipal(), getAdminToken()).serialize();
     out.println(outString);
     out.close();
 
-    MRTokenFileTester.main(new String[] {"root", tf.getAbsolutePath(), TEST_TABLE_1, TEST_TABLE_2});
+    MRTokenFileTester.main(new String[] {tf.getAbsolutePath(), table1, table2});
     assertNull(e1);
 
-    Scanner scanner = c.createScanner(TEST_TABLE_2, new Authorizations());
+    Scanner scanner = c.createScanner(table2, new Authorizations());
     Iterator<Entry<Key,Value>> iter = scanner.iterator();
     assertTrue(iter.hasNext());
     Entry<Key,Value> entry = iter.next();
diff --git a/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloFileOutputFormatIT.java b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloFileOutputFormatIT.java
new file mode 100644
index 0000000..e160077
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloFileOutputFormatIT.java
@@ -0,0 +1,230 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.mapreduce;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.FileFilter;
+import java.io.IOException;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.mapreduce.AccumuloFileOutputFormat;
+import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.conf.DefaultConfiguration;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileSKVIterator;
+import org.apache.accumulo.core.file.rfile.RFileOperations;
+import org.apache.accumulo.core.sample.impl.SamplerConfigurationImpl;
+import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Multimap;
+
+public class AccumuloFileOutputFormatIT extends AccumuloClusterHarness {
+
+  private String PREFIX;
+  private String BAD_TABLE;
+  private String TEST_TABLE;
+  private String EMPTY_TABLE;
+
+  private static final SamplerConfiguration SAMPLER_CONFIG = new SamplerConfiguration(RowSampler.class.getName()).addOption("hasher", "murmur3_32").addOption(
+      "modulus", "3");
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 4 * 60;
+  }
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
+
+  @Before
+  public void setup() throws Exception {
+    PREFIX = testName.getMethodName() + "_";
+    BAD_TABLE = PREFIX + "mapreduce_bad_table";
+    TEST_TABLE = PREFIX + "mapreduce_test_table";
+    EMPTY_TABLE = PREFIX + "mapreduce_empty_table";
+
+    Connector c = getConnector();
+    c.tableOperations().create(EMPTY_TABLE);
+    c.tableOperations().create(TEST_TABLE);
+    c.tableOperations().create(BAD_TABLE);
+    BatchWriter bw = c.createBatchWriter(TEST_TABLE, new BatchWriterConfig());
+    Mutation m = new Mutation("Key");
+    m.put("", "", "");
+    bw.addMutation(m);
+    bw.close();
+    bw = c.createBatchWriter(BAD_TABLE, new BatchWriterConfig());
+    m = new Mutation("r1");
+    m.put("cf1", "cq1", "A&B");
+    m.put("cf1", "cq1", "A&B");
+    m.put("cf1", "cq2", "A&");
+    bw.addMutation(m);
+    bw.close();
+  }
+
+  @Test
+  public void testEmptyWrite() throws Exception {
+    handleWriteTests(false);
+  }
+
+  @Test
+  public void testRealWrite() throws Exception {
+    handleWriteTests(true);
+  }
+
+  private static class MRTester extends Configured implements Tool {
+    private static class BadKeyMapper extends Mapper<Key,Value,Key,Value> {
+      int index = 0;
+
+      @Override
+      protected void map(Key key, Value value, Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+        try {
+          try {
+            context.write(key, value);
+            if (index == 2)
+              assertTrue(false);
+          } catch (Exception e) {
+            assertEquals(2, index);
+          }
+        } catch (AssertionError e) {
+          assertionErrors.put(table + "_map", e);
+        }
+        index++;
+      }
+
+      @Override
+      protected void cleanup(Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+        try {
+          assertEquals(2, index);
+        } catch (AssertionError e) {
+          assertionErrors.put(table + "_cleanup", e);
+        }
+      }
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+
+      if (args.length != 2) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table> <outputfile>");
+      }
+
+      String table = args[0];
+      assertionErrors.put(table + "_map", new AssertionError("Dummy_map"));
+      assertionErrors.put(table + "_cleanup", new AssertionError("Dummy_cleanup"));
+
+      Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
+      job.setJarByClass(this.getClass());
+
+      job.setInputFormatClass(AccumuloInputFormat.class);
+
+      AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
+      AccumuloInputFormat.setInputTableName(job, table);
+      AccumuloInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
+      AccumuloFileOutputFormat.setOutputPath(job, new Path(args[1]));
+      AccumuloFileOutputFormat.setSampler(job, SAMPLER_CONFIG);
+
+      job.setMapperClass(table.endsWith("_mapreduce_bad_table") ? BadKeyMapper.class : Mapper.class);
+      job.setMapOutputKeyClass(Key.class);
+      job.setMapOutputValueClass(Value.class);
+      job.setOutputFormatClass(AccumuloFileOutputFormat.class);
+      job.getConfiguration().set("MRTester_tableName", table);
+
+      job.setNumReduceTasks(0);
+
+      job.waitForCompletion(true);
+
+      return job.isSuccessful() ? 0 : 1;
+    }
+
+    public static void main(String[] args) throws Exception {
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
+      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
+      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
+    }
+  }
+
+  private void handleWriteTests(boolean content) throws Exception {
+    File f = folder.newFile(testName.getMethodName());
+    assertTrue(f.delete());
+    MRTester.main(new String[] {content ? TEST_TABLE : EMPTY_TABLE, f.getAbsolutePath()});
+
+    assertTrue(f.exists());
+    File[] files = f.listFiles(new FileFilter() {
+      @Override
+      public boolean accept(File file) {
+        return file.getName().startsWith("part-m-");
+      }
+    });
+    assertNotNull(files);
+    if (content) {
+      assertEquals(1, files.length);
+      assertTrue(files[0].exists());
+
+      Configuration conf = CachedConfiguration.getInstance();
+      DefaultConfiguration acuconf = DefaultConfiguration.getInstance();
+      FileSKVIterator sample = RFileOperations.getInstance().newReaderBuilder().forFile(files[0].toString(), FileSystem.get(conf), conf)
+          .withTableConfiguration(acuconf).build().getSample(new SamplerConfigurationImpl(SAMPLER_CONFIG));
+      assertNotNull(sample);
+    } else {
+      assertEquals(0, files.length);
+    }
+  }
+
+  // track errors in the map reduce job; jobs insert a dummy error for the map and cleanup tasks (to ensure test correctness),
+  // so error tests should check to see if there is at least one error (could be more depending on the test) rather than zero
+  private static Multimap<String,AssertionError> assertionErrors = ArrayListMultimap.create();
+
+  @Test
+  public void writeBadVisibility() throws Exception {
+    File f = folder.newFile(testName.getMethodName());
+    assertTrue(f.delete());
+    MRTester.main(new String[] {BAD_TABLE, f.getAbsolutePath()});
+    assertTrue(f.exists());
+    assertEquals(1, assertionErrors.get(BAD_TABLE + "_map").size());
+    assertEquals(1, assertionErrors.get(BAD_TABLE + "_cleanup").size());
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloInputFormatIT.java b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloInputFormatIT.java
new file mode 100644
index 0000000..a581099
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloInputFormatIT.java
@@ -0,0 +1,521 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.mapreduce;
+
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+import static java.lang.System.currentTimeMillis;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.fail;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.TreeSet;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.accumulo.core.client.AccumuloException;
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.ClientConfiguration;
+import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.TableNotFoundException;
+import org.apache.accumulo.core.client.admin.NewTableConfiguration;
+import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapreduce.RangeInputSplit;
+import org.apache.accumulo.core.client.mapreduce.impl.BatchInputSplit;
+import org.apache.accumulo.core.client.sample.RowSampler;
+import org.apache.accumulo.core.client.sample.SamplerConfiguration;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.ConfigurationCopy;
+import org.apache.accumulo.core.conf.DefaultConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.util.Pair;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.apache.log4j.Level;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Multimap;
+
+public class AccumuloInputFormatIT extends AccumuloClusterHarness {
+
+  AccumuloInputFormat inputFormat;
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 4 * 60;
+  }
+
+  @Override
+  public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setNumTservers(1);
+  }
+
+  @Before
+  public void before() {
+    inputFormat = new AccumuloInputFormat();
+  }
+
+  /**
+   * Tests several different paths through the getSplits() method by setting different properties and verifying the results.
+   */
+  @Test
+  public void testGetSplits() throws Exception {
+    Connector conn = getConnector();
+    String table = getUniqueNames(1)[0];
+    conn.tableOperations().create(table);
+    insertData(table, currentTimeMillis());
+
+    ClientConfiguration clientConf = cluster.getClientConfig();
+    AccumuloConfiguration clusterClientConf = new ConfigurationCopy(new DefaultConfiguration());
+
+    // Pass SSL and CredentialProvider options into the ClientConfiguration given to AccumuloInputFormat
+    boolean sslEnabled = Boolean.valueOf(clusterClientConf.get(Property.INSTANCE_RPC_SSL_ENABLED));
+    if (sslEnabled) {
+      ClientProperty[] sslProperties = new ClientProperty[] {ClientProperty.INSTANCE_RPC_SSL_ENABLED, ClientProperty.INSTANCE_RPC_SSL_CLIENT_AUTH,
+          ClientProperty.RPC_SSL_KEYSTORE_PATH, ClientProperty.RPC_SSL_KEYSTORE_TYPE, ClientProperty.RPC_SSL_KEYSTORE_PASSWORD,
+          ClientProperty.RPC_SSL_TRUSTSTORE_PATH, ClientProperty.RPC_SSL_TRUSTSTORE_TYPE, ClientProperty.RPC_SSL_TRUSTSTORE_PASSWORD,
+          ClientProperty.RPC_USE_JSSE, ClientProperty.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS};
+
+      for (ClientProperty prop : sslProperties) {
+        // The default property is returned if it's not in the ClientConfiguration so we don't have to check if the value is actually defined
+        clientConf.setProperty(prop, clusterClientConf.get(prop.getKey()));
+      }
+    }
+
+    Job job = Job.getInstance();
+    AccumuloInputFormat.setInputTableName(job, table);
+    AccumuloInputFormat.setZooKeeperInstance(job, clientConf);
+    AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
+
+    // split table
+    TreeSet<Text> splitsToAdd = new TreeSet<>();
+    for (int i = 0; i < 10000; i += 1000)
+      splitsToAdd.add(new Text(String.format("%09d", i)));
+    conn.tableOperations().addSplits(table, splitsToAdd);
+    sleepUninterruptibly(500, TimeUnit.MILLISECONDS); // wait for splits to be propagated
+
+    // get splits without setting any range
+    Collection<Text> actualSplits = conn.tableOperations().listSplits(table);
+    List<InputSplit> splits = inputFormat.getSplits(job);
+    assertEquals(actualSplits.size() + 1, splits.size()); // No ranges set on the job so it'll start with -inf
+
+    // set ranges and get splits
+    List<Range> ranges = new ArrayList<>();
+    for (Text text : actualSplits)
+      ranges.add(new Range(text));
+    AccumuloInputFormat.setRanges(job, ranges);
+    splits = inputFormat.getSplits(job);
+    assertEquals(actualSplits.size(), splits.size());
+
+    // offline mode
+    AccumuloInputFormat.setOfflineTableScan(job, true);
+    try {
+      inputFormat.getSplits(job);
+      fail("An exception should have been thrown");
+    } catch (IOException e) {}
+
+    conn.tableOperations().offline(table, true);
+    splits = inputFormat.getSplits(job);
+    assertEquals(actualSplits.size(), splits.size());
+
+    // auto adjust ranges
+    ranges = new ArrayList<>();
+    for (int i = 0; i < 5; i++)
+      // overlapping ranges
+      ranges.add(new Range(String.format("%09d", i), String.format("%09d", i + 2)));
+    AccumuloInputFormat.setRanges(job, ranges);
+    splits = inputFormat.getSplits(job);
+    assertEquals(2, splits.size());
+
+    AccumuloInputFormat.setAutoAdjustRanges(job, false);
+    splits = inputFormat.getSplits(job);
+    assertEquals(ranges.size(), splits.size());
+
+    // BatchScan not available for offline scans
+    AccumuloInputFormat.setBatchScan(job, true);
+    // Reset auto-adjust ranges too
+    AccumuloInputFormat.setAutoAdjustRanges(job, true);
+
+    AccumuloInputFormat.setOfflineTableScan(job, true);
+    try {
+      inputFormat.getSplits(job);
+      fail("An exception should have been thrown");
+    } catch (IllegalArgumentException e) {}
+
+    conn.tableOperations().online(table, true);
+    AccumuloInputFormat.setOfflineTableScan(job, false);
+
+    // test for resumption of success
+    splits = inputFormat.getSplits(job);
+    assertEquals(2, splits.size());
+
+    // BatchScan not available with isolated iterators
+    AccumuloInputFormat.setScanIsolation(job, true);
+    try {
+      inputFormat.getSplits(job);
+      fail("An exception should have been thrown");
+    } catch (IllegalArgumentException e) {}
+    AccumuloInputFormat.setScanIsolation(job, false);
+
+    // test for resumption of success
+    splits = inputFormat.getSplits(job);
+    assertEquals(2, splits.size());
+
+    // BatchScan not available with local iterators
+    AccumuloInputFormat.setLocalIterators(job, true);
+    try {
+      inputFormat.getSplits(job);
+      fail("An exception should have been thrown");
+    } catch (IllegalArgumentException e) {}
+    AccumuloInputFormat.setLocalIterators(job, false);
+
+    // Check we are getting back the correct type of split
+    conn.tableOperations().online(table);
+    splits = inputFormat.getSplits(job);
+    for (InputSplit split : splits)
+      Assert.assertTrue(split instanceof BatchInputSplit);
+
+    // We should divide along the tablet lines similar to when using `setAutoAdjustRanges(job, true)`
+    assertEquals(2, splits.size());
+  }
+
+  private void insertData(String tableName, long ts) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
+    BatchWriter bw = getConnector().createBatchWriter(tableName, null);
+
+    for (int i = 0; i < 10000; i++) {
+      String row = String.format("%09d", i);
+
+      Mutation m = new Mutation(new Text(row));
+      m.put(new Text("cf1"), new Text("cq1"), ts, new Value(("" + i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+  }
+
+  // track errors in the map reduce job; jobs insert a dummy error for the map and cleanup tasks (to ensure test correctness),
+  // so error tests should check to see if there is at least one error (could be more depending on the test) rather than zero
+  private static Multimap<String,AssertionError> assertionErrors = ArrayListMultimap.create();
+
+  private static class MRTester extends Configured implements Tool {
+    private static class TestMapper extends Mapper<Key,Value,Key,Value> {
+      Key key = null;
+      int count = 0;
+
+      @Override
+      protected void map(Key k, Value v, Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+        try {
+          if (key != null)
+            assertEquals(key.getRow().toString(), new String(v.get()));
+          assertEquals(k.getRow(), new Text(String.format("%09x", count + 1)));
+          assertEquals(new String(v.get()), String.format("%09x", count));
+        } catch (AssertionError e) {
+          assertionErrors.put(table + "_map", e);
+        }
+        key = new Key(k);
+        count++;
+      }
+
+      @Override
+      protected void cleanup(Context context) throws IOException, InterruptedException {
+        String table = context.getConfiguration().get("MRTester_tableName");
+        assertNotNull(table);
+        try {
+          assertEquals(100, count);
+        } catch (AssertionError e) {
+          assertionErrors.put(table + "_cleanup", e);
+        }
+      }
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+
+      if (args.length != 2 && args.length != 4) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table> <inputFormatClass> [<batchScan> <scan sample>]");
+      }
+
+      String table = args[0];
+      String inputFormatClassName = args[1];
+      boolean batchScan = false;
+      boolean sample = false;
+      if (args.length == 4) {
+        batchScan = Boolean.parseBoolean(args[2]);
+        sample = Boolean.parseBoolean(args[3]);
+      }
+
+      assertionErrors.put(table + "_map", new AssertionError("Dummy_map"));
+      assertionErrors.put(table + "_cleanup", new AssertionError("Dummy_cleanup"));
+
+      @SuppressWarnings("unchecked")
+      Class<? extends InputFormat<?,?>> inputFormatClass = (Class<? extends InputFormat<?,?>>) Class.forName(inputFormatClassName);
+
+      Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
+      job.setJarByClass(this.getClass());
+      job.getConfiguration().set("MRTester_tableName", table);
+
+      job.setInputFormatClass(inputFormatClass);
+
+      AccumuloInputFormat.setZooKeeperInstance(job, cluster.getClientConfig());
+      AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
+      AccumuloInputFormat.setInputTableName(job, table);
+      AccumuloInputFormat.setBatchScan(job, batchScan);
+      if (sample) {
+        AccumuloInputFormat.setSamplerConfiguration(job, SAMPLER_CONFIG);
+      }
+
+      job.setMapperClass(TestMapper.class);
+      job.setMapOutputKeyClass(Key.class);
+      job.setMapOutputValueClass(Value.class);
+      job.setOutputFormatClass(NullOutputFormat.class);
+
+      job.setNumReduceTasks(0);
+
+      job.waitForCompletion(true);
+
+      return job.isSuccessful() ? 0 : 1;
+    }
+
+    public static int main(String[] args) throws Exception {
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
+      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
+      return ToolRunner.run(conf, new MRTester(), args);
+    }
+  }
+
+  @Test
+  public void testMap() throws Exception {
+    final String TEST_TABLE_1 = getUniqueNames(1)[0];
+
+    Connector c = getConnector();
+    c.tableOperations().create(TEST_TABLE_1);
+    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    Assert.assertEquals(0, MRTester.main(new String[] {TEST_TABLE_1, AccumuloInputFormat.class.getName()}));
+    assertEquals(1, assertionErrors.get(TEST_TABLE_1 + "_map").size());
+    assertEquals(1, assertionErrors.get(TEST_TABLE_1 + "_cleanup").size());
+  }
+
+  private static final SamplerConfiguration SAMPLER_CONFIG = new SamplerConfiguration(RowSampler.class.getName())
+      .addOption("hasher", "murmur3_32").addOption("modulus", "3");
+
+  @Test
+  public void testSample() throws Exception {
+    final String TEST_TABLE_3 = getUniqueNames(1)[0];
+
+    Connector c = getConnector();
+    c.tableOperations().create(TEST_TABLE_3, new NewTableConfiguration().enableSampling(SAMPLER_CONFIG));
+    BatchWriter bw = c.createBatchWriter(TEST_TABLE_3, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    Assert.assertEquals(0, MRTester.main(new String[] {TEST_TABLE_3, AccumuloInputFormat.class.getName(), "False", "True"}));
+    assertEquals(39, assertionErrors.get(TEST_TABLE_3 + "_map").size());
+    assertEquals(2, assertionErrors.get(TEST_TABLE_3 + "_cleanup").size());
+
+    assertionErrors.clear();
+    Assert.assertEquals(0, MRTester.main(new String[] {TEST_TABLE_3, AccumuloInputFormat.class.getName(), "False", "False"}));
+    assertEquals(1, assertionErrors.get(TEST_TABLE_3 + "_map").size());
+    assertEquals(1, assertionErrors.get(TEST_TABLE_3 + "_cleanup").size());
+
+    assertionErrors.clear();
+    Assert.assertEquals(0, MRTester.main(new String[] {TEST_TABLE_3, AccumuloInputFormat.class.getName(), "True", "True"}));
+    assertEquals(39, assertionErrors.get(TEST_TABLE_3 + "_map").size());
+    assertEquals(2, assertionErrors.get(TEST_TABLE_3 + "_cleanup").size());
+  }
+
+  @Test
+  public void testMapWithBatchScanner() throws Exception {
+    final String TEST_TABLE_2 = getUniqueNames(1)[0];
+
+    Connector c = getConnector();
+    c.tableOperations().create(TEST_TABLE_2);
+    BatchWriter bw = c.createBatchWriter(TEST_TABLE_2, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    Assert.assertEquals(0, MRTester.main(new String[] {TEST_TABLE_2, AccumuloInputFormat.class.getName(), "True", "False"}));
+    assertEquals(1, assertionErrors.get(TEST_TABLE_2 + "_map").size());
+    assertEquals(1, assertionErrors.get(TEST_TABLE_2 + "_cleanup").size());
+  }
+
+  @Test
+  public void testCorrectRangeInputSplits() throws Exception {
+    Job job = Job.getInstance();
+
+    String table = getUniqueNames(1)[0];
+    Authorizations auths = new Authorizations("foo");
+    Collection<Pair<Text,Text>> fetchColumns = Collections.singleton(new Pair<>(new Text("foo"), new Text("bar")));
+    boolean isolated = true, localIters = true;
+    Level level = Level.WARN;
+
+    Connector connector = getConnector();
+    connector.tableOperations().create(table);
+
+    AccumuloInputFormat.setZooKeeperInstance(job, cluster.getClientConfig());
+    AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
+
+    AccumuloInputFormat.setInputTableName(job, table);
+    AccumuloInputFormat.setScanAuthorizations(job, auths);
+    AccumuloInputFormat.setScanIsolation(job, isolated);
+    AccumuloInputFormat.setLocalIterators(job, localIters);
+    AccumuloInputFormat.fetchColumns(job, fetchColumns);
+    AccumuloInputFormat.setLogLevel(job, level);
+
+    AccumuloInputFormat aif = new AccumuloInputFormat();
+
+    List<InputSplit> splits = aif.getSplits(job);
+
+    Assert.assertEquals(1, splits.size());
+
+    InputSplit split = splits.get(0);
+
+    Assert.assertEquals(RangeInputSplit.class, split.getClass());
+
+    RangeInputSplit risplit = (RangeInputSplit) split;
+
+    Assert.assertEquals(getAdminPrincipal(), risplit.getPrincipal());
+    Assert.assertEquals(table, risplit.getTableName());
+    Assert.assertEquals(getAdminToken(), risplit.getToken());
+    Assert.assertEquals(auths, risplit.getAuths());
+    Assert.assertEquals(getConnector().getInstance().getInstanceName(), risplit.getInstanceName());
+    Assert.assertEquals(isolated, risplit.isIsolatedScan());
+    Assert.assertEquals(localIters, risplit.usesLocalIterators());
+    Assert.assertEquals(fetchColumns, risplit.getFetchedColumns());
+    Assert.assertEquals(level, risplit.getLogLevel());
+  }
+
+  @Test
+  public void testPartialInputSplitDelegationToConfiguration() throws Exception {
+    String table = getUniqueNames(1)[0];
+    Connector c = getConnector();
+    c.tableOperations().create(table);
+    BatchWriter bw = c.createBatchWriter(table, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    Assert.assertEquals(0, MRTester.main(new String[] {table, EmptySplitsAccumuloInputFormat.class.getName()}));
+    assertEquals(1, assertionErrors.get(table + "_map").size());
+    assertEquals(1, assertionErrors.get(table + "_cleanup").size());
+  }
+
+  @Test
+  public void testPartialFailedInputSplitDelegationToConfiguration() throws Exception {
+    String table = getUniqueNames(1)[0];
+    Connector c = getConnector();
+    c.tableOperations().create(table);
+    BatchWriter bw = c.createBatchWriter(table, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
+      m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
+      bw.addMutation(m);
+    }
+    bw.close();
+
+    Assert.assertEquals(1, MRTester.main(new String[] {table, BadPasswordSplitsAccumuloInputFormat.class.getName()}));
+    assertEquals(1, assertionErrors.get(table + "_map").size());
+    // We should fail when the RecordReader fails to get the next key/value pair, because the record reader is set up with a ClientContext rather than a
+    // Connector, so it doesn't fail fast on bad credentials
+    assertEquals(2, assertionErrors.get(table + "_cleanup").size());
+  }
+
+  /**
+   * AccumuloInputFormat which sets an invalid password on each RangeInputSplit
+   */
+  public static class BadPasswordSplitsAccumuloInputFormat extends AccumuloInputFormat {
+
+    @Override
+    public List<InputSplit> getSplits(JobContext context) throws IOException {
+      List<InputSplit> splits = super.getSplits(context);
+
+      for (InputSplit split : splits) {
+        org.apache.accumulo.core.client.mapreduce.RangeInputSplit rangeSplit = (org.apache.accumulo.core.client.mapreduce.RangeInputSplit) split;
+        rangeSplit.setToken(new PasswordToken("anythingelse"));
+      }
+
+      return splits;
+    }
+  }
+
+  /**
+   * AccumuloInputFormat which returns an "empty" RangeInputSplit
+   */
+  public static class EmptySplitsAccumuloInputFormat extends AccumuloInputFormat {
+
+    @Override
+    public List<InputSplit> getSplits(JobContext context) throws IOException {
+      List<InputSplit> oldSplits = super.getSplits(context);
+      List<InputSplit> newSplits = new ArrayList<>(oldSplits.size());
+
+      // Copy only the necessary information
+      for (InputSplit oldSplit : oldSplits) {
+        org.apache.accumulo.core.client.mapreduce.RangeInputSplit newSplit = new org.apache.accumulo.core.client.mapreduce.RangeInputSplit(
+            (org.apache.accumulo.core.client.mapreduce.RangeInputSplit) oldSplit);
+        newSplits.add(newSplit);
+      }
+
+      return newSplits;
+    }
+  }
+}
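The dummy-error bookkeeping used by `MRTester` above can be sketched standalone. This is an illustrative sketch, not the test's actual code: plain `java.util` collections stand in for Guava's `ArrayListMultimap`, and the class and table names (`Main`, `t1`) are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Main {
  // Maps a phase key like "<table>_map" to the assertion errors recorded for that phase.
  static final Map<String,List<AssertionError>> assertionErrors = new HashMap<>();

  static void record(String key, AssertionError e) {
    assertionErrors.computeIfAbsent(key, k -> new ArrayList<>()).add(e);
  }

  public static void main(String[] args) {
    String table = "t1";
    // Seed one dummy error per phase: a phase that never ran shows zero entries,
    // while a phase that ran cleanly shows exactly the dummy entry.
    record(table + "_map", new AssertionError("Dummy_map"));
    record(table + "_cleanup", new AssertionError("Dummy_cleanup"));

    // A clean run therefore asserts size == 1; a failing run asserts size > 1.
    System.out.println(assertionErrors.get(table + "_map").size());
    System.out.println(assertionErrors.get(table + "_cleanup").size());
  }
}
```

This explains why the tests above assert `1` (dummy only) for clean phases and `2` (dummy plus a real failure) for phases expected to fail.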
diff --git a/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloMultiTableInputFormatIT.java b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloMultiTableInputFormatIT.java
new file mode 100644
index 0000000..350a183
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloMultiTableInputFormatIT.java
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.mapreduce;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.mapreduce.AccumuloMultiTableInputFormat;
+import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
+import org.apache.accumulo.core.client.mapreduce.RangeInputSplit;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.Test;
+
+public class AccumuloMultiTableInputFormatIT extends AccumuloClusterHarness {
+
+  private static AssertionError e1 = null;
+  private static AssertionError e2 = null;
+
+  private static class MRTester extends Configured implements Tool {
+
+    private static class TestMapper extends Mapper<Key,Value,Key,Value> {
+      Key key = null;
+      int count = 0;
+
+      @Override
+      protected void map(Key k, Value v, Context context) throws IOException, InterruptedException {
+        try {
+          String tableName = ((RangeInputSplit) context.getInputSplit()).getTableName();
+          if (key != null)
+            assertEquals(key.getRow().toString(), new String(v.get()));
+          assertEquals(new Text(String.format("%s_%09x", tableName, count + 1)), k.getRow());
+          assertEquals(String.format("%s_%09x", tableName, count), new String(v.get()));
+        } catch (AssertionError e) {
+          e1 = e;
+        }
+        key = new Key(k);
+        count++;
+      }
+
+      @Override
+      protected void cleanup(Context context) throws IOException, InterruptedException {
+        try {
+          assertEquals(100, count);
+        } catch (AssertionError e) {
+          e2 = e;
+        }
+      }
+    }
+
+    @Override
+    public int run(String[] args) throws Exception {
+
+      if (args.length != 2) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table1> <table2>");
+      }
+

+      String user = getAdminPrincipal();
+      AuthenticationToken pass = getAdminToken();
+      String table1 = args[0];
+      String table2 = args[1];
+
+      Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
+      job.setJarByClass(this.getClass());
+
+      job.setInputFormatClass(AccumuloMultiTableInputFormat.class);
+
+      AccumuloMultiTableInputFormat.setConnectorInfo(job, user, pass);
+
+      InputTableConfig tableConfig1 = new InputTableConfig();
+      InputTableConfig tableConfig2 = new InputTableConfig();
+
+      Map<String,InputTableConfig> configMap = new HashMap<>();
+      configMap.put(table1, tableConfig1);
+      configMap.put(table2, tableConfig2);
+
+      AccumuloMultiTableInputFormat.setInputTableConfigs(job, configMap);
+      AccumuloMultiTableInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
+
+      job.setMapperClass(TestMapper.class);
+      job.setMapOutputKeyClass(Key.class);
+      job.setMapOutputValueClass(Value.class);
+      job.setOutputFormatClass(NullOutputFormat.class);
+
+      job.setNumReduceTasks(0);
+
+      job.waitForCompletion(true);
+
+      return job.isSuccessful() ? 0 : 1;
+    }
+
+    public static void main(String[] args) throws Exception {
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
+      conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
+      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
+    }
+  }
+
+  /**
+   * Writes incrementing counts with the table name embedded in each key and value so that both ordering and per-table data can be verified.
+   */
+  @Test
+  public void testMap() throws Exception {
+    String[] tableNames = getUniqueNames(2);
+    String table1 = tableNames[0];
+    String table2 = tableNames[1];
+    Connector c = getConnector();
+    c.tableOperations().create(table1);
+    c.tableOperations().create(table2);
+    BatchWriter bw = c.createBatchWriter(table1, new BatchWriterConfig());
+    BatchWriter bw2 = c.createBatchWriter(table2, new BatchWriterConfig());
+    for (int i = 0; i < 100; i++) {
+      Mutation t1m = new Mutation(new Text(String.format("%s_%09x", table1, i + 1)));
+      t1m.put(new Text(), new Text(), new Value(String.format("%s_%09x", table1, i).getBytes()));
+      bw.addMutation(t1m);
+      Mutation t2m = new Mutation(new Text(String.format("%s_%09x", table2, i + 1)));
+      t2m.put(new Text(), new Text(), new Value(String.format("%s_%09x", table2, i).getBytes()));
+      bw2.addMutation(t2m);
+    }
+    bw.close();
+    bw2.close();
+
+    MRTester.main(new String[] {table1, table2});
+    assertNull(e1);
+    assertNull(e2);
+  }
+
+}
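Every test above loads its table with the same row/value scheme: the entry for count `i` uses row `%09x` of `i + 1` and a value of `%09x` of `i`, so each value names the previous entry's row and the mapper can verify ordering. A minimal standalone sketch of that invariant (the `RowScheme` class name is illustrative; no Accumulo types involved):

```java
public class RowScheme {
  public static void main(String[] args) {
    String prevRow = null;
    for (int i = 0; i < 100; i++) {
      String row = String.format("%09x", i + 1); // row key for this entry
      String value = String.format("%09x", i);   // equals the previous entry's row key
      // The TestMapper's ordering check: this value must match the row seen just before.
      if (prevRow != null && !value.equals(prevRow)) {
        throw new AssertionError("out of order at i=" + i);
      }
      prevRow = row;
    }
    System.out.println("ordering check passed for 100 rows");
  }
}
```

The zero-padded hex (`%09x`) keeps lexicographic row order identical to numeric order, which is what makes the previous-row check valid under a sorted scan.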
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/TokenFileTest.java b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloOutputFormatIT.java
similarity index 63%
copy from core/src/test/java/org/apache/accumulo/core/client/mapreduce/TokenFileTest.java
copy to test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloOutputFormatIT.java
index 8f49751..811f3fe 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/TokenFileTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloOutputFormatIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client.mapreduce;
+package org.apache.accumulo.test.mapreduce;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -23,7 +23,6 @@
 
 import java.io.File;
 import java.io.IOException;
-import java.io.PrintStream;
 import java.util.Iterator;
 import java.util.Map.Entry;
 
@@ -31,14 +30,14 @@
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
@@ -46,21 +45,12 @@
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TemporaryFolder;
 
-/**
- *
- */
-public class TokenFileTest {
+public class AccumuloOutputFormatIT extends AccumuloClusterHarness {
   private static AssertionError e1 = null;
-  private static final String PREFIX = TokenFileTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapreduce_table_1";
-  private static final String TEST_TABLE_2 = PREFIX + "_mapreduce_table_2";
 
-  private static class MRTokenFileTester extends Configured implements Tool {
+  private static class MRTester extends Configured implements Tool {
     private static class TestMapper extends Mapper<Key,Value,Text,Mutation> {
       Key key = null;
       int count = 0;
@@ -90,23 +80,23 @@
     @Override
     public int run(String[] args) throws Exception {
 
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTokenFileTester.class.getName() + " <user> <token file> <inputtable> <outputtable>");
+      if (args.length != 2) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <inputtable> <outputtable>");
       }
 
-      String user = args[0];
-      String tokenFile = args[1];
-      String table1 = args[2];
-      String table2 = args[3];
+      String user = getAdminPrincipal();
+      AuthenticationToken pass = getAdminToken();
+      String table1 = args[0];
+      String table2 = args[1];
 
       Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
       job.setJarByClass(this.getClass());
 
       job.setInputFormatClass(AccumuloInputFormat.class);
 
-      AccumuloInputFormat.setConnectorInfo(job, user, tokenFile);
+      AccumuloInputFormat.setConnectorInfo(job, user, pass);
       AccumuloInputFormat.setInputTableName(job, table1);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setMapperClass(TestMapper.class);
       job.setMapOutputKeyClass(Key.class);
@@ -115,10 +105,10 @@
       job.setOutputKeyClass(Text.class);
       job.setOutputValueClass(Mutation.class);
 
-      AccumuloOutputFormat.setConnectorInfo(job, user, tokenFile);
+      AccumuloOutputFormat.setConnectorInfo(job, user, pass);
       AccumuloOutputFormat.setCreateTables(job, false);
       AccumuloOutputFormat.setDefaultTableName(job, table2);
-      AccumuloOutputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloOutputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setNumReduceTasks(0);
 
@@ -128,23 +118,22 @@
     }
 
     public static void main(String[] args) throws Exception {
-      Configuration conf = CachedConfiguration.getInstance();
-      conf.set("hadoop.tmp.dir", new File(args[1]).getParent());
+      Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
       conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
-      assertEquals(0, ToolRunner.run(conf, new MRTokenFileTester(), args));
+      assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
     }
   }
 
-  @Rule
-  public TemporaryFolder folder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
-
   @Test
   public void testMR() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
+    String[] tableNames = getUniqueNames(2);
+    String table1 = tableNames[0];
+    String table2 = tableNames[1];
+    Connector c = getConnector();
+    c.tableOperations().create(table1);
+    c.tableOperations().create(table2);
+    BatchWriter bw = c.createBatchWriter(table1, new BatchWriterConfig());
     for (int i = 0; i < 100; i++) {
       Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
       m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
@@ -152,16 +141,10 @@
     }
     bw.close();
 
-    File tf = folder.newFile("root_test.pw");
-    PrintStream out = new PrintStream(tf);
-    String outString = new Credentials("root", new PasswordToken("")).serialize();
-    out.println(outString);
-    out.close();
-
-    MRTokenFileTester.main(new String[] {"root", tf.getAbsolutePath(), TEST_TABLE_1, TEST_TABLE_2});
+    MRTester.main(new String[] {table1, table2});
     assertNull(e1);
 
-    Scanner scanner = c.createScanner(TEST_TABLE_2, new Authorizations());
+    Scanner scanner = c.createScanner(table2, new Authorizations());
     Iterator<Entry<Key,Value>> iter = scanner.iterator();
     assertTrue(iter.hasNext());
     Entry<Key,Value> entry = iter.next();
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloRowInputFormatTest.java b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloRowInputFormatIT.java
similarity index 73%
rename from core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloRowInputFormatTest.java
rename to test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloRowInputFormatIT.java
index 2c8bfb1..c03d462 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/AccumuloRowInputFormatTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/mapreduce/AccumuloRowInputFormatIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client.mapreduce;
+package org.apache.accumulo.test.mapreduce;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
@@ -31,14 +31,16 @@
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.MutationsRejectedException;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapreduce.AccumuloRowInputFormat;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.KeyValue;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.accumulo.core.util.PeekingIterator;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
@@ -47,43 +49,34 @@
 import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
-public class AccumuloRowInputFormatTest {
-  private static final String PREFIX = AccumuloRowInputFormatTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapreduce_table_1";
+public class AccumuloRowInputFormatIT extends AccumuloClusterHarness {
 
   private static final String ROW1 = "row1";
   private static final String ROW2 = "row2";
   private static final String ROW3 = "row3";
   private static final String COLF1 = "colf1";
-  private static final List<Entry<Key,Value>> row1;
-  private static final List<Entry<Key,Value>> row2;
-  private static final List<Entry<Key,Value>> row3;
+  private static List<Entry<Key,Value>> row1;
+  private static List<Entry<Key,Value>> row2;
+  private static List<Entry<Key,Value>> row3;
   private static AssertionError e1 = null;
   private static AssertionError e2 = null;
 
-  static {
-    row1 = new ArrayList<Entry<Key,Value>>();
+  @BeforeClass
+  public static void prepareRows() {
+    row1 = new ArrayList<>();
     row1.add(new KeyValue(new Key(ROW1, COLF1, "colq1"), "v1".getBytes()));
     row1.add(new KeyValue(new Key(ROW1, COLF1, "colq2"), "v2".getBytes()));
     row1.add(new KeyValue(new Key(ROW1, "colf2", "colq3"), "v3".getBytes()));
-    row2 = new ArrayList<Entry<Key,Value>>();
+    row2 = new ArrayList<>();
     row2.add(new KeyValue(new Key(ROW2, COLF1, "colq4"), "v4".getBytes()));
-    row3 = new ArrayList<Entry<Key,Value>>();
+    row3 = new ArrayList<>();
     row3.add(new KeyValue(new Key(ROW3, COLF1, "colq5"), "v5".getBytes()));
   }
 
-  public static void checkLists(final List<Entry<Key,Value>> first, final List<Entry<Key,Value>> second) {
-    assertEquals("Sizes should be the same.", first.size(), second.size());
-    for (int i = 0; i < first.size(); i++) {
-      assertEquals("Keys should be equal.", first.get(i).getKey(), second.get(i).getKey());
-      assertEquals("Values should be equal.", first.get(i).getValue(), second.get(i).getValue());
-    }
-  }
-
-  public static void checkLists(final List<Entry<Key,Value>> first, final Iterator<Entry<Key,Value>> second) {
+  private static void checkLists(final List<Entry<Key,Value>> first, final Iterator<Entry<Key,Value>> second) {
     int entryIndex = 0;
     while (second.hasNext()) {
       final Entry<Key,Value> entry = second.next();
@@ -93,7 +86,7 @@
     }
   }
 
-  public static void insertList(final BatchWriter writer, final List<Entry<Key,Value>> list) throws MutationsRejectedException {
+  private static void insertList(final BatchWriter writer, final List<Entry<Key,Value>> list) throws MutationsRejectedException {
     for (Entry<Key,Value> e : list) {
       final Key key = e.getKey();
       final Mutation mutation = new Mutation(key.getRow());
@@ -145,22 +138,22 @@
     @Override
     public int run(String[] args) throws Exception {
 
-      if (args.length != 3) {
-        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <user> <pass> <table>");
+      if (args.length != 1) {
+        throw new IllegalArgumentException("Usage : " + MRTester.class.getName() + " <table>");
       }
 
-      String user = args[0];
-      String pass = args[1];
-      String table = args[2];
+      String user = getAdminPrincipal();
+      AuthenticationToken pass = getAdminToken();
+      String table = args[0];
 
       Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
       job.setJarByClass(this.getClass());
 
       job.setInputFormatClass(AccumuloRowInputFormat.class);
 
-      AccumuloInputFormat.setConnectorInfo(job, user, new PasswordToken(pass));
+      AccumuloInputFormat.setConnectorInfo(job, user, pass);
       AccumuloInputFormat.setInputTableName(job, table);
-      AccumuloRowInputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloRowInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setMapperClass(TestMapper.class);
       job.setMapOutputKeyClass(Key.class);
@@ -176,6 +169,7 @@
 
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
+      conf.set("mapreduce.framework.name", "local");
       conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
       assertEquals(0, ToolRunner.run(conf, new MRTester(), args));
     }
@@ -183,12 +177,12 @@
 
   @Test
   public void test() throws Exception {
-    final MockInstance instance = new MockInstance(INSTANCE_NAME);
-    final Connector conn = instance.getConnector("root", new PasswordToken(""));
-    conn.tableOperations().create(TEST_TABLE_1);
+    final Connector conn = getConnector();
+    String tableName = getUniqueNames(1)[0];
+    conn.tableOperations().create(tableName);
     BatchWriter writer = null;
     try {
-      writer = conn.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
+      writer = conn.createBatchWriter(tableName, new BatchWriterConfig());
       insertList(writer, row1);
       insertList(writer, row2);
       insertList(writer, row3);
@@ -197,7 +191,7 @@
         writer.close();
       }
     }
-    MRTester.main(new String[] {"root", "", TEST_TABLE_1});
+    MRTester.main(new String[] {tableName});
     assertNull(e1);
     assertNull(e2);
   }
diff --git a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/TokenFileTest.java b/test/src/main/java/org/apache/accumulo/test/mapreduce/TokenFileIT.java
similarity index 76%
rename from core/src/test/java/org/apache/accumulo/core/client/mapreduce/TokenFileTest.java
rename to test/src/main/java/org/apache/accumulo/test/mapreduce/TokenFileIT.java
index 8f49751..6c3b9ef 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/mapreduce/TokenFileTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/mapreduce/TokenFileIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.core.client.mapreduce;
+package org.apache.accumulo.test.mapreduce;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -32,13 +32,14 @@
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
+import org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.io.Text;
@@ -50,15 +51,8 @@
 import org.junit.Test;
 import org.junit.rules.TemporaryFolder;
 
-/**
- *
- */
-public class TokenFileTest {
+public class TokenFileIT extends AccumuloClusterHarness {
   private static AssertionError e1 = null;
-  private static final String PREFIX = TokenFileTest.class.getSimpleName();
-  private static final String INSTANCE_NAME = PREFIX + "_mapreduce_instance";
-  private static final String TEST_TABLE_1 = PREFIX + "_mapreduce_table_1";
-  private static final String TEST_TABLE_2 = PREFIX + "_mapreduce_table_2";
 
   private static class MRTokenFileTester extends Configured implements Tool {
     private static class TestMapper extends Mapper<Key,Value,Text,Mutation> {
@@ -90,14 +84,14 @@
     @Override
     public int run(String[] args) throws Exception {
 
-      if (args.length != 4) {
-        throw new IllegalArgumentException("Usage : " + MRTokenFileTester.class.getName() + " <user> <token file> <inputtable> <outputtable>");
+      if (args.length != 3) {
+        throw new IllegalArgumentException("Usage : " + MRTokenFileTester.class.getName() + " <token file> <inputtable> <outputtable>");
       }
 
-      String user = args[0];
-      String tokenFile = args[1];
-      String table1 = args[2];
-      String table2 = args[3];
+      String user = getAdminPrincipal();
+      String tokenFile = args[0];
+      String table1 = args[1];
+      String table2 = args[2];
 
       Job job = Job.getInstance(getConf(), this.getClass().getSimpleName() + "_" + System.currentTimeMillis());
       job.setJarByClass(this.getClass());
@@ -106,7 +100,7 @@
 
       AccumuloInputFormat.setConnectorInfo(job, user, tokenFile);
       AccumuloInputFormat.setInputTableName(job, table1);
-      AccumuloInputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloInputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setMapperClass(TestMapper.class);
       job.setMapOutputKeyClass(Key.class);
@@ -118,7 +112,7 @@
       AccumuloOutputFormat.setConnectorInfo(job, user, tokenFile);
       AccumuloOutputFormat.setCreateTables(job, false);
       AccumuloOutputFormat.setDefaultTableName(job, table2);
-      AccumuloOutputFormat.setMockInstance(job, INSTANCE_NAME);
+      AccumuloOutputFormat.setZooKeeperInstance(job, getCluster().getClientConfig());
 
       job.setNumReduceTasks(0);
 
@@ -129,7 +123,8 @@
 
     public static void main(String[] args) throws Exception {
       Configuration conf = CachedConfiguration.getInstance();
-      conf.set("hadoop.tmp.dir", new File(args[1]).getParent());
+      conf.set("hadoop.tmp.dir", new File(args[0]).getParent());
+      conf.set("mapreduce.framework.name", "local");
       conf.set("mapreduce.cluster.local.dir", new File(System.getProperty("user.dir"), "target/mapreduce-tmp").getAbsolutePath());
       assertEquals(0, ToolRunner.run(conf, new MRTokenFileTester(), args));
     }
@@ -140,11 +135,13 @@
 
   @Test
   public void testMR() throws Exception {
-    MockInstance mockInstance = new MockInstance(INSTANCE_NAME);
-    Connector c = mockInstance.getConnector("root", new PasswordToken(""));
-    c.tableOperations().create(TEST_TABLE_1);
-    c.tableOperations().create(TEST_TABLE_2);
-    BatchWriter bw = c.createBatchWriter(TEST_TABLE_1, new BatchWriterConfig());
+    String[] tableNames = getUniqueNames(2);
+    String table1 = tableNames[0];
+    String table2 = tableNames[1];
+    Connector c = getConnector();
+    c.tableOperations().create(table1);
+    c.tableOperations().create(table2);
+    BatchWriter bw = c.createBatchWriter(table1, new BatchWriterConfig());
     for (int i = 0; i < 100; i++) {
       Mutation m = new Mutation(new Text(String.format("%09x", i + 1)));
       m.put(new Text(), new Text(), new Value(String.format("%09x", i).getBytes()));
@@ -154,14 +151,14 @@
 
     File tf = folder.newFile("root_test.pw");
     PrintStream out = new PrintStream(tf);
-    String outString = new Credentials("root", new PasswordToken("")).serialize();
+    String outString = new Credentials(getAdminPrincipal(), getAdminToken()).serialize();
     out.println(outString);
     out.close();
 
-    MRTokenFileTester.main(new String[] {"root", tf.getAbsolutePath(), TEST_TABLE_1, TEST_TABLE_2});
+    MRTokenFileTester.main(new String[] {tf.getAbsolutePath(), table1, table2});
     assertNull(e1);
 
-    Scanner scanner = c.createScanner(TEST_TABLE_2, new Authorizations());
+    Scanner scanner = c.createScanner(table2, new Authorizations());
     Iterator<Entry<Key,Value>> iter = scanner.iterator();
     assertTrue(iter.hasNext());
     Entry<Key,Value> entry = iter.next();
diff --git a/server/master/src/test/java/org/apache/accumulo/master/TestMergeState.java b/test/src/main/java/org/apache/accumulo/test/master/MergeStateIT.java
similarity index 90%
rename from server/master/src/test/java/org/apache/accumulo/master/TestMergeState.java
rename to test/src/main/java/org/apache/accumulo/test/master/MergeStateIT.java
index 1d7c6d1..2d233c4 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/TestMergeState.java
+++ b/test/src/main/java/org/apache/accumulo/test/master/MergeStateIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.master;
+package org.apache.accumulo.test.master;
 
 import java.util.Collection;
 import java.util.Collections;
@@ -24,10 +24,8 @@
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
@@ -37,9 +35,9 @@
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection;
 import org.apache.accumulo.core.metadata.schema.MetadataSchema.TabletsSection.ChoppedColumnFamily;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.master.state.MergeStats;
 import org.apache.accumulo.server.AccumuloServerContext;
-import org.apache.accumulo.server.conf.ServerConfigurationFactory;
 import org.apache.accumulo.server.master.state.Assignment;
 import org.apache.accumulo.server.master.state.CurrentState;
 import org.apache.accumulo.server.master.state.MergeInfo;
@@ -47,18 +45,17 @@
 import org.apache.accumulo.server.master.state.MetaDataStateStore;
 import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.master.state.TabletLocationState;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.io.Text;
+import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Test;
 
 import com.google.common.net.HostAndPort;
 
-/**
- *
- */
-public class TestMergeState {
+public class MergeStateIT extends ConfigurableMacBase {
 
-  class MockCurrentState implements CurrentState {
+  private static class MockCurrentState implements CurrentState {
 
     TServerInstance someTServer = new TServerInstance(HostAndPort.fromParts("127.0.0.1", 1234), 0x123456);
     MergeInfo mergeInfo;
@@ -106,15 +103,17 @@
 
   @Test
   public void test() throws Exception {
-    Instance instance = new MockInstance();
-    AccumuloServerContext context = new AccumuloServerContext(new ServerConfigurationFactory(instance));
-    Connector connector = context.getConnector();
+    AccumuloServerContext context = EasyMock.createMock(AccumuloServerContext.class);
+    Connector connector = getConnector();
+    EasyMock.expect(context.getConnector()).andReturn(connector).anyTimes();
+    EasyMock.replay(context);
+    connector.securityOperations().grantTablePermission(connector.whoami(), MetadataTable.NAME, TablePermission.WRITE);
     BatchWriter bw = connector.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
 
     // Create a fake METADATA table with these splits
     String splits[] = {"a", "e", "j", "o", "t", "z"};
     // create metadata for a table "t" with the splits above
-    Text tableId = new Text("t");
+    String tableId = "t";
     Text pr = null;
     for (String s : splits) {
       Text split = new Text(s);
@@ -186,7 +185,7 @@
     // take it offline
     m = tablet.getPrevRowUpdateMutation();
     Collection<Collection<String>> walogs = Collections.emptyList();
-    metaDataStateStore.unassign(Collections.singletonList(new TabletLocationState(tablet, null, state.someTServer, null, walogs, false)));
+    metaDataStateStore.unassign(Collections.singletonList(new TabletLocationState(tablet, null, state.someTServer, null, null, walogs, false)), null);
 
     // now we can split
     stats = scan(state, metaDataStateStore);
diff --git a/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java b/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
new file mode 100644
index 0000000..de0cf4b
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/master/SuspendedTabletsIT.java
@@ -0,0 +1,348 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.master;
+
+import static java.util.concurrent.TimeUnit.MILLISECONDS;
+import static java.util.concurrent.TimeUnit.NANOSECONDS;
+import static java.util.concurrent.TimeUnit.SECONDS;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Random;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.FutureTask;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.ZooKeeperInstance;
+import org.apache.accumulo.core.client.impl.ClientContext;
+import org.apache.accumulo.core.client.impl.ClientExec;
+import org.apache.accumulo.core.client.impl.Credentials;
+import org.apache.accumulo.core.client.impl.MasterClient;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.impl.KeyExtent;
+import org.apache.accumulo.core.master.thrift.MasterClientService;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.minicluster.impl.ProcessReference;
+import org.apache.accumulo.server.master.state.MetaDataTableScanner;
+import org.apache.accumulo.server.master.state.TServerInstance;
+import org.apache.accumulo.server.master.state.TabletLocationState;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.HashMultimap;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.SetMultimap;
+import com.google.common.net.HostAndPort;
+
+public class SuspendedTabletsIT extends ConfigurableMacBase {
+  private static final Logger log = LoggerFactory.getLogger(SuspendedTabletsIT.class);
+  private static final Random RANDOM = new Random();
+  private static ExecutorService THREAD_POOL;
+
+  public static final int TSERVERS = 5;
+  public static final long SUSPEND_DURATION = MILLISECONDS.convert(30, SECONDS);
+  public static final int TABLETS = 100;
+
+  @Override
+  public void configure(MiniAccumuloConfigImpl cfg, Configuration fsConf) {
+    cfg.setProperty(Property.TABLE_SUSPEND_DURATION, SUSPEND_DURATION + "ms");
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "5s");
+    cfg.setNumTservers(TSERVERS);
+  }
+
+  private boolean isAlive(Process p) {
+    try {
+      p.exitValue();
+      return false;
+    } catch (IllegalThreadStateException e) {
+      return true;
+    }
+  }
+
+  @Test
+  public void crashAndResumeTserver() throws Exception {
+    // Run the test body. When we get to the point where we need a tserver to go away, get rid of it via crashing
+    suspensionTestBody(new TServerKiller() {
+      @Override
+      public void eliminateTabletServers(ClientContext ctx, TabletLocations locs, int count) throws Exception {
+        List<ProcessReference> procs = new ArrayList<>(getCluster().getProcesses().get(ServerType.TABLET_SERVER));
+        Collections.shuffle(procs);
+
+        for (int i = 0; i < count; ++i) {
+          ProcessReference pr = procs.get(i);
+          log.info("Crashing {}", pr.getProcess());
+          getCluster().killProcess(ServerType.TABLET_SERVER, pr);
+        }
+      }
+    });
+  }
+
+  @Test
+  public void shutdownAndResumeTserver() throws Exception {
+    // Run the test body. When we get to the point where we need tservers to go away, stop them via a clean shutdown.
+    suspensionTestBody(new TServerKiller() {
+      @Override
+      public void eliminateTabletServers(final ClientContext ctx, TabletLocations locs, int count) throws Exception {
+        Set<TServerInstance> tserversSet = new HashSet<>();
+        for (TabletLocationState tls : locs.locationStates.values()) {
+          if (tls.current != null) {
+            tserversSet.add(tls.current);
+          }
+        }
+        List<TServerInstance> tserversList = new ArrayList<>(tserversSet);
+        Collections.shuffle(tserversList, RANDOM);
+
+        for (int i = 0; i < count; ++i) {
+          final String tserverName = tserversList.get(i).toString();
+          MasterClient.execute(ctx, new ClientExec<MasterClientService.Client>() {
+            @Override
+            public void execute(MasterClientService.Client client) throws Exception {
+              log.info("Sending shutdown command to {} via MasterClientService", tserverName);
+              client.shutdownTabletServer(null, ctx.rpcCreds(), tserverName, false);
+            }
+          });
+        }
+
+        log.info("Waiting for tserver process{} to die", count == 1 ? "" : "es");
+        for (int i = 0; i < 10; ++i) {
+          List<ProcessReference> deadProcs = new ArrayList<>();
+          for (ProcessReference pr : getCluster().getProcesses().get(ServerType.TABLET_SERVER)) {
+            Process p = pr.getProcess();
+            if (!isAlive(p)) {
+              deadProcs.add(pr);
+            }
+          }
+          for (ProcessReference pr : deadProcs) {
+            log.info("Process {} is dead, informing cluster control about this", pr.getProcess());
+            getCluster().getClusterControl().killProcess(ServerType.TABLET_SERVER, pr);
+            --count;
+          }
+          if (count == 0) {
+            return;
+          } else {
+            Thread.sleep(MILLISECONDS.convert(2, SECONDS));
+          }
+        }
+        throw new IllegalStateException("Tablet servers didn't die!");
+      }
+    });
+  }
+
+  /**
+   * Main test body for suspension tests.
+   *
+   * @param serverStopper
+   *          callback which shuts down some tablet servers.
+   */
+  private void suspensionTestBody(TServerKiller serverStopper) throws Exception {
+    Credentials creds = new Credentials("root", new PasswordToken(ROOT_PASSWORD));
+    Instance instance = new ZooKeeperInstance(getCluster().getClientConfig());
+    ClientContext ctx = new ClientContext(instance, creds, getCluster().getClientConfig());
+
+    String tableName = getUniqueNames(1)[0];
+
+    Connector conn = ctx.getConnector();
+
+    // Create a table with a bunch of splits
+    log.info("Creating table " + tableName);
+    conn.tableOperations().create(tableName);
+    SortedSet<Text> splitPoints = new TreeSet<>();
+    for (int i = 1; i < TABLETS; ++i) {
+      splitPoints.add(new Text("" + i));
+    }
+    conn.tableOperations().addSplits(tableName, splitPoints);
+
+    // Wait for all of the tablets to be hosted ...
+    log.info("Waiting on hosting and balance");
+    TabletLocations ds;
+    for (ds = TabletLocations.retrieve(ctx, tableName); ds.hostedCount != TABLETS; ds = TabletLocations.retrieve(ctx, tableName)) {
+      Thread.sleep(1000);
+    }
+
+    // ... and balanced.
+    conn.instanceOperations().waitForBalance();
+    do {
+      // Give at least another 5 seconds for migrations to finish up
+      Thread.sleep(5000);
+      ds = TabletLocations.retrieve(ctx, tableName);
+    } while (ds.hostedCount != TABLETS);
+
+    // Pray all of our tservers have at least 1 tablet.
+    Assert.assertEquals(TSERVERS, ds.hosted.keySet().size());
+
+    // Kill two tablet servers hosting our tablets. This should put tablets into suspended state, and thus halt balancing.
+
+    TabletLocations beforeDeathState = ds;
+    log.info("Eliminating tablet servers");
+    serverStopper.eliminateTabletServers(ctx, beforeDeathState, 2);
+
+    // Eventually some tablets will be suspended.
+    log.info("Waiting on suspended tablets");
+    ds = TabletLocations.retrieve(ctx, tableName);
+    // Until we can scan the metadata table, the master probably can't either, so it won't have been able to suspend the tablets.
+    // So we note the time at which we were first able to successfully scan the metadata table.
+    long killTime = System.nanoTime();
+    while (ds.suspended.keySet().size() != 2) {
+      Thread.sleep(1000);
+      ds = TabletLocations.retrieve(ctx, tableName);
+    }
+
+    SetMultimap<HostAndPort,KeyExtent> deadTabletsByServer = ds.suspended;
+
+    // By this point, all tablets should be either hosted or suspended. All suspended tablets should
+    // "belong" to the dead tablet servers, and should be in exactly the same place as before any tserver death.
+    for (HostAndPort server : deadTabletsByServer.keySet()) {
+      Assert.assertEquals(deadTabletsByServer.get(server), beforeDeathState.hosted.get(server));
+    }
+    Assert.assertEquals(TABLETS, ds.hostedCount + ds.suspendedCount);
+
+    // Restart the first tablet server, making sure it ends up on the same port
+    HostAndPort restartedServer = deadTabletsByServer.keySet().iterator().next();
+    log.info("Restarting " + restartedServer);
+    getCluster().getClusterControl().start(ServerType.TABLET_SERVER, null,
+        ImmutableMap.of(Property.TSERV_CLIENTPORT.getKey(), "" + restartedServer.getPort(), Property.TSERV_PORTSEARCH.getKey(), "false"), 1);
+
+    // Eventually, the suspended tablets should be reassigned to the newly alive tserver.
+    log.info("Awaiting tablet unsuspension for tablets belonging to " + restartedServer);
+    for (ds = TabletLocations.retrieve(ctx, tableName); ds.suspended.containsKey(restartedServer) || ds.assignedCount != 0; ds = TabletLocations.retrieve(ctx,
+        tableName)) {
+      Thread.sleep(1000);
+    }
+    Assert.assertEquals(deadTabletsByServer.get(restartedServer), ds.hosted.get(restartedServer));
+
+    // Finally, after much longer, remaining suspended tablets should be reassigned.
+    log.info("Awaiting tablet reassignment for remaining tablets");
+    for (ds = TabletLocations.retrieve(ctx, tableName); ds.hostedCount != TABLETS; ds = TabletLocations.retrieve(ctx, tableName)) {
+      Thread.sleep(1000);
+    }
+
+    long recoverTime = System.nanoTime();
+    Assert.assertTrue(recoverTime - killTime >= NANOSECONDS.convert(SUSPEND_DURATION, MILLISECONDS));
+  }
+
+  private static interface TServerKiller {
+    public void eliminateTabletServers(ClientContext ctx, TabletLocations locs, int count) throws Exception;
+  }
+
+  private static final AtomicInteger threadCounter = new AtomicInteger(0);
+
+  @BeforeClass
+  public static void init() {
+    THREAD_POOL = Executors.newCachedThreadPool(new ThreadFactory() {
+      @Override
+      public Thread newThread(Runnable r) {
+        return new Thread(r, "Scanning deadline thread #" + threadCounter.incrementAndGet());
+      }
+    });
+  }
+
+  @AfterClass
+  public static void cleanup() {
+    THREAD_POOL.shutdownNow();
+  }
+
+  private static class TabletLocations {
+    public final Map<KeyExtent,TabletLocationState> locationStates = new HashMap<>();
+    public final SetMultimap<HostAndPort,KeyExtent> hosted = HashMultimap.create();
+    public final SetMultimap<HostAndPort,KeyExtent> suspended = HashMultimap.create();
+    public int hostedCount = 0;
+    public int assignedCount = 0;
+    public int suspendedCount = 0;
+
+    private TabletLocations() {}
+
+    public static TabletLocations retrieve(final ClientContext ctx, final String tableName) throws Exception {
+      int sleepTime = 200;
+      int remainingAttempts = 30;
+
+      while (true) {
+        try {
+          FutureTask<TabletLocations> tlsFuture = new FutureTask<>(new Callable<TabletLocations>() {
+            @Override
+            public TabletLocations call() throws Exception {
+              TabletLocations answer = new TabletLocations();
+              answer.scan(ctx, tableName);
+              return answer;
+            }
+          });
+          THREAD_POOL.submit(tlsFuture);
+          return tlsFuture.get(5, SECONDS);
+        } catch (TimeoutException ex) {
+          log.debug("Retrieval timed out", ex);
+        } catch (Exception ex) {
+          log.warn("Failed to scan metadata", ex);
+        }
+        sleepTime = Math.min(2 * sleepTime, 10000);
+        Thread.sleep(sleepTime);
+        --remainingAttempts;
+        if (remainingAttempts == 0) {
+          Assert.fail("Scanning of metadata failed, aborting");
+        }
+      }
+    }
+
+    private void scan(ClientContext ctx, String tableName) throws Exception {
+      Map<String,String> idMap = ctx.getConnector().tableOperations().tableIdMap();
+      String tableId = Objects.requireNonNull(idMap.get(tableName));
+      try (MetaDataTableScanner scanner = new MetaDataTableScanner(ctx, new Range())) {
+        while (scanner.hasNext()) {
+          TabletLocationState tls = scanner.next();
+
+          if (!tls.extent.getTableId().equals(tableId)) {
+            continue;
+          }
+          locationStates.put(tls.extent, tls);
+          if (tls.suspend != null) {
+            suspended.put(tls.suspend.server, tls.extent);
+            ++suspendedCount;
+          } else if (tls.current != null) {
+            hosted.put(tls.current.getLocation(), tls.extent);
+            ++hostedCount;
+          } else if (tls.future != null) {
+            ++assignedCount;
+          } else {
+            // unassigned case
+          }
+        }
+      }
+    }
+  }
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/mrit/IntegrationTestMapReduce.java b/test/src/main/java/org/apache/accumulo/test/mrit/IntegrationTestMapReduce.java
new file mode 100644
index 0000000..04d7dc7
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/mrit/IntegrationTestMapReduce.java
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.mrit;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.Reducer;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.runner.Description;
+import org.junit.runner.JUnitCore;
+import org.junit.runner.Result;
+import org.junit.runner.notification.Failure;
+import org.junit.runner.notification.RunListener;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Run the Integration Tests as a Map-Reduce job.
+ * <p>
+ * Each of the Integration tests takes 30s to 20m to run. On a larger cluster, all the tests can be run in parallel and finish much faster.
+ * <p>
+ * To run the tests, you first need a list of the tests. A simple way to get a list is to scan the accumulo-test jar file for them.
+ *
+ * <pre>
+ * $ jar -tf lib/accumulo-test.jar | grep IT.class | tr / . | sed -e 's/.class$//' &gt;tests
+ * </pre>
+ *
+ * Put the list of tests into HDFS:
+ *
+ * <pre>
+ * $ hadoop fs -mkdir /tmp
+ * $ hadoop fs -put tests /tmp/tests
+ * </pre>
+ *
+ * Run the class below as a map-reduce job, giving it the list of tests and a place to store the results.
+ *
+ * <pre>
+ * $ yarn jar lib/accumulo-test-mrit.jar -libjars lib/native/libaccumulo.so /tmp/tests /tmp/results
+ * </pre>
+ *
+ * The result is a list of IT classes that pass or fail. Those classes that fail will be annotated with the particular test that failed within the class.
+ */
+public class IntegrationTestMapReduce extends Configured implements Tool {
+
+  private static final Logger log = LoggerFactory.getLogger(IntegrationTestMapReduce.class);
+
+  private static boolean isMapReduce = false;
+
+  public static boolean isMapReduce() {
+    return isMapReduce;
+  }
+
+  public static class TestMapper extends Mapper<LongWritable,Text,Text,Text> {
+
+    static final Text FAIL = new Text("FAIL");
+    static final Text PASS = new Text("PASS");
+    static final Text ERROR = new Text("ERROR");
+
+    public static enum TestCounts {
+      PASS, FAIL, ERROR
+    }
+
+    @Override
+    protected void map(LongWritable key, Text value, final Mapper<LongWritable,Text,Text,Text>.Context context) throws IOException, InterruptedException {
+      isMapReduce = true;
+      String className = value.toString();
+      if (className.trim().isEmpty()) {
+        return;
+      }
+      final List<String> failures = new ArrayList<>();
+      Class<? extends Object> test = null;
+      try {
+        test = Class.forName(className);
+      } catch (ClassNotFoundException e) {
+        log.debug("Error finding class {}", className, e);
+        context.getCounter(TestCounts.ERROR).increment(1);
+        context.write(ERROR, new Text(e.toString()));
+        return;
+      }
+      log.info("Running test {}", className);
+      JUnitCore core = new JUnitCore();
+      core.addListener(new RunListener() {
+
+        @Override
+        public void testStarted(Description description) throws Exception {
+          log.info("Starting {}", description);
+          context.progress();
+        }
+
+        @Override
+        public void testFinished(Description description) throws Exception {
+          log.info("Finished {}", description);
+          context.progress();
+        }
+
+        @Override
+        public void testFailure(Failure failure) throws Exception {
+          log.info("Test failed: {}", failure.getDescription(), failure.getException());
+          failures.add(failure.getDescription().getMethodName());
+          context.progress();
+        }
+
+      });
+      context.setStatus(test.getSimpleName());
+      try {
+        Result result = core.run(test);
+        if (result.wasSuccessful()) {
+          log.info("{} was successful", className);
+          context.getCounter(TestCounts.PASS).increment(1);
+          context.write(PASS, value);
+        } else {
+          log.info("{} failed", className);
+          context.getCounter(TestCounts.FAIL).increment(1);
+          context.write(FAIL, new Text(className + "(" + StringUtils.join(failures, ", ") + ")"));
+        }
+      } catch (Exception e) {
+        // most likely JUnit issues, like no tests to run
+        log.info("Test failed: {}", className, e);
+      }
+    }
+  }
+
+  public static class TestReducer extends Reducer<Text,Text,Text,Text> {
+
+    @Override
+    protected void reduce(Text code, Iterable<Text> tests, Reducer<Text,Text,Text,Text>.Context context) throws IOException, InterruptedException {
+      StringBuilder result = new StringBuilder("\n");
+      for (Text test : tests) {
+        result.append("   ");
+        result.append(test.toString());
+        result.append("\n");
+      }
+      context.write(code, new Text(result.toString()));
+    }
+  }
+
+  @Override
+  public int run(String[] args) throws Exception {
+    // read a list of tests from the input, and print out the results
+    if (args.length != 2) {
+      System.err.println("Wrong number of args: <input> <output>");
+      return 1;
+    }
+    Configuration conf = getConf();
+    Job job = Job.getInstance(conf, "accumulo integration test runner");
+    conf = job.getConfiguration();
+
+    // some tests take more than 10 minutes
+    conf.setLong(MRJobConfig.TASK_TIMEOUT, 20 * 60 * 1000);
+
+    // the minicluster uses a lot of RAM
+    conf.setInt(MRJobConfig.MAP_MEMORY_MB, 4000);
+
+    // hadoop puts an ancient version of jline on the classpath
+    conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_USER_CLASSPATH_FIRST, true);
+
+    // no need to run a test multiple times
+    job.setSpeculativeExecution(false);
+
+    // read one line at a time
+    job.setInputFormatClass(NLineInputFormat.class);
+    NLineInputFormat.setNumLinesPerSplit(job, 1);
+
+    // run the test
+    job.setJarByClass(IntegrationTestMapReduce.class);
+    job.setMapperClass(TestMapper.class);
+
+    // group tests by result code
+    job.setReducerClass(TestReducer.class);
+    job.setOutputKeyClass(Text.class);
+    job.setOutputValueClass(Text.class);
+
+    FileInputFormat.addInputPath(job, new Path(args[0]));
+    FileOutputFormat.setOutputPath(job, new Path(args[1]));
+    return job.waitForCompletion(true) ? 0 : 1;
+  }
+
+  public static void main(String[] args) throws Exception {
+    System.exit(ToolRunner.run(new IntegrationTestMapReduce(), args));
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/RollWALPerformanceIT.java b/test/src/main/java/org/apache/accumulo/test/performance/RollWALPerformanceIT.java
new file mode 100644
index 0000000..827d6d8
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/performance/RollWALPerformanceIT.java
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.performance;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeFalse;
+
+import java.util.SortedSet;
+import java.util.TreeSet;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.metadata.MetadataTable;
+import org.apache.accumulo.core.metadata.RootTable;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.continuous.ContinuousIngest;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
+import org.apache.accumulo.test.PerformanceTest;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.Text;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category(PerformanceTest.class)
+public class RollWALPerformanceIT extends ConfigurableMacBase {
+
+  @BeforeClass
+  public static void checkMR() {
+    assumeFalse(IntegrationTestMapReduce.isMapReduce());
+  }
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setProperty(Property.TSERV_WAL_REPLICATION, "1");
+    cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "5M");
+    cfg.setProperty(Property.TABLE_MINC_LOGS_MAX, "100");
+    cfg.setProperty(Property.GC_FILE_ARCHIVE, "false");
+    cfg.setProperty(Property.GC_CYCLE_START, "1s");
+    cfg.setProperty(Property.GC_CYCLE_DELAY, "1s");
+    cfg.useMiniDFS(true);
+  }
+
+  private long ingest() throws Exception {
+    final Connector c = getConnector();
+    final String tableName = getUniqueNames(1)[0];
+
+    log.info("Creating the table");
+    c.tableOperations().create(tableName);
+
+    log.info("Splitting the table");
+    final long SPLIT_COUNT = 100;
+    final long distance = Long.MAX_VALUE / SPLIT_COUNT;
+    final SortedSet<Text> splits = new TreeSet<>();
+    for (int i = 1; i < SPLIT_COUNT; i++) {
+      splits.add(new Text(String.format("%016x", i * distance)));
+    }
+    c.tableOperations().addSplits(tableName, splits);
+
+    log.info("Waiting for balance");
+    c.instanceOperations().waitForBalance();
+
+    final Instance inst = c.getInstance();
+
+    log.info("Starting ingest");
+    final long start = System.currentTimeMillis();
+    final String args[] = {"-i", inst.getInstanceName(), "-z", inst.getZooKeepers(), "-u", "root", "-p", ROOT_PASSWORD, "--batchThreads", "2", "--table",
+        tableName, "--num", Long.toString(50 * 1000), // 50K 100 byte entries
+    };
+
+    ContinuousIngest.main(args);
+    final long result = System.currentTimeMillis() - start;
+    log.debug(String.format("Finished in %,d ms", result));
+    log.debug("Dropping table");
+    c.tableOperations().delete(tableName);
+    return result;
+  }
+
+  private long getAverage() throws Exception {
+    final int REPEAT = 3;
+    long totalTime = 0;
+    for (int i = 0; i < REPEAT; i++) {
+      totalTime += ingest();
+    }
+    return totalTime / REPEAT;
+  }
+
+  private void testWalPerformanceOnce() throws Exception {
+    // get time with a small WAL, which will cause many WAL roll-overs
+    long avg1 = getAverage();
+    // use a bigger WAL max size to eliminate WAL roll-overs
+    Connector c = getConnector();
+    c.instanceOperations().setProperty(Property.TSERV_WALOG_MAX_SIZE.getKey(), "1G");
+    c.tableOperations().flush(MetadataTable.NAME, null, null, true);
+    c.tableOperations().flush(RootTable.NAME, null, null, true);
+    getCluster().getClusterControl().stop(ServerType.TABLET_SERVER);
+    getCluster().start();
+    long avg2 = getAverage();
+    log.info(String.format("Average run time with small WAL %,d with large WAL %,d", avg1, avg2));
+    assertTrue(avg1 > avg2);
+    double percent = (100. * avg1) / avg2;
+    log.info(String.format("Percent of large log: %.2f%%", percent));
+    assertTrue(percent < 125.);
+  }
+
+  @Test(timeout = 20 * 60 * 1000)
+  public void testWalPerformance() throws Exception {
+    testWalPerformanceOnce();
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/metadata/FastBulkImportIT.java b/test/src/main/java/org/apache/accumulo/test/performance/metadata/FastBulkImportIT.java
new file mode 100644
index 0000000..a74cb6b
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/performance/metadata/FastBulkImportIT.java
@@ -0,0 +1,112 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.performance.metadata;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeFalse;
+
+import java.util.SortedSet;
+import java.util.TreeSet;
+
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.conf.AccumuloConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.file.FileOperations;
+import org.apache.accumulo.core.file.FileSKVWriter;
+import org.apache.accumulo.core.file.rfile.RFile;
+import org.apache.accumulo.core.util.CachedConfiguration;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.accumulo.test.mrit.IntegrationTestMapReduce;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+// ACCUMULO-3327
+public class FastBulkImportIT extends ConfigurableMacBase {
+
+  @BeforeClass
+  public static void checkMR() {
+    assumeFalse(IntegrationTestMapReduce.isMapReduce());
+  }
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 60;
+  }
+
+  @Override
+  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
+    cfg.setNumTservers(3);
+    cfg.setProperty(Property.TSERV_BULK_ASSIGNMENT_THREADS, "5");
+    cfg.setProperty(Property.TSERV_BULK_PROCESS_THREADS, "5");
+    cfg.setProperty(Property.TABLE_MAJC_RATIO, "9999");
+    cfg.setProperty(Property.TABLE_FILE_MAX, "9999");
+    cfg.setProperty(Property.TABLE_DURABILITY, "none");
+  }
+
+  @Test
+  public void test() throws Exception {
+    log.info("Creating table");
+    final String tableName = getUniqueNames(1)[0];
+    final Connector c = getConnector();
+    c.tableOperations().create(tableName);
+    log.info("Adding splits");
+    SortedSet<Text> splits = new TreeSet<>();
+    for (int i = 1; i < 0xfff; i += 7) {
+      splits.add(new Text(Integer.toHexString(i)));
+    }
+    c.tableOperations().addSplits(tableName, splits);
+
+    log.info("Creating lots of bulk import files");
+    FileSystem fs = getCluster().getFileSystem();
+    Path basePath = getCluster().getTemporaryPath();
+    CachedConfiguration.setInstance(fs.getConf());
+
+    Path base = new Path(basePath, "testBulkFail_" + tableName);
+    fs.delete(base, true);
+    fs.mkdirs(base);
+    Path bulkFailures = new Path(base, "failures");
+    Path files = new Path(base, "files");
+    fs.mkdirs(bulkFailures);
+    fs.mkdirs(files);
+    for (int i = 0; i < 100; i++) {
+      FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder().forFile(files.toString() + "/bulk_" + i + "." + RFile.EXTENSION, fs, fs.getConf())
+          .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
+      writer.startDefaultLocalityGroup();
+      for (int j = 0x100; j < 0xfff; j += 3) {
+        writer.append(new Key(Integer.toHexString(j)), new Value(new byte[0]));
+      }
+      writer.close();
+    }
+    log.info("Waiting for balance");
+    c.instanceOperations().waitForBalance();
+
+    log.info("Bulk importing files");
+    long now = System.currentTimeMillis();
+    c.tableOperations().importDirectory(tableName, files.toString(), bulkFailures.toString(), true);
+    double diffSeconds = (System.currentTimeMillis() - now) / 1000.;
+    log.info(String.format("Import took %.2f seconds", diffSeconds));
+    assertTrue(diffSeconds < 30);
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/metadata/MetadataBatchScanTest.java b/test/src/main/java/org/apache/accumulo/test/performance/metadata/MetadataBatchScanTest.java
index 1c7ce67..157d4a0 100644
--- a/test/src/main/java/org/apache/accumulo/test/performance/metadata/MetadataBatchScanTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/performance/metadata/MetadataBatchScanTest.java
@@ -70,17 +70,17 @@
     Instance inst = new ZooKeeperInstance(new ClientConfiguration().withInstance("acu14").withZkHosts("localhost"));
     final Connector connector = inst.getConnector(opts.getPrincipal(), opts.getToken());
 
-    TreeSet<Long> splits = new TreeSet<Long>();
+    TreeSet<Long> splits = new TreeSet<>();
     Random r = new Random(42);
 
     while (splits.size() < 99999) {
       splits.add((r.nextLong() & 0x7fffffffffffffffl) % 1000000000000l);
     }
 
-    Text tid = new Text("8");
+    String tid = "8";
     Text per = null;
 
-    ArrayList<KeyExtent> extents = new ArrayList<KeyExtent>();
+    ArrayList<KeyExtent> extents = new ArrayList<>();
 
     for (Long split : splits) {
       Text er = new Text(String.format("%012d", split));
@@ -128,12 +128,12 @@
       final int numLoop = Integer.parseInt(args[2]);
       int numLookups = Integer.parseInt(args[3]);
 
-      HashSet<Integer> indexes = new HashSet<Integer>();
+      HashSet<Integer> indexes = new HashSet<>();
       while (indexes.size() < numLookups) {
         indexes.add(r.nextInt(extents.size()));
       }
 
-      final List<Range> ranges = new ArrayList<Range>();
+      final List<Range> ranges = new ArrayList<>();
       for (Integer i : indexes) {
         ranges.add(extents.get(i).toMetadataRange());
       }
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java b/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java
index 97fbf59..dea1b7f 100644
--- a/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java
+++ b/test/src/main/java/org/apache/accumulo/test/performance/scan/CollectTabletStats.java
@@ -119,7 +119,7 @@
       System.exit(-1);
     }
 
-    TreeMap<KeyExtent,String> tabletLocations = new TreeMap<KeyExtent,String>();
+    TreeMap<KeyExtent,String> tabletLocations = new TreeMap<>();
     List<KeyExtent> candidates = findTablets(context, !opts.selectFarTablets, opts.getTableName(), tabletLocations);
 
     if (candidates.size() < opts.numThreads) {
@@ -129,7 +129,7 @@
 
     List<KeyExtent> tabletsToTest = selectRandomTablets(opts.numThreads, candidates);
 
-    Map<KeyExtent,List<FileRef>> tabletFiles = new HashMap<KeyExtent,List<FileRef>>();
+    Map<KeyExtent,List<FileRef>> tabletFiles = new HashMap<>();
 
     for (KeyExtent ke : tabletsToTest) {
       List<FileRef> files = getTabletFiles(context, tableId, ke);
@@ -155,7 +155,7 @@
 
     for (int i = 0; i < opts.iterations; i++) {
 
-      ArrayList<Test> tests = new ArrayList<Test>();
+      ArrayList<Test> tests = new ArrayList<>();
 
       for (final KeyExtent ke : tabletsToTest) {
         final List<FileRef> files = tabletFiles.get(ke);
@@ -175,7 +175,7 @@
 
     for (int i = 0; i < opts.iterations; i++) {
 
-      ArrayList<Test> tests = new ArrayList<Test>();
+      ArrayList<Test> tests = new ArrayList<>();
 
       for (final KeyExtent ke : tabletsToTest) {
         final List<FileRef> files = tabletFiles.get(ke);
@@ -193,7 +193,7 @@
     }
 
     for (int i = 0; i < opts.iterations; i++) {
-      ArrayList<Test> tests = new ArrayList<Test>();
+      ArrayList<Test> tests = new ArrayList<>();
 
       for (final KeyExtent ke : tabletsToTest) {
         final List<FileRef> files = tabletFiles.get(ke);
@@ -212,7 +212,7 @@
 
     for (int i = 0; i < opts.iterations; i++) {
 
-      ArrayList<Test> tests = new ArrayList<Test>();
+      ArrayList<Test> tests = new ArrayList<>();
 
       final Connector conn = opts.getConnector();
 
@@ -354,7 +354,7 @@
 
     InetAddress localaddress = InetAddress.getLocalHost();
 
-    List<KeyExtent> candidates = new ArrayList<KeyExtent>();
+    List<KeyExtent> candidates = new ArrayList<>();
 
     for (Entry<KeyExtent,String> entry : tabletLocations.entrySet()) {
       String loc = entry.getValue();
@@ -372,7 +372,7 @@
   }
 
   private static List<KeyExtent> selectRandomTablets(int numThreads, List<KeyExtent> candidates) {
-    List<KeyExtent> tabletsToTest = new ArrayList<KeyExtent>();
+    List<KeyExtent> tabletsToTest = new ArrayList<>();
 
     Random rand = new Random();
     for (int i = 0; i < numThreads; i++) {
@@ -385,7 +385,7 @@
   }
 
   private static List<FileRef> getTabletFiles(ClientContext context, String tableId, KeyExtent ke) throws IOException {
-    return new ArrayList<FileRef>(MetadataTableUtil.getDataFileSizes(ke, context).keySet());
+    return new ArrayList<>(MetadataTableUtil.getDataFileSizes(ke, context).keySet());
   }
 
   private static void reportHdfsBlockLocations(List<FileRef> files) throws Exception {
@@ -423,7 +423,7 @@
 
     SortedMapIterator smi = new SortedMapIterator(new TreeMap<Key,Value>());
 
-    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<SortedKeyValueIterator<Key,Value>>(mapfiles.size() + 1);
+    List<SortedKeyValueIterator<Key,Value>> iters = new ArrayList<>(mapfiles.size() + 1);
 
     iters.addAll(mapfiles);
     iters.add(smi);
@@ -447,7 +447,8 @@
 
     for (FileRef file : files) {
       FileSystem ns = fs.getVolumeByPath(file.path()).getFileSystem();
-      FileSKVIterator reader = FileOperations.getInstance().openReader(file.path().toString(), false, ns, ns.getConf(), aconf);
+      FileSKVIterator reader = FileOperations.getInstance().newReaderBuilder().forFile(file.path().toString(), ns, ns.getConf()).withTableConfiguration(aconf)
+          .build();
       Range range = new Range(ke.getPrevEndRow(), false, ke.getEndRow(), true);
       reader.seek(range, columnSet, columnSet.size() == 0 ? false : true);
       while (reader.hasTop() && !range.afterEndKey(reader.getTopKey())) {
@@ -461,7 +462,7 @@
   }
 
   private static HashSet<ByteSequence> createColumnBSS(String[] columns) {
-    HashSet<ByteSequence> columnSet = new HashSet<ByteSequence>();
+    HashSet<ByteSequence> columnSet = new HashSet<>();
     for (String c : columns) {
       columnSet.add(new ArrayByteSequence(c));
     }
@@ -473,16 +474,17 @@
 
     SortedKeyValueIterator<Key,Value> reader;
 
-    List<SortedKeyValueIterator<Key,Value>> readers = new ArrayList<SortedKeyValueIterator<Key,Value>>(files.size());
+    List<SortedKeyValueIterator<Key,Value>> readers = new ArrayList<>(files.size());
 
     for (FileRef file : files) {
       FileSystem ns = fs.getVolumeByPath(file.path()).getFileSystem();
-      readers.add(FileOperations.getInstance().openReader(file.path().toString(), false, ns, ns.getConf(), aconf.getConfiguration()));
+      readers.add(FileOperations.getInstance().newReaderBuilder().forFile(file.path().toString(), ns, ns.getConf())
+          .withTableConfiguration(aconf.getConfiguration()).build());
     }
 
     List<IterInfo> emptyIterinfo = Collections.emptyList();
     Map<String,Map<String,String>> emptySsio = Collections.emptyMap();
-    TableConfiguration tconf = aconf.getTableConfiguration(ke.getTableId().toString());
+    TableConfiguration tconf = aconf.getTableConfiguration(ke.getTableId());
     reader = createScanIterator(ke, readers, auths, new byte[] {}, new HashSet<Column>(), emptyIterinfo, emptySsio, useTableIterators, tconf);
 
     HashSet<ByteSequence> columnSet = createColumnBSS(columns);
diff --git a/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java b/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
index d0de29f..05a0c54 100644
--- a/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
+++ b/test/src/main/java/org/apache/accumulo/test/performance/thrift/NullTserver.java
@@ -22,6 +22,7 @@
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.cli.Help;
 import org.apache.accumulo.core.client.ClientConfiguration;
@@ -54,12 +55,12 @@
 import org.apache.accumulo.core.tabletserver.thrift.ActiveScan;
 import org.apache.accumulo.core.tabletserver.thrift.NoSuchScanIDException;
 import org.apache.accumulo.core.tabletserver.thrift.TDurability;
+import org.apache.accumulo.core.tabletserver.thrift.TSamplerConfiguration;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Iface;
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Processor;
 import org.apache.accumulo.core.tabletserver.thrift.TabletStats;
 import org.apache.accumulo.core.trace.thrift.TInfo;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.client.ClientServiceHandler;
 import org.apache.accumulo.server.client.HdfsZooInstance;
@@ -72,12 +73,14 @@
 import org.apache.accumulo.server.rpc.TServerUtils;
 import org.apache.accumulo.server.rpc.ThriftServerType;
 import org.apache.accumulo.server.zookeeper.TransactionWatcher;
-import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
 
 import com.beust.jcommander.Parameter;
 import com.google.common.net.HostAndPort;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+import org.apache.accumulo.core.tabletserver.thrift.TUnloadTabletGoal;
+
 /**
 * The purpose of this class is to serve as a fake tserver that is a data sink like /dev/null. NullTserver modifies the metadata location entries for a table to
  * point to it. This allows thrift performance to be measured by running any client code that writes to a table.
@@ -135,14 +138,15 @@
 
     @Override
     public InitialMultiScan startMultiScan(TInfo tinfo, TCredentials credentials, Map<TKeyExtent,List<TRange>> batch, List<TColumn> columns,
-        List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites) {
+        List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, TSamplerConfiguration tsc, long batchTimeOut,
+        String context) {
       return null;
     }
 
     @Override
     public InitialScan startScan(TInfo tinfo, TCredentials credentials, TKeyExtent extent, TRange range, List<TColumn> columns, int batchSize,
         List<IterInfo> ssiList, Map<String,Map<String,String>> ssio, List<ByteBuffer> authorizations, boolean waitForWrites, boolean isolated,
-        long readaheadThreshold) {
+        long readaheadThreshold, TSamplerConfiguration tsc, long batchTimeOut, String classLoaderContext) {
       return null;
     }
 
@@ -176,11 +180,11 @@
     public void loadTablet(TInfo tinfo, TCredentials credentials, String lock, TKeyExtent extent) throws TException {}
 
     @Override
-    public void unloadTablet(TInfo tinfo, TCredentials credentials, String lock, TKeyExtent extent, boolean save) throws TException {}
+    public void unloadTablet(TInfo tinfo, TCredentials credentials, String lock, TKeyExtent extent, TUnloadTabletGoal goal, long requestTime) throws TException {}
 
     @Override
     public List<ActiveScan> getActiveScans(TInfo tinfo, TCredentials credentials) throws ThriftSecurityException, TException {
-      return new ArrayList<ActiveScan>();
+      return new ArrayList<>();
     }
 
     @Override
@@ -202,16 +206,13 @@
     }
 
     @Override
-    public void removeLogs(TInfo tinfo, TCredentials credentials, List<String> filenames) throws TException {}
-
-    @Override
     public List<ActiveCompaction> getActiveCompactions(TInfo tinfo, TCredentials credentials) throws ThriftSecurityException, TException {
-      return new ArrayList<ActiveCompaction>();
+      return new ArrayList<>();
     }
 
     @Override
     public TConditionalSession startConditionalUpdate(TInfo tinfo, TCredentials credentials, List<ByteBuffer> authorizations, String tableID,
-        TDurability durability) throws ThriftSecurityException, TException {
+        TDurability durability, String classLoaderContext) throws ThriftSecurityException, TException {
       return null;
     }
 
@@ -231,6 +232,9 @@
     public List<String> getActiveLogs(TInfo tinfo, TCredentials credentials) throws TException {
       return null;
     }
+
+    @Override
+    public void removeLogs(TInfo tinfo, TCredentials credentials, List<String> filenames) throws TException { }
   }
 
   static class Opts extends Help {
@@ -241,7 +245,7 @@
     @Parameter(names = "--table", description = "table to adopt", required = true)
     String tableName = null;
     @Parameter(names = "--port", description = "port number to use")
-    int port = DefaultConfiguration.getInstance().getPort(Property.TSERV_CLIENTPORT);
+    int port = DefaultConfiguration.getInstance().getPort(Property.TSERV_CLIENTPORT)[0];
   }
 
   public static void main(String[] args) throws Exception {
@@ -255,19 +259,19 @@
     TransactionWatcher watcher = new TransactionWatcher();
     ThriftClientHandler tch = new ThriftClientHandler(new AccumuloServerContext(new ServerConfigurationFactory(HdfsZooInstance.getInstance())), watcher);
     Processor<Iface> processor = new Processor<Iface>(tch);
-    TServerUtils.startTServer(context.getConfiguration(), HostAndPort.fromParts("0.0.0.0", opts.port), ThriftServerType.CUSTOM_HS_HA, processor, "NullTServer",
-        "null tserver", 2, 1, 1000, 10 * 1024 * 1024, null, null, -1);
+    TServerUtils.startTServer(context.getConfiguration(), ThriftServerType.CUSTOM_HS_HA, processor, "NullTServer",
+        "null tserver", 2, 1, 1000, 10 * 1024 * 1024, null, null, -1, HostAndPort.fromParts("0.0.0.0", opts.port));
 
     HostAndPort addr = HostAndPort.fromParts(InetAddress.getLocalHost().getHostName(), opts.port);
 
     String tableId = Tables.getTableId(zki, opts.tableName);
 
     // read the locations for the table
-    Range tableRange = new KeyExtent(new Text(tableId), null, null).toMetadataRange();
+    Range tableRange = new KeyExtent(tableId, null, null).toMetadataRange();
     MetaDataTableScanner s = new MetaDataTableScanner(context, tableRange);
     long randomSessionID = opts.port;
     TServerInstance instance = new TServerInstance(addr, randomSessionID);
-    List<Assignment> assignments = new ArrayList<Assignment>();
+    List<Assignment> assignments = new ArrayList<>();
     while (s.hasNext()) {
       TabletLocationState next = s.next();
       assignments.add(new Assignment(next.extent, instance));
@@ -278,7 +282,7 @@
     store.setLocations(assignments);
 
     while (true) {
-      UtilWaitThread.sleep(10000);
+      sleepUninterruptibly(10, TimeUnit.SECONDS);
     }
   }
 }
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/ProxyDurabilityIT.java b/test/src/main/java/org/apache/accumulo/test/proxy/ProxyDurabilityIT.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/proxy/ProxyDurabilityIT.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/ProxyDurabilityIT.java
index 609b77f..d9a1027 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/ProxyDurabilityIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/ProxyDurabilityIT.java
@@ -27,18 +27,17 @@
 import java.util.Map;
 import java.util.Properties;
 import java.util.TreeMap;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
-import org.apache.accumulo.proxy.thrift.AccumuloProxy.Client;
 import org.apache.accumulo.proxy.Proxy;
-import org.apache.accumulo.proxy.TestProxyClient;
+import org.apache.accumulo.proxy.thrift.AccumuloProxy.Client;
 import org.apache.accumulo.proxy.thrift.Column;
 import org.apache.accumulo.proxy.thrift.ColumnUpdate;
 import org.apache.accumulo.proxy.thrift.Condition;
@@ -49,7 +48,7 @@
 import org.apache.accumulo.proxy.thrift.TimeType;
 import org.apache.accumulo.proxy.thrift.WriterOptions;
 import org.apache.accumulo.server.util.PortUtils;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.apache.thrift.protocol.TJSONProtocol;
@@ -58,13 +57,19 @@
 
 import com.google.common.collect.Iterators;
 import com.google.common.net.HostAndPort;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class ProxyDurabilityIT extends ConfigurableMacIT {
+public class ProxyDurabilityIT extends ConfigurableMacBase {
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 120;
+  }
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
     hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
-    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "10s");
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
     cfg.setNumTservers(1);
   }
 
@@ -89,9 +94,9 @@
     int proxyPort = PortUtils.getRandomFreePort();
     final TServer proxyServer = Proxy.createProxyServer(HostAndPort.fromParts("localhost", proxyPort), protocol, props).server;
     while (!proxyServer.isServing())
-      UtilWaitThread.sleep(100);
+      sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
     Client client = new TestProxyClient("localhost", proxyPort, protocol).proxy();
-    Map<String,String> properties = new TreeMap<String,String>();
+    Map<String,String> properties = new TreeMap<>();
     properties.put("password", ROOT_PASSWORD);
     ByteBuffer login = client.login("root", properties);
 
@@ -102,7 +107,7 @@
     WriterOptions options = new WriterOptions();
     options.setDurability(Durability.NONE);
     String writer = client.createWriter(login, tableName, options);
-    Map<ByteBuffer,List<ColumnUpdate>> cells = new TreeMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> cells = new TreeMap<>();
     ColumnUpdate column = new ColumnUpdate(bytes("cf"), bytes("cq"));
     column.setValue("value".getBytes());
     cells.put(bytes("row"), Collections.singletonList(column));
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/SimpleProxyBase.java b/test/src/main/java/org/apache/accumulo/test/proxy/SimpleProxyBase.java
similarity index 68%
rename from test/src/test/java/org/apache/accumulo/test/proxy/SimpleProxyBase.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/SimpleProxyBase.java
index a419e43..5bb4ad6 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/SimpleProxyBase.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/SimpleProxyBase.java
@@ -16,6 +16,8 @@
  */
 package org.apache.accumulo.test.proxy;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+import static java.nio.charset.StandardCharsets.UTF_8;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
@@ -40,11 +42,13 @@
 import java.util.Set;
 import java.util.TreeMap;
 import java.util.UUID;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.cluster.ClusterUser;
 import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
+import org.apache.accumulo.core.client.impl.Namespaces;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.DefaultConfiguration;
@@ -52,6 +56,7 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.core.file.FileSKVWriter;
+import org.apache.accumulo.core.iterators.DebugIterator;
 import org.apache.accumulo.core.iterators.DevNull;
 import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 import org.apache.accumulo.core.iterators.user.SummingCombiner;
@@ -59,15 +64,14 @@
 import org.apache.accumulo.core.metadata.MetadataTable;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.util.ByteBufferUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
+import org.apache.accumulo.examples.simple.constraints.MaxMutationSize;
 import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
 import org.apache.accumulo.harness.MiniClusterHarness;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.accumulo.harness.TestingKdc;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
-import org.apache.accumulo.proxy.thrift.AccumuloProxy.Client;
 import org.apache.accumulo.proxy.Proxy;
-import org.apache.accumulo.proxy.TestProxyClient;
+import org.apache.accumulo.proxy.thrift.AccumuloProxy.Client;
 import org.apache.accumulo.proxy.thrift.AccumuloSecurityException;
 import org.apache.accumulo.proxy.thrift.ActiveCompaction;
 import org.apache.accumulo.proxy.thrift.ActiveScan;
@@ -87,6 +91,10 @@
 import org.apache.accumulo.proxy.thrift.Key;
 import org.apache.accumulo.proxy.thrift.KeyValue;
 import org.apache.accumulo.proxy.thrift.MutationsRejectedException;
+import org.apache.accumulo.proxy.thrift.NamespaceExistsException;
+import org.apache.accumulo.proxy.thrift.NamespaceNotEmptyException;
+import org.apache.accumulo.proxy.thrift.NamespaceNotFoundException;
+import org.apache.accumulo.proxy.thrift.NamespacePermission;
 import org.apache.accumulo.proxy.thrift.PartialKey;
 import org.apache.accumulo.proxy.thrift.Range;
 import org.apache.accumulo.proxy.thrift.ScanColumn;
@@ -104,6 +112,7 @@
 import org.apache.accumulo.proxy.thrift.WriterOptions;
 import org.apache.accumulo.server.util.PortUtils;
 import org.apache.accumulo.test.functional.SlowIterator;
+import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.FSDataInputStream;
@@ -129,7 +138,7 @@
 /**
  * Call every method on the proxy and try to verify that it works.
  */
-public abstract class SimpleProxyBase extends SharedMiniClusterIT {
+public abstract class SimpleProxyBase extends SharedMiniClusterBase {
   private static final Logger log = LoggerFactory.getLogger(SimpleProxyBase.class);
 
   @Override
@@ -145,10 +154,11 @@
   private org.apache.accumulo.proxy.thrift.AccumuloProxy.Client client;
 
   private static Map<String,String> properties = new HashMap<>();
-  private static ByteBuffer creds = null;
   private static String hostname, proxyPrincipal, proxyPrimary, clientPrincipal;
   private static File proxyKeytab, clientKeytab;
 
+  private ByteBuffer creds = null;
+
   // Implementations can set this
   static TProtocolFactory factory = null;
 
@@ -157,7 +167,7 @@
   }
 
   private static boolean isKerberosEnabled() {
-    return SharedMiniClusterIT.TRUE.equals(System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION));
+    return SharedMiniClusterBase.TRUE.equals(System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION));
   }
 
   /**
@@ -166,7 +176,7 @@
   public static void setUpProxy() throws Exception {
     assertNotNull("Implementations must initialize the TProtocolFactory", factory);
 
-    Connector c = SharedMiniClusterIT.getConnector();
+    Connector c = SharedMiniClusterBase.getConnector();
     Instance inst = c.getInstance();
     waitForAccumulo(c);
 
@@ -211,21 +221,21 @@
     } else {
       clientPrincipal = "root";
       tokenClass = PasswordToken.class.getName();
-      properties.put("password", SharedMiniClusterIT.getRootPassword());
+      properties.put("password", SharedMiniClusterBase.getRootPassword());
       hostname = "localhost";
     }
 
     props.put("tokenClass", tokenClass);
 
-    ClientConfiguration clientConfig = SharedMiniClusterIT.getCluster().getClientConfig();
-    String clientConfPath = new File(SharedMiniClusterIT.getCluster().getConfig().getConfDir(), "client.conf").getAbsolutePath();
+    ClientConfiguration clientConfig = SharedMiniClusterBase.getCluster().getClientConfig();
+    String clientConfPath = new File(SharedMiniClusterBase.getCluster().getConfig().getConfDir(), "client.conf").getAbsolutePath();
     props.put("clientConfigurationFile", clientConfPath);
     properties.put("clientConfigurationFile", clientConfPath);
 
     proxyPort = PortUtils.getRandomFreePort();
     proxyServer = Proxy.createProxyServer(HostAndPort.fromParts(hostname, proxyPort), factory, props, clientConfig).server;
     while (!proxyServer.isServing())
-      UtilWaitThread.sleep(100);
+      sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
   }
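
This hunk replaces `UtilWaitThread.sleep(100)` with Guava's `Uninterruptibles.sleepUninterruptibly`, which keeps sleeping for the full requested duration even if the waiting thread is interrupted, then restores the interrupt flag on exit. A minimal plain-Java sketch of that behavior (a simplification of the Guava utility, not its actual source):

```java
import java.util.concurrent.TimeUnit;

// Sketch of Guava's Uninterruptibles.sleepUninterruptibly: retry the sleep
// with whatever time remains if interrupted, then restore the interrupt flag.
public class UninterruptibleSleep {
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          // TimeUnit.sleep converts to the unit's granularity internally
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          remainingNanos = 0;
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, keep sleeping
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // restore interrupt status
      }
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println(elapsedMs >= 100 ? "slept full duration" : "woke early");
  }
}
```

The point of the swap in the polling loop above is that an interrupt cannot cut the wait short and make `proxyServer.isServing()` be checked early; the interrupt is delivered to the caller only after the wait completes.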
 
   @AfterClass
@@ -234,11 +244,12 @@
       proxyServer.stop();
     }
 
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   final IteratorSetting setting = new IteratorSetting(100, "slow", SlowIterator.class.getName(), Collections.singletonMap("sleepTime", "200"));
-  String table;
+  String tableName;
+  String namespaceName;
   ByteBuffer badLogin;
 
   @Before
@@ -276,33 +287,50 @@
       creds = client.login("root", properties);
 
       // Create 'user'
-      client.createLocalUser(creds, "user", s2bb(SharedMiniClusterIT.getRootPassword()));
+      client.createLocalUser(creds, "user", s2bb(SharedMiniClusterBase.getRootPassword()));
       // Log in as 'user'
       badLogin = client.login("user", properties);
       // Drop 'user', invalidating the credentials
       client.dropLocalUser(creds, "user");
     }
 
+    // Create some unique names for tables, namespaces, etc.
+    String[] uniqueNames = getUniqueNames(2);
+
     // Create a general table to be used
-    table = getUniqueNames(1)[0];
-    client.createTable(creds, table, true, TimeType.MILLIS);
+    tableName = uniqueNames[0];
+    client.createTable(creds, tableName, true, TimeType.MILLIS);
+
+    // Create a general namespace to be used
+    namespaceName = uniqueNames[1];
+    client.createNamespace(creds, namespaceName);
   }
 
   @After
   public void teardown() throws Exception {
-    if (null != table) {
+    if (null != tableName) {
       if (isKerberosEnabled()) {
         UserGroupInformation.loginUserFromKeytab(clientPrincipal, clientKeytab.getAbsolutePath());
       }
       try {
-        if (client.tableExists(creds, table)) {
-          client.deleteTable(creds, table);
+        if (client.tableExists(creds, tableName)) {
+          client.deleteTable(creds, tableName);
         }
       } catch (Exception e) {
         log.warn("Failed to delete test table", e);
       }
     }
 
+    if (null != namespaceName) {
+      try {
+        if (client.namespaceExists(creds, namespaceName)) {
+          client.deleteNamespace(creds, namespaceName);
+        }
+      } catch (Exception e) {
+        log.warn("Failed to delete test namespace", e);
+      }
+    }
+
     // Close the transport after the test
     if (null != proxyClient) {
       proxyClient.close();
@@ -315,72 +343,72 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void addConstraintLoginFailure() throws Exception {
-    client.addConstraint(badLogin, table, NumericValueConstraint.class.getName());
+    client.addConstraint(badLogin, tableName, NumericValueConstraint.class.getName());
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void addSplitsLoginFailure() throws Exception {
-    client.addSplits(badLogin, table, Collections.singleton(s2bb("1")));
+    client.addSplits(badLogin, tableName, Collections.singleton(s2bb("1")));
   }
 
   @Test(expected = TApplicationException.class, timeout = 5000)
   public void clearLocatorCacheLoginFailure() throws Exception {
-    client.clearLocatorCache(badLogin, table);
+    client.clearLocatorCache(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void compactTableLoginFailure() throws Exception {
-    client.compactTable(badLogin, table, null, null, null, true, false, null);
+    client.compactTable(badLogin, tableName, null, null, null, true, false, null);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void cancelCompactionLoginFailure() throws Exception {
-    client.cancelCompaction(badLogin, table);
+    client.cancelCompaction(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void createTableLoginFailure() throws Exception {
-    client.createTable(badLogin, table, false, TimeType.MILLIS);
+    client.createTable(badLogin, tableName, false, TimeType.MILLIS);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void deleteTableLoginFailure() throws Exception {
-    client.deleteTable(badLogin, table);
+    client.deleteTable(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void deleteRowsLoginFailure() throws Exception {
-    client.deleteRows(badLogin, table, null, null);
+    client.deleteRows(badLogin, tableName, null, null);
   }
 
   @Test(expected = TApplicationException.class, timeout = 5000)
   public void tableExistsLoginFailure() throws Exception {
-    client.tableExists(badLogin, table);
+    client.tableExists(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void flushTableLoginFailure() throws Exception {
-    client.flushTable(badLogin, table, null, null, false);
+    client.flushTable(badLogin, tableName, null, null, false);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void getLocalityGroupsLoginFailure() throws Exception {
-    client.getLocalityGroups(badLogin, table);
+    client.getLocalityGroups(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void getMaxRowLoginFailure() throws Exception {
-    client.getMaxRow(badLogin, table, Collections.<ByteBuffer> emptySet(), null, false, null, false);
+    client.getMaxRow(badLogin, tableName, Collections.<ByteBuffer> emptySet(), null, false, null, false);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void getTablePropertiesLoginFailure() throws Exception {
-    client.getTableProperties(badLogin, table);
+    client.getTableProperties(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void listSplitsLoginFailure() throws Exception {
-    client.listSplits(badLogin, table, 10000);
+    client.listSplits(badLogin, tableName, 10000);
   }
 
   @Test(expected = TApplicationException.class, timeout = 5000)
@@ -390,50 +418,50 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void listConstraintsLoginFailure() throws Exception {
-    client.listConstraints(badLogin, table);
+    client.listConstraints(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void mergeTabletsLoginFailure() throws Exception {
-    client.mergeTablets(badLogin, table, null, null);
+    client.mergeTablets(badLogin, tableName, null, null);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void offlineTableLoginFailure() throws Exception {
-    client.offlineTable(badLogin, table, false);
+    client.offlineTable(badLogin, tableName, false);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void onlineTableLoginFailure() throws Exception {
-    client.onlineTable(badLogin, table, false);
+    client.onlineTable(badLogin, tableName, false);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void removeConstraintLoginFailure() throws Exception {
-    client.removeConstraint(badLogin, table, 0);
+    client.removeConstraint(badLogin, tableName, 0);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void removeTablePropertyLoginFailure() throws Exception {
-    client.removeTableProperty(badLogin, table, Property.TABLE_FILE_MAX.getKey());
+    client.removeTableProperty(badLogin, tableName, Property.TABLE_FILE_MAX.getKey());
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void renameTableLoginFailure() throws Exception {
-    client.renameTable(badLogin, table, "someTableName");
+    client.renameTable(badLogin, tableName, "someTableName");
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void setLocalityGroupsLoginFailure() throws Exception {
-    Map<String,Set<String>> groups = new HashMap<String,Set<String>>();
+    Map<String,Set<String>> groups = new HashMap<>();
     groups.put("group1", Collections.singleton("cf1"));
     groups.put("group2", Collections.singleton("cf2"));
-    client.setLocalityGroups(badLogin, table, groups);
+    client.setLocalityGroups(badLogin, tableName, groups);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void setTablePropertyLoginFailure() throws Exception {
-    client.setTableProperty(badLogin, table, Property.TABLE_FILE_MAX.getKey(), "0");
+    client.setTableProperty(badLogin, tableName, Property.TABLE_FILE_MAX.getKey(), "0");
   }
 
   @Test(expected = TException.class, timeout = 5000)
@@ -486,7 +514,7 @@
     if (!isKerberosEnabled()) {
       try {
         // Not really a relevant test for kerberos
-        client.authenticateUser(badLogin, "root", s2pp(SharedMiniClusterIT.getRootPassword()));
+        client.authenticateUser(badLogin, "root", s2pp(SharedMiniClusterBase.getRootPassword()));
         fail("Expected AccumuloSecurityException");
       } catch (AccumuloSecurityException e) {
         // Expected
@@ -497,7 +525,7 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void changeUserAuthorizationsLoginFailure() throws Exception {
-    HashSet<ByteBuffer> auths = new HashSet<ByteBuffer>(Arrays.asList(s2bb("A"), s2bb("B")));
+    HashSet<ByteBuffer> auths = new HashSet<>(Arrays.asList(s2bb("A"), s2bb("B")));
     client.changeUserAuthorizations(badLogin, "stooge", auths);
   }
 
@@ -528,7 +556,7 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void grantTablePermissionLoginFailure() throws Exception {
-    client.grantTablePermission(badLogin, "root", table, TablePermission.WRITE);
+    client.grantTablePermission(badLogin, "root", tableName, TablePermission.WRITE);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
@@ -538,7 +566,7 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void hasTablePermissionLoginFailure() throws Exception {
-    client.hasTablePermission(badLogin, "root", table, TablePermission.WRITE);
+    client.hasTablePermission(badLogin, "root", tableName, TablePermission.WRITE);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
@@ -553,27 +581,27 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void revokeTablePermissionLoginFailure() throws Exception {
-    client.revokeTablePermission(badLogin, "root", table, TablePermission.ALTER_TABLE);
+    client.revokeTablePermission(badLogin, "root", tableName, TablePermission.ALTER_TABLE);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void createScannerLoginFailure() throws Exception {
-    client.createScanner(badLogin, table, new ScanOptions());
+    client.createScanner(badLogin, tableName, new ScanOptions());
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void createBatchScannerLoginFailure() throws Exception {
-    client.createBatchScanner(badLogin, table, new BatchScanOptions());
+    client.createBatchScanner(badLogin, tableName, new BatchScanOptions());
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void updateAndFlushLoginFailure() throws Exception {
-    client.updateAndFlush(badLogin, table, new HashMap<ByteBuffer,List<ColumnUpdate>>());
+    client.updateAndFlush(badLogin, tableName, new HashMap<ByteBuffer,List<ColumnUpdate>>());
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void createWriterLoginFailure() throws Exception {
-    client.createWriter(badLogin, table, new WriterOptions());
+    client.createWriter(badLogin, tableName, new WriterOptions());
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
@@ -583,17 +611,17 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void checkIteratorLoginFailure() throws Exception {
-    client.checkIteratorConflicts(badLogin, table, setting, EnumSet.allOf(IteratorScope.class));
+    client.checkIteratorConflicts(badLogin, tableName, setting, EnumSet.allOf(IteratorScope.class));
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void cloneTableLoginFailure() throws Exception {
-    client.cloneTable(badLogin, table, table + "_clone", false, null, null);
+    client.cloneTable(badLogin, tableName, tableName + "_clone", false, null, null);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void exportTableLoginFailure() throws Exception {
-    client.exportTable(badLogin, table, "/tmp");
+    client.exportTable(badLogin, tableName, "/tmp");
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
@@ -603,33 +631,33 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void getIteratorSettingLoginFailure() throws Exception {
-    client.getIteratorSetting(badLogin, table, "foo", IteratorScope.SCAN);
+    client.getIteratorSetting(badLogin, tableName, "foo", IteratorScope.SCAN);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void listIteratorsLoginFailure() throws Exception {
-    client.listIterators(badLogin, table);
+    client.listIterators(badLogin, tableName);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void removeIteratorLoginFailure() throws Exception {
-    client.removeIterator(badLogin, table, "name", EnumSet.allOf(IteratorScope.class));
+    client.removeIterator(badLogin, tableName, "name", EnumSet.allOf(IteratorScope.class));
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void splitRangeByTabletsLoginFailure() throws Exception {
-    client.splitRangeByTablets(badLogin, table, client.getRowRange(ByteBuffer.wrap("row".getBytes())), 10);
+    client.splitRangeByTablets(badLogin, tableName, client.getRowRange(ByteBuffer.wrap("row".getBytes(UTF_8))), 10);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void importDirectoryLoginFailure() throws Exception {
-    MiniAccumuloClusterImpl cluster = SharedMiniClusterIT.getCluster();
+    MiniAccumuloClusterImpl cluster = SharedMiniClusterBase.getCluster();
     Path base = cluster.getTemporaryPath();
     Path importDir = new Path(base, "importDir");
     Path failuresDir = new Path(base, "failuresDir");
     assertTrue(cluster.getFileSystem().mkdirs(importDir));
     assertTrue(cluster.getFileSystem().mkdirs(failuresDir));
-    client.importDirectory(badLogin, table, importDir.toString(), failuresDir.toString(), true);
+    client.importDirectory(badLogin, tableName, importDir.toString(), failuresDir.toString(), true);
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
@@ -644,12 +672,119 @@
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void testTableClassLoadLoginFailure() throws Exception {
-    client.testTableClassLoad(badLogin, table, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName());
+    client.testTableClassLoad(badLogin, tableName, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName());
   }
 
   @Test(expected = AccumuloSecurityException.class, timeout = 5000)
   public void createConditionalWriterLoginFailure() throws Exception {
-    client.createConditionalWriter(badLogin, table, new ConditionalWriterOptions());
+    client.createConditionalWriter(badLogin, tableName, new ConditionalWriterOptions());
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void grantNamespacePermissionLoginFailure() throws Exception {
+    client.grantNamespacePermission(badLogin, "stooge", namespaceName, NamespacePermission.ALTER_NAMESPACE);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void hasNamespacePermissionLoginFailure() throws Exception {
+    client.hasNamespacePermission(badLogin, "stooge", namespaceName, NamespacePermission.ALTER_NAMESPACE);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void revokeNamespacePermissionLoginFailure() throws Exception {
+    client.revokeNamespacePermission(badLogin, "stooge", namespaceName, NamespacePermission.ALTER_NAMESPACE);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void listNamespacesLoginFailure() throws Exception {
+    client.listNamespaces(badLogin);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void namespaceExistsLoginFailure() throws Exception {
+    client.namespaceExists(badLogin, namespaceName);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void createNamespaceLoginFailure() throws Exception {
+    client.createNamespace(badLogin, "abcdef");
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void deleteNamespaceLoginFailure() throws Exception {
+    client.deleteNamespace(badLogin, namespaceName);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void renameNamespaceLoginFailure() throws Exception {
+    client.renameNamespace(badLogin, namespaceName, "abcdef");
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void setNamespacePropertyLoginFailure() throws Exception {
+    client.setNamespaceProperty(badLogin, namespaceName, "table.compaction.major.ratio", "4");
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void removeNamespacePropertyLoginFailure() throws Exception {
+    client.removeNamespaceProperty(badLogin, namespaceName, "table.compaction.major.ratio");
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void getNamespacePropertiesLoginFailure() throws Exception {
+    client.getNamespaceProperties(badLogin, namespaceName);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void namespaceIdMapLoginFailure() throws Exception {
+    client.namespaceIdMap(badLogin);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void attachNamespaceIteratorLoginFailure() throws Exception {
+    IteratorSetting setting = new IteratorSetting(100, "DebugTheThings", DebugIterator.class.getName(), Collections.<String,String> emptyMap());
+    client.attachNamespaceIterator(badLogin, namespaceName, setting, EnumSet.allOf(IteratorScope.class));
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void removeNamespaceIteratorLoginFailure() throws Exception {
+    client.removeNamespaceIterator(badLogin, namespaceName, "DebugTheThings", EnumSet.allOf(IteratorScope.class));
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void getNamespaceIteratorSettingLoginFailure() throws Exception {
+    client.getNamespaceIteratorSetting(badLogin, namespaceName, "DebugTheThings", IteratorScope.SCAN);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void listNamespaceIteratorsLoginFailure() throws Exception {
+    client.listNamespaceIterators(badLogin, namespaceName);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void checkNamespaceIteratorConflictsLoginFailure() throws Exception {
+    IteratorSetting setting = new IteratorSetting(100, "DebugTheThings", DebugIterator.class.getName(), Collections.<String,String> emptyMap());
+    client.checkNamespaceIteratorConflicts(badLogin, namespaceName, setting, EnumSet.allOf(IteratorScope.class));
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void addNamespaceConstraintLoginFailure() throws Exception {
+    client.addNamespaceConstraint(badLogin, namespaceName, MaxMutationSize.class.getName());
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void removeNamespaceConstraintLoginFailure() throws Exception {
+    client.removeNamespaceConstraint(badLogin, namespaceName, 1);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void listNamespaceConstraintsLoginFailure() throws Exception {
+    client.listNamespaceConstraints(badLogin, namespaceName);
+  }
+
+  @Test(expected = AccumuloSecurityException.class, timeout = 5000)
+  public void testNamespaceClassLoadLoginFailure() throws Exception {
+    client.testNamespaceClassLoad(badLogin, namespaceName, DebugIterator.class.getName(), SortedKeyValueIterator.class.getName());
   }
 
   @Test
@@ -742,7 +877,7 @@
       fail("exception not thrown");
     } catch (TableNotFoundException ex) {}
     try {
-      MiniAccumuloClusterImpl cluster = SharedMiniClusterIT.getCluster();
+      MiniAccumuloClusterImpl cluster = SharedMiniClusterBase.getCluster();
       Path base = cluster.getTemporaryPath();
       Path importDir = new Path(base, "importDir");
       Path failuresDir = new Path(base, "failuresDir");
@@ -796,7 +931,7 @@
       fail("exception not thrown");
     } catch (TableNotFoundException ex) {}
     try {
-      client.splitRangeByTablets(creds, doesNotExist, client.getRowRange(ByteBuffer.wrap("row".getBytes())), 10);
+      client.splitRangeByTablets(creds, doesNotExist, client.getRowRange(ByteBuffer.wrap("row".getBytes(UTF_8))), 10);
       fail("exception not thrown");
     } catch (TableNotFoundException ex) {}
     try {
@@ -813,10 +948,74 @@
     } catch (TableNotFoundException ex) {}
     try {
       client.createConditionalWriter(creds, doesNotExist, new ConditionalWriterOptions());
+      fail("exception not thrown");
     } catch (TableNotFoundException ex) {}
   }
 
   @Test
+  public void namespaceNotFound() throws Exception {
+    final String doesNotExist = "doesNotExist";
+    try {
+      client.deleteNamespace(creds, doesNotExist);
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.renameNamespace(creds, doesNotExist, "abcdefg");
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.setNamespaceProperty(creds, doesNotExist, "table.compaction.major.ratio", "4");
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.removeNamespaceProperty(creds, doesNotExist, "table.compaction.major.ratio");
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.getNamespaceProperties(creds, doesNotExist);
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      IteratorSetting setting = new IteratorSetting(100, "DebugTheThings", DebugIterator.class.getName(), Collections.<String,String> emptyMap());
+      client.attachNamespaceIterator(creds, doesNotExist, setting, EnumSet.allOf(IteratorScope.class));
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.removeNamespaceIterator(creds, doesNotExist, "DebugTheThings", EnumSet.allOf(IteratorScope.class));
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.getNamespaceIteratorSetting(creds, doesNotExist, "DebugTheThings", IteratorScope.SCAN);
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.listNamespaceIterators(creds, doesNotExist);
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      IteratorSetting setting = new IteratorSetting(100, "DebugTheThings", DebugIterator.class.getName(), Collections.<String,String> emptyMap());
+      client.checkNamespaceIteratorConflicts(creds, doesNotExist, setting, EnumSet.allOf(IteratorScope.class));
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.addNamespaceConstraint(creds, doesNotExist, MaxMutationSize.class.getName());
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.removeNamespaceConstraint(creds, doesNotExist, 1);
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.listNamespaceConstraints(creds, doesNotExist);
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+    try {
+      client.testNamespaceClassLoad(creds, doesNotExist, DebugIterator.class.getName(), SortedKeyValueIterator.class.getName());
+      fail("exception not thrown");
+    } catch (NamespaceNotFoundException ex) {}
+  }
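+
+// The namespaceNotFound test repeats the try/fail/catch idiom: call the API
+// with a missing namespace, fail() if it returns normally, and swallow the
+// expected exception. A self-contained sketch of the idiom without JUnit
+// (deleteNamespace and its exception type here are illustrative stand-ins):
+//
+//   public class ExpectedExceptionIdiom {
+//     static void deleteNamespace(String name) {
+//       throw new IllegalArgumentException("namespace does not exist: " + name);
+//     }
+//
+//     public static void main(String[] args) {
+//       boolean thrown = false;
+//       try {
+//         deleteNamespace("doesNotExist");
+//         // Reaching here means the API accepted a bad name: fail the test.
+//         throw new AssertionError("exception not thrown");
+//       } catch (IllegalArgumentException expected) {
+//         thrown = true; // the API correctly rejected the missing namespace
+//       }
+//       System.out.println(thrown ? "PASS" : "FAIL");
+//     }
+//   }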
+
+  @Test
   public void testExists() throws Exception {
     client.createTable(creds, "ett1", false, TimeType.MILLIS);
     client.createTable(creds, "ett2", false, TimeType.MILLIS);
@@ -835,8 +1034,27 @@
   }
 
   @Test
+  public void testNamespaceExists() throws Exception {
+    client.createNamespace(creds, "foobar");
+    try {
+      client.createNamespace(creds, namespaceName);
+      fail("exception not thrown");
+    } catch (NamespaceExistsException ex) {}
+    try {
+      client.renameNamespace(creds, "foobar", namespaceName);
+      fail("exception not thrown");
+    } catch (NamespaceExistsException ex) {}
+  }
+
+  @Test(expected = NamespaceNotEmptyException.class)
+  public void testNamespaceNotEmpty() throws Exception {
+    client.createTable(creds, namespaceName + ".abcdefg", true, TimeType.MILLIS);
+    client.deleteNamespace(creds, namespaceName);
+  }
+
+  @Test
   public void testUnknownScanner() throws Exception {
-    String scanner = client.createScanner(creds, table, null);
+    String scanner = client.createScanner(creds, tableName, null);
     assertFalse(client.hasNext(scanner));
     client.closeScanner(scanner);
 
@@ -870,7 +1088,7 @@
 
   @Test
   public void testUnknownWriter() throws Exception {
-    String writer = client.createWriter(creds, table, null);
+    String writer = client.createWriter(creds, tableName, null);
     client.update(writer, mutation("row0", "cf", "cq", "value"));
     client.flush(writer);
     client.update(writer, mutation("row2", "cf", "cq", "value2"));
@@ -899,15 +1117,15 @@
 
   @Test
   public void testDelete() throws Exception {
-    client.updateAndFlush(creds, table, mutation("row0", "cf", "cq", "value"));
+    client.updateAndFlush(creds, tableName, mutation("row0", "cf", "cq", "value"));
 
-    assertScan(new String[][] {{"row0", "cf", "cq", "value"}}, table);
+    assertScan(new String[][] {{"row0", "cf", "cq", "value"}}, tableName);
 
     ColumnUpdate upd = new ColumnUpdate(s2bb("cf"), s2bb("cq"));
     upd.setDeleteCell(false);
     Map<ByteBuffer,List<ColumnUpdate>> notDelete = Collections.singletonMap(s2bb("row0"), Collections.singletonList(upd));
-    client.updateAndFlush(creds, table, notDelete);
-    String scanner = client.createScanner(creds, table, null);
+    client.updateAndFlush(creds, tableName, notDelete);
+    String scanner = client.createScanner(creds, tableName, null);
     ScanResult entries = client.nextK(scanner, 10);
     client.closeScanner(scanner);
     assertFalse(entries.more);
@@ -917,9 +1135,9 @@
     upd.setDeleteCell(true);
     Map<ByteBuffer,List<ColumnUpdate>> delete = Collections.singletonMap(s2bb("row0"), Collections.singletonList(upd));
 
-    client.updateAndFlush(creds, table, delete);
+    client.updateAndFlush(creds, tableName, delete);
 
-    assertScan(new String[][] {}, table);
+    assertScan(new String[][] {}, tableName);
   }
 
   @Test
@@ -934,7 +1152,7 @@
       cfg = client.getSystemConfiguration(creds);
       if ("500M".equals(cfg.get("table.split.threshold")))
         break;
-      UtilWaitThread.sleep(200);
+      sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
     }
     assertEquals("500M", cfg.get("table.split.threshold"));
 
@@ -944,7 +1162,7 @@
       cfg = client.getSystemConfiguration(creds);
       if (!"500M".equals(cfg.get("table.split.threshold")))
         break;
-      UtilWaitThread.sleep(200);
+      sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
     }
     assertNotEquals("500M", cfg.get("table.split.threshold"));
   }
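The hunks above replace Accumulo's `UtilWaitThread.sleep(ms)` with `sleepUninterruptibly(duration, TimeUnit)`, in the style of Guava's `Uninterruptibles`. Assuming it behaves like Guava's version, the point of the change is that an interrupt no longer cuts the wait short — the full duration is slept and the interrupt status is restored afterwards. A self-contained sketch of that behavior (`SleepSketch` is an illustrative name):

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
  // Keep sleeping through interrupts for the full duration, then restore
  // the thread's interrupt status so callers can still observe it.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, keep waiting
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  }
}
```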
@@ -962,7 +1180,7 @@
   @Test
   public void testSiteConfiguration() throws Exception {
     // get something we know is in the site config
-    MiniAccumuloClusterImpl cluster = SharedMiniClusterIT.getCluster();
+    MiniAccumuloClusterImpl cluster = SharedMiniClusterBase.getCluster();
     Map<String,String> cfg = client.getSiteConfiguration(creds);
     assertTrue(cfg.get("instance.dfs.dir").startsWith(cluster.getConfig().getAccumuloDir().getAbsolutePath()));
   }
@@ -1020,7 +1238,7 @@
     t.start();
 
     // look for the scan many times
-    List<ActiveScan> scans = new ArrayList<ActiveScan>();
+    List<ActiveScan> scans = new ArrayList<>();
     for (int i = 0; i < 100 && scans.isEmpty(); i++) {
       for (String tserver : client.getTabletServers(creds)) {
         List<ActiveScan> scansForServer = client.getActiveScans(creds, tserver);
@@ -1032,7 +1250,7 @@
 
         if (!scans.isEmpty())
           break;
-        UtilWaitThread.sleep(100);
+        sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
       }
     }
     t.join();
@@ -1107,7 +1325,7 @@
     assertNotNull(desiredTableId);
 
     // try to catch it in the act
-    List<ActiveCompaction> compactions = new ArrayList<ActiveCompaction>();
+    List<ActiveCompaction> compactions = new ArrayList<>();
     for (int i = 0; i < 100 && compactions.isEmpty(); i++) {
       // Iterate over the tservers
       for (String tserver : client.getTabletServers(creds)) {
@@ -1125,7 +1343,7 @@
         if (!compactions.isEmpty())
           break;
       }
-      UtilWaitThread.sleep(10);
+      sleepUninterruptibly(10, TimeUnit.MILLISECONDS);
     }
     t.join();
 
@@ -1150,10 +1368,11 @@
   public void userAuthentication() throws Exception {
     if (isKerberosEnabled()) {
       assertTrue(client.authenticateUser(creds, clientPrincipal, Collections.<String,String> emptyMap()));
-      // Can't really authenticate "badly" at the application level w/ kerberos. It's going to fail to even set up an RPC
+      // Can't really authenticate "badly" at the application level w/ kerberos. It's going to fail to even set up
+      // an RPC
     } else {
       // check password
-      assertTrue(client.authenticateUser(creds, "root", s2pp(SharedMiniClusterIT.getRootPassword())));
+      assertTrue(client.authenticateUser(creds, "root", s2pp(SharedMiniClusterBase.getRootPassword())));
       assertFalse(client.authenticateUser(creds, "root", s2pp("")));
     }
   }
@@ -1175,18 +1394,18 @@
     client.createLocalUser(creds, user, password);
     // change auths
     Set<String> users = client.listLocalUsers(creds);
-    Set<String> expectedUsers = new HashSet<String>(Arrays.asList(clientPrincipal, user));
+    Set<String> expectedUsers = new HashSet<>(Arrays.asList(clientPrincipal, user));
     assertTrue("Did not find all expected users: " + expectedUsers, users.containsAll(expectedUsers));
-    HashSet<ByteBuffer> auths = new HashSet<ByteBuffer>(Arrays.asList(s2bb("A"), s2bb("B")));
+    HashSet<ByteBuffer> auths = new HashSet<>(Arrays.asList(s2bb("A"), s2bb("B")));
     client.changeUserAuthorizations(creds, user, auths);
     List<ByteBuffer> update = client.getUserAuthorizations(creds, user);
-    assertEquals(auths, new HashSet<ByteBuffer>(update));
+    assertEquals(auths, new HashSet<>(update));
 
     // change password
     if (!isKerberosEnabled()) {
       password = s2bb("");
       client.changeLocalUserPassword(creds, user, password);
-      assertTrue(client.authenticateUser(creds, user, s2pp(new String(password.array(), password.position(), password.limit()))));
+      assertTrue(client.authenticateUser(creds, user, s2pp(ByteBufferUtil.toString(password))));
     }
 
     if (isKerberosEnabled()) {
@@ -1205,7 +1424,7 @@
       }
     } else {
       // check login with new password
-      client.login(user, s2pp(new String(password.array(), password.position(), password.limit())));
+      client.login(user, s2pp(ByteBufferUtil.toString(password)));
     }
   }
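The replacement of `new String(password.array(), password.position(), password.limit())` with `ByteBufferUtil.toString(password)` above is more than a cosmetic cleanup: the `String(byte[], int, int)` constructor takes an *offset and length*, so passing `limit()` as the length can over-read whenever `position()` is nonzero (the length should be `remaining()`). Assuming `ByteBufferUtil.toString` behaves like this sketch, the safe conversion looks roughly like (`ByteBufferStrings`/`toUtf8String` are illustrative names):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferStrings {
  // Decode only the bytes between position and limit, i.e. remaining(),
  // and leave the caller's buffer position untouched via duplicate().
  static String toUtf8String(ByteBuffer bb) {
    byte[] bytes = new byte[bb.remaining()];
    bb.duplicate().get(bytes);
    return new String(bytes, StandardCharsets.UTF_8);
  }
}
```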
 
@@ -1240,7 +1459,7 @@
       userName = getUniqueNames(1)[0];
       // create a user
       client.createLocalUser(creds, userName, password);
-      user = client.login(userName, s2pp(new String(password.array(), password.position(), password.limit())));
+      user = client.login(userName, s2pp(ByteBufferUtil.toString(password)));
     }
 
     // check permission failure
@@ -1270,7 +1489,7 @@
       UserGroupInformation.loginUserFromKeytab(clientPrincipal, clientKeytab.getAbsolutePath());
       client = origClient;
     }
-    client.listTables(creds).contains("succcess");
+    assertTrue(client.listTables(creds).contains("success"));
 
     // revoke permissions
     client.revokeSystemPermission(creds, userName, SystemPermission.CREATE_TABLE);
@@ -1298,7 +1517,7 @@
         UserGroupInformation.loginUserFromKeytab(otherClient.getPrincipal(), otherClient.getKeytab().getAbsolutePath());
         client = userClient;
       }
-      String scanner = client.createScanner(user, table, null);
+      String scanner = client.createScanner(user, tableName, null);
       client.nextK(scanner, 100);
       fail("stooge should not read table test");
     } catch (AccumuloSecurityException ex) {}
@@ -1310,16 +1529,16 @@
     }
 
     // grant
-    assertFalse(client.hasTablePermission(creds, userName, table, TablePermission.READ));
-    client.grantTablePermission(creds, userName, table, TablePermission.READ);
-    assertTrue(client.hasTablePermission(creds, userName, table, TablePermission.READ));
+    assertFalse(client.hasTablePermission(creds, userName, tableName, TablePermission.READ));
+    client.grantTablePermission(creds, userName, tableName, TablePermission.READ);
+    assertTrue(client.hasTablePermission(creds, userName, tableName, TablePermission.READ));
 
     if (isKerberosEnabled()) {
       // Switch back to the extra user
       UserGroupInformation.loginUserFromKeytab(otherClient.getPrincipal(), otherClient.getKeytab().getAbsolutePath());
       client = userClient;
     }
-    String scanner = client.createScanner(user, table, null);
+    String scanner = client.createScanner(user, tableName, null);
     client.nextK(scanner, 10);
     client.closeScanner(scanner);
 
@@ -1330,15 +1549,15 @@
     }
 
     // revoke
-    client.revokeTablePermission(creds, userName, table, TablePermission.READ);
-    assertFalse(client.hasTablePermission(creds, userName, table, TablePermission.READ));
+    client.revokeTablePermission(creds, userName, tableName, TablePermission.READ);
+    assertFalse(client.hasTablePermission(creds, userName, tableName, TablePermission.READ));
     try {
       if (isKerberosEnabled()) {
         // Switch back to the extra user
         UserGroupInformation.loginUserFromKeytab(otherClient.getPrincipal(), otherClient.getKeytab().getAbsolutePath());
         client = userClient;
       }
-      scanner = client.createScanner(user, table, null);
+      scanner = client.createScanner(user, tableName, null);
       client.nextK(scanner, 100);
       fail("stooge should not read table test");
     } catch (AccumuloSecurityException ex) {}
@@ -1362,14 +1581,113 @@
   }
 
   @Test
+  public void namespacePermissions() throws Exception {
+    String userName;
+    ClusterUser otherClient = null;
+    ByteBuffer password = s2bb("password");
+    ByteBuffer user;
+
+    TestProxyClient origProxyClient = null;
+    Client origClient = null;
+    TestProxyClient userProxyClient = null;
+    Client userClient = null;
+
+    if (isKerberosEnabled()) {
+      otherClient = getKdc().getClientPrincipal(1);
+      userName = otherClient.getPrincipal();
+
+      UserGroupInformation.loginUserFromKeytab(otherClient.getPrincipal(), otherClient.getKeytab().getAbsolutePath());
+      final UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
+      // Re-login and make a new connection; can't use the previous one
+
+      userProxyClient = new TestProxyClient(hostname, proxyPort, factory, proxyPrimary, ugi);
+
+      origProxyClient = proxyClient;
+      origClient = client;
+      userClient = client = userProxyClient.proxy();
+
+      user = client.login(userName, Collections.<String,String> emptyMap());
+    } else {
+      userName = getUniqueNames(1)[0];
+      // create a user
+      client.createLocalUser(creds, userName, password);
+      user = client.login(userName, s2pp(ByteBufferUtil.toString(password)));
+    }
+
+    // check permission failure
+    try {
+      client.createTable(user, namespaceName + ".fail", true, TimeType.MILLIS);
+      fail("should not create the table");
+    } catch (AccumuloSecurityException ex) {
+      if (isKerberosEnabled()) {
+        // Switch back to original client
+        UserGroupInformation.loginUserFromKeytab(clientPrincipal, clientKeytab.getAbsolutePath());
+        client = origClient;
+      }
+      assertFalse(client.listTables(creds).contains(namespaceName + ".fail"));
+    }
+
+    // grant permissions and test
+    assertFalse(client.hasNamespacePermission(creds, userName, namespaceName, NamespacePermission.CREATE_TABLE));
+    client.grantNamespacePermission(creds, userName, namespaceName, NamespacePermission.CREATE_TABLE);
+    assertTrue(client.hasNamespacePermission(creds, userName, namespaceName, NamespacePermission.CREATE_TABLE));
+    if (isKerberosEnabled()) {
+      // Switch back to the extra user
+      UserGroupInformation.loginUserFromKeytab(otherClient.getPrincipal(), otherClient.getKeytab().getAbsolutePath());
+      client = userClient;
+    }
+    client.createTable(user, namespaceName + ".success", true, TimeType.MILLIS);
+    if (isKerberosEnabled()) {
+      // Switch back to original client
+      UserGroupInformation.loginUserFromKeytab(clientPrincipal, clientKeytab.getAbsolutePath());
+      client = origClient;
+    }
+    assertTrue(client.listTables(creds).contains(namespaceName + ".success"));
+
+    // revoke permissions
+    client.revokeNamespacePermission(creds, userName, namespaceName, NamespacePermission.CREATE_TABLE);
+    assertFalse(client.hasNamespacePermission(creds, userName, namespaceName, NamespacePermission.CREATE_TABLE));
+    try {
+      if (isKerberosEnabled()) {
+        // Switch back to the extra user
+        UserGroupInformation.loginUserFromKeytab(otherClient.getPrincipal(), otherClient.getKeytab().getAbsolutePath());
+        client = userClient;
+      }
+      client.createTable(user, namespaceName + ".fail", true, TimeType.MILLIS);
+      fail("should not create the table");
+    } catch (AccumuloSecurityException ex) {
+      if (isKerberosEnabled()) {
+        // Switch back to original client
+        UserGroupInformation.loginUserFromKeytab(clientPrincipal, clientKeytab.getAbsolutePath());
+        client = origClient;
+      }
+      assertFalse(client.listTables(creds).contains(namespaceName + ".fail"));
+    }
+
+    // delete user
+    client.dropLocalUser(creds, userName);
+    Set<String> users = client.listLocalUsers(creds);
+    assertFalse("Should not see user after they are deleted", users.contains(userName));
+
+    if (isKerberosEnabled()) {
+      userProxyClient.close();
+      proxyClient = origProxyClient;
+      client = origClient;
+    }
+
+    // delete the table from the namespace, otherwise we can't delete the namespace during teardown
+    client.deleteTable(creds, namespaceName + ".success");
+  }
+
+  @Test
   public void testBatchWriter() throws Exception {
-    client.addConstraint(creds, table, NumericValueConstraint.class.getName());
+    client.addConstraint(creds, tableName, NumericValueConstraint.class.getName());
     // zookeeper propagation time
-    UtilWaitThread.sleep(ZOOKEEPER_PROPAGATION_TIME);
+    sleepUninterruptibly(ZOOKEEPER_PROPAGATION_TIME, TimeUnit.MILLISECONDS);
 
     // Take the table offline and online to force a config update
-    client.offlineTable(creds, table, true);
-    client.onlineTable(creds, table, true);
+    client.offlineTable(creds, tableName, true);
+    client.onlineTable(creds, tableName, true);
 
     WriterOptions writerOptions = new WriterOptions();
     writerOptions.setLatencyMs(10000);
@@ -1377,16 +1695,16 @@
     writerOptions.setThreads(1);
     writerOptions.setTimeoutMs(100000);
 
-    Map<String,Integer> constraints = client.listConstraints(creds, table);
+    Map<String,Integer> constraints = client.listConstraints(creds, tableName);
     while (!constraints.containsKey(NumericValueConstraint.class.getName())) {
       log.info("Constraints don't contain NumericValueConstraint");
       Thread.sleep(2000);
-      constraints = client.listConstraints(creds, table);
+      constraints = client.listConstraints(creds, tableName);
     }
 
     boolean success = false;
     for (int i = 0; i < 15; i++) {
-      String batchWriter = client.createWriter(creds, table, writerOptions);
+      String batchWriter = client.createWriter(creds, tableName, writerOptions);
       client.update(batchWriter, mutation("row1", "cf", "cq", "x"));
       client.update(batchWriter, mutation("row1", "cf", "cq", "x"));
       try {
@@ -1409,22 +1727,22 @@
       fail("constraint did not fire");
     }
 
-    client.removeConstraint(creds, table, 2);
+    client.removeConstraint(creds, tableName, 2);
 
     // Take the table offline and online to force a config update
-    client.offlineTable(creds, table, true);
-    client.onlineTable(creds, table, true);
+    client.offlineTable(creds, tableName, true);
+    client.onlineTable(creds, tableName, true);
 
-    constraints = client.listConstraints(creds, table);
+    constraints = client.listConstraints(creds, tableName);
     while (constraints.containsKey(NumericValueConstraint.class.getName())) {
       log.info("Constraints still contains NumericValueConstraint");
       Thread.sleep(2000);
-      constraints = client.listConstraints(creds, table);
+      constraints = client.listConstraints(creds, tableName);
     }
 
-    assertScan(new String[][] {}, table);
+    assertScan(new String[][] {}, tableName);
 
-    UtilWaitThread.sleep(ZOOKEEPER_PROPAGATION_TIME);
+    sleepUninterruptibly(ZOOKEEPER_PROPAGATION_TIME, TimeUnit.MILLISECONDS);
 
     writerOptions = new WriterOptions();
     writerOptions.setLatencyMs(10000);
@@ -1435,7 +1753,7 @@
     success = false;
     for (int i = 0; i < 15; i++) {
       try {
-        String batchWriter = client.createWriter(creds, table, writerOptions);
+        String batchWriter = client.createWriter(creds, tableName, writerOptions);
 
         client.update(batchWriter, mutation("row1", "cf", "cq", "x"));
         client.flush(batchWriter);
@@ -1452,39 +1770,39 @@
       fail("Failed to successfully write data after constraint was removed");
     }
 
-    assertScan(new String[][] {{"row1", "cf", "cq", "x"}}, table);
+    assertScan(new String[][] {{"row1", "cf", "cq", "x"}}, tableName);
 
-    client.deleteTable(creds, table);
+    client.deleteTable(creds, tableName);
   }
 
   @Test
   public void testTableConstraints() throws Exception {
-    log.debug("Setting NumericValueConstraint on " + table);
+    log.debug("Setting NumericValueConstraint on " + tableName);
 
     // constraints
-    client.addConstraint(creds, table, NumericValueConstraint.class.getName());
+    client.addConstraint(creds, tableName, NumericValueConstraint.class.getName());
 
     // zookeeper propagation time
     Thread.sleep(ZOOKEEPER_PROPAGATION_TIME);
 
     // Take the table offline and online to force a config update
-    client.offlineTable(creds, table, true);
-    client.onlineTable(creds, table, true);
+    client.offlineTable(creds, tableName, true);
+    client.onlineTable(creds, tableName, true);
 
     log.debug("Attempting to verify client-side that constraints are observed");
 
-    Map<String,Integer> constraints = client.listConstraints(creds, table);
+    Map<String,Integer> constraints = client.listConstraints(creds, tableName);
     while (!constraints.containsKey(NumericValueConstraint.class.getName())) {
       log.debug("Constraints don't contain NumericValueConstraint");
       Thread.sleep(2000);
-      constraints = client.listConstraints(creds, table);
+      constraints = client.listConstraints(creds, tableName);
     }
 
-    assertEquals(2, client.listConstraints(creds, table).size());
+    assertEquals(2, client.listConstraints(creds, tableName).size());
     log.debug("Verified client-side that constraints exist");
 
     // Write data that satisfies the constraint
-    client.updateAndFlush(creds, table, mutation("row1", "cf", "cq", "123"));
+    client.updateAndFlush(creds, tableName, mutation("row1", "cf", "cq", "123"));
 
     log.debug("Successfully wrote data that satisfies the constraint");
     log.debug("Trying to write data that the constraint should reject");
@@ -1492,7 +1810,7 @@
     // Expect failure on data that fails the constraint
     while (true) {
       try {
-        client.updateAndFlush(creds, table, mutation("row1", "cf", "cq", "x"));
+        client.updateAndFlush(creds, tableName, mutation("row1", "cf", "cq", "x"));
         log.debug("Expected mutation to be rejected, but was not. Waiting and retrying");
         Thread.sleep(5000);
       } catch (MutationsRejectedException ex) {
@@ -1503,29 +1821,29 @@
     log.debug("Saw expected failure on data which fails the constraint");
 
     log.debug("Removing constraint from table");
-    client.removeConstraint(creds, table, 2);
+    client.removeConstraint(creds, tableName, 2);
+
+    sleepUninterruptibly(ZOOKEEPER_PROPAGATION_TIME, TimeUnit.MILLISECONDS);
 
     // Take the table offline and online to force a config update
-    client.offlineTable(creds, table, true);
-    client.onlineTable(creds, table, true);
+    client.offlineTable(creds, tableName, true);
+    client.onlineTable(creds, tableName, true);
 
-    UtilWaitThread.sleep(ZOOKEEPER_PROPAGATION_TIME);
-
-    constraints = client.listConstraints(creds, table);
+    constraints = client.listConstraints(creds, tableName);
     while (constraints.containsKey(NumericValueConstraint.class.getName())) {
       log.debug("Constraints contains NumericValueConstraint");
       Thread.sleep(2000);
-      constraints = client.listConstraints(creds, table);
+      constraints = client.listConstraints(creds, tableName);
     }
 
-    assertEquals(1, client.listConstraints(creds, table).size());
+    assertEquals(1, client.listConstraints(creds, tableName).size());
     log.debug("Verified client-side that the constraint was removed");
 
     log.debug("Attempting to write a mutation that should succeed after the constraint was removed");
     // Make sure we can write the data after we removed the constraint
     while (true) {
       try {
-        client.updateAndFlush(creds, table, mutation("row1", "cf", "cq", "x"));
+        client.updateAndFlush(creds, tableName, mutation("row1", "cf", "cq", "x"));
         break;
       } catch (MutationsRejectedException ex) {
         log.debug("Expected mutation to be accepted, but it was not. Waiting and retrying");
@@ -1534,24 +1852,24 @@
     }
 
     log.debug("Verifying that record can be read from the table");
-    assertScan(new String[][] {{"row1", "cf", "cq", "x"}}, table);
+    assertScan(new String[][] {{"row1", "cf", "cq", "x"}}, tableName);
   }
 
   @Test
   public void tableMergesAndSplits() throws Exception {
     // add some splits
-    client.addSplits(creds, table, new HashSet<ByteBuffer>(Arrays.asList(s2bb("a"), s2bb("m"), s2bb("z"))));
-    List<ByteBuffer> splits = client.listSplits(creds, table, 1);
+    client.addSplits(creds, tableName, new HashSet<>(Arrays.asList(s2bb("a"), s2bb("m"), s2bb("z"))));
+    List<ByteBuffer> splits = client.listSplits(creds, tableName, 1);
     assertEquals(Arrays.asList(s2bb("m")), splits);
 
     // Merge some of the splits away
-    client.mergeTablets(creds, table, null, s2bb("m"));
-    splits = client.listSplits(creds, table, 10);
+    client.mergeTablets(creds, tableName, null, s2bb("m"));
+    splits = client.listSplits(creds, tableName, 10);
     assertEquals(Arrays.asList(s2bb("m"), s2bb("z")), splits);
 
     // Merge the entire table
-    client.mergeTablets(creds, table, null, null);
-    splits = client.listSplits(creds, table, 10);
+    client.mergeTablets(creds, tableName, null, null);
+    splits = client.listSplits(creds, tableName, 10);
     List<ByteBuffer> empty = Collections.emptyList();
 
     // No splits after merge on whole table
@@ -1561,32 +1879,32 @@
   @Test
   public void iteratorFunctionality() throws Exception {
     // iterators
-    HashMap<String,String> options = new HashMap<String,String>();
+    HashMap<String,String> options = new HashMap<>();
     options.put("type", "STRING");
     options.put("columns", "cf");
-    IteratorSetting setting = new IteratorSetting(10, table, SummingCombiner.class.getName(), options);
-    client.attachIterator(creds, table, setting, EnumSet.allOf(IteratorScope.class));
+    IteratorSetting setting = new IteratorSetting(10, tableName, SummingCombiner.class.getName(), options);
+    client.attachIterator(creds, tableName, setting, EnumSet.allOf(IteratorScope.class));
     for (int i = 0; i < 10; i++) {
-      client.updateAndFlush(creds, table, mutation("row1", "cf", "cq", "1"));
+      client.updateAndFlush(creds, tableName, mutation("row1", "cf", "cq", "1"));
     }
     // 10 updates of "1" in the value w/ SummingCombiner should return value of "10"
-    assertScan(new String[][] {{"row1", "cf", "cq", "10"}}, table);
+    assertScan(new String[][] {{"row1", "cf", "cq", "10"}}, tableName);
 
     try {
-      client.checkIteratorConflicts(creds, table, setting, EnumSet.allOf(IteratorScope.class));
+      client.checkIteratorConflicts(creds, tableName, setting, EnumSet.allOf(IteratorScope.class));
       fail("checkIteratorConflicts did not throw an exception");
     } catch (Exception ex) {
       // Expected
     }
-    client.deleteRows(creds, table, null, null);
-    client.removeIterator(creds, table, "test", EnumSet.allOf(IteratorScope.class));
+    client.deleteRows(creds, tableName, null, null);
+    client.removeIterator(creds, tableName, "test", EnumSet.allOf(IteratorScope.class));
     String expected[][] = new String[10][];
     for (int i = 0; i < 10; i++) {
-      client.updateAndFlush(creds, table, mutation("row" + i, "cf", "cq", "" + i));
+      client.updateAndFlush(creds, tableName, mutation("row" + i, "cf", "cq", "" + i));
       expected[i] = new String[] {"row" + i, "cf", "cq", "" + i};
-      client.flushTable(creds, table, null, null, true);
+      client.flushTable(creds, tableName, null, null, true);
     }
-    assertScan(expected, table);
+    assertScan(expected, tableName);
   }
 
   @Test
@@ -1595,14 +1913,14 @@
 
     String expected[][] = new String[10][];
     for (int i = 0; i < 10; i++) {
-      client.updateAndFlush(creds, table, mutation("row" + i, "cf", "cq", "" + i));
+      client.updateAndFlush(creds, tableName, mutation("row" + i, "cf", "cq", "" + i));
       expected[i] = new String[] {"row" + i, "cf", "cq", "" + i};
-      client.flushTable(creds, table, null, null, true);
+      client.flushTable(creds, tableName, null, null, true);
     }
-    assertScan(expected, table);
+    assertScan(expected, tableName);
 
     // clone
-    client.cloneTable(creds, table, TABLE_TEST2, true, null, null);
+    client.cloneTable(creds, tableName, TABLE_TEST2, true, null, null);
     assertScan(expected, TABLE_TEST2);
     client.deleteTable(creds, TABLE_TEST2);
   }
@@ -1610,23 +1928,23 @@
   @Test
   public void clearLocatorCache() throws Exception {
     // don't know how to test this, call it just for fun
-    client.clearLocatorCache(creds, table);
+    client.clearLocatorCache(creds, tableName);
   }
 
   @Test
   public void compactTable() throws Exception {
     String expected[][] = new String[10][];
     for (int i = 0; i < 10; i++) {
-      client.updateAndFlush(creds, table, mutation("row" + i, "cf", "cq", "" + i));
+      client.updateAndFlush(creds, tableName, mutation("row" + i, "cf", "cq", "" + i));
       expected[i] = new String[] {"row" + i, "cf", "cq", "" + i};
-      client.flushTable(creds, table, null, null, true);
+      client.flushTable(creds, tableName, null, null, true);
     }
-    assertScan(expected, table);
+    assertScan(expected, tableName);
 
     // compact
-    client.compactTable(creds, table, null, null, null, true, true, null);
-    assertEquals(1, countFiles(table));
-    assertScan(expected, table);
+    client.compactTable(creds, tableName, null, null, null, true, true, null);
+    assertEquals(1, countFiles(tableName));
+    assertScan(expected, tableName);
   }
 
   @Test
@@ -1636,21 +1954,21 @@
     // Write some data
     String expected[][] = new String[10][];
     for (int i = 0; i < 10; i++) {
-      client.updateAndFlush(creds, table, mutation("row" + i, "cf", "cq", "" + i));
+      client.updateAndFlush(creds, tableName, mutation("row" + i, "cf", "cq", "" + i));
       expected[i] = new String[] {"row" + i, "cf", "cq", "" + i};
-      client.flushTable(creds, table, null, null, true);
+      client.flushTable(creds, tableName, null, null, true);
     }
-    assertScan(expected, table);
+    assertScan(expected, tableName);
 
     // compact
-    client.compactTable(creds, table, null, null, null, true, true, null);
-    assertEquals(1, countFiles(table));
-    assertScan(expected, table);
+    client.compactTable(creds, tableName, null, null, null, true, true, null);
+    assertEquals(1, countFiles(tableName));
+    assertScan(expected, tableName);
 
     // Clone the table
-    client.cloneTable(creds, table, TABLE_TEST2, true, null, null);
-    Set<String> tablesToScan = new HashSet<String>();
-    tablesToScan.add(table);
+    client.cloneTable(creds, tableName, TABLE_TEST2, true, null, null);
+    Set<String> tablesToScan = new HashSet<>();
+    tablesToScan.add(tableName);
     tablesToScan.add(TABLE_TEST2);
     tablesToScan.add("foo");
 
@@ -1684,25 +2002,25 @@
     // Write some data
     String expected[][] = new String[10][];
     for (int i = 0; i < 10; i++) {
-      client.updateAndFlush(creds, table, mutation("row" + i, "cf", "cq", "" + i));
+      client.updateAndFlush(creds, tableName, mutation("row" + i, "cf", "cq", "" + i));
       expected[i] = new String[] {"row" + i, "cf", "cq", "" + i};
-      client.flushTable(creds, table, null, null, true);
+      client.flushTable(creds, tableName, null, null, true);
     }
-    assertScan(expected, table);
+    assertScan(expected, tableName);
 
     // export/import
-    MiniAccumuloClusterImpl cluster = SharedMiniClusterIT.getCluster();
+    MiniAccumuloClusterImpl cluster = SharedMiniClusterBase.getCluster();
     FileSystem fs = cluster.getFileSystem();
     Path base = cluster.getTemporaryPath();
     Path dir = new Path(base, "test");
     assertTrue(fs.mkdirs(dir));
     Path destDir = new Path(base, "test_dest");
     assertTrue(fs.mkdirs(destDir));
-    client.offlineTable(creds, table, false);
-    client.exportTable(creds, table, dir.toString());
+    client.offlineTable(creds, tableName, false);
+    client.exportTable(creds, tableName, dir.toString());
     // copy files to a new location
     FSDataInputStream is = fs.open(new Path(dir, "distcp.txt"));
-    try (BufferedReader r = new BufferedReader(new InputStreamReader(is))) {
+    try (BufferedReader r = new BufferedReader(new InputStreamReader(is, UTF_8))) {
       while (true) {
         String line = r.readLine();
         if (line == null)
@@ -1711,7 +2029,7 @@
         FileUtil.copy(fs, srcPath, fs, destDir, false, fs.getConf());
       }
     }
-    client.deleteTable(creds, table);
+    client.deleteTable(creds, tableName);
     client.importTable(creds, "testify", destDir.toString());
     assertScan(expected, "testify");
     client.deleteTable(creds, "testify");
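The `InputStreamReader(is, UTF_8)` change in the export/import hunk above follows the same theme as the `"value".getBytes(UTF_8)` change later: a reader or `getBytes()` call without an explicit charset uses the platform default, which varies by JVM and locale. A small sketch of the deterministic form (`Utf8Read`/`firstLine` are illustrative names):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class Utf8Read {
  // Passing UTF_8 explicitly makes the decode deterministic across
  // platforms, as in the distcp.txt read above.
  static String firstLine(InputStream in) throws IOException {
    try (BufferedReader r = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
      return r.readLine();
    }
  }
}
```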
@@ -1727,11 +2045,11 @@
 
   @Test
   public void localityGroups() throws Exception {
-    Map<String,Set<String>> groups = new HashMap<String,Set<String>>();
+    Map<String,Set<String>> groups = new HashMap<>();
     groups.put("group1", Collections.singleton("cf1"));
     groups.put("group2", Collections.singleton("cf2"));
-    client.setLocalityGroups(creds, table, groups);
-    assertEquals(groups, client.getLocalityGroups(creds, table));
+    client.setLocalityGroups(creds, tableName, groups);
+    assertEquals(groups, client.getLocalityGroups(creds, tableName));
   }
 
   @Test
@@ -1739,18 +2057,18 @@
     Map<String,String> systemProps = client.getSystemConfiguration(creds);
     String systemTableSplitThreshold = systemProps.get("table.split.threshold");
 
-    Map<String,String> orig = client.getTableProperties(creds, table);
-    client.setTableProperty(creds, table, "table.split.threshold", "500M");
+    Map<String,String> orig = client.getTableProperties(creds, tableName);
+    client.setTableProperty(creds, tableName, "table.split.threshold", "500M");
 
     // Get the new table property value
-    Map<String,String> update = client.getTableProperties(creds, table);
+    Map<String,String> update = client.getTableProperties(creds, tableName);
     assertEquals(update.get("table.split.threshold"), "500M");
 
     // Table level properties shouldn't affect system level values
     assertEquals(systemTableSplitThreshold, client.getSystemConfiguration(creds).get("table.split.threshold"));
 
-    client.removeTableProperty(creds, table, "table.split.threshold");
-    update = client.getTableProperties(creds, table);
+    client.removeTableProperty(creds, tableName, "table.split.threshold");
+    update = client.getTableProperties(creds, tableName);
     assertEquals(orig, update);
   }
 
@@ -1758,18 +2076,18 @@
   public void tableRenames() throws Exception {
     // rename table
     Map<String,String> tables = client.tableIdMap(creds);
-    client.renameTable(creds, table, "bar");
+    client.renameTable(creds, tableName, "bar");
     Map<String,String> tables2 = client.tableIdMap(creds);
-    assertEquals(tables.get(table), tables2.get("bar"));
+    assertEquals(tables.get(tableName), tables2.get("bar"));
     // table exists
     assertTrue(client.tableExists(creds, "bar"));
-    assertFalse(client.tableExists(creds, table));
-    client.renameTable(creds, "bar", table);
+    assertFalse(client.tableExists(creds, tableName));
+    client.renameTable(creds, "bar", tableName);
   }
 
   @Test
   public void bulkImport() throws Exception {
-    MiniAccumuloClusterImpl cluster = SharedMiniClusterIT.getCluster();
+    MiniAccumuloClusterImpl cluster = SharedMiniClusterBase.getCluster();
     FileSystem fs = cluster.getFileSystem();
     Path base = cluster.getTemporaryPath();
     Path dir = new Path(base, "test");
@@ -1777,30 +2095,31 @@
 
     // Write an RFile
     String filename = dir + "/bulk/import/rfile.rf";
-    FileSKVWriter writer = FileOperations.getInstance().openWriter(filename, fs, fs.getConf(), DefaultConfiguration.getInstance());
+    FileSKVWriter writer = FileOperations.getInstance().newWriterBuilder().forFile(filename, fs, fs.getConf())
+        .withTableConfiguration(DefaultConfiguration.getInstance()).build();
     writer.startDefaultLocalityGroup();
-    writer.append(new org.apache.accumulo.core.data.Key(new Text("a"), new Text("b"), new Text("c")), new Value("value".getBytes()));
+    writer.append(new org.apache.accumulo.core.data.Key(new Text("a"), new Text("b"), new Text("c")), new Value("value".getBytes(UTF_8)));
     writer.close();
 
     // Create failures directory
     fs.mkdirs(new Path(dir + "/bulk/fail"));
 
     // Run the bulk import
-    client.importDirectory(creds, table, dir + "/bulk/import", dir + "/bulk/fail", true);
+    client.importDirectory(creds, tableName, dir + "/bulk/import", dir + "/bulk/fail", true);
 
     // Make sure we find the data
-    String scanner = client.createScanner(creds, table, null);
+    String scanner = client.createScanner(creds, tableName, null);
     ScanResult more = client.nextK(scanner, 100);
     client.closeScanner(scanner);
     assertEquals(1, more.results.size());
-    ByteBuffer maxRow = client.getMaxRow(creds, table, null, null, false, null, false);
+    ByteBuffer maxRow = client.getMaxRow(creds, tableName, null, null, false, null, false);
     assertEquals(s2bb("a"), maxRow);
   }
 
   @Test
   public void testTableClassLoad() throws Exception {
-    assertFalse(client.testTableClassLoad(creds, table, "abc123", SortedKeyValueIterator.class.getName()));
-    assertTrue(client.testTableClassLoad(creds, table, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName()));
+    assertFalse(client.testTableClassLoad(creds, tableName, "abc123", SortedKeyValueIterator.class.getName()));
+    assertTrue(client.testTableClassLoad(creds, tableName, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName()));
   }
 
   private Condition newCondition(String cf, String cq) {
@@ -1839,22 +2158,22 @@
 
   @Test
   public void testConditionalWriter() throws Exception {
-    log.debug("Adding constraint {} to {}", table, NumericValueConstraint.class.getName());
-    client.addConstraint(creds, table, NumericValueConstraint.class.getName());
-    UtilWaitThread.sleep(ZOOKEEPER_PROPAGATION_TIME);
+    log.debug("Adding constraint {} to {}", tableName, NumericValueConstraint.class.getName());
+    client.addConstraint(creds, tableName, NumericValueConstraint.class.getName());
+    sleepUninterruptibly(ZOOKEEPER_PROPAGATION_TIME, TimeUnit.MILLISECONDS);
 
     // Take the table offline and online to force a config update
-    client.offlineTable(creds, table, true);
-    client.onlineTable(creds, table, true);
+    client.offlineTable(creds, tableName, true);
+    client.onlineTable(creds, tableName, true);
 
-    while (!client.listConstraints(creds, table).containsKey(NumericValueConstraint.class.getName())) {
+    while (!client.listConstraints(creds, tableName).containsKey(NumericValueConstraint.class.getName())) {
       log.info("Failed to see constraint");
       Thread.sleep(1000);
     }
 
-    String cwid = client.createConditionalWriter(creds, table, new ConditionalWriterOptions());
+    String cwid = client.createConditionalWriter(creds, tableName, new ConditionalWriterOptions());
 
-    Map<ByteBuffer,ConditionalUpdates> updates = new HashMap<ByteBuffer,ConditionalUpdates>();
+    Map<ByteBuffer,ConditionalUpdates> updates = new HashMap<>();
 
     updates.put(
         s2bb("00345"),
@@ -1866,7 +2185,7 @@
     assertEquals(1, results.size());
     assertEquals(ConditionalStatus.ACCEPTED, results.get(s2bb("00345")));
 
-    assertScan(new String[][] { {"00345", "data", "img", "73435435"}, {"00345", "meta", "seq", "1"}}, table);
+    assertScan(new String[][] { {"00345", "data", "img", "73435435"}, {"00345", "meta", "seq", "1"}}, tableName);
 
     // test not setting values on conditions
     updates.clear();
@@ -1880,7 +2199,7 @@
     assertEquals(ConditionalStatus.REJECTED, results.get(s2bb("00345")));
     assertEquals(ConditionalStatus.ACCEPTED, results.get(s2bb("00346")));
 
-    assertScan(new String[][] { {"00345", "data", "img", "73435435"}, {"00345", "meta", "seq", "1"}, {"00346", "meta", "seq", "1"}}, table);
+    assertScan(new String[][] { {"00345", "data", "img", "73435435"}, {"00345", "meta", "seq", "1"}, {"00346", "meta", "seq", "1"}}, tableName);
 
     // test setting values on conditions
     updates.clear();
@@ -1898,7 +2217,7 @@
     assertEquals(ConditionalStatus.ACCEPTED, results.get(s2bb("00345")));
     assertEquals(ConditionalStatus.REJECTED, results.get(s2bb("00346")));
 
-    assertScan(new String[][] { {"00345", "data", "img", "567890"}, {"00345", "meta", "seq", "2"}, {"00346", "meta", "seq", "1"}}, table);
+    assertScan(new String[][] { {"00345", "data", "img", "567890"}, {"00345", "meta", "seq", "2"}, {"00346", "meta", "seq", "1"}}, tableName);
 
     // test setting timestamp on condition to a non-existent version
     updates.clear();
@@ -1913,7 +2232,7 @@
     assertEquals(1, results.size());
     assertEquals(ConditionalStatus.REJECTED, results.get(s2bb("00345")));
 
-    assertScan(new String[][] { {"00345", "data", "img", "567890"}, {"00345", "meta", "seq", "2"}, {"00346", "meta", "seq", "1"}}, table);
+    assertScan(new String[][] { {"00345", "data", "img", "567890"}, {"00345", "meta", "seq", "2"}, {"00346", "meta", "seq", "1"}}, tableName);
 
     // test setting timestamp to an existing version
 
@@ -1929,13 +2248,13 @@
     assertEquals(1, results.size());
     assertEquals(ConditionalStatus.ACCEPTED, results.get(s2bb("00345")));
 
-    assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"}}, table);
+    assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"}}, tableName);
 
     // run test w/ condition that has iterators
     // following should fail w/o iterator
-    client.updateAndFlush(creds, table, Collections.singletonMap(s2bb("00347"), Arrays.asList(newColUpdate("data", "count", "1"))));
-    client.updateAndFlush(creds, table, Collections.singletonMap(s2bb("00347"), Arrays.asList(newColUpdate("data", "count", "1"))));
-    client.updateAndFlush(creds, table, Collections.singletonMap(s2bb("00347"), Arrays.asList(newColUpdate("data", "count", "1"))));
+    client.updateAndFlush(creds, tableName, Collections.singletonMap(s2bb("00347"), Arrays.asList(newColUpdate("data", "count", "1"))));
+    client.updateAndFlush(creds, tableName, Collections.singletonMap(s2bb("00347"), Arrays.asList(newColUpdate("data", "count", "1"))));
+    client.updateAndFlush(creds, tableName, Collections.singletonMap(s2bb("00347"), Arrays.asList(newColUpdate("data", "count", "1"))));
 
     updates.clear();
     updates.put(s2bb("00347"),
@@ -1947,11 +2266,11 @@
     assertEquals(ConditionalStatus.REJECTED, results.get(s2bb("00347")));
 
     assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-        {"00347", "data", "count", "1"}}, table);
+        {"00347", "data", "count", "1"}}, tableName);
 
     // following test w/ iterator setup should succeed
     Condition iterCond = newCondition("data", "count", "3");
-    Map<String,String> props = new HashMap<String,String>();
+    Map<String,String> props = new HashMap<>();
     props.put("type", "STRING");
     props.put("columns", "data:count");
     IteratorSetting is = new IteratorSetting(1, "sumc", SummingCombiner.class.getName(), props);
@@ -1966,7 +2285,7 @@
     assertEquals(ConditionalStatus.ACCEPTED, results.get(s2bb("00347")));
 
     assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, table);
+        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, tableName);
 
     ConditionalStatus status = null;
     for (int i = 0; i < 30; i++) {
@@ -1993,7 +2312,7 @@
     assertEquals(ConditionalStatus.VIOLATED, status);
 
     assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, table);
+        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, tableName);
 
     // run test with two conditions
     // both conditions should fail
@@ -2009,7 +2328,7 @@
     assertEquals(ConditionalStatus.REJECTED, results.get(s2bb("00347")));
 
     assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, table);
+        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, tableName);
 
     // one condition should fail
     updates.clear();
@@ -2024,7 +2343,7 @@
     assertEquals(ConditionalStatus.REJECTED, results.get(s2bb("00347")));
 
     assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, table);
+        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, tableName);
 
     // one condition should fail
     updates.clear();
@@ -2039,13 +2358,13 @@
     assertEquals(ConditionalStatus.REJECTED, results.get(s2bb("00347")));
 
     assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, table);
+        {"00347", "data", "count", "1"}, {"00347", "data", "img", "1234567890"}}, tableName);
 
     // both conditions should succeed
 
     ConditionalStatus result = client.updateRowConditionally(
         creds,
-        table,
+        tableName,
         s2bb("00347"),
         new ConditionalUpdates(Arrays.asList(newCondition("data", "img", "1234567890"), newCondition("data", "count", "1")), Arrays.asList(
             newColUpdate("data", "count", "3"), newColUpdate("data", "img", "0987654321"))));
@@ -2053,7 +2372,7 @@
     assertEquals(ConditionalStatus.ACCEPTED, result);
 
     assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-        {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}}, table);
+        {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}}, tableName);
 
     client.closeConditionalWriter(cwid);
     try {
@@ -2075,8 +2394,8 @@
     }
 
     client.changeUserAuthorizations(creds, principal, Collections.singleton(s2bb("A")));
-    client.grantTablePermission(creds, principal, table, TablePermission.WRITE);
-    client.grantTablePermission(creds, principal, table, TablePermission.READ);
+    client.grantTablePermission(creds, principal, tableName, TablePermission.WRITE);
+    client.grantTablePermission(creds, principal, tableName, TablePermission.READ);
 
     TestProxyClient cwuserProxyClient = null;
     Client origClient = null;
@@ -2096,7 +2415,7 @@
     try {
       ByteBuffer cwCreds = client.login(principal, cwProperties);
 
-      cwid = client.createConditionalWriter(cwCreds, table, new ConditionalWriterOptions().setAuthorizations(Collections.singleton(s2bb("A"))));
+      cwid = client.createConditionalWriter(cwCreds, tableName, new ConditionalWriterOptions().setAuthorizations(Collections.singleton(s2bb("A"))));
 
       updates.clear();
       updates.put(
@@ -2121,7 +2440,7 @@
       }
       // Verify that the original user can't see the updates with visibilities set
       assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-          {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}, {"00348", "data", "seq", "1"}}, table);
+          {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}, {"00348", "data", "seq", "1"}}, tableName);
 
       if (isKerberosEnabled()) {
         UserGroupInformation.loginUserFromKeytab(cwuser.getPrincipal(), cwuser.getKeytab().getAbsolutePath());
@@ -2146,7 +2465,7 @@
 
       // Same results as the original user
       assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-          {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}, {"00348", "data", "seq", "1"}}, table);
+          {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}, {"00348", "data", "seq", "1"}}, tableName);
 
       if (isKerberosEnabled()) {
         UserGroupInformation.loginUserFromKeytab(cwuser.getPrincipal(), cwuser.getKeytab().getAbsolutePath());
@@ -2168,7 +2487,7 @@
       }
 
       assertScan(new String[][] { {"00345", "data", "img", "1234567890"}, {"00345", "meta", "seq", "3"}, {"00346", "meta", "seq", "1"},
-          {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}, {"00348", "data", "seq", "2"}}, table);
+          {"00347", "data", "count", "3"}, {"00347", "data", "img", "0987654321"}, {"00348", "data", "seq", "2"}}, tableName);
 
       if (isKerberosEnabled()) {
         UserGroupInformation.loginUserFromKeytab(cwuser.getPrincipal(), cwuser.getKeytab().getAbsolutePath());
@@ -2228,16 +2547,16 @@
 
   private Map<ByteBuffer,List<ColumnUpdate>> mutation(String row, String cf, String cq, String value) {
     ColumnUpdate upd = new ColumnUpdate(s2bb(cf), s2bb(cq));
-    upd.setValue(value.getBytes());
+    upd.setValue(value.getBytes(UTF_8));
     return Collections.singletonMap(s2bb(row), Collections.singletonList(upd));
   }
 
   private ByteBuffer s2bb(String cf) {
-    return ByteBuffer.wrap(cf.getBytes());
+    return ByteBuffer.wrap(cf.getBytes(UTF_8));
   }
 
   private Map<String,String> s2pp(String cf) {
-    Map<String,String> toRet = new TreeMap<String,String>();
+    Map<String,String> toRet = new TreeMap<>();
     toRet.put("password", cf);
     return toRet;
   }
@@ -2264,33 +2583,127 @@
 
   @Test
   public void testCompactionStrategy() throws Exception {
-    client.setProperty(creds, Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "context1", System.getProperty("user.dir")
-        + "/src/test/resources/TestCompactionStrat.jar");
-    client.setTableProperty(creds, table, Property.TABLE_CLASSPATH.getKey(), "context1");
+    File jarDir = new File(System.getProperty("user.dir"), "target");
+    assertTrue(jarDir.mkdirs() || jarDir.isDirectory());
+    File jarFile = new File(jarDir, "TestCompactionStrat.jar");
+    FileUtils.copyInputStreamToFile(Class.class.getResourceAsStream("/TestCompactionStrat.jar"), jarFile);
+    client.setProperty(creds, Property.VFS_CONTEXT_CLASSPATH_PROPERTY.getKey() + "context1", jarFile.toString());
+    client.setTableProperty(creds, tableName, Property.TABLE_CLASSPATH.getKey(), "context1");
 
-    client.addSplits(creds, table, Collections.singleton(s2bb("efg")));
+    client.addSplits(creds, tableName, Collections.singleton(s2bb("efg")));
 
-    client.updateAndFlush(creds, table, mutation("a", "cf", "cq", "v1"));
-    client.flushTable(creds, table, null, null, true);
+    client.updateAndFlush(creds, tableName, mutation("a", "cf", "cq", "v1"));
+    client.flushTable(creds, tableName, null, null, true);
 
-    client.updateAndFlush(creds, table, mutation("b", "cf", "cq", "v2"));
-    client.flushTable(creds, table, null, null, true);
+    client.updateAndFlush(creds, tableName, mutation("b", "cf", "cq", "v2"));
+    client.flushTable(creds, tableName, null, null, true);
 
-    client.updateAndFlush(creds, table, mutation("y", "cf", "cq", "v1"));
-    client.flushTable(creds, table, null, null, true);
+    client.updateAndFlush(creds, tableName, mutation("y", "cf", "cq", "v1"));
+    client.flushTable(creds, tableName, null, null, true);
 
-    client.updateAndFlush(creds, table, mutation("z", "cf", "cq", "v2"));
-    client.flushTable(creds, table, null, null, true);
+    client.updateAndFlush(creds, tableName, mutation("z", "cf", "cq", "v2"));
+    client.flushTable(creds, tableName, null, null, true);
 
-    assertEquals(4, countFiles(table));
+    assertEquals(4, countFiles(tableName));
 
     CompactionStrategyConfig csc = new CompactionStrategyConfig();
 
     // The EfgCompactionStrat will only compact tablets with an end row of efg
     csc.setClassName("org.apache.accumulo.test.EfgCompactionStrat");
 
-    client.compactTable(creds, table, null, null, null, true, true, csc);
+    client.compactTable(creds, tableName, null, null, null, true, true, csc);
 
-    assertEquals(3, countFiles(table));
+    assertEquals(3, countFiles(tableName));
+  }
+
+  @Test
+  public void namespaceOperations() throws Exception {
+    // default namespace and accumulo namespace
+    assertEquals("System namespace is wrong", client.systemNamespace(), Namespaces.ACCUMULO_NAMESPACE);
+    assertEquals("Default namespace is wrong", client.defaultNamespace(), Namespaces.DEFAULT_NAMESPACE);
+
+    // namespace existence and namespace listing
+    assertTrue("Namespace created during setup should exist", client.namespaceExists(creds, namespaceName));
+    assertTrue("Namespace listing should contain namespace created during setup", client.listNamespaces(creds).contains(namespaceName));
+
+    // create new namespace
+    String newNamespace = "foobar";
+    client.createNamespace(creds, newNamespace);
+
+    assertTrue("Namespace just created should exist", client.namespaceExists(creds, newNamespace));
+    assertTrue("Namespace listing should contain just created", client.listNamespaces(creds).contains(newNamespace));
+
+    // rename the namespace
+    String renamedNamespace = "foobar_renamed";
+    client.renameNamespace(creds, newNamespace, renamedNamespace);
+
+    assertTrue("Renamed namespace should exist", client.namespaceExists(creds, renamedNamespace));
+    assertTrue("Namespace listing should contain renamed namespace", client.listNamespaces(creds).contains(renamedNamespace));
+
+    assertFalse("Original namespace should no longer exist", client.namespaceExists(creds, newNamespace));
+    assertFalse("Namespace listing should no longer contain original namespace", client.listNamespaces(creds).contains(newNamespace));
+
+    // delete the namespace
+    client.deleteNamespace(creds, renamedNamespace);
+    assertFalse("Renamed namespace should no longer exist", client.namespaceExists(creds, renamedNamespace));
+    assertFalse("Namespace listing should no longer contain renamed namespace", client.listNamespaces(creds).contains(renamedNamespace));
+
+    // namespace properties
+    Map<String,String> cfg = client.getNamespaceProperties(creds, namespaceName);
+    String defaultProp = cfg.get("table.compaction.major.ratio");
+    assertNotEquals(defaultProp, "10"); // let's make sure we are setting this value to something different than default...
+    client.setNamespaceProperty(creds, namespaceName, "table.compaction.major.ratio", "10");
+    for (int i = 0; i < 5; i++) {
+      cfg = client.getNamespaceProperties(creds, namespaceName);
+      if ("10".equals(cfg.get("table.compaction.major.ratio"))) {
+        break;
+      }
+      sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
+    }
+    assertTrue("Namespace should contain table.compaction.major.ratio property",
+        client.getNamespaceProperties(creds, namespaceName).containsKey("table.compaction.major.ratio"));
+    assertEquals("Namespace property table.compaction.major.ratio property should equal 10",
+        client.getNamespaceProperties(creds, namespaceName).get("table.compaction.major.ratio"), "10");
+    client.removeNamespaceProperty(creds, namespaceName, "table.compaction.major.ratio");
+    for (int i = 0; i < 5; i++) {
+      cfg = client.getNamespaceProperties(creds, namespaceName);
+      if (!defaultProp.equals(cfg.get("table.compaction.major.ratio"))) {
+        break;
+      }
+      sleepUninterruptibly(200, TimeUnit.MILLISECONDS);
+    }
+    assertEquals("Namespace should have default value for table.compaction.major.ratio", defaultProp, cfg.get("table.compaction.major.ratio"));
+
+    // namespace ID map
+    assertTrue("Namespace ID map should contain accumulo", client.namespaceIdMap(creds).containsKey("accumulo"));
+    assertTrue("Namespace ID map should contain namespace created during setup", client.namespaceIdMap(creds).containsKey(namespaceName));
+
+    // namespace iterators
+    IteratorSetting setting = new IteratorSetting(100, "DebugTheThings", DebugIterator.class.getName(), Collections.<String,String> emptyMap());
+    client.attachNamespaceIterator(creds, namespaceName, setting, EnumSet.of(IteratorScope.SCAN));
+    assertEquals("Wrong iterator setting returned", setting, client.getNamespaceIteratorSetting(creds, namespaceName, "DebugTheThings", IteratorScope.SCAN));
+    assertTrue("Namespace iterator settings should contain iterator just added",
+        client.listNamespaceIterators(creds, namespaceName).containsKey("DebugTheThings"));
+    assertEquals("Namespace iterator listing should contain iterator scope just added", EnumSet.of(IteratorScope.SCAN),
+        client.listNamespaceIterators(creds, namespaceName).get("DebugTheThings"));
+    client.checkNamespaceIteratorConflicts(creds, namespaceName, setting, EnumSet.of(IteratorScope.MAJC));
+    client.removeNamespaceIterator(creds, namespaceName, "DebugTheThings", EnumSet.of(IteratorScope.SCAN));
+    assertFalse("Namespace iterator settings should contain iterator just added",
+        client.listNamespaceIterators(creds, namespaceName).containsKey("DebugTheThings"));
+
+    // namespace constraints
+    int id = client.addNamespaceConstraint(creds, namespaceName, MaxMutationSize.class.getName());
+    assertTrue("Namespace should contain max mutation size constraint",
+        client.listNamespaceConstraints(creds, namespaceName).containsKey(MaxMutationSize.class.getName()));
+    assertEquals("Namespace max mutation size constraint id is wrong", id,
+        (int) client.listNamespaceConstraints(creds, namespaceName).get(MaxMutationSize.class.getName()));
+    client.removeNamespaceConstraint(creds, namespaceName, id);
+    assertFalse("Namespace should no longer contain max mutation size constraint",
+        client.listNamespaceConstraints(creds, namespaceName).containsKey(MaxMutationSize.class.getName()));
+
+    // namespace class load
+    assertTrue("Namespace class load should work",
+        client.testNamespaceClassLoad(creds, namespaceName, DebugIterator.class.getName(), SortedKeyValueIterator.class.getName()));
+    assertFalse("Namespace class load should not work", client.testNamespaceClassLoad(creds, namespaceName, "foo.bar", SortedKeyValueIterator.class.getName()));
   }
 }
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TBinaryProxyIT.java b/test/src/main/java/org/apache/accumulo/test/proxy/TBinaryProxyIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TBinaryProxyIT.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TBinaryProxyIT.java
index 97542a0..7500361 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TBinaryProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TBinaryProxyIT.java
@@ -16,7 +16,7 @@
  */
 package org.apache.accumulo.test.proxy;
 
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.junit.BeforeClass;
 
@@ -27,7 +27,7 @@
 
   @BeforeClass
   public static void setProtocol() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
     SimpleProxyBase.factory = new TBinaryProtocol.Factory();
     setUpProxy();
   }
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TCompactProxyIT.java b/test/src/main/java/org/apache/accumulo/test/proxy/TCompactProxyIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TCompactProxyIT.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TCompactProxyIT.java
index b2ffbf7..157574b 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TCompactProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TCompactProxyIT.java
@@ -16,7 +16,7 @@
  */
 package org.apache.accumulo.test.proxy;
 
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.thrift.protocol.TCompactProtocol;
 import org.junit.BeforeClass;
 
@@ -27,7 +27,7 @@
 
   @BeforeClass
   public static void setProtocol() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
     SimpleProxyBase.factory = new TCompactProtocol.Factory();
     setUpProxy();
   }
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java b/test/src/main/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java
index d3c8bc8..d8b91c4 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TJsonProtocolProxyIT.java
@@ -16,7 +16,7 @@
  */
 package org.apache.accumulo.test.proxy;
 
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.thrift.protocol.TJSONProtocol;
 import org.junit.BeforeClass;
 
@@ -27,7 +27,7 @@
 
   @BeforeClass
   public static void setProtocol() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
     SimpleProxyBase.factory = new TJSONProtocol.Factory();
     setUpProxy();
   }
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TTupleProxyIT.java b/test/src/main/java/org/apache/accumulo/test/proxy/TTupleProxyIT.java
similarity index 91%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TTupleProxyIT.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TTupleProxyIT.java
index 40f96b8..2f792f6 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TTupleProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TTupleProxyIT.java
@@ -16,7 +16,7 @@
  */
 package org.apache.accumulo.test.proxy;
 
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.thrift.protocol.TTupleProtocol;
 import org.junit.BeforeClass;
 
@@ -27,7 +27,7 @@
 
   @BeforeClass
   public static void setProtocol() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
     SimpleProxyBase.factory = new TTupleProtocol.Factory();
     setUpProxy();
   }
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/TestProxyClient.java b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyClient.java
similarity index 96%
rename from proxy/src/main/java/org/apache/accumulo/proxy/TestProxyClient.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TestProxyClient.java
index 4894f13..bb03934 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/TestProxyClient.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyClient.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.proxy;
+package org.apache.accumulo.test.proxy;
 
 import static java.nio.charset.StandardCharsets.UTF_8;
 
@@ -31,6 +31,7 @@
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
 import org.apache.accumulo.core.rpc.UGIAssumingTransport;
+import org.apache.accumulo.proxy.Util;
 import org.apache.accumulo.proxy.thrift.AccumuloProxy;
 import org.apache.accumulo.proxy.thrift.ColumnUpdate;
 import org.apache.accumulo.proxy.thrift.Key;
@@ -95,7 +96,7 @@
 
     TestProxyClient tpc = new TestProxyClient("localhost", 42424);
     String principal = "root";
-    Map<String,String> props = new TreeMap<String,String>();
+    Map<String,String> props = new TreeMap<>();
     props.put("password", "secret");
 
     System.out.println("Logging in");
@@ -126,7 +127,7 @@
     Date then = new Date();
     int maxInserts = 1000000;
     String format = "%1$05d";
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     for (int i = 0; i < maxInserts; i++) {
       String result = String.format(format, i);
       ColumnUpdate update = new ColumnUpdate(ByteBuffer.wrap(("cf" + i).getBytes(UTF_8)), ByteBuffer.wrap(("cq" + i).getBytes(UTF_8)));
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxyInstanceOperations.java b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyInstanceOperations.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TestProxyInstanceOperations.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TestProxyInstanceOperations.java
index caed876..ff94dd4 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxyInstanceOperations.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyInstanceOperations.java
@@ -26,7 +26,6 @@
 
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.proxy.Proxy;
-import org.apache.accumulo.proxy.TestProxyClient;
 import org.apache.thrift.TException;
 import org.apache.thrift.protocol.TCompactProtocol;
 import org.apache.thrift.server.TServer;
@@ -59,7 +58,7 @@
     }
     log.info("Proxy started");
     tpc = new TestProxyClient("localhost", port);
-    userpass = tpc.proxy().login("root", Collections.singletonMap("password", ""));
+    userpass = tpc.proxy.login("root", Collections.singletonMap("password", ""));
   }
 
   @AfterClass
diff --git a/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyNamespaceOperations.java b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyNamespaceOperations.java
new file mode 100644
index 0000000..8dc2990
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyNamespaceOperations.java
@@ -0,0 +1,165 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.proxy;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.nio.ByteBuffer;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Properties;
+import java.util.Set;
+
+import org.apache.accumulo.core.client.impl.Namespaces;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.proxy.Proxy;
+import org.apache.accumulo.proxy.thrift.AccumuloException;
+import org.apache.accumulo.proxy.thrift.IteratorScope;
+import org.apache.accumulo.proxy.thrift.IteratorSetting;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.server.TServer;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import com.google.common.net.HostAndPort;
+
+public class TestProxyNamespaceOperations {
+
+  protected static TServer proxy;
+  protected static TestProxyClient tpc;
+  protected static ByteBuffer userpass;
+  protected static final int port = 10198;
+  protected static final String testnamespace = "testns";
+
+  @BeforeClass
+  public static void setup() throws Exception {
+    Properties prop = new Properties();
+    prop.setProperty("useMockInstance", "true");
+    prop.put("tokenClass", PasswordToken.class.getName());
+
+    proxy = Proxy.createProxyServer(HostAndPort.fromParts("localhost", port), new TCompactProtocol.Factory(), prop).server;
+    while (!proxy.isServing()) {
+      Thread.sleep(500);
+    }
+    tpc = new TestProxyClient("localhost", port);
+    userpass = tpc.proxy().login("root", Collections.singletonMap("password", ""));
+  }
+
+  @AfterClass
+  public static void tearDown() throws InterruptedException {
+    proxy.stop();
+  }
+
+  @Before
+  public void makeTestNamespace() throws Exception {
+    tpc.proxy().createNamespace(userpass, testnamespace);
+  }
+
+  @After
+  public void deleteTestNamespace() throws Exception {
+    tpc.proxy().deleteNamespace(userpass, testnamespace);
+  }
+
+  @Test
+  public void createExistsDelete() throws TException {
+    tpc.proxy().createNamespace(userpass, "testns2");
+    assertTrue(tpc.proxy().namespaceExists(userpass, "testns2"));
+    tpc.proxy().deleteNamespace(userpass, "testns2");
+    assertFalse(tpc.proxy().namespaceExists(userpass, "testns2"));
+  }
+
+  @Test
+  public void listRename() throws TException {
+    assertFalse(tpc.proxy().namespaceExists(userpass, "testns2"));
+    tpc.proxy().renameNamespace(userpass, testnamespace, "testns2");
+    assertTrue(tpc.proxy().namespaceExists(userpass, "testns2"));
+    tpc.proxy().renameNamespace(userpass, "testns2", testnamespace);
+    assertTrue(tpc.proxy().listNamespaces(userpass).contains(testnamespace));
+    assertFalse(tpc.proxy().listNamespaces(userpass).contains("testns2"));
+  }
+
+  @Test
+  public void systemDefault() throws TException {
+    assertEquals(tpc.proxy().systemNamespace(), Namespaces.ACCUMULO_NAMESPACE);
+    assertEquals(tpc.proxy().defaultNamespace(), Namespaces.DEFAULT_NAMESPACE);
+  }
+
+  @Test
+  public void namespaceProperties() throws TException {
+    tpc.proxy().setNamespaceProperty(userpass, testnamespace, "test.property1", "wharrrgarbl");
+    assertEquals(tpc.proxy().getNamespaceProperties(userpass, testnamespace).get("test.property1"), "wharrrgarbl");
+    tpc.proxy().removeNamespaceProperty(userpass, testnamespace, "test.property1");
+    assertNull(tpc.proxy().getNamespaceProperties(userpass, testnamespace).get("test.property1"));
+  }
+
+  @Ignore("MockInstance doesn't return expected results for this function.")
+  @Test
+  public void namespaceIds() throws TException {
+    assertTrue(tpc.proxy().namespaceIdMap(userpass).containsKey("accumulo"));
+    assertEquals(tpc.proxy().namespaceIdMap(userpass).get("accumulo"), "+accumulo");
+  }
+
+  @Test
+  public void namespaceIterators() throws TException {
+    IteratorSetting setting = new IteratorSetting(40, "DebugTheThings", "org.apache.accumulo.core.iterators.DebugIterator", new HashMap<String,String>());
+    Set<IteratorScope> scopes = new HashSet<>();
+    scopes.add(IteratorScope.SCAN);
+    tpc.proxy().attachNamespaceIterator(userpass, testnamespace, setting, scopes);
+    assertEquals(setting, tpc.proxy().getNamespaceIteratorSetting(userpass, testnamespace, "DebugTheThings", IteratorScope.SCAN));
+    assertTrue(tpc.proxy().listNamespaceIterators(userpass, testnamespace).containsKey("DebugTheThings"));
+    Set<IteratorScope> scopes2 = new HashSet<>();
+    scopes2.add(IteratorScope.MINC);
+    tpc.proxy().checkNamespaceIteratorConflicts(userpass, testnamespace, setting, scopes2);
+    tpc.proxy().removeNamespaceIterator(userpass, testnamespace, "DebugTheThings", scopes);
+    assertFalse(tpc.proxy().listNamespaceIterators(userpass, testnamespace).containsKey("DebugTheThings"));
+  }
+
+  @Test(expected = AccumuloException.class)
+  public void namespaceIteratorConflict() throws TException {
+    IteratorSetting setting = new IteratorSetting(40, "DebugTheThings", "org.apache.accumulo.core.iterators.DebugIterator", new HashMap<String,String>());
+    Set<IteratorScope> scopes = new HashSet<>();
+    scopes.add(IteratorScope.SCAN);
+    tpc.proxy().attachNamespaceIterator(userpass, testnamespace, setting, scopes);
+    tpc.proxy().checkNamespaceIteratorConflicts(userpass, testnamespace, setting, scopes);
+  }
+
+  @Test
+  public void namespaceConstraints() throws TException {
+    int constraintId = tpc.proxy().addNamespaceConstraint(userpass, testnamespace, "org.apache.accumulo.examples.simple.constraints.MaxMutationSize");
+    assertTrue(tpc.proxy().listNamespaceConstraints(userpass, testnamespace).containsKey("org.apache.accumulo.examples.simple.constraints.MaxMutationSize"));
+    assertEquals(constraintId,
+        (int) tpc.proxy().listNamespaceConstraints(userpass, testnamespace).get("org.apache.accumulo.examples.simple.constraints.MaxMutationSize"));
+    tpc.proxy().removeNamespaceConstraint(userpass, testnamespace, constraintId);
+    assertFalse(tpc.proxy().listNamespaceConstraints(userpass, testnamespace).containsKey("org.apache.accumulo.examples.simple.constraints.MaxMutationSize"));
+  }
+
+  @Test
+  public void classLoad() throws TException {
+    assertTrue(tpc.proxy().testNamespaceClassLoad(userpass, testnamespace, "org.apache.accumulo.core.iterators.user.VersioningIterator",
+        "org.apache.accumulo.core.iterators.SortedKeyValueIterator"));
+    assertFalse(tpc.proxy().testNamespaceClassLoad(userpass, testnamespace, "org.apache.accumulo.core.iterators.user.VersioningIterator", "dummy"));
+  }
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxyReadWrite.java b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyReadWrite.java
similarity index 93%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TestProxyReadWrite.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TestProxyReadWrite.java
index 5eb9500..764a08c 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxyReadWrite.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyReadWrite.java
@@ -30,7 +30,6 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
 import org.apache.accumulo.proxy.Proxy;
-import org.apache.accumulo.proxy.TestProxyClient;
 import org.apache.accumulo.proxy.Util;
 import org.apache.accumulo.proxy.thrift.BatchScanOptions;
 import org.apache.accumulo.proxy.thrift.ColumnUpdate;
@@ -105,7 +104,7 @@
   @Test
   public void readWriteBatchOneShotWithRange() throws Exception {
     int maxInserts = 100000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     for (int i = 0; i < maxInserts; i++) {
       addMutation(mutations, String.format(format, i), "cf" + i, "cq" + i, Util.randString(10));
@@ -141,7 +140,7 @@
   @Test
   public void readWriteBatchOneShotWithColumnFamilyOnly() throws Exception {
     int maxInserts = 100000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     for (int i = 0; i < maxInserts; i++) {
 
@@ -180,7 +179,7 @@
   @Test
   public void readWriteBatchOneShotWithFullColumn() throws Exception {
     int maxInserts = 100000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     for (int i = 0; i < maxInserts; i++) {
 
@@ -219,7 +218,7 @@
   @Test
   public void readWriteBatchOneShotWithFilterIterator() throws Exception {
     int maxInserts = 10000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     for (int i = 0; i < maxInserts; i++) {
       addMutation(mutations, String.format(format, i), "cf" + i, "cq" + i, Util.randString(10));
@@ -259,7 +258,7 @@
   @Test
   public void readWriteOneShotWithRange() throws Exception {
     int maxInserts = 100000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     for (int i = 0; i < maxInserts; i++) {
       addMutation(mutations, String.format(format, i), "cf" + i, "cq" + i, Util.randString(10));
@@ -294,7 +293,7 @@
   @Test
   public void readWriteOneShotWithFilterIterator() throws Exception {
     int maxInserts = 10000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     for (int i = 0; i < maxInserts; i++) {
       addMutation(mutations, String.format(format, i), "cf" + i, "cq" + i, Util.randString(10));
@@ -337,7 +336,7 @@
   // This test takes kind of a long time. Enable it if you think you may have memory issues.
   public void manyWritesAndReads() throws Exception {
     int maxInserts = 1000000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$06d";
     String writer = tpc.proxy().createWriter(userpass, testtable, null);
     for (int i = 0; i < maxInserts; i++) {
@@ -377,7 +376,7 @@
   @Test
   public void asynchReadWrite() throws Exception {
     int maxInserts = 10000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     String writer = tpc.proxy().createWriter(userpass, testtable, null);
     for (int i = 0; i < maxInserts; i++) {
@@ -422,12 +421,12 @@
   @Test
   public void testVisibility() throws Exception {
 
-    Set<ByteBuffer> auths = new HashSet<ByteBuffer>();
+    Set<ByteBuffer> auths = new HashSet<>();
     auths.add(ByteBuffer.wrap("even".getBytes()));
     tpc.proxy().changeUserAuthorizations(userpass, "root", auths);
 
     int maxInserts = 10000;
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     String format = "%1$05d";
     String writer = tpc.proxy().createWriter(userpass, testtable, null);
     for (int i = 0; i < maxInserts; i++) {
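The read/write tests above build row keys with `String.format("%1$05d", i)`. A minimal self-contained sketch of why that format string is used (the class name `RowKeyFormat` is illustrative, not part of Accumulo): zero-padding to a fixed width makes the lexicographic order of the string keys match the numeric order of the rows, which matters because Accumulo sorts keys as bytes.

```java
public class RowKeyFormat {
    // "%1$05d" = first argument, decimal, zero-padded to width 5, so row
    // keys sort lexicographically in the same order as their numeric values.
    static String rowKey(int i) {
        return String.format("%1$05d", i);
    }

    public static void main(String[] args) {
        System.out.println(rowKey(7));   // 00007
        System.out.println(rowKey(123)); // 00123
        // Without padding, "10" sorts before "9"; with it, "00009" < "00010".
        System.out.println(rowKey(9).compareTo(rowKey(10)) < 0); // true
    }
}
```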
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxySecurityOperations.java b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxySecurityOperations.java
similarity index 86%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TestProxySecurityOperations.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TestProxySecurityOperations.java
index ec3ddf3..fa6c52e 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxySecurityOperations.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxySecurityOperations.java
@@ -31,7 +31,7 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.util.ByteBufferUtil;
 import org.apache.accumulo.proxy.Proxy;
-import org.apache.accumulo.proxy.TestProxyClient;
+import org.apache.accumulo.proxy.thrift.NamespacePermission;
 import org.apache.accumulo.proxy.thrift.SystemPermission;
 import org.apache.accumulo.proxy.thrift.TablePermission;
 import org.apache.accumulo.proxy.thrift.TimeType;
@@ -53,6 +53,7 @@
   protected static final int port = 10196;
   protected static final String testtable = "testtable";
   protected static final String testuser = "VonJines";
+  protected static final String testnamespace = "testns";
   protected static final ByteBuffer testpw = ByteBuffer.wrap("fiveones".getBytes());
 
   @BeforeClass
@@ -78,12 +79,14 @@
   public void makeTestTableAndUser() throws Exception {
     tpc.proxy().createTable(userpass, testtable, true, TimeType.MILLIS);
     tpc.proxy().createLocalUser(userpass, testuser, testpw);
+    tpc.proxy().createNamespace(userpass, testnamespace);
   }
 
   @After
   public void deleteTestTable() throws Exception {
     tpc.proxy().deleteTable(userpass, testtable);
     tpc.proxy().dropLocalUser(userpass, testuser);
+    tpc.proxy().deleteNamespace(userpass, testnamespace);
   }
 
   @Test
@@ -127,7 +130,7 @@
 
   @Test
   public void auths() throws TException {
-    HashSet<ByteBuffer> newauths = new HashSet<ByteBuffer>();
+    HashSet<ByteBuffer> newauths = new HashSet<>();
     newauths.add(ByteBuffer.wrap("BBR".getBytes()));
     newauths.add(ByteBuffer.wrap("Barney".getBytes()));
     tpc.proxy().changeUserAuthorizations(userpass, testuser, newauths);
@@ -139,8 +142,17 @@
     }
   }
 
+  @Test
+  public void namespacePermissions() throws TException {
+    tpc.proxy().grantNamespacePermission(userpass, testuser, testnamespace, NamespacePermission.ALTER_NAMESPACE);
+    assertTrue(tpc.proxy().hasNamespacePermission(userpass, testuser, testnamespace, NamespacePermission.ALTER_NAMESPACE));
+
+    tpc.proxy().revokeNamespacePermission(userpass, testuser, testnamespace, NamespacePermission.ALTER_NAMESPACE);
+    assertFalse(tpc.proxy().hasNamespacePermission(userpass, testuser, testnamespace, NamespacePermission.ALTER_NAMESPACE));
+  }
+
   private Map<String,String> bb2pp(ByteBuffer cf) {
-    Map<String,String> toRet = new TreeMap<String,String>();
+    Map<String,String> toRet = new TreeMap<>();
     toRet.put("password", ByteBufferUtil.toString(cf));
     return toRet;
   }
diff --git a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxyTableOperations.java b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyTableOperations.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/proxy/TestProxyTableOperations.java
rename to test/src/main/java/org/apache/accumulo/test/proxy/TestProxyTableOperations.java
index 7da2f63..404bcbe 100644
--- a/test/src/test/java/org/apache/accumulo/test/proxy/TestProxyTableOperations.java
+++ b/test/src/main/java/org/apache/accumulo/test/proxy/TestProxyTableOperations.java
@@ -32,7 +32,6 @@
 
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.proxy.Proxy;
-import org.apache.accumulo.proxy.TestProxyClient;
 import org.apache.accumulo.proxy.thrift.ColumnUpdate;
 import org.apache.accumulo.proxy.thrift.TimeType;
 import org.apache.thrift.TException;
@@ -105,7 +104,7 @@
   // This test does not yet function because the backing Mock instance does not yet support merging
   @Test
   public void merge() throws TException {
-    Set<ByteBuffer> splits = new HashSet<ByteBuffer>();
+    Set<ByteBuffer> splits = new HashSet<>();
     splits.add(ByteBuffer.wrap("a".getBytes()));
     splits.add(ByteBuffer.wrap("c".getBytes()));
     splits.add(ByteBuffer.wrap("z".getBytes()));
@@ -125,7 +124,7 @@
 
   @Test
   public void splits() throws TException {
-    Set<ByteBuffer> splits = new HashSet<ByteBuffer>();
+    Set<ByteBuffer> splits = new HashSet<>();
     splits.add(ByteBuffer.wrap("a".getBytes()));
     splits.add(ByteBuffer.wrap("b".getBytes()));
     splits.add(ByteBuffer.wrap("z".getBytes()));
@@ -150,11 +149,11 @@
 
   @Test
   public void localityGroups() throws TException {
-    Map<String,Set<String>> groups = new HashMap<String,Set<String>>();
-    Set<String> group1 = new HashSet<String>();
+    Map<String,Set<String>> groups = new HashMap<>();
+    Set<String> group1 = new HashSet<>();
     group1.add("cf1");
     groups.put("group1", group1);
-    Set<String> group2 = new HashSet<String>();
+    Set<String> group2 = new HashSet<>();
     group2.add("cf2");
     group2.add("cf3");
     groups.put("group2", group2);
@@ -188,7 +187,7 @@
 
   @Test
   public void tableOperationsRowMethods() throws TException {
-    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<ByteBuffer,List<ColumnUpdate>>();
+    Map<ByteBuffer,List<ColumnUpdate>> mutations = new HashMap<>();
     for (int i = 0; i < 10; i++) {
       addMutation(mutations, "" + i, "cf", "cq", "");
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/Environment.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/Environment.java
index c162a38..44619bb 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/Environment.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/Environment.java
@@ -35,6 +35,7 @@
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -169,7 +170,8 @@
         throw new IllegalArgumentException("Provided keytab is not a normal file: " + keytab);
       }
       try {
-        return new KerberosToken(getUserName(), keytabFile, true);
+        UserGroupInformation.loginUserFromKeytab(getUserName(), keytabFile.getAbsolutePath());
+        return new KerberosToken();
       } catch (IOException e) {
         throw new RuntimeException("Failed to login", e);
       }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/Framework.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/Framework.java
index f5b721b..fd5d5fa 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/Framework.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/Framework.java
@@ -29,7 +29,7 @@
 public class Framework {
 
   private static final Logger log = Logger.getLogger(Framework.class);
-  private HashMap<String,Node> nodes = new HashMap<String,Node>();
+  private HashMap<String,Node> nodes = new HashMap<>();
   private String configDir = null;
   private static final Framework INSTANCE = new Framework();
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/Module.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/Module.java
index e5af8e6..37b0417 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/Module.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/Module.java
@@ -116,8 +116,8 @@
     }
   }
 
-  private HashMap<String,Node> nodes = new HashMap<String,Node>();
-  private HashMap<String,Properties> localProps = new HashMap<String,Properties>();
+  private HashMap<String,Node> nodes = new HashMap<>();
+  private HashMap<String,Properties> localProps = new HashMap<>();
 
   private class Edge {
     String nodeId;
@@ -126,7 +126,7 @@
 
   private class AdjList {
 
-    private List<Edge> edges = new ArrayList<Edge>();
+    private List<Edge> edges = new ArrayList<>();
     private int totalWeight = 0;
     private Random rand = new Random();
 
@@ -167,9 +167,9 @@
     }
   }
 
-  private HashMap<String,String> prefixes = new HashMap<String,String>();
-  private HashMap<String,AdjList> adjMap = new HashMap<String,AdjList>();
-  private HashMap<String,Set<String>> aliasMap = new HashMap<String,Set<String>>();
+  private HashMap<String,String> prefixes = new HashMap<>();
+  private HashMap<String,AdjList> adjMap = new HashMap<>();
+  private HashMap<String,Set<String>> aliasMap = new HashMap<>();
   private final File xmlFile;
   private String initNodeId;
   private Fixture fixture = null;
@@ -275,7 +275,7 @@
           }
 
           // Wrap the visit of the next node in the module in a callable that returns a thrown exception
-          FutureTask<Exception> task = new FutureTask<Exception>(new Callable<Exception>() {
+          FutureTask<Exception> task = new FutureTask<>(new Callable<Exception>() {
 
             @Override
             public Exception call() throws Exception {
@@ -327,7 +327,7 @@
             log.debug("  " + entry.getKey() + ": " + entry.getValue());
           }
           log.debug("State information");
-          for (String key : new TreeSet<String>(state.getMap().keySet())) {
+          for (String key : new TreeSet<>(state.getMap().keySet())) {
             Object value = state.getMap().get(key);
             String logMsg = "  " + key + ": ";
             if (value == null)
@@ -565,7 +565,7 @@
 
       // parse aliases
       NodeList aliaslist = nodeEl.getElementsByTagName("alias");
-      Set<String> aliases = new TreeSet<String>();
+      Set<String> aliases = new TreeSet<>();
       for (int j = 0; j < aliaslist.getLength(); j++) {
         Element propEl = (Element) aliaslist.item(j);
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/State.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/State.java
index 9b74ad4..d906d00 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/State.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/State.java
@@ -23,7 +23,7 @@
  */
 public class State {
 
-  private HashMap<String,Object> stateMap = new HashMap<String,Object>();
+  private HashMap<String,Object> stateMap = new HashMap<>();
 
   /**
    * Creates new empty state.
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/BulkPlusOne.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/BulkPlusOne.java
index c54a8e7..1fb8717 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/BulkPlusOne.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/BulkPlusOne.java
@@ -45,7 +45,7 @@
   public static final int COLS = 10;
   public static final int HEX_SIZE = (int) Math.ceil(Math.log(LOTS) / Math.log(16));
   public static final String FMT = "r%0" + HEX_SIZE + "x";
-  public static final List<Column> COLNAMES = new ArrayList<Column>();
+  public static final List<Column> COLNAMES = new ArrayList<>();
   public static final Text CHECK_COLUMN_FAMILY = new Text("cf");
   static {
     for (int i = 0; i < COLS; i++) {
@@ -66,24 +66,25 @@
     fs.mkdirs(fail);
     final int parts = rand.nextInt(10) + 1;
 
-    TreeSet<Integer> startRows = new TreeSet<Integer>();
+    TreeSet<Integer> startRows = new TreeSet<>();
     startRows.add(0);
     while (startRows.size() < parts)
       startRows.add(rand.nextInt(LOTS));
 
-    List<String> printRows = new ArrayList<String>(startRows.size());
+    List<String> printRows = new ArrayList<>(startRows.size());
     for (Integer row : startRows)
       printRows.add(String.format(FMT, row));
 
     String markerColumnQualifier = String.format("%07d", counter.incrementAndGet());
     log.debug("preparing bulk files with start rows " + printRows + " last row " + String.format(FMT, LOTS - 1) + " marker " + markerColumnQualifier);
 
-    List<Integer> rows = new ArrayList<Integer>(startRows);
+    List<Integer> rows = new ArrayList<>(startRows);
     rows.add(LOTS);
 
     for (int i = 0; i < parts; i++) {
       String fileName = dir + "/" + String.format("part_%d.", i) + RFile.EXTENSION;
-      FileSKVWriter f = FileOperations.getInstance().openWriter(fileName, fs, fs.getConf(), defaultConfiguration);
+      FileSKVWriter f = FileOperations.getInstance().newWriterBuilder().forFile(fileName, fs, fs.getConf()).withTableConfiguration(defaultConfiguration)
+          .build();
       f.startDefaultLocalityGroup();
       int start = rows.get(i);
       int end = rows.get(i + 1);
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/ConsistencyCheck.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/ConsistencyCheck.java
index 39ef3d8..0c7cfb6 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/ConsistencyCheck.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/ConsistencyCheck.java
@@ -38,19 +38,19 @@
     log.info("Checking " + row);
     String user = env.getConnector().whoami();
     Authorizations auths = env.getConnector().securityOperations().getUserAuthorizations(user);
-    Scanner scanner = env.getConnector().createScanner(Setup.getTableName(), auths);
-    scanner = new IsolatedScanner(scanner);
-    scanner.setRange(new Range(row));
-    scanner.fetchColumnFamily(BulkPlusOne.CHECK_COLUMN_FAMILY);
-    Value v = null;
-    Key first = null;
-    for (Entry<Key,Value> entry : scanner) {
-      if (v == null) {
-        v = entry.getValue();
-        first = entry.getKey();
+    try (Scanner scanner = new IsolatedScanner(env.getConnector().createScanner(Setup.getTableName(), auths))) {
+      scanner.setRange(new Range(row));
+      scanner.fetchColumnFamily(BulkPlusOne.CHECK_COLUMN_FAMILY);
+      Value v = null;
+      Key first = null;
+      for (Entry<Key,Value> entry : scanner) {
+        if (v == null) {
+          v = entry.getValue();
+          first = entry.getKey();
+        }
+        if (!v.equals(entry.getValue()))
+          throw new RuntimeException("Inconsistent value at " + entry.getKey() + " was " + entry.getValue() + " should be " + v + " first read at " + first);
       }
-      if (!v.equals(entry.getValue()))
-        throw new RuntimeException("Inconsistent value at " + entry.getKey() + " was " + entry.getValue() + " should be " + v + " first read at " + first);
     }
   }
 
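The `ConsistencyCheck` change above moves the scanner into a try-with-resources block so it is closed even if the consistency check throws. A minimal sketch of that pattern, using a stand-in `FakeScanner` (not an Accumulo class) in place of the `IsolatedScanner`:

```java
public class ScopedScan {
    // Stand-in resource; in the diff above the resource is an Accumulo
    // IsolatedScanner wrapping a Scanner.
    static class FakeScanner implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeScanner scanner = new FakeScanner();
        try (FakeScanner s = scanner) {
            // iterate entries here; an exception thrown in this body
            // still triggers close() before it propagates
        }
        System.out.println(scanner.closed); // true
    }
}
```

The pre-refactor version leaked the scanner whenever the `RuntimeException` for an inconsistent value was thrown; the try-with-resources form closes it on every exit path.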
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/Split.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/Split.java
index b69805d..641950e 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/Split.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/bulk/Split.java
@@ -28,7 +28,7 @@
 
   @Override
   protected void runLater(State state, Environment env) throws Exception {
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     Random rand = (Random) state.get("rand");
     int count = rand.nextInt(20);
     for (int i = 0; i < count; i++)
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/AddSplits.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/AddSplits.java
index 2727e62..dc040a6 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/AddSplits.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/AddSplits.java
@@ -41,11 +41,11 @@
 
     @SuppressWarnings("unchecked")
     List<String> tableNames = (List<String>) state.get("tables");
-    tableNames = new ArrayList<String>(tableNames);
+    tableNames = new ArrayList<>(tableNames);
     tableNames.add(MetadataTable.NAME);
     String tableName = tableNames.get(rand.nextInt(tableNames.size()));
 
-    TreeSet<Text> splits = new TreeSet<Text>();
+    TreeSet<Text> splits = new TreeSet<>();
 
     for (int i = 0; i < rand.nextInt(10) + 1; i++)
       splits.add(new Text(String.format("%016x", rand.nextLong() & 0x7fffffffffffffffl)));
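`AddSplits` generates split points with `String.format("%016x", rand.nextLong() & 0x7fffffffffffffffl)`. A self-contained sketch of that expression (the `HexSplits` class is illustrative): the mask clears the sign bit so the value is always non-negative, and `%016x` then yields a fixed-width 16-digit lowercase hex string, so all split points sort consistently as text.

```java
import java.util.Random;
import java.util.TreeSet;

public class HexSplits {
    // 0x7fffffffffffffffL clears the sign bit of the random long, so the
    // result is non-negative and "%016x" produces exactly 16 hex digits.
    static String split(long raw) {
        return String.format("%016x", raw & 0x7fffffffffffffffL);
    }

    public static void main(String[] args) {
        Random rand = new Random();
        TreeSet<String> splits = new TreeSet<>();
        for (int i = 0; i < 5; i++)
            splits.add(split(rand.nextLong()));
        for (String s : splits)
            System.out.println(s); // 16 lowercase hex chars each, in sorted order
        System.out.println(split(Long.MIN_VALUE)); // 0000000000000000
    }
}
```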
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BatchScan.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BatchScan.java
index 187199f..111e6c7 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BatchScan.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BatchScan.java
@@ -52,7 +52,7 @@
 
     try {
       BatchScanner bs = conn.createBatchScanner(tableName, Authorizations.EMPTY, 3);
-      List<Range> ranges = new ArrayList<Range>();
+      List<Range> ranges = new ArrayList<>();
       for (int i = 0; i < rand.nextInt(2000) + 1; i++)
         ranges.add(new Range(String.format("%016x", rand.nextLong() & 0x7fffffffffffffffl)));
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BulkImport.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BulkImport.java
index 5af08ec..0e5e439 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BulkImport.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/BulkImport.java
@@ -36,6 +36,7 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile;
 import org.apache.accumulo.core.file.rfile.RFile;
+import org.apache.accumulo.core.file.streams.PositionedOutputs;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.test.randomwalk.Environment;
 import org.apache.accumulo.test.randomwalk.State;
@@ -52,8 +53,8 @@
 
     public RFileBatchWriter(Configuration conf, FileSystem fs, String file) throws IOException {
       AccumuloConfiguration aconf = AccumuloConfiguration.getDefaultConfiguration();
-      CachableBlockFile.Writer cbw = new CachableBlockFile.Writer(fs.create(new Path(file), false, conf.getInt("io.file.buffer.size", 4096),
-          (short) conf.getInt("dfs.replication", 3), conf.getLong("dfs.block.size", 1 << 26)), "gz", conf, aconf);
+      CachableBlockFile.Writer cbw = new CachableBlockFile.Writer(PositionedOutputs.wrap(fs.create(new Path(file), false,
+          conf.getInt("io.file.buffer.size", 4096), (short) conf.getInt("dfs.replication", 3), conf.getLong("dfs.block.size", 1 << 26))), "gz", conf, aconf);
       writer = new RFile.Writer(cbw, 100000);
       writer.startDefaultLocalityGroup();
     }
@@ -115,7 +116,7 @@
     try {
       BatchWriter bw = new RFileBatchWriter(conf, fs, bulkDir + "/file01.rf");
       try {
-        TreeSet<Long> rows = new TreeSet<Long>();
+        TreeSet<Long> rows = new TreeSet<>();
         int numRows = rand.nextInt(100000);
         for (int i = 0; i < numRows; i++) {
           rows.add(rand.nextLong() & 0x7fffffffffffffffl);
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangeAuthorizations.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangeAuthorizations.java
index 65502c3..646c415 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangeAuthorizations.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangeAuthorizations.java
@@ -43,7 +43,7 @@
 
     String userName = userNames.get(rand.nextInt(userNames.size()));
     try {
-      List<byte[]> auths = new ArrayList<byte[]>(conn.securityOperations().getUserAuthorizations(userName).getAuthorizations());
+      List<byte[]> auths = new ArrayList<>(conn.securityOperations().getUserAuthorizations(userName).getAuthorizations());
 
       if (rand.nextBoolean()) {
         String authorization = String.format("a%d", rand.nextInt(5000));
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangePermissions.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangePermissions.java
index 680750a..7e8f789 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangePermissions.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ChangePermissions.java
@@ -88,13 +88,13 @@
     more.removeAll(perms);
 
     if (rand.nextBoolean() && more.size() > 0) {
-      List<TablePermission> moreList = new ArrayList<TablePermission>(more);
+      List<TablePermission> moreList = new ArrayList<>(more);
       TablePermission choice = moreList.get(rand.nextInt(moreList.size()));
       log.debug("adding permission " + choice);
       conn.securityOperations().grantTablePermission(userName, tableName, choice);
     } else {
       if (perms.size() > 0) {
-        List<TablePermission> permList = new ArrayList<TablePermission>(perms);
+        List<TablePermission> permList = new ArrayList<>(perms);
         TablePermission choice = permList.get(rand.nextInt(permList.size()));
         log.debug("removing permission " + choice);
         conn.securityOperations().revokeTablePermission(userName, tableName, choice);
@@ -114,13 +114,13 @@
     more.remove(SystemPermission.GRANT);
 
     if (rand.nextBoolean() && more.size() > 0) {
-      List<SystemPermission> moreList = new ArrayList<SystemPermission>(more);
+      List<SystemPermission> moreList = new ArrayList<>(more);
       SystemPermission choice = moreList.get(rand.nextInt(moreList.size()));
       log.debug("adding permission " + choice);
       conn.securityOperations().grantSystemPermission(userName, choice);
     } else {
       if (perms.size() > 0) {
-        List<SystemPermission> permList = new ArrayList<SystemPermission>(perms);
+        List<SystemPermission> permList = new ArrayList<>(perms);
         SystemPermission choice = permList.get(rand.nextInt(permList.size()));
         log.debug("removing permission " + choice);
         conn.securityOperations().revokeSystemPermission(userName, choice);
@@ -140,13 +140,13 @@
     more.removeAll(perms);
 
     if (rand.nextBoolean() && more.size() > 0) {
-      List<NamespacePermission> moreList = new ArrayList<NamespacePermission>(more);
+      List<NamespacePermission> moreList = new ArrayList<>(more);
       NamespacePermission choice = moreList.get(rand.nextInt(moreList.size()));
       log.debug("adding permission " + choice);
       conn.securityOperations().grantNamespacePermission(userName, namespace, choice);
     } else {
       if (perms.size() > 0) {
-        List<NamespacePermission> permList = new ArrayList<NamespacePermission>(perms);
+        List<NamespacePermission> permList = new ArrayList<>(perms);
         NamespacePermission choice = permList.get(rand.nextInt(permList.size()));
         log.debug("removing permission " + choice);
         conn.securityOperations().revokeNamespacePermission(userName, namespace, choice);
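`ChangePermissions` repeats one idiom three times (for table, system, and namespace permissions): copy a `Set` into a `List` so a random index can pick a uniformly distributed element to grant or revoke. A self-contained sketch of that idiom, using a stand-in enum rather than Accumulo's permission types:

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class RandomChoiceSketch {
    // Stand-in for TablePermission / SystemPermission / NamespacePermission.
    enum Permission { READ, WRITE, ALTER_TABLE, DROP_TABLE }

    // Sets have no index, so the test copies into a List before choosing;
    // this is the pattern behind "new ArrayList<>(more)" in the diff.
    static <T> T randomElement(Set<T> set, Random rand) {
        List<T> list = new ArrayList<>(set);
        return list.get(rand.nextInt(list.size()));
    }

    public static void main(String[] args) {
        Set<Permission> perms = EnumSet.of(Permission.READ, Permission.WRITE);
        Permission choice = randomElement(perms, new Random(1));
        if (!perms.contains(choice)) throw new AssertionError("choice not drawn from set");
        System.out.println("ok");
    }
}
```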
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ConcurrentFixture.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ConcurrentFixture.java
index b27f34c..403a66a 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ConcurrentFixture.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/ConcurrentFixture.java
@@ -46,7 +46,7 @@
    * @return A two element list with first being smaller than the second, but either value (or both) can be null
    */
   public static List<Text> generateRange(Random rand) {
-    ArrayList<Text> toRet = new ArrayList<Text>(2);
+    ArrayList<Text> toRet = new ArrayList<>(2);
 
     long firstLong = rand.nextLong();
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Config.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Config.java
index 7e95d3b..b05e08c 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Config.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Config.java
@@ -26,8 +26,7 @@
 import org.apache.accumulo.test.randomwalk.Environment;
 import org.apache.accumulo.test.randomwalk.State;
 import org.apache.accumulo.test.randomwalk.Test;
-import org.apache.commons.math.random.RandomData;
-import org.apache.commons.math.random.RandomDataImpl;
+import org.apache.commons.math3.random.RandomDataGenerator;
 
 public class Config extends Test {
 
@@ -157,7 +156,7 @@
     state.remove(LAST_SETTING);
     state.remove(LAST_TABLE_SETTING);
     state.remove(LAST_NAMESPACE_SETTING);
-    RandomData random = new RandomDataImpl();
+    RandomDataGenerator random = new RandomDataGenerator();
     int dice = random.nextInt(0, 2);
     if (dice == 0) {
       changeTableSetting(random, state, env, props);
@@ -168,7 +167,7 @@
     }
   }
 
-  private void changeTableSetting(RandomData random, State state, Environment env, Properties props) throws Exception {
+  private void changeTableSetting(RandomDataGenerator random, State state, Environment env, Properties props) throws Exception {
     // pick a random property
     int choice = random.nextInt(0, tableSettings.length - 1);
     Setting setting = tableSettings[choice];
@@ -195,7 +194,7 @@
     }
   }
 
-  private void changeNamespaceSetting(RandomData random, State state, Environment env, Properties props) throws Exception {
+  private void changeNamespaceSetting(RandomDataGenerator random, State state, Environment env, Properties props) throws Exception {
     // pick a random property
     int choice = random.nextInt(0, tableSettings.length - 1);
     Setting setting = tableSettings[choice];
@@ -222,7 +221,7 @@
     }
   }
 
-  private void changeSetting(RandomData random, State state, Environment env, Properties props) throws Exception {
+  private void changeSetting(RandomDataGenerator random, State state, Environment env, Properties props) throws Exception {
     // pick a random property
     int choice = random.nextInt(0, settings.length - 1);
     Setting setting = settings[choice];
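`Config` migrates from the removed commons-math `RandomData`/`RandomDataImpl` pair to commons-math3's `RandomDataGenerator`. One semantic detail worth noting when reading the calls above: `RandomDataGenerator.nextInt(lower, upper)` is inclusive on both ends, so `nextInt(0, 2)` yields 0, 1, or 2 — unlike `java.util.Random.nextInt(bound)`, which excludes the bound. A stdlib sketch of the equivalent inclusive draw (the helper name is ours, not from either library):

```java
import java.util.Random;

public class InclusiveNextInt {
    // Mimics commons-math3 RandomDataGenerator.nextInt(lower, upper):
    // both bounds inclusive, so the range contains (upper - lower + 1) values.
    static int nextInt(Random rand, int lower, int upper) {
        return lower + rand.nextInt(upper - lower + 1);
    }

    public static void main(String[] args) {
        Random rand = new Random(0);
        boolean[] seen = new boolean[3];
        for (int i = 0; i < 1000; i++) {
            int dice = nextInt(rand, 0, 2); // matches random.nextInt(0, 2) in Config
            if (dice < 0 || dice > 2) throw new AssertionError("out of range: " + dice);
            seen[dice] = true;
        }
        if (!(seen[0] && seen[1] && seen[2])) throw new AssertionError("not all values observed");
        System.out.println("ok");
    }
}
```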
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/DeleteRange.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/DeleteRange.java
index 280f620..c164b6b 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/DeleteRange.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/DeleteRange.java
@@ -43,7 +43,7 @@
 
     String tableName = tableNames.get(rand.nextInt(tableNames.size()));
 
-    List<Text> range = new ArrayList<Text>();
+    List<Text> range = new ArrayList<>();
     do {
       range.add(new Text(String.format("%016x", rand.nextLong() & 0x7fffffffffffffffl)));
       range.add(new Text(String.format("%016x", rand.nextLong() & 0x7fffffffffffffffl)));
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/IsolatedScan.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/IsolatedScan.java
index 1bb51bb..eac39fa 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/IsolatedScan.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/IsolatedScan.java
@@ -52,7 +52,7 @@
       RowIterator iter = new RowIterator(new IsolatedScanner(conn.createScanner(tableName, Authorizations.EMPTY)));
 
       while (iter.hasNext()) {
-        PeekingIterator<Entry<Key,Value>> row = new PeekingIterator<Entry<Key,Value>>(iter.next());
+        PeekingIterator<Entry<Key,Value>> row = new PeekingIterator<>(iter.next());
         Entry<Key,Value> kv = null;
         if (row.hasNext())
           kv = row.peek();
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Merge.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Merge.java
index a997c2b..fe84dca 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Merge.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Merge.java
@@ -40,7 +40,7 @@
 
     @SuppressWarnings("unchecked")
     List<String> tableNames = (List<String>) state.get("tables");
-    tableNames = new ArrayList<String>(tableNames);
+    tableNames = new ArrayList<>(tableNames);
     tableNames.add(MetadataTable.NAME);
     String tableName = tableNames.get(rand.nextInt(tableNames.size()));
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/OfflineTable.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/OfflineTable.java
index ba6389f..b5af0b7 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/OfflineTable.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/OfflineTable.java
@@ -19,14 +19,16 @@
 import java.util.List;
 import java.util.Properties;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.test.randomwalk.Environment;
 import org.apache.accumulo.test.randomwalk.State;
 import org.apache.accumulo.test.randomwalk.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class OfflineTable extends Test {
 
   @Override
@@ -43,7 +45,7 @@
     try {
       conn.tableOperations().offline(tableName, rand.nextBoolean());
       log.debug("Offlined " + tableName);
-      UtilWaitThread.sleep(rand.nextInt(200));
+      sleepUninterruptibly(rand.nextInt(200), TimeUnit.MILLISECONDS);
       conn.tableOperations().online(tableName, rand.nextBoolean());
       log.debug("Onlined " + tableName);
     } catch (TableNotFoundException tne) {
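The `UtilWaitThread.sleep` → Guava `sleepUninterruptibly` swap in `OfflineTable` (and in `Replication`, `Shutdown`, and `StartAll` below) trades silent interrupt handling for Guava's contract: sleep the full duration even across interrupts, then restore the thread's interrupt flag on exit. A rough stdlib re-implementation of that contract, to show the mechanics — this is a sketch of the idea, not Guava's actual source:

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
    // Approximates Guava Uninterruptibles.sleepUninterruptibly: retry the sleep
    // with the remaining time after each interrupt, then re-assert the flag.
    static void sleepUninterruptibly(long duration, TimeUnit unit) {
        boolean interrupted = false;
        try {
            long remainingNanos = unit.toNanos(duration);
            long end = System.nanoTime() + remainingNanos;
            while (true) {
                try {
                    TimeUnit.NANOSECONDS.sleep(remainingNanos);
                    return;
                } catch (InterruptedException e) {
                    interrupted = true; // remember, but keep sleeping
                    remainingNanos = end - System.nanoTime();
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt(); // restore the interrupt status
            }
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // pending interrupt before the sleep
        sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
        // The sleep completed anyway, and the interrupt flag was restored.
        if (!Thread.interrupted()) throw new AssertionError("interrupt status not restored");
        System.out.println("ok");
    }
}
```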
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Replication.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Replication.java
index c1b2502..9f3e0aa 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Replication.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Replication.java
@@ -35,6 +35,7 @@
 import java.util.SortedSet;
 import java.util.TreeSet;
 import java.util.UUID;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
@@ -47,13 +48,14 @@
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.test.randomwalk.Environment;
 import org.apache.accumulo.test.randomwalk.State;
 import org.apache.accumulo.test.randomwalk.Test;
 import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
 import org.apache.hadoop.io.Text;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class Replication extends Test {
 
   final int ROWS = 1000;
@@ -84,7 +86,7 @@
     for (int i = 0; i < 10; i++) {
       if (online)
         break;
-      UtilWaitThread.sleep(2000);
+      sleepUninterruptibly(2, TimeUnit.SECONDS);
       online = ReplicationTable.isOnline(c);
     }
     assertTrue("Replication table was not online", online);
@@ -105,7 +107,7 @@
     tOps.setProperty(sourceTable, TABLE_REPLICATION_TARGET.getKey() + instName, destID);
 
     // zookeeper propagation wait
-    UtilWaitThread.sleep(5 * 1000);
+    sleepUninterruptibly(5, TimeUnit.SECONDS);
 
     // Maybe split the tables
     Random rand = new Random(System.currentTimeMillis());
@@ -148,7 +150,7 @@
     }
 
     // wait a little while for replication to take place
-    UtilWaitThread.sleep(30 * 1000);
+    sleepUninterruptibly(30, TimeUnit.SECONDS);
 
     // check the data
     Scanner scanner = c.createScanner(destTable, Authorizations.EMPTY);
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Setup.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Setup.java
index c19fcbd..ee7ae32 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Setup.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Setup.java
@@ -36,8 +36,8 @@
     int numNamespaces = Integer.parseInt(props.getProperty("numNamespaces", "2"));
     log.debug("numTables = " + numTables);
     log.debug("numNamespaces = " + numNamespaces);
-    List<String> tables = new ArrayList<String>();
-    List<String> namespaces = new ArrayList<String>();
+    List<String> tables = new ArrayList<>();
+    List<String> namespaces = new ArrayList<>();
 
     for (int i = 0; i < numNamespaces; i++) {
       namespaces.add(String.format("nspc_%03d", i));
@@ -62,7 +62,7 @@
 
     int numUsers = Integer.parseInt(props.getProperty("numUsers", "5"));
     log.debug("numUsers = " + numUsers);
-    List<String> users = new ArrayList<String>();
+    List<String> users = new ArrayList<>();
     for (int i = 0; i < numUsers; i++)
       users.add(String.format("user%03d", i));
     state.set("users", users);
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Shutdown.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Shutdown.java
index 6cc8312..eeaeea6 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Shutdown.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/Shutdown.java
@@ -17,12 +17,12 @@
 package org.apache.accumulo.test.randomwalk.concurrent;
 
 import java.util.Properties;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.impl.MasterClient;
 import org.apache.accumulo.core.master.thrift.MasterClientService.Client;
 import org.apache.accumulo.core.master.thrift.MasterGoalState;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.master.state.SetGoalState;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.client.HdfsZooInstance;
@@ -31,6 +31,8 @@
 import org.apache.accumulo.test.randomwalk.State;
 import org.apache.accumulo.test.randomwalk.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class Shutdown extends Test {
 
   @Override
@@ -39,7 +41,7 @@
     SetGoalState.main(new String[] {MasterGoalState.CLEAN_STOP.name()});
 
     while (!env.getConnector().instanceOperations().getTabletServers().isEmpty()) {
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
 
     while (true) {
@@ -51,11 +53,11 @@
         // assume this is due to server shutdown
         break;
       }
-      UtilWaitThread.sleep(1000);
+      sleepUninterruptibly(1, TimeUnit.SECONDS);
     }
 
     log.info("servers stopped");
-    UtilWaitThread.sleep(10000);
+    sleepUninterruptibly(10, TimeUnit.SECONDS);
   }
 
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StartAll.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StartAll.java
index 8504fd1..cfc0053 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StartAll.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StartAll.java
@@ -17,13 +17,13 @@
 package org.apache.accumulo.test.randomwalk.concurrent;
 
 import java.util.Properties;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.impl.MasterClient;
 import org.apache.accumulo.core.master.thrift.MasterClientService.Client;
 import org.apache.accumulo.core.master.thrift.MasterGoalState;
 import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
 import org.apache.accumulo.core.trace.Tracer;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.master.state.SetGoalState;
 import org.apache.accumulo.server.AccumuloServerContext;
 import org.apache.accumulo.server.client.HdfsZooInstance;
@@ -32,6 +32,8 @@
 import org.apache.accumulo.test.randomwalk.State;
 import org.apache.accumulo.test.randomwalk.Test;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class StartAll extends Test {
 
   @Override
@@ -48,7 +50,7 @@
         if (!masterStats.tServerInfo.isEmpty())
           break;
       } catch (Exception ex) {
-        UtilWaitThread.sleep(1000);
+        sleepUninterruptibly(1, TimeUnit.SECONDS);
       }
     }
   }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StopTabletServer.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StopTabletServer.java
index 995a72e..d819cc0 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StopTabletServer.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/concurrent/StopTabletServer.java
@@ -40,7 +40,7 @@
 public class StopTabletServer extends Test {
 
   Set<TServerInstance> getTServers(Instance instance) throws KeeperException, InterruptedException {
-    Set<TServerInstance> result = new HashSet<TServerInstance>();
+    Set<TServerInstance> result = new HashSet<>();
     ZooReader rdr = new ZooReader(instance.getZooKeepers(), instance.getZooKeepersSessionTimeOut());
     String base = ZooUtil.getRoot(instance) + Constants.ZTSERVERS;
     for (String child : rdr.getChildren(base)) {
@@ -66,7 +66,7 @@
 
     Instance instance = env.getInstance();
 
-    List<TServerInstance> currentServers = new ArrayList<TServerInstance>(getTServers(instance));
+    List<TServerInstance> currentServers = new ArrayList<>(getTServers(instance));
     Collections.shuffle(currentServers);
     Runtime runtime = Runtime.getRuntime();
     if (currentServers.size() > 1) {
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Init.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Init.java
index ebe12ef..4668802 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Init.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Init.java
@@ -43,13 +43,13 @@
     int numAccts = (Integer) state.get("numAccts");
 
     // add some splits to spread ingest out a little
-    TreeSet<Text> splits = new TreeSet<Text>();
+    TreeSet<Text> splits = new TreeSet<>();
     for (int i = 1; i < 10; i++)
       splits.add(new Text(Utils.getBank((int) (numBanks * .1 * i))));
     env.getConnector().tableOperations().addSplits((String) state.get("tableName"), splits);
     log.debug("Added splits " + splits);
 
-    ArrayList<Integer> banks = new ArrayList<Integer>();
+    ArrayList<Integer> banks = new ArrayList<>();
     for (int i = 0; i < numBanks; i++)
       banks.add(i);
     // shuffle for case when multiple threads are adding banks
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Split.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Split.java
index a1ca830..0d7fee1 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Split.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Split.java
@@ -39,7 +39,7 @@
     String row = Utils.getBank(rand.nextInt((Integer) state.get("numBanks")));
 
     log.debug("adding split " + row);
-    conn.tableOperations().addSplits(table, new TreeSet<Text>(Arrays.asList(new Text(row))));
+    conn.tableOperations().addSplits(table, new TreeSet<>(Arrays.asList(new Text(row))));
   }
 
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Transfer.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Transfer.java
index 35636e4..da30df5 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Transfer.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Transfer.java
@@ -34,7 +34,7 @@
 import org.apache.accumulo.test.randomwalk.Environment;
 import org.apache.accumulo.test.randomwalk.State;
 import org.apache.accumulo.test.randomwalk.Test;
-import org.apache.commons.math.distribution.ZipfDistributionImpl;
+import org.apache.commons.math3.distribution.ZipfDistribution;
 import org.apache.hadoop.io.Text;
 
 /**
@@ -69,9 +69,9 @@
     int numAccts = (Integer) state.get("numAccts");
     // note: non integer exponents are slow
 
-    ZipfDistributionImpl zdiBanks = new ZipfDistributionImpl((Integer) state.get("numBanks"), 1);
+    ZipfDistribution zdiBanks = new ZipfDistribution((Integer) state.get("numBanks"), 1);
     String bank = Utils.getBank(zdiBanks.inverseCumulativeProbability(rand.nextDouble()));
-    ZipfDistributionImpl zdiAccts = new ZipfDistributionImpl(numAccts, 1);
+    ZipfDistribution zdiAccts = new ZipfDistribution(numAccts, 1);
     String acct1 = Utils.getAccount(zdiAccts.inverseCumulativeProbability(rand.nextDouble()));
     String acct2 = Utils.getAccount(zdiAccts.inverseCumulativeProbability(rand.nextDouble()));
     while (acct2.equals(acct1)) {
@@ -80,56 +80,56 @@
     }
 
     // TODO document how data should be read when using ConditionalWriter
-    Scanner scanner = new IsolatedScanner(conn.createScanner(table, Authorizations.EMPTY));
+    try (Scanner scanner = new IsolatedScanner(conn.createScanner(table, Authorizations.EMPTY))) {
 
-    scanner.setRange(new Range(bank));
-    scanner.fetchColumnFamily(new Text(acct1));
-    scanner.fetchColumnFamily(new Text(acct2));
+      scanner.setRange(new Range(bank));
+      scanner.fetchColumnFamily(new Text(acct1));
+      scanner.fetchColumnFamily(new Text(acct2));
 
-    Account a1 = new Account();
-    Account a2 = new Account();
-    Account a;
+      Account a1 = new Account();
+      Account a2 = new Account();
+      Account a;
 
-    for (Entry<Key,Value> entry : scanner) {
-      String cf = entry.getKey().getColumnFamilyData().toString();
-      String cq = entry.getKey().getColumnQualifierData().toString();
+      for (Entry<Key,Value> entry : scanner) {
+        String cf = entry.getKey().getColumnFamilyData().toString();
+        String cq = entry.getKey().getColumnQualifierData().toString();
 
-      if (cf.equals(acct1))
-        a = a1;
-      else if (cf.equals(acct2))
-        a = a2;
-      else
-        throw new Exception("Unexpected column fam: " + cf);
+        if (cf.equals(acct1))
+          a = a1;
+        else if (cf.equals(acct2))
+          a = a2;
+        else
+          throw new Exception("Unexpected column fam: " + cf);
 
-      if (cq.equals("bal"))
-        a.setBal(entry.getValue().toString());
-      else if (cq.equals("seq"))
-        a.setSeq(entry.getValue().toString());
-      else
-        throw new Exception("Unexpected column qual: " + cq);
-    }
-
-    int amt = rand.nextInt(50);
-
-    log.debug("transfer req " + bank + " " + amt + " " + acct1 + " " + a1 + " " + acct2 + " " + a2);
-
-    if (a1.bal >= amt) {
-      ConditionalMutation cm = new ConditionalMutation(bank, new Condition(acct1, "seq").setValue(Utils.getSeq(a1.seq)),
-          new Condition(acct2, "seq").setValue(Utils.getSeq(a2.seq)));
-      cm.put(acct1, "bal", (a1.bal - amt) + "");
-      cm.put(acct2, "bal", (a2.bal + amt) + "");
-      cm.put(acct1, "seq", Utils.getSeq(a1.seq + 1));
-      cm.put(acct2, "seq", Utils.getSeq(a2.seq + 1));
-
-      ConditionalWriter cw = (ConditionalWriter) state.get("cw");
-      Status status = cw.write(cm).getStatus();
-      while (status == Status.UNKNOWN) {
-        log.debug("retrying transfer " + status);
-        status = cw.write(cm).getStatus();
+        if (cq.equals("bal"))
+          a.setBal(entry.getValue().toString());
+        else if (cq.equals("seq"))
+          a.setSeq(entry.getValue().toString());
+        else
+          throw new Exception("Unexpected column qual: " + cq);
       }
-      log.debug("transfer result " + bank + " " + status + " " + a1 + " " + a2);
-    }
 
+      int amt = rand.nextInt(50);
+
+      log.debug("transfer req " + bank + " " + amt + " " + acct1 + " " + a1 + " " + acct2 + " " + a2);
+
+      if (a1.bal >= amt) {
+        ConditionalMutation cm = new ConditionalMutation(bank, new Condition(acct1, "seq").setValue(Utils.getSeq(a1.seq)),
+            new Condition(acct2, "seq").setValue(Utils.getSeq(a2.seq)));
+        cm.put(acct1, "bal", (a1.bal - amt) + "");
+        cm.put(acct2, "bal", (a2.bal + amt) + "");
+        cm.put(acct1, "seq", Utils.getSeq(a1.seq + 1));
+        cm.put(acct2, "seq", Utils.getSeq(a2.seq + 1));
+
+        ConditionalWriter cw = (ConditionalWriter) state.get("cw");
+        Status status = cw.write(cm).getStatus();
+        while (status == Status.UNKNOWN) {
+          log.debug("retrying transfer " + status);
+          status = cw.write(cm).getStatus();
+        }
+        log.debug("transfer result " + bank + " " + status + " " + a1 + " " + a2);
+      }
+    }
   }
 
 }
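The re-indentation in `Transfer` exists to wrap the `IsolatedScanner` in try-with-resources, so the scanner is closed even when one of the `throw new Exception(...)` branches fires mid-iteration. The mechanics, shown with a stand-in `AutoCloseable` instead of an Accumulo scanner:

```java
public class TryWithResourcesSketch {
    static boolean closed = false;

    // Stand-in for a Scanner; try-with-resources works with any AutoCloseable.
    static class FakeScanner implements AutoCloseable {
        @Override
        public void close() { closed = true; }
    }

    public static void main(String[] args) {
        try {
            try (FakeScanner scanner = new FakeScanner()) {
                // Fail mid-scan, as the unexpected-column checks in Transfer can.
                throw new IllegalStateException("Unexpected column fam");
            }
        } catch (IllegalStateException expected) {
            // close() already ran before the exception propagated out.
        }
        if (!closed) throw new AssertionError("scanner was not closed");
        System.out.println("ok");
    }
}
```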
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Verify.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Verify.java
index 2690ffc..6c46f73 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Verify.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/conditional/Verify.java
@@ -53,27 +53,30 @@
   private void verifyBank(String table, Connector conn, String row, int numAccts) throws TableNotFoundException, Exception {
     log.debug("Verifying bank " + row);
 
-    // TODO do not use IsolatedScanner, just enable isolation on scanner
-    Scanner scanner = new IsolatedScanner(conn.createScanner(table, Authorizations.EMPTY));
-
-    scanner.setRange(new Range(row));
-    IteratorSetting iterConf = new IteratorSetting(100, "cqsl", ColumnSliceFilter.class);
-    ColumnSliceFilter.setSlice(iterConf, "bal", true, "bal", true);
-    scanner.clearScanIterators();
-    scanner.addScanIterator(iterConf);
-
     int count = 0;
     int sum = 0;
     int min = Integer.MAX_VALUE;
     int max = Integer.MIN_VALUE;
-    for (Entry<Key,Value> entry : scanner) {
-      int bal = Integer.parseInt(entry.getValue().toString());
-      sum += bal;
-      if (bal > max)
-        max = bal;
-      if (bal < min)
-        min = bal;
-      count++;
+
+    // TODO do not use IsolatedScanner, just enable isolation on scanner
+    try (Scanner scanner = new IsolatedScanner(conn.createScanner(table, Authorizations.EMPTY))) {
+
+      scanner.setRange(new Range(row));
+      IteratorSetting iterConf = new IteratorSetting(100, "cqsl", ColumnSliceFilter.class);
+      ColumnSliceFilter.setSlice(iterConf, "bal", true, "bal", true);
+      scanner.clearScanIterators();
+      scanner.addScanIterator(iterConf);
+
+      for (Entry<Key,Value> entry : scanner) {
+        int bal = Integer.parseInt(entry.getValue().toString());
+        sum += bal;
+        if (bal > max)
+          max = bal;
+        if (bal < min)
+          min = bal;
+        count++;
+      }
+
     }
 
     if (count > 0 && sum != numAccts * 100) {
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ImageFixture.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ImageFixture.java
index 3bcc41c..b37ff90 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ImageFixture.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ImageFixture.java
@@ -47,7 +47,7 @@
     Connector conn = env.getConnector();
     Instance instance = env.getInstance();
 
-    SortedSet<Text> splits = new TreeSet<Text>();
+    SortedSet<Text> splits = new TreeSet<>();
     for (int i = 1; i < 256; i++) {
       splits.add(new Text(String.format("%04x", i << 8)));
     }
@@ -94,13 +94,13 @@
   }
 
   static Map<String,Set<Text>> getLocalityGroups() {
-    Map<String,Set<Text>> groups = new HashMap<String,Set<Text>>();
+    Map<String,Set<Text>> groups = new HashMap<>();
 
-    HashSet<Text> lg1 = new HashSet<Text>();
+    HashSet<Text> lg1 = new HashSet<>();
     lg1.add(Write.CONTENT_COLUMN_FAMILY);
     groups.put("lg1", lg1);
 
-    HashSet<Text> lg2 = new HashSet<Text>();
+    HashSet<Text> lg2 = new HashSet<>();
     lg2.add(Write.META_COLUMN_FAMILY);
     groups.put("lg2", lg2);
     return groups;
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ScanMeta.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ScanMeta.java
index 4b801c2..49d33d0 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ScanMeta.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/image/ScanMeta.java
@@ -63,7 +63,7 @@
     Random rand = new Random();
     int numToScan = rand.nextInt(maxScan - minScan) + minScan;
 
-    Map<Text,Text> hashes = new HashMap<Text,Text>();
+    Map<Text,Text> hashes = new HashMap<>();
 
     Iterator<Entry<Key,Value>> iter = imageScanner.iterator();
 
@@ -84,14 +84,14 @@
 
     // use batch scanner to verify all of these exist in index
     BatchScanner indexScanner = conn.createBatchScanner(indexTableName, Authorizations.EMPTY, 3);
-    ArrayList<Range> ranges = new ArrayList<Range>();
+    ArrayList<Range> ranges = new ArrayList<>();
     for (Text row : hashes.keySet()) {
       ranges.add(new Range(row));
     }
 
     indexScanner.setRanges(ranges);
 
-    Map<Text,Text> hashes2 = new HashMap<Text,Text>();
+    Map<Text,Text> hashes2 = new HashMap<>();
 
     for (Entry<Key,Value> entry : indexScanner)
       hashes2.put(entry.getKey().getRow(), new Text(entry.getValue().get()));
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/image/TableOp.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/image/TableOp.java
index b62ec34..d2b5e2f 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/image/TableOp.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/image/TableOp.java
@@ -72,7 +72,7 @@
         groups = ImageFixture.getLocalityGroups();
       } else {
         log.debug("Removing locality groups from " + state.getString("imageTableName"));
-        groups = new HashMap<String,Set<Text>>();
+        groups = new HashMap<>();
       }
 
       tableOps.setLocalityGroups(state.getString("imageTableName"), groups);
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CopyTable.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CopyTable.java
index d02cb42..46ae035 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CopyTable.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CopyTable.java
@@ -34,7 +34,7 @@
   private final TreeSet<Text> splits;
 
   public CopyTable() {
-    splits = new TreeSet<Text>();
+    splits = new TreeSet<>();
     for (int i = 1; i < 10; i++) {
       splits.add(new Text(Integer.toString(i)));
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CreateTable.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CreateTable.java
index 27ab09c..2669d9d 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CreateTable.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/multitable/CreateTable.java
@@ -33,7 +33,7 @@
   private final TreeSet<Text> splits;
 
   public CreateTable() {
-    splits = new TreeSet<Text>();
+    splits = new TreeSet<>();
     for (int i = 1; i < 10; i++) {
       splits.add(new Text(Integer.toString(i)));
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/security/TableOp.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/security/TableOp.java
index 2612fc9..477d95f 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/security/TableOp.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/security/TableOp.java
@@ -195,7 +195,7 @@
         break;
       case BULK_IMPORT:
         key = WalkingSecurity.get(state, env).getLastKey() + "1";
-        SortedSet<Key> keys = new TreeSet<Key>();
+        SortedSet<Key> keys = new TreeSet<>();
         for (String s : WalkingSecurity.get(state, env).getAuthsArray()) {
           Key k = new Key(key, "", "", s);
           keys.add(k);
@@ -203,8 +203,8 @@
         Path dir = new Path("/tmp", "bulk_" + UUID.randomUUID().toString());
         Path fail = new Path(dir.toString() + "_fail");
         FileSystem fs = WalkingSecurity.get(state, env).getFs();
-        FileSKVWriter f = FileOperations.getInstance().openWriter(dir + "/securityBulk." + RFile.EXTENSION, fs, fs.getConf(),
-            AccumuloConfiguration.getDefaultConfiguration());
+        FileSKVWriter f = FileOperations.getInstance().newWriterBuilder().forFile(dir + "/securityBulk." + RFile.EXTENSION, fs, fs.getConf())
+            .withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
         f.startDefaultLocalityGroup();
         fs.mkdirs(fail);
         for (Key k : keys)
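The hunk above also migrates from the positional `FileOperations.openWriter(...)` call to a fluent builder (`newWriterBuilder().forFile(...).withTableConfiguration(...).build()`). Since the Accumulo classes are not reproducible here, the following is a hypothetical stand-in illustrating why the builder form is preferred: each setting is named at the call site, and new options can be added without breaking existing callers. All class and method names below are illustrative, not the real Accumulo API.

```java
public class WriterBuilderDemo {
  static class Writer {
    final String file;
    final String tableConfig;
    Writer(String file, String tableConfig) {
      this.file = file;
      this.tableConfig = tableConfig;
    }
  }

  static class WriterBuilder {
    private String file;
    private String tableConfig = "default";

    // Each setter returns the builder so calls can be chained fluently
    WriterBuilder forFile(String file) { this.file = file; return this; }
    WriterBuilder withTableConfiguration(String cfg) { this.tableConfig = cfg; return this; }

    Writer build() {
      // Required settings are validated once, at construction time
      if (file == null) throw new IllegalStateException("file is required");
      return new Writer(file, tableConfig);
    }
  }

  public static void main(String[] args) {
    Writer w = new WriterBuilder()
        .forFile("/tmp/bulk.rf")
        .withTableConfiguration("custom")
        .build();
    System.out.println(w.file + " " + w.tableConfig);
  }
}
```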
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/security/WalkingSecurity.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/security/WalkingSecurity.java
index 457b478..0c440af 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/security/WalkingSecurity.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/security/WalkingSecurity.java
@@ -154,7 +154,7 @@
 
   @Override
   public Set<String> listUsers() throws AccumuloSecurityException {
-    Set<String> userList = new TreeSet<String>();
+    Set<String> userList = new TreeSet<>();
     for (String user : new String[] {getSysUserName(), getTabUserName()}) {
       if (userExists(user))
         userList.add(user);
@@ -488,7 +488,7 @@
 
   @Override
   public Set<Class<? extends AuthenticationToken>> getSupportedTokenTypes() {
-    Set<Class<? extends AuthenticationToken>> cs = new HashSet<Class<? extends AuthenticationToken>>();
+    Set<Class<? extends AuthenticationToken>> cs = new HashSet<>();
     cs.add(PasswordToken.class);
     return cs;
   }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/sequential/BatchVerify.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/sequential/BatchVerify.java
index 9ae1bf7..074cbcf 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/sequential/BatchVerify.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/sequential/BatchVerify.java
@@ -55,7 +55,7 @@
 
     try {
       int count = 0;
-      List<Range> ranges = new ArrayList<Range>();
+      List<Range> ranges = new ArrayList<>();
       while (count < numVerify) {
         long rangeStart = rand.nextInt((int) numWrites);
         long rangeEnd = rangeStart + 99;
@@ -81,7 +81,7 @@
 
       scanner.setRanges(ranges);
 
-      List<Key> keys = new ArrayList<Key>();
+      List<Key> keys = new ArrayList<>();
       for (Entry<Key,Value> entry : scanner) {
         keys.add(entry.getKey());
       }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java
index 095b5f7..5f696d0 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/BulkInsert.java
@@ -25,6 +25,7 @@
 import java.util.List;
 import java.util.Properties;
 import java.util.Random;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
@@ -36,7 +37,6 @@
 import org.apache.accumulo.core.util.Base64;
 import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.TextUtil;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.test.randomwalk.Environment;
 import org.apache.accumulo.test.randomwalk.State;
 import org.apache.accumulo.test.randomwalk.Test;
@@ -48,6 +48,8 @@
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.util.ToolRunner;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+
 public class BulkInsert extends Test {
 
   class SeqfileBatchWriter implements BatchWriter {
@@ -157,7 +159,7 @@
           else
             log.debug("Ignoring " + failure.getPath());
         }
-        UtilWaitThread.sleep(3000);
+        sleepUninterruptibly(3, TimeUnit.SECONDS);
       } else
         break;
     }
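The BulkInsert change replaces Accumulo's internal `UtilWaitThread.sleep(3000)` with Guava's `Uninterruptibles.sleepUninterruptibly(3, TimeUnit.SECONDS)`. The Guava method sleeps for the full requested duration even if the thread is interrupted mid-sleep, then restores the interrupt flag so callers still observe it. A behavioral sketch of that contract in plain Java (a stand-in for illustration, not Guava's actual implementation):

```java
import java.util.concurrent.TimeUnit;

public class SleepUninterruptiblyDemo {
  // Keep sleeping for the full duration even if interrupted,
  // then restore the thread's interrupt flag rather than swallow it.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true; // remember the interrupt, retry with the time left
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt(); // re-set the flag for callers
      }
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt(); // simulate an interrupt arriving before the sleep
    sleepUninterruptibly(50, TimeUnit.MILLISECONDS);
    // The sleep completed despite the interrupt, and the flag survived for callers
    System.out.println("interrupt preserved: " + Thread.interrupted());
  }
}
```

Using the Guava utility lets the project delete its own `UtilWaitThread` copy of this logic, which is why the import is removed in the same hunk.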
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/CompactFilter.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/CompactFilter.java
index a73b311..42dc9dc 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/CompactFilter.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/CompactFilter.java
@@ -49,7 +49,7 @@
     String deleteChar = Integer.toHexString(rand.nextInt(16)) + "";
     String regex = "^[0-9a-f][" + deleteChar + "].*";
 
-    ArrayList<IteratorSetting> documentFilters = new ArrayList<IteratorSetting>();
+    ArrayList<IteratorSetting> documentFilters = new ArrayList<>();
 
     IteratorSetting is = new IteratorSetting(21, "ii", RegExFilter.class);
     RegExFilter.setRegexs(is, regex, null, null, null, false);
@@ -61,7 +61,7 @@
     long t2 = System.currentTimeMillis();
     long t3 = t2 - t1;
 
-    ArrayList<IteratorSetting> indexFilters = new ArrayList<IteratorSetting>();
+    ArrayList<IteratorSetting> indexFilters = new ArrayList<>();
 
     is = new IteratorSetting(21, RegExFilter.class);
     RegExFilter.setRegexs(is, null, null, regex, null, false);
@@ -76,7 +76,7 @@
 
     BatchScanner bscanner = env.getConnector().createBatchScanner(docTableName, new Authorizations(), 10);
 
-    List<Range> ranges = new ArrayList<Range>();
+    List<Range> ranges = new ArrayList<>();
     for (int i = 0; i < 16; i++) {
       ranges.add(Range.prefix(new Text(Integer.toHexString(i) + "" + deleteChar)));
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteSomeDocs.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteSomeDocs.java
index f7fb1df..08fcb4f 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteSomeDocs.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteSomeDocs.java
@@ -43,7 +43,7 @@
     String indexTableName = (String) state.get("indexTableName");
     String dataTableName = (String) state.get("docTableName");
 
-    ArrayList<String> patterns = new ArrayList<String>();
+    ArrayList<String> patterns = new ArrayList<>();
 
     for (Object key : props.keySet())
       if (key instanceof String && ((String) key).startsWith("pattern"))
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteWord.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteWord.java
index 28b1899..9de0240 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteWord.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/DeleteWord.java
@@ -54,7 +54,7 @@
     Scanner scanner = env.getConnector().createScanner(indexTableName, Authorizations.EMPTY);
     scanner.fetchColumnFamily(new Text(wordToDelete));
 
-    ArrayList<Range> documentsToDelete = new ArrayList<Range>();
+    ArrayList<Range> documentsToDelete = new ArrayList<>();
 
     for (Entry<Key,Value> entry : scanner)
       documentsToDelete.add(new Range(entry.getKey().getColumnQualifier()));
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ExportIndex.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ExportIndex.java
index ca273ac..5b57ace 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ExportIndex.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ExportIndex.java
@@ -89,17 +89,17 @@
     fs.delete(new Path(exportDir), true);
     fs.delete(new Path(copyDir), true);
 
-    HashSet<Text> splits1 = new HashSet<Text>(env.getConnector().tableOperations().listSplits(indexTableName));
-    HashSet<Text> splits2 = new HashSet<Text>(env.getConnector().tableOperations().listSplits(tmpIndexTableName));
+    HashSet<Text> splits1 = new HashSet<>(env.getConnector().tableOperations().listSplits(indexTableName));
+    HashSet<Text> splits2 = new HashSet<>(env.getConnector().tableOperations().listSplits(tmpIndexTableName));
 
     if (!splits1.equals(splits2))
       throw new Exception("Splits not equals " + indexTableName + " " + tmpIndexTableName);
 
-    HashMap<String,String> props1 = new HashMap<String,String>();
+    HashMap<String,String> props1 = new HashMap<>();
     for (Entry<String,String> entry : env.getConnector().tableOperations().getProperties(indexTableName))
       props1.put(entry.getKey(), entry.getValue());
 
-    HashMap<String,String> props2 = new HashMap<String,String>();
+    HashMap<String,String> props2 = new HashMap<>();
     for (Entry<String,String> entry : env.getConnector().tableOperations().getProperties(tmpIndexTableName))
       props2.put(entry.getKey(), entry.getValue());
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Grep.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Grep.java
index d5c5e5d..7409ea7 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Grep.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Grep.java
@@ -59,7 +59,7 @@
     bs.addScanIterator(ii);
     bs.setRanges(Collections.singleton(new Range()));
 
-    HashSet<Text> documentsFoundInIndex = new HashSet<Text>();
+    HashSet<Text> documentsFoundInIndex = new HashSet<>();
 
     for (Entry<Key,Value> entry2 : bs) {
       documentsFoundInIndex.add(entry2.getKey().getColumnQualifier());
@@ -77,7 +77,7 @@
 
     bs.setRanges(Collections.singleton(new Range()));
 
-    HashSet<Text> documentsFoundByGrep = new HashSet<Text>();
+    HashSet<Text> documentsFoundByGrep = new HashSet<>();
 
     for (Entry<Key,Value> entry2 : bs) {
       documentsFoundByGrep.add(entry2.getKey().getRow());
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Insert.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Insert.java
index 4fdbbb9..8482a59 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Insert.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Insert.java
@@ -115,7 +115,7 @@
 
     Mutation m = new Mutation(partition);
 
-    HashSet<String> tokensSeen = new HashSet<String>();
+    HashSet<String> tokensSeen = new HashSet<>();
 
     for (String token : tokens) {
       token = token.toLowerCase();
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Merge.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Merge.java
index 36a70f6..910e64c 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Merge.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Merge.java
@@ -33,7 +33,7 @@
     String indexTableName = (String) state.get("indexTableName");
 
     Collection<Text> splits = env.getConnector().tableOperations().listSplits(indexTableName);
-    SortedSet<Text> splitSet = new TreeSet<Text>(splits);
+    SortedSet<Text> splitSet = new TreeSet<>(splits);
     log.debug("merging " + indexTableName);
     env.getConnector().tableOperations().merge(indexTableName, null, null);
     org.apache.accumulo.core.util.Merge merge = new org.apache.accumulo.core.util.Merge();
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Search.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Search.java
index 899ec42..bccb280 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Search.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/Search.java
@@ -57,7 +57,7 @@
     if (numSearchTerms < 2)
       numSearchTerms = 2;
 
-    HashSet<String> searchTerms = new HashSet<String>();
+    HashSet<String> searchTerms = new HashSet<>();
     while (searchTerms.size() < numSearchTerms)
       searchTerms.add(tokens[rand.nextInt(tokens.length)]);
 
diff --git a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ShardFixture.java b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ShardFixture.java
index 63500a0..99e3a61 100644
--- a/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ShardFixture.java
+++ b/test/src/main/java/org/apache/accumulo/test/randomwalk/shard/ShardFixture.java
@@ -39,7 +39,7 @@
     long distance = max / numTablets;
     long split = distance;
 
-    TreeSet<Text> splits = new TreeSet<Text>();
+    TreeSet<Text> splits = new TreeSet<>();
 
     for (int i = 0; i < numSplits; i++) {
       splits.add(new Text(String.format(format, split)));
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/CyclicReplicationIT.java b/test/src/main/java/org/apache/accumulo/test/replication/CyclicReplicationIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/replication/CyclicReplicationIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/CyclicReplicationIT.java
index 25061c9..8603cd6 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/CyclicReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/CyclicReplicationIT.java
@@ -48,7 +48,7 @@
 import org.apache.accumulo.minicluster.impl.ProcessReference;
 import org.apache.accumulo.minicluster.impl.ZooKeeperBindException;
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
 import org.apache.commons.io.FileUtils;
@@ -79,7 +79,7 @@
       log.warn("Could not parse timeout.factor, not scaling timeout");
     }
 
-    return new Timeout(scalingFactor * 5 * 60 * 1000);
+    return Timeout.millis(scalingFactor * 10 * 60 * 1000);
   }
 
   @Rule
@@ -113,7 +113,7 @@
     // Set the same SSL information from the primary when present
     Map<String,String> primarySiteConfig = primaryCfg.getSiteConfig();
     if ("true".equals(primarySiteConfig.get(Property.INSTANCE_RPC_SSL_ENABLED.getKey()))) {
-      Map<String,String> peerSiteConfig = new HashMap<String,String>();
+      Map<String,String> peerSiteConfig = new HashMap<>();
       peerSiteConfig.put(Property.INSTANCE_RPC_SSL_ENABLED.getKey(), "true");
       String keystorePath = primarySiteConfig.get(Property.RPC_SSL_KEYSTORE_PATH.getKey());
       Assert.assertNotNull("Keystore Path was null", keystorePath);
@@ -158,7 +158,7 @@
       master1Cfg.setInstanceName("master1");
 
       // Set up SSL if needed
-      ConfigurableMacIT.configureForEnvironment(master1Cfg, this.getClass(), ConfigurableMacIT.getSslDir(master1Dir));
+      ConfigurableMacBase.configureForEnvironment(master1Cfg, this.getClass(), ConfigurableMacBase.getSslDir(master1Dir));
 
       master1Cfg.setProperty(Property.REPLICATION_NAME, master1Cfg.getInstanceName());
       master1Cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "5M");
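The CyclicReplicationIT hunk swaps the deprecated `new Timeout(int)` constructor for the `Timeout.millis(long)` factory method and doubles the base budget from 5 to 10 minutes, keeping the `timeout.factor` scaling seen earlier in the hunk. The scaling computation itself can be sketched without JUnit; `scaledTimeoutMillis` is an illustrative helper name, not part of the test class:

```java
public class TimeoutScalingDemo {
  // Scale a base test timeout by an optional "timeout.factor" system
  // property, falling back to a factor of 1 if it is unset or unparseable.
  static long scaledTimeoutMillis(long baseMillis) {
    int scalingFactor = 1;
    try {
      scalingFactor = Integer.parseInt(System.getProperty("timeout.factor", "1"));
    } catch (NumberFormatException e) {
      // Could not parse timeout.factor; leave the timeout unscaled
    }
    return scalingFactor * baseMillis;
  }

  public static void main(String[] args) {
    long tenMinutes = 10 * 60 * 1000;
    System.out.println(scaledTimeoutMillis(tenMinutes));
    System.setProperty("timeout.factor", "3");
    System.out.println(scaledTimeoutMillis(tenMinutes));
  }
}
```

The result would feed a JUnit rule such as `Timeout.millis(scaledTimeoutMillis(...))`, letting slow CI environments stretch every test's deadline with a single property.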
diff --git a/server/master/src/test/java/org/apache/accumulo/master/replication/FinishedWorkUpdaterTest.java b/test/src/main/java/org/apache/accumulo/test/replication/FinishedWorkUpdaterIT.java
similarity index 84%
rename from server/master/src/test/java/org/apache/accumulo/master/replication/FinishedWorkUpdaterTest.java
rename to test/src/main/java/org/apache/accumulo/test/replication/FinishedWorkUpdaterIT.java
index 4f1e159..5519013 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/replication/FinishedWorkUpdaterTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/FinishedWorkUpdaterIT.java
@@ -1,3 +1,5 @@
+package org.apache.accumulo.test.replication;
+
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements.  See the NOTICE file distributed with
@@ -14,7 +16,6 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.master.replication;
 
 import java.util.Map.Entry;
 import java.util.UUID;
@@ -22,8 +23,6 @@
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
@@ -33,27 +32,24 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.replication.ReplicationTarget;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.master.replication.FinishedWorkUpdater;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.junit.Assert;
 import org.junit.Before;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TestName;
 
 import com.google.common.collect.Iterables;
 
-public class FinishedWorkUpdaterTest {
-
-  @Rule
-  public TestName test = new TestName();
+public class FinishedWorkUpdaterIT extends ConfigurableMacBase {
 
   private Connector conn;
   private FinishedWorkUpdater updater;
 
   @Before
-  public void setup() throws Exception {
-    MockInstance inst = new MockInstance(test.getMethodName());
-    conn = inst.getConnector("root", new PasswordToken(""));
+  public void configureUpdater() throws Exception {
+    conn = getConnector();
     updater = new FinishedWorkUpdater(conn);
   }
 
@@ -64,6 +60,10 @@
 
   @Test
   public void recordsWithProgressUpdateBothTables() throws Exception {
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    ReplicationTable.setOnline(conn);
+
     String file = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
     Status stat = Status.newBuilder().setBegin(100).setEnd(200).setClosed(true).setInfiniteEnd(false).build();
     ReplicationTarget target = new ReplicationTarget("peer", "table1", "1");
@@ -92,6 +92,10 @@
 
   @Test
   public void chooseMinimumBeginOffset() throws Exception {
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    ReplicationTable.setOnline(conn);
+
     String file = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
     // @formatter:off
     Status stat1 = Status.newBuilder().setBegin(100).setEnd(1000).setClosed(true).setInfiniteEnd(false).build(),
@@ -128,6 +132,10 @@
 
   @Test
   public void chooseMinimumBeginOffsetInfiniteEnd() throws Exception {
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    ReplicationTable.setOnline(conn);
+
     String file = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
     // @formatter:off
     Status stat1 = Status.newBuilder().setBegin(100).setEnd(1000).setClosed(true).setInfiniteEnd(true).build(),
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/GarbageCollectorCommunicatesWithTServersIT.java b/test/src/main/java/org/apache/accumulo/test/replication/GarbageCollectorCommunicatesWithTServersIT.java
similarity index 83%
rename from test/src/test/java/org/apache/accumulo/test/replication/GarbageCollectorCommunicatesWithTServersIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/GarbageCollectorCommunicatesWithTServersIT.java
index 75f61f1..fccb238 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/GarbageCollectorCommunicatesWithTServersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/GarbageCollectorCommunicatesWithTServersIT.java
@@ -16,6 +16,7 @@
  */
 package org.apache.accumulo.test.replication;
 
+import java.util.Arrays;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
@@ -25,6 +26,7 @@
 
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.impl.ClientContext;
 import org.apache.accumulo.core.client.impl.ClientExecReturn;
@@ -46,14 +48,16 @@
 import org.apache.accumulo.core.tabletserver.thrift.TabletClientService.Client;
 import org.apache.accumulo.core.trace.Tracer;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalState;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RawLocalFileSystem;
 import org.apache.hadoop.io.Text;
-import org.bouncycastle.util.Arrays;
 import org.junit.Assert;
 import org.junit.Test;
 import org.slf4j.Logger;
@@ -65,7 +69,7 @@
  * ACCUMULO-3302 series of tests which ensure that a WAL is prematurely closed when a TServer may still continue to use it. Checking that no tablet references a
  * WAL is insufficient to determine if a WAL will never be used in the future.
  */
-public class GarbageCollectorCommunicatesWithTServersIT extends ConfigurableMacIT {
+public class GarbageCollectorCommunicatesWithTServersIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(GarbageCollectorCommunicatesWithTServersIT.class);
 
   private final int GC_PERIOD_SECONDS = 1;
@@ -78,6 +82,7 @@
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
     cfg.setNumTservers(1);
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
     cfg.setProperty(Property.GC_CYCLE_DELAY, GC_PERIOD_SECONDS + "s");
     // Wait longer to try to let the replication table come online before a cycle runs
     cfg.setProperty(Property.GC_CYCLE_START, "10s");
@@ -101,24 +106,16 @@
 
     Assert.assertNotNull("Could not determine table ID for " + tableName, tableId);
 
-    Scanner s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    Range r = MetadataSchema.TabletsSection.getRange(tableId);
-    s.setRange(r);
-    s.fetchColumnFamily(MetadataSchema.TabletsSection.LogColumnFamily.NAME);
+    Instance i = conn.getInstance();
+    ZooReaderWriter zk = new ZooReaderWriter(i.getZooKeepers(), i.getZooKeepersSessionTimeOut(), "");
+    WalStateManager wals = new WalStateManager(conn.getInstance(), zk);
 
-    Set<String> wals = new HashSet<String>();
-    for (Entry<Key,Value> entry : s) {
-      log.debug("Reading WALs: {}={}", entry.getKey().toStringNoTruncate(), entry.getValue());
-      // hostname:port/uri://path/to/wal
-      String cq = entry.getKey().getColumnQualifier().toString();
-      int index = cq.indexOf('/');
-      // Normalize the path
-      String path = new Path(cq.substring(index + 1)).toString();
-      log.debug("Extracted file: " + path);
-      wals.add(path);
+    Set<String> result = new HashSet<>();
+    for (Entry<Path,WalState> entry : wals.getAllState().entrySet()) {
+      log.debug("Reading WALs: {}={}", entry.getKey(), entry.getValue());
+      result.add(entry.getKey().toString());
     }
-
-    return wals;
+    return result;
   }
 
   /**
@@ -135,7 +132,7 @@
     s.setRange(r);
     s.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
 
-    Set<String> rfiles = new HashSet<String>();
+    Set<String> rfiles = new HashSet<>();
     for (Entry<Key,Value> entry : s) {
       log.debug("Reading RFiles: {}={}", entry.getKey().toStringNoTruncate(), entry.getValue());
       // uri://path/to/wal
@@ -162,7 +159,7 @@
     s.setRange(r);
     s.fetchColumn(MetadataSchema.ReplicationSection.COLF, new Text(tableId));
 
-    Map<String,Status> fileToStatus = new HashMap<String,Status>();
+    Map<String,Status> fileToStatus = new HashMap<>();
     for (Entry<Key,Value> entry : s) {
       Text file = new Text();
       MetadataSchema.ReplicationSection.getFile(entry.getKey(), file);
@@ -201,12 +198,10 @@
     log.info("Flushing mutations to the server");
     bw.flush();
 
-    log.info("Checking that metadata only has one WAL recorded for this table");
+    log.info("Checking that metadata only has two WALs recorded for this table (inUse, and opened)");
 
     Set<String> wals = getWalsForTable(table);
-    Assert.assertEquals("Expected to only find one WAL for the table", 1, wals.size());
-
-    log.info("Compacting the table which will remove all WALs from the tablets");
+    Assert.assertEquals("Expected to only find two WALs for the table", 2, wals.size());
 
     // Flush our test table to remove the WAL references in it
     conn.tableOperations().flush(table, null, null, true);
@@ -222,17 +217,13 @@
     Assert.assertEquals("Expected to only find one replication status message", 1, fileToStatus.size());
 
     String walName = fileToStatus.keySet().iterator().next();
-    Assert.assertEquals("Expected log file name from tablet to equal replication entry", wals.iterator().next(), walName);
+    wals.retainAll(fileToStatus.keySet());
+    Assert.assertEquals(1, wals.size());
 
     Status status = fileToStatus.get(walName);
 
     Assert.assertEquals("Expected Status for file to not be closed", false, status.getClosed());
 
-    log.info("Checking to see that log entries are removed from tablet section after MinC");
-    // After compaction, the log column should be gone from the tablet
-    Set<String> walsAfterMinc = getWalsForTable(table);
-    Assert.assertEquals("Expected to find no WALs for tablet", 0, walsAfterMinc.size());
-
     Set<String> filesForTable = getFilesForTable(table);
     Assert.assertEquals("Expected to only find one rfile for table", 1, filesForTable.size());
     log.info("Files for table before MajC: {}", filesForTable);
@@ -249,7 +240,7 @@
 
     // Use the rfile which was just replaced by the MajC to determine when the GC has run
     Path fileToBeDeleted = new Path(filesForTable.iterator().next());
-    FileSystem fs = fileToBeDeleted.getFileSystem(new Configuration());
+    FileSystem fs = getCluster().getFileSystem();
 
     boolean fileExists = fs.exists(fileToBeDeleted);
     while (fileExists) {
@@ -258,21 +249,13 @@
       fileExists = fs.exists(fileToBeDeleted);
     }
 
-    // At this point in time, we *know* that the GarbageCollector has run which means that the Status
-    // for our WAL should not be altered.
-
-    log.info("Re-checking that WALs are still not referenced for our table");
-
-    Set<String> walsAfterMajc = getWalsForTable(table);
-    Assert.assertEquals("Expected to find no WALs in tablets section: " + walsAfterMajc, 0, walsAfterMajc.size());
-
     Map<String,Status> fileToStatusAfterMinc = getMetadataStatusForTable(table);
     Assert.assertEquals("Expected to still find only one replication status message: " + fileToStatusAfterMinc, 1, fileToStatusAfterMinc.size());
 
     Assert.assertEquals("Status before and after MinC should be identical", fileToStatus, fileToStatusAfterMinc);
   }
 
-  @Test
+  @Test(timeout = 2 * 60 * 1000)
   public void testUnreferencedWalInTserverIsClosed() throws Exception {
     final String[] names = getUniqueNames(2);
     // `table` will be replicated, `otherTable` is only used to roll the WAL on the tserver
@@ -304,7 +287,7 @@
     log.info("Checking that metadata only has one WAL recorded for this table");
 
     Set<String> wals = getWalsForTable(table);
-    Assert.assertEquals("Expected to only find one WAL for the table", 1, wals.size());
+    Assert.assertEquals("Expected to only find two WALs for the table", 2, wals.size());
 
     log.info("Compacting the table which will remove all WALs from the tablets");
 
@@ -320,17 +303,12 @@
     Assert.assertEquals("Expected to only find one replication status message", 1, fileToStatus.size());
 
     String walName = fileToStatus.keySet().iterator().next();
-    Assert.assertEquals("Expected log file name from tablet to equal replication entry", wals.iterator().next(), walName);
+    Assert.assertTrue("Expected log file name from tablet to equal replication entry", wals.contains(walName));
 
     Status status = fileToStatus.get(walName);
 
     Assert.assertEquals("Expected Status for file to not be closed", false, status.getClosed());
 
-    log.info("Checking to see that log entries are removed from tablet section after MinC");
-    // After compaction, the log column should be gone from the tablet
-    Set<String> walsAfterMinc = getWalsForTable(table);
-    Assert.assertEquals("Expected to find no WALs for tablet", 0, walsAfterMinc.size());
-
     Set<String> filesForTable = getFilesForTable(table);
     Assert.assertEquals("Expected to only find one rfile for table", 1, filesForTable.size());
     log.info("Files for table before MajC: {}", filesForTable);
@@ -347,7 +325,7 @@
 
     // Use the rfile which was just replaced by the MajC to determine when the GC has run
     Path fileToBeDeleted = new Path(filesForTable.iterator().next());
-    FileSystem fs = fileToBeDeleted.getFileSystem(new Configuration());
+    FileSystem fs = getCluster().getFileSystem();
 
     boolean fileExists = fs.exists(fileToBeDeleted);
     while (fileExists) {
@@ -359,16 +337,9 @@
     // At this point in time, we *know* that the GarbageCollector has run which means that the Status
     // for our WAL should not be altered.
 
-    log.info("Re-checking that WALs are still not referenced for our table");
-
-    Set<String> walsAfterMajc = getWalsForTable(table);
-    Assert.assertEquals("Expected to find no WALs in tablets section: " + walsAfterMajc, 0, walsAfterMajc.size());
-
     Map<String,Status> fileToStatusAfterMinc = getMetadataStatusForTable(table);
     Assert.assertEquals("Expected to still find only one replication status message: " + fileToStatusAfterMinc, 1, fileToStatusAfterMinc.size());
 
-    Assert.assertEquals("Status before and after MinC should be identical", fileToStatus, fileToStatusAfterMinc);
-
     /*
      * To verify that the WAL is still getting closed, we have to force the tserver to close the existing WAL and open a new one instead. The easiest way to do
      * this is to write a load of data that will exceed the 1.33% full threshold that the logger keeps track of
@@ -394,7 +365,7 @@
     conn.tableOperations().flush(otherTable, null, null, true);
 
     // Get the tservers which the master deems as active
-    final ClientContext context = new ClientContext(conn.getInstance(), new Credentials("root", new PasswordToken(ConfigurableMacIT.ROOT_PASSWORD)),
+    final ClientContext context = new ClientContext(conn.getInstance(), new Credentials("root", new PasswordToken(ConfigurableMacBase.ROOT_PASSWORD)),
         getClientConfig());
     List<String> tservers = MasterClient.execute(context, new ClientExecReturn<List<String>,MasterClientService.Client>() {
       @Override
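The hunk above ends inside a `MasterClient.execute` call, which takes a `ClientExecReturn` callback: a function that is handed a borrowed client and returns a value. A minimal, stdlib-only sketch of that callback shape (all names here are hypothetical stand-ins, not Accumulo's API; the real `execute` also acquires a Thrift client and retries transient failures):

```java
import java.util.Locale;

public class ClientExecSketch {
  // Hypothetical stand-in for Accumulo's ClientExecReturn: a callback that
  // receives a borrowed client and returns a result, declaring checked exceptions.
  interface ClientExecReturn<T, C> {
    T execute(C client) throws Exception;
  }

  // Stand-in for MasterClient.execute: here it only runs the callback against
  // the supplied "client" object and returns whatever the callback produced.
  static <T, C> T execute(C client, ClientExecReturn<T, C> op) throws Exception {
    return op.execute(client);
  }

  public static void main(String[] args) throws Exception {
    String upper = execute("tserver1:9997", c -> c.toUpperCase(Locale.ROOT));
    System.out.println(upper); // prints TSERVER1:9997
  }
}
```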
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java b/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
new file mode 100644
index 0000000..4559195
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
@@ -0,0 +1,243 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.replication;
+
+import java.security.PrivilegedExceptionAction;
+import java.util.Map.Entry;
+import java.util.Set;
+
+import org.apache.accumulo.cluster.ClusterUser;
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.BatchWriterConfig;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.security.tokens.KerberosToken;
+import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.harness.AccumuloITBase;
+import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
+import org.apache.accumulo.harness.MiniClusterHarness;
+import org.apache.accumulo.harness.TestingKdc;
+import org.apache.accumulo.master.replication.SequentialWorkAssigner;
+import org.apache.accumulo.minicluster.ServerType;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.minicluster.impl.ProcessReference;
+import org.apache.accumulo.server.replication.ReplicaSystemFactory;
+import org.apache.accumulo.test.functional.KerberosIT;
+import org.apache.accumulo.tserver.TabletServer;
+import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.Iterators;
+
+/**
+ * Ensure that replication occurs when authenticating with keytabs instead of passwords (and with SASL enabled)
+ */
+public class KerberosReplicationIT extends AccumuloITBase {
+  private static final Logger log = LoggerFactory.getLogger(KerberosReplicationIT.class);
+
+  private static TestingKdc kdc;
+  private static String krbEnabledForITs = null;
+  private static ClusterUser rootUser;
+
+  @BeforeClass
+  public static void startKdc() throws Exception {
+    kdc = new TestingKdc();
+    kdc.start();
+    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
+    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
+      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
+    }
+    rootUser = kdc.getRootUser();
+  }
+
+  @AfterClass
+  public static void stopKdc() throws Exception {
+    if (null != kdc) {
+      kdc.stop();
+    }
+    if (null != krbEnabledForITs) {
+      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
+    }
+  }
+
+  private MiniAccumuloClusterImpl primary, peer;
+  private final String PRIMARY_NAME = "primary", PEER_NAME = "peer";
+
+  @Override
+  protected int defaultTimeoutSeconds() {
+    return 60 * 3;
+  }
+
+  private MiniClusterConfigurationCallback getConfigCallback(final String name) {
+    return new MiniClusterConfigurationCallback() {
+      @Override
+      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
+        cfg.setNumTservers(1);
+        cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
+        cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "2M");
+        cfg.setProperty(Property.GC_CYCLE_START, "1s");
+        cfg.setProperty(Property.GC_CYCLE_DELAY, "5s");
+        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNMENT_SLEEP, "1s");
+        cfg.setProperty(Property.MASTER_REPLICATION_SCAN_INTERVAL, "1s");
+        cfg.setProperty(Property.REPLICATION_NAME, name);
+        cfg.setProperty(Property.REPLICATION_MAX_UNIT_SIZE, "8M");
+        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNER, SequentialWorkAssigner.class.getName());
+        cfg.setProperty(Property.TSERV_TOTAL_MUTATION_QUEUE_MAX, "1M");
+        coreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
+        coreSite.set("fs.defaultFS", "file:///");
+      }
+    };
+  }
+
+  @Before
+  public void setup() throws Exception {
+    MiniClusterHarness harness = new MiniClusterHarness();
+
+    // Create a primary and a peer instance, both with the same "root" user
+    primary = harness.create(getClass().getName(), testName.getMethodName(), new PasswordToken("unused"), getConfigCallback(PRIMARY_NAME), kdc);
+    primary.start();
+
+    peer = harness.create(getClass().getName(), testName.getMethodName() + "_peer", new PasswordToken("unused"), getConfigCallback(PEER_NAME), kdc);
+    peer.start();
+
+    // Enable kerberos auth
+    Configuration conf = new Configuration(false);
+    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
+    UserGroupInformation.setConfiguration(conf);
+  }
+
+  @After
+  public void teardown() throws Exception {
+    if (null != peer) {
+      peer.stop();
+    }
+    if (null != primary) {
+      primary.stop();
+    }
+    UserGroupInformation.setConfiguration(new Configuration(false));
+  }
+
+  @Test
+  public void dataReplicatedToCorrectTable() throws Exception {
+    // Login as the root user
+    final UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().toURI().toString());
+    ugi.doAs(new PrivilegedExceptionAction<Void>() {
+      @Override
+      public Void run() throws Exception {
+        log.info("testing {}", ugi);
+        final KerberosToken token = new KerberosToken();
+        final Connector primaryConn = primary.getConnector(rootUser.getPrincipal(), token);
+        final Connector peerConn = peer.getConnector(rootUser.getPrincipal(), token);
+
+        ClusterUser replicationUser = kdc.getClientPrincipal(0);
+
+        // Create user for replication to the peer
+        peerConn.securityOperations().createLocalUser(replicationUser.getPrincipal(), null);
+
+        primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_USER.getKey() + PEER_NAME, replicationUser.getPrincipal());
+        primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_KEYTAB.getKey() + PEER_NAME, replicationUser.getKeytab().getAbsolutePath());
+
+        // ...peer = AccumuloReplicaSystem,instanceName,zookeepers
+        primaryConn.instanceOperations().setProperty(
+            Property.REPLICATION_PEERS.getKey() + PEER_NAME,
+            ReplicaSystemFactory.getPeerConfigurationValue(AccumuloReplicaSystem.class,
+                AccumuloReplicaSystem.buildConfiguration(peerConn.getInstance().getInstanceName(), peerConn.getInstance().getZooKeepers())));
+
+        String primaryTable1 = "primary", peerTable1 = "peer";
+
+        // Create tables
+        primaryConn.tableOperations().create(primaryTable1);
+        String masterTableId1 = primaryConn.tableOperations().tableIdMap().get(primaryTable1);
+        Assert.assertNotNull(masterTableId1);
+
+        peerConn.tableOperations().create(peerTable1);
+        String peerTableId1 = peerConn.tableOperations().tableIdMap().get(peerTable1);
+        Assert.assertNotNull(peerTableId1);
+
+        // Grant write permission
+        peerConn.securityOperations().grantTablePermission(replicationUser.getPrincipal(), peerTable1, TablePermission.WRITE);
+
+        // Replicate this table to the peerClusterName in a table with the peerTableId table id
+        primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION.getKey(), "true");
+        primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION_TARGET.getKey() + PEER_NAME, peerTableId1);
+
+        // Write some data to table1
+        BatchWriter bw = primaryConn.createBatchWriter(primaryTable1, new BatchWriterConfig());
+        long masterTable1Records = 0L;
+        for (int rows = 0; rows < 2500; rows++) {
+          Mutation m = new Mutation(primaryTable1 + rows);
+          for (int cols = 0; cols < 100; cols++) {
+            String value = Integer.toString(cols);
+            m.put(value, "", value);
+            masterTable1Records++;
+          }
+          bw.addMutation(m);
+        }
+
+        bw.close();
+
+        log.info("Wrote all data to primary cluster");
+
+        Set<String> filesFor1 = primaryConn.replicationOperations().referencedFiles(primaryTable1);
+
+        // Restart the tserver to force a close on the WAL
+        for (ProcessReference proc : primary.getProcesses().get(ServerType.TABLET_SERVER)) {
+          primary.killProcess(ServerType.TABLET_SERVER, proc);
+        }
+        primary.exec(TabletServer.class);
+
+        log.info("Restarted the tserver");
+
+        // Read the data -- the tserver is back up and running and tablets are assigned
+        Iterators.size(primaryConn.createScanner(primaryTable1, Authorizations.EMPTY).iterator());
+
+        // Wait for both tables to be replicated
+        log.info("Waiting for {} for {}", filesFor1, primaryTable1);
+        primaryConn.replicationOperations().drain(primaryTable1, filesFor1);
+
+        long countTable = 0L;
+        for (Entry<Key,Value> entry : peerConn.createScanner(peerTable1, Authorizations.EMPTY)) {
+          countTable++;
+          Assert.assertTrue("Found unexpected key-value " + entry.getKey().toStringNoTruncate() + " " + entry.getValue(), entry.getKey().getRow().toString()
+              .startsWith(primaryTable1));
+        }
+
+        log.info("Found {} records in {}", countTable, peerTable1);
+        Assert.assertEquals(masterTable1Records, countTable);
+
+        return null;
+      }
+    });
+  }
+}
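The write loop in `dataReplicatedToCorrectTable` above adds 2500 mutations of 100 column updates each, so the drain-and-verify step expects 2500 * 100 = 250000 key-value pairs on the peer. A self-contained sketch of that bookkeeping, where an in-memory list is a hypothetical stand-in for the `BatchWriter`:

```java
import java.util.ArrayList;
import java.util.List;

public class WriteLoopSketch {
  // Mirrors the IT's write loop: `rows` mutations, each with `cols` column
  // updates. Every add() models one key-value pair that should eventually be
  // replicated to the peer table; the return value is the expected count.
  static long writeRecords(List<String> sink, String tablePrefix, int rows, int cols) {
    long records = 0L;
    for (int r = 0; r < rows; r++) {
      for (int c = 0; c < cols; c++) {
        sink.add(tablePrefix + r + ":" + c);
        records++;
      }
    }
    return records;
  }

  public static void main(String[] args) {
    List<String> sink = new ArrayList<>();
    long expected = writeRecords(sink, "primary", 2500, 100);
    System.out.println(expected); // prints 250000
  }
}
```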
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/MultiInstanceReplicationIT.java b/test/src/main/java/org/apache/accumulo/test/replication/MultiInstanceReplicationIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/replication/MultiInstanceReplicationIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/MultiInstanceReplicationIT.java
index 35bc0fe..33e0a55 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/MultiInstanceReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/MultiInstanceReplicationIT.java
@@ -45,7 +45,6 @@
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.master.replication.SequentialWorkAssigner;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
@@ -54,7 +53,7 @@
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
 import org.apache.hadoop.conf.Configuration;
@@ -67,11 +66,12 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
 /**
  * Replication tests which start at least two MAC instances and replicate data between them
  */
-public class MultiInstanceReplicationIT extends ConfigurableMacIT {
+public class MultiInstanceReplicationIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(MultiInstanceReplicationIT.class);
 
   private ExecutorService executor;
@@ -116,7 +116,7 @@
     // Set the same SSL information from the primary when present
     Map<String,String> primarySiteConfig = primaryCfg.getSiteConfig();
     if ("true".equals(primarySiteConfig.get(Property.INSTANCE_RPC_SSL_ENABLED.getKey()))) {
-      Map<String,String> peerSiteConfig = new HashMap<String,String>();
+      Map<String,String> peerSiteConfig = new HashMap<>();
       peerSiteConfig.put(Property.INSTANCE_RPC_SSL_ENABLED.getKey(), "true");
       String keystorePath = primarySiteConfig.get(Property.RPC_SSL_KEYSTORE_PATH.getKey());
       Assert.assertNotNull("Keystore Path was null", keystorePath);
@@ -148,7 +148,7 @@
     }
   }
 
-  @Test
+  @Test(timeout = 10 * 60 * 1000)
   public void dataWasReplicatedToThePeer() throws Exception {
     MiniAccumuloConfigImpl peerCfg = new MiniAccumuloConfigImpl(createTestDir(this.getClass().getName() + "_" + this.testName.getMethodName() + "_peer"),
         ROOT_PASSWORD);
@@ -670,7 +670,7 @@
       // Wait until we fully replicated something
       boolean fullyReplicated = false;
       for (int i = 0; i < 10 && !fullyReplicated; i++) {
-        UtilWaitThread.sleep(2000);
+        sleepUninterruptibly(2, TimeUnit.SECONDS);
 
         Scanner s = ReplicationTable.getScanner(connMaster);
         WorkSection.limit(s);
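The change above swaps `UtilWaitThread.sleep(2000)` for Guava's `sleepUninterruptibly(2, TimeUnit.SECONDS)`, which keeps sleeping through interrupts and restores the thread's interrupt flag on exit. A self-contained sketch of those semantics (mirroring Guava's published approach, not Accumulo's own code):

```java
import java.util.concurrent.TimeUnit;

public class SleepSketch {
  // Sleep for the full duration even if interrupted; if an interrupt arrived,
  // re-set the thread's interrupt flag before returning so callers still see it.
  static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (true) {
        try {
          // TimeUnit.sleep is a no-op for non-positive timeouts.
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          return;
        } catch (InterruptedException e) {
          interrupted = true;
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        Thread.currentThread().interrupt();
      }
    }
  }

  public static void main(String[] args) {
    Thread.currentThread().interrupt();
    sleepUninterruptibly(20, TimeUnit.MILLISECONDS); // still sleeps the full 20ms
    System.out.println(Thread.interrupted()); // prints true: the flag was restored
  }
}
```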
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/MultiTserverReplicationIT.java b/test/src/main/java/org/apache/accumulo/test/replication/MultiTserverReplicationIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/test/replication/MultiTserverReplicationIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/MultiTserverReplicationIT.java
index 6b24e99..72cb569 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/MultiTserverReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/MultiTserverReplicationIT.java
@@ -30,7 +30,7 @@
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooReader;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Assert;
 import org.junit.Test;
@@ -43,7 +43,7 @@
 /**
  * Replication tests which run against an instance with multiple tablet servers
  */
-public class MultiTserverReplicationIT extends ConfigurableMacIT {
+public class MultiTserverReplicationIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(MultiTserverReplicationIT.class);
 
   @Override
diff --git a/server/master/src/test/java/org/apache/accumulo/master/replication/RemoveCompleteReplicationRecordsTest.java b/test/src/main/java/org/apache/accumulo/test/replication/RemoveCompleteReplicationRecordsIT.java
similarity index 83%
rename from server/master/src/test/java/org/apache/accumulo/master/replication/RemoveCompleteReplicationRecordsTest.java
rename to test/src/main/java/org/apache/accumulo/test/replication/RemoveCompleteReplicationRecordsIT.java
index 952fb2c..237a8a0 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/replication/RemoveCompleteReplicationRecordsTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/RemoveCompleteReplicationRecordsIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.master.replication;
+package org.apache.accumulo.test.replication;
 
 import java.util.Collections;
 import java.util.HashSet;
@@ -26,8 +26,6 @@
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
@@ -39,32 +37,43 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.replication.ReplicationTarget;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.master.replication.RemoveCompleteReplicationRecords;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.hadoop.io.Text;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Before;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TestName;
 
 import com.google.common.collect.Iterables;
 
-public class RemoveCompleteReplicationRecordsTest {
+public class RemoveCompleteReplicationRecordsIT extends ConfigurableMacBase {
 
-  private RemoveCompleteReplicationRecords rcrr;
-  private MockInstance inst;
+  private MockRemoveCompleteReplicationRecords rcrr;
   private Connector conn;
 
-  @Rule
-  public TestName test = new TestName();
+  private static class MockRemoveCompleteReplicationRecords extends RemoveCompleteReplicationRecords {
+
+    public MockRemoveCompleteReplicationRecords(Connector conn) {
+      super(conn);
+    }
+
+    @Override
+    public long removeCompleteRecords(Connector conn, BatchScanner bs, BatchWriter bw) {
+      return super.removeCompleteRecords(conn, bs, bw);
+    }
+
+  }
 
   @Before
   public void initialize() throws Exception {
-    inst = new MockInstance(test.getMethodName());
-    conn = inst.getConnector("root", new PasswordToken(""));
-    rcrr = new RemoveCompleteReplicationRecords(conn);
+    conn = getConnector();
+    rcrr = new MockRemoveCompleteReplicationRecords(conn);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    ReplicationTable.setOnline(conn);
   }
 
   @Test
@@ -74,7 +83,7 @@
     for (int i = 0; i < numRecords; i++) {
       String file = "/accumulo/wal/tserver+port/" + UUID.randomUUID();
       Mutation m = new Mutation(file);
-      StatusSection.add(m, new Text(Integer.toString(i)), StatusUtil.openWithUnknownLengthValue());
+      StatusSection.add(m, Integer.toString(i), StatusUtil.openWithUnknownLengthValue());
       bw.addMutation(m);
     }
 
@@ -107,7 +116,7 @@
     for (int i = 0; i < numRecords; i++) {
       String file = "/accumulo/wal/tserver+port/" + UUID.randomUUID();
       Mutation m = new Mutation(file);
-      StatusSection.add(m, new Text(Integer.toString(i)), ProtobufUtil.toValue(builder.setBegin(1000 * (i + 1)).build()));
+      StatusSection.add(m, Integer.toString(i), ProtobufUtil.toValue(builder.setBegin(1000 * (i + 1)).build()));
       bw.addMutation(m);
     }
 
@@ -144,21 +153,21 @@
     for (int i = 0; i < numRecords; i++) {
       String file = "/accumulo/wal/tserver+port/" + UUID.randomUUID();
       Mutation m = new Mutation(file);
-      StatusSection.add(m, new Text(Integer.toString(i)), ProtobufUtil.toValue(builder.setBegin(1000 * (i + 1)).build()));
+      StatusSection.add(m, Integer.toString(i), ProtobufUtil.toValue(builder.setBegin(1000 * (i + 1)).build()));
       replBw.addMutation(m);
     }
 
     // Add two records that we can delete
     String fileToRemove = "/accumulo/wal/tserver+port/" + UUID.randomUUID();
     Mutation m = new Mutation(fileToRemove);
-    StatusSection.add(m, new Text("5"), ProtobufUtil.toValue(builder.setBegin(10000).setEnd(10000).setClosed(false).build()));
+    StatusSection.add(m, "5", ProtobufUtil.toValue(builder.setBegin(10000).setEnd(10000).setClosed(false).build()));
     replBw.addMutation(m);
 
     numRecords++;
 
     fileToRemove = "/accumulo/wal/tserver+port/" + UUID.randomUUID();
     m = new Mutation(fileToRemove);
-    StatusSection.add(m, new Text("6"), ProtobufUtil.toValue(builder.setBegin(10000).setEnd(10000).setClosed(false).build()));
+    StatusSection.add(m, "6", ProtobufUtil.toValue(builder.setBegin(10000).setEnd(10000).setClosed(false).build()));
     replBw.addMutation(m);
 
     numRecords++;
@@ -199,10 +208,10 @@
       String file = "/accumulo/wal/tserver+port/" + UUID.randomUUID();
       Mutation m = new Mutation(file);
       Value v = ProtobufUtil.toValue(builder.setBegin(1000 * (i + 1)).build());
-      StatusSection.add(m, new Text(Integer.toString(i)), v);
+      StatusSection.add(m, Integer.toString(i), v);
       replBw.addMutation(m);
       m = OrderSection.createMutation(file, time);
-      OrderSection.add(m, new Text(Integer.toString(i)), v);
+      OrderSection.add(m, Integer.toString(i), v);
       replBw.addMutation(m);
     }
 
@@ -217,12 +226,12 @@
     Mutation m = new Mutation(fileToRemove);
     ReplicationTarget target = new ReplicationTarget("peer1", "5", "5");
     Value value = ProtobufUtil.toValue(builder.setBegin(10000).setEnd(10000).setClosed(true).setCreatedTime(time).build());
-    StatusSection.add(m, new Text("5"), value);
+    StatusSection.add(m, "5", value);
     WorkSection.add(m, target.toText(), value);
     replBw.addMutation(m);
 
     m = OrderSection.createMutation(fileToRemove, time);
-    OrderSection.add(m, new Text("5"), value);
+    OrderSection.add(m, "5", value);
     replBw.addMutation(m);
     time++;
 
@@ -233,12 +242,12 @@
     m = new Mutation(fileToRemove);
     value = ProtobufUtil.toValue(builder.setBegin(10000).setEnd(10000).setClosed(true).setCreatedTime(time).build());
     target = new ReplicationTarget("peer1", "6", "6");
-    StatusSection.add(m, new Text("6"), value);
+    StatusSection.add(m, "6", value);
     WorkSection.add(m, target.toText(), value);
     replBw.addMutation(m);
 
     m = OrderSection.createMutation(fileToRemove, time);
-    OrderSection.add(m, new Text("6"), value);
+    OrderSection.add(m, "6", value);
     replBw.addMutation(m);
     time++;
 
@@ -287,7 +296,7 @@
     for (int i = 0; i < numRecords; i++) {
       String file = "/accumulo/wal/tserver+port/" + UUID.randomUUID();
       Mutation m = new Mutation(file);
-      StatusSection.add(m, new Text(Integer.toString(i)), ProtobufUtil.toValue(builder.setBegin(1000 * (i + 1)).build()));
+      StatusSection.add(m, Integer.toString(i), ProtobufUtil.toValue(builder.setBegin(1000 * (i + 1)).build()));
       replBw.addMutation(m);
     }
 
@@ -296,7 +305,7 @@
     Mutation m = new Mutation(fileToRemove);
     ReplicationTarget target = new ReplicationTarget("peer1", "5", "5");
     Value value = ProtobufUtil.toValue(builder.setBegin(10000).setEnd(10000).setClosed(true).build());
-    StatusSection.add(m, new Text("5"), value);
+    StatusSection.add(m, "5", value);
     WorkSection.add(m, target.toText(), value);
     target = new ReplicationTarget("peer2", "5", "5");
     WorkSection.add(m, target.toText(), value);
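The `MockRemoveCompleteReplicationRecords` subclass introduced above exists only to widen the visibility of `removeCompleteRecords` so the IT can call it directly. A stdlib sketch of that visibility-widening pattern, with hypothetical names standing in for the Accumulo classes:

```java
public class VisibilitySketch {
  static class Worker {
    // Protected hook: normally callable only from subclasses or the same package.
    protected long removeRecords(int count) {
      return Math.max(0, count - 1); // pretend one record is always retained
    }
  }

  // Test-only subclass that re-declares the method as public, so callers
  // outside the hierarchy (e.g. an integration test) can invoke it directly.
  static class ExposedWorker extends Worker {
    @Override
    public long removeRecords(int count) {
      return super.removeRecords(count);
    }
  }

  public static void main(String[] args) {
    System.out.println(new ExposedWorker().removeRecords(5)); // prints 4
  }
}
```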
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/ReplicationIT.java b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationIT.java
similarity index 82%
rename from test/src/test/java/org/apache/accumulo/test/replication/ReplicationIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/ReplicationIT.java
index c312409..11f0634 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/ReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationIT.java
@@ -16,11 +16,13 @@
  */
 package org.apache.accumulo.test.replication;
 
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
+import static java.nio.charset.StandardCharsets.UTF_8;
+
 import java.net.URI;
 import java.net.URISyntaxException;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.EnumSet;
 import java.util.HashSet;
 import java.util.Iterator;
@@ -29,6 +31,8 @@
 import java.util.Map.Entry;
 import java.util.NoSuchElementException;
 import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.accumulo.core.Constants;
@@ -37,6 +41,7 @@
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.IteratorSetting;
 import org.apache.accumulo.core.client.IteratorSetting.Column;
 import org.apache.accumulo.core.client.Scanner;
@@ -49,6 +54,7 @@
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.data.impl.KeyExtent;
 import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 import org.apache.accumulo.core.iterators.conf.ColumnSet;
 import org.apache.accumulo.core.metadata.MetadataTable;
@@ -60,11 +66,12 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
 import org.apache.accumulo.core.replication.ReplicationTable;
+import org.apache.accumulo.core.replication.ReplicationTableOfflineException;
 import org.apache.accumulo.core.replication.ReplicationTarget;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
-import org.apache.accumulo.core.util.UtilWaitThread;
+import org.apache.accumulo.core.util.Pair;
 import org.apache.accumulo.core.zookeeper.ZooUtil;
 import org.apache.accumulo.fate.zookeeper.ZooCache;
 import org.apache.accumulo.fate.zookeeper.ZooCacheFactory;
@@ -72,15 +79,17 @@
 import org.apache.accumulo.gc.SimpleGarbageCollector;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.minicluster.impl.ProcessReference;
+import org.apache.accumulo.server.log.WalStateManager;
+import org.apache.accumulo.server.log.WalStateManager.WalState;
+import org.apache.accumulo.server.master.state.TServerInstance;
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
 import org.apache.accumulo.server.replication.StatusCombiner;
 import org.apache.accumulo.server.replication.StatusFormatter;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.apache.accumulo.server.util.ReplicationTableUtil;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
-import org.apache.accumulo.tserver.TabletServer;
+import org.apache.accumulo.server.zookeeper.ZooReaderWriter;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -104,7 +113,7 @@
  * Replication tests which verify expected functionality using a single MAC instance. A MockReplicaSystem is used to "fake" the peer instance that we're
  * replicating to. This lets us test replication in a functional way without having to worry about two real systems.
  */
-public class ReplicationIT extends ConfigurableMacIT {
+public class ReplicationIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(ReplicationIT.class);
   private static final long MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS = 5000l;
 
@@ -116,7 +125,7 @@
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
     // Run the master replication loop run frequently
-    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "10s");
+    cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
     cfg.setProperty(Property.MASTER_REPLICATION_SCAN_INTERVAL, "1s");
     cfg.setProperty(Property.REPLICATION_WORK_ASSIGNMENT_SLEEP, "1s");
     cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "1M");
@@ -130,28 +139,34 @@
     hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
   }
 
-  private Multimap<String,String> getLogs(Connector conn) throws TableNotFoundException {
-    Multimap<String,String> logs = HashMultimap.create();
+  private Multimap<String,String> getLogs(Connector conn) throws Exception {
+    // Map of server to tableId
+    Multimap<TServerInstance,String> serverToTableID = HashMultimap.create();
     Scanner scanner = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-    scanner.fetchColumnFamily(LogColumnFamily.NAME);
-    scanner.setRange(new Range());
+    scanner.setRange(MetadataSchema.TabletsSection.getRange());
+    scanner.fetchColumnFamily(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME);
     for (Entry<Key,Value> entry : scanner) {
-      if (Thread.interrupted()) {
-        Thread.currentThread().interrupt();
-        return logs;
-      }
-
-      LogEntry logEntry = LogEntry.fromKeyValue(entry.getKey(), entry.getValue());
-
-      for (String log : logEntry.logSet) {
-        // Need to normalize the log file from LogEntry
-        logs.put(new Path(log).toString(), logEntry.extent.getTableId().toString());
+      TServerInstance key = new TServerInstance(entry.getValue(), entry.getKey().getColumnQualifier());
+      byte[] tableId = KeyExtent.tableOfMetadataRow(entry.getKey().getRow());
+      serverToTableID.put(key, new String(tableId, UTF_8));
+    }
+    // Map of logs to tableId
+    Multimap<String,String> logs = HashMultimap.create();
+    Instance i = conn.getInstance();
+    ZooReaderWriter zk = new ZooReaderWriter(i.getZooKeepers(), i.getZooKeepersSessionTimeOut(), "");
+    WalStateManager wals = new WalStateManager(conn.getInstance(), zk);
+    for (Entry<TServerInstance,List<UUID>> entry : wals.getAllMarkers().entrySet()) {
+      for (UUID id : entry.getValue()) {
+        Pair<WalState,Path> state = wals.state(entry.getKey(), id);
+        for (String tableId : serverToTableID.get(entry.getKey())) {
+          logs.put(state.getSecond().toString(), tableId);
+        }
       }
     }
     return logs;
   }
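The rewritten `getLogs` above performs a two-step join: servers to table IDs (read from the metadata table), then per-server WAL markers (read from ZooKeeper via `WalStateManager`) to those table IDs. A minimal plain-Java sketch of that join, using `HashMap`/`HashSet` as a stand-in for Guava's `HashMultimap` (the class and method names here are illustrative, not Accumulo API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WalTableJoin {
  // Join server -> tableIds with server -> WALs into WAL -> tableIds.
  public static Map<String, Set<String>> join(Map<String, Set<String>> serverToTables,
                                              Map<String, Set<String>> serverToWals) {
    Map<String, Set<String>> walToTables = new HashMap<>();
    for (Map.Entry<String, Set<String>> e : serverToWals.entrySet()) {
      Set<String> tables = serverToTables.getOrDefault(e.getKey(), Collections.emptySet());
      for (String wal : e.getValue()) {
        // Every table hosted on the server is associated with each of its WALs.
        walToTables.computeIfAbsent(wal, k -> new HashSet<>()).addAll(tables);
      }
    }
    return walToTables;
  }
}
```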
 
-  private Multimap<String,String> getAllLogs(Connector conn) throws TableNotFoundException {
+  private Multimap<String,String> getAllLogs(Connector conn) throws Exception {
     Multimap<String,String> logs = getLogs(conn);
     try {
       Scanner scanner = conn.createScanner(ReplicationTable.NAME, Authorizations.EMPTY);
@@ -165,8 +180,7 @@
 
         StatusSection.getFile(entry.getKey(), buff);
         String file = buff.toString();
-        StatusSection.getTableId(entry.getKey(), buff);
-        String tableId = buff.toString();
+        String tableId = StatusSection.getTableId(entry.getKey());
 
         logs.put(file, tableId);
       }
@@ -279,7 +293,7 @@
 
     // After writing data, we'll get a replication table online
     while (!ReplicationTable.isOnline(conn)) {
-      UtilWaitThread.sleep(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS);
+      sleepUninterruptibly(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS, TimeUnit.MILLISECONDS);
     }
     Assert.assertTrue("Replication table did not exist", ReplicationTable.isOnline(conn));
 
@@ -315,14 +329,13 @@
     }
 
     Set<String> wals = new HashSet<>();
-    Scanner s;
     attempts = 5;
+    Instance i = conn.getInstance();
+    ZooReaderWriter zk = new ZooReaderWriter(i.getZooKeepers(), i.getZooKeepersSessionTimeOut(), "");
     while (wals.isEmpty() && attempts > 0) {
-      s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
-      s.fetchColumnFamily(MetadataSchema.TabletsSection.LogColumnFamily.NAME);
-      for (Entry<Key,Value> entry : s) {
-        LogEntry logEntry = LogEntry.fromKeyValue(entry.getKey(), entry.getValue());
-        wals.add(new Path(logEntry.filename).toString());
+      WalStateManager markers = new WalStateManager(i, zk);
+      for (Entry<Path,WalState> entry : markers.getAllState().entrySet()) {
+        wals.add(entry.getKey().toString());
       }
       attempts--;
     }
@@ -331,8 +344,10 @@
     // We should find an entry in tablet and in the repl row
     Assert.assertEquals("Rows found: " + replRows, 1, replRows.size());
 
-    // This should be the same set of WALs that we also are using
-    Assert.assertEquals(replRows, wals);
+    // There should be only one extra WAL that replication doesn't know about
+    replRows.removeAll(wals);
+    Assert.assertEquals(2, wals.size());
+    Assert.assertEquals(0, replRows.size());
   }
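Throughout this diff, `sleepUninterruptibly` (from Guava's `Uninterruptibles`) replaces `UtilWaitThread.sleep`. A minimal sketch of the pattern it implements: absorb interrupts while sleeping for the full duration, then restore the thread's interrupt flag before returning.

```java
import java.util.concurrent.TimeUnit;

public class SleepUninterruptibly {
  // Sketch of the uninterruptible-sleep pattern (not the Guava source itself).
  public static void sleepUninterruptibly(long duration, TimeUnit unit) {
    boolean interrupted = false;
    try {
      long remainingNanos = unit.toNanos(duration);
      long end = System.nanoTime() + remainingNanos;
      while (remainingNanos > 0) {
        try {
          TimeUnit.NANOSECONDS.sleep(remainingNanos);
          remainingNanos = 0;
        } catch (InterruptedException e) {
          // Remember the interrupt and keep sleeping for the remainder.
          interrupted = true;
          remainingNanos = end - System.nanoTime();
        }
      }
    } finally {
      if (interrupted) {
        // Restore the interrupt status for callers further up the stack.
        Thread.currentThread().interrupt();
      }
    }
  }
}
```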
 
   @Test
@@ -353,18 +368,7 @@
     Assert.assertFalse(ReplicationTable.isOnline(conn));
 
     for (String table : tables) {
-      BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
-
-      for (int j = 0; j < 5; j++) {
-        Mutation m = new Mutation(Integer.toString(j));
-        for (int k = 0; k < 5; k++) {
-          String value = Integer.toString(k);
-          m.put(value, "", value);
-        }
-        bw.addMutation(m);
-      }
-
-      bw.close();
+      writeSomeData(conn, table, 5, 5);
     }
 
     // After writing data, still no replication table
@@ -396,6 +400,9 @@
     // Create two tables
     conn.tableOperations().create(table1);
     conn.tableOperations().create(table2);
+    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.READ);
+    // wait for permission to propagate
+    Thread.sleep(5000);
 
     // Enable replication on table1
     conn.tableOperations().setProperty(table1, Property.TABLE_REPLICATION.getKey(), "true");
@@ -404,47 +411,27 @@
     Assert.assertFalse(ReplicationTable.isOnline(conn));
 
     // Write some data to table1
-    BatchWriter bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-
-    for (int rows = 0; rows < 50; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 50; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table1, 50, 50);
 
     // After the commit for these mutations finishes, we'll get a replication entry in accumulo.metadata for table1
    // Don't want to compact table1, as that would ultimately cause the entry in accumulo.metadata to be removed before we can verify it's there
 
     // After writing data, we'll get a replication table online
     while (!ReplicationTable.isOnline(conn)) {
-      UtilWaitThread.sleep(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS);
+      sleepUninterruptibly(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS, TimeUnit.MILLISECONDS);
     }
     Assert.assertTrue(ReplicationTable.isOnline(conn));
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.READ);
 
     // Verify that we found a single replication record that's for table1
     Scanner s = ReplicationTable.getScanner(conn);
     StatusSection.limit(s);
-    Iterator<Entry<Key,Value>> iter = s.iterator();
-    int attempts = 5;
-    while (attempts > 0) {
-      if (!iter.hasNext()) {
-        s.close();
-        Thread.sleep(1000);
-        s = ReplicationTable.getScanner(conn);
-        iter = s.iterator();
-        attempts--;
-      } else {
+    for (int i = 0; i < 5; i++) {
+      if (Iterators.size(s.iterator()) == 1) {
         break;
       }
+      Thread.sleep(1000);
     }
-    Assert.assertTrue(iter.hasNext());
-    Entry<Key,Value> entry = iter.next();
+    Entry<Key,Value> entry = Iterators.getOnlyElement(s.iterator());
    // We should find at least one status record for this table; we might find a second if another log was started while ingesting the data
     Assert.assertEquals("Expected to find replication entry for " + table1, conn.tableOperations().tableIdMap().get(table1), entry.getKey()
         .getColumnQualifier().toString());
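Several of the rewritten checks in this diff share the shape of the loop above: re-evaluate a condition a bounded number of times, sleeping between tries, and fall through to an assertion. A generic sketch of that poll-until helper (illustrative only, not part of the patch):

```java
import java.util.function.Supplier;

public class Poll {
  // Evaluate the condition up to `attempts` times, sleeping between tries.
  // Returns true as soon as the condition holds, false if it never does.
  public static boolean until(Supplier<Boolean> condition, int attempts, long sleepMillis) {
    for (int i = 0; i < attempts; i++) {
      if (condition.get()) {
        return true;
      }
      try {
        Thread.sleep(sleepMillis);
      } catch (InterruptedException e) {
        // Give up early but preserve the interrupt status.
        Thread.currentThread().interrupt();
        return condition.get();
      }
    }
    return condition.get();
  }
}
```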
@@ -454,38 +441,20 @@
     conn.tableOperations().setProperty(table2, Property.TABLE_REPLICATION.getKey(), "true");
 
     // Write some data to table2
-    bw = conn.createBatchWriter(table2, new BatchWriterConfig());
-
-    for (int rows = 0; rows < 50; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 50; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table2, 50, 50);
 
     // After the commit on these mutations, we'll get a replication entry in accumulo.metadata for table2
    // Don't want to compact table2, as that would ultimately cause the entry in accumulo.metadata to be removed before we can verify it's there
 
-    // After writing data, we'll get a replication table online
-    Assert.assertTrue(ReplicationTable.isOnline(conn));
-    conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.READ);
-
     Set<String> tableIds = Sets.newHashSet(conn.tableOperations().tableIdMap().get(table1), conn.tableOperations().tableIdMap().get(table2));
     Set<String> tableIdsForMetadata = Sets.newHashSet(tableIds);
 
-    // Wait to make sure the table permission propagate
-    Thread.sleep(5000);
-
+    List<Entry<Key,Value>> records = new ArrayList<>();
     s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
     s.setRange(MetadataSchema.ReplicationSection.getRange());
-
-    List<Entry<Key,Value>> records = new ArrayList<>();
     for (Entry<Key,Value> metadata : s) {
       records.add(metadata);
+      log.debug("Meta: {} => {}", metadata.getKey().toStringNoTruncate(), metadata.getValue().toString());
     }
 
     Assert.assertEquals("Expected to find 2 records, but actually found " + records, 2, records.size());
@@ -503,7 +472,7 @@
     // Verify that we found two replication records: one for table1 and one for table2
     s = ReplicationTable.getScanner(conn);
     StatusSection.limit(s);
-    iter = s.iterator();
+    Iterator<Entry<Key,Value>> iter = s.iterator();
     Assert.assertTrue("Found no records in replication table", iter.hasNext());
     entry = iter.next();
     Assert.assertTrue("Expected to find element in replication table", tableIds.remove(entry.getKey().getColumnQualifier().toString()));
@@ -513,6 +482,19 @@
     Assert.assertFalse("Expected to only find two elements in replication table", iter.hasNext());
   }
 
+  private void writeSomeData(Connector conn, String table, int rows, int cols) throws Exception {
+    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
+    for (int row = 0; row < rows; row++) {
+      Mutation m = new Mutation(Integer.toString(row));
+      for (int col = 0; col < cols; col++) {
+        String value = Integer.toString(col);
+        m.put(value, "", value);
+      }
+      bw.addMutation(m);
+    }
+    bw.close();
+  }
+
   @Test
   public void replicationEntriesPrecludeWalDeletion() throws Exception {
     final Connector conn = getConnector();
@@ -528,8 +510,8 @@
         while (keepRunning.get()) {
           try {
             logs.putAll(getAllLogs(conn));
-          } catch (TableNotFoundException e) {
-            log.error("Metadata table doesn't exist");
+          } catch (Exception e) {
+            log.error("Error getting logs", e);
           }
         }
       }
@@ -544,53 +526,21 @@
     Thread.sleep(2000);
 
     // Write some data to table1
-    BatchWriter bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-    for (int rows = 0; rows < 200; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 500; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table1, 200, 500);
 
     conn.tableOperations().create(table2);
     conn.tableOperations().setProperty(table2, Property.TABLE_REPLICATION.getKey(), "true");
     conn.tableOperations().setProperty(table2, Property.TABLE_REPLICATION_TARGET.getKey() + "cluster1", "1");
     Thread.sleep(2000);
 
-    // Write some data to table2
-    bw = conn.createBatchWriter(table2, new BatchWriterConfig());
-    for (int rows = 0; rows < 200; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 500; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table2, 200, 500);
 
     conn.tableOperations().create(table3);
     conn.tableOperations().setProperty(table3, Property.TABLE_REPLICATION.getKey(), "true");
     conn.tableOperations().setProperty(table3, Property.TABLE_REPLICATION_TARGET.getKey() + "cluster1", "1");
     Thread.sleep(2000);
 
-    // Write some data to table3
-    bw = conn.createBatchWriter(table3, new BatchWriterConfig());
-    for (int rows = 0; rows < 200; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 500; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table3, 200, 500);
 
     // Force a write to metadata for the data written
     for (String table : Arrays.asList(table1, table2, table3)) {
@@ -604,12 +554,7 @@
     // Sleep a sufficient amount of time to ensure that we get the straggling WALs that might have been created at the end
     Thread.sleep(5000);
 
-    Scanner s = ReplicationTable.getScanner(conn);
-    StatusSection.limit(s);
-    Set<String> replFiles = new HashSet<>();
-    for (Entry<Key,Value> entry : s) {
-      replFiles.add(entry.getKey().getRow().toString());
-    }
+    Set<String> replFiles = getReferencesToFilesToBeReplicated(conn);
 
    // We might have a WAL that was used solely for the replication table
     // We want to remove that from our list as it should not appear in the replication table
@@ -625,16 +570,34 @@
 
     // We should have *some* reference to each log that was seen in the metadata table
     // They might not yet all be closed though (might be newfile)
-    Assert.assertEquals("Metadata log distribution: " + logs, logs.keySet(), replFiles);
+    Assert.assertTrue("Metadata log distribution: " + logs + ", replFiles: " + replFiles, logs.keySet().containsAll(replFiles));
+    Assert.assertTrue("Difference between replication entries and current logs is bigger than one", logs.keySet().size() - replFiles.size() <= 1);
 
     final Configuration conf = new Configuration();
     for (String replFile : replFiles) {
       Path p = new Path(replFile);
       FileSystem fs = p.getFileSystem(conf);
-      Assert.assertTrue("File does not exist anymore, it was likely incorrectly garbage collected: " + p, fs.exists(p));
+      if (!fs.exists(p)) {
+        // double-check: the garbage collector can be fast
+        Set<String> currentSet = getReferencesToFilesToBeReplicated(conn);
+        log.info("Current references {}", currentSet);
+        log.info("Looking for reference to {}", replFile);
+        log.info("Contains? {}", currentSet.contains(replFile));
+        Assert.assertTrue("File does not exist anymore, it was likely incorrectly garbage collected: " + p, !currentSet.contains(replFile));
+      }
     }
   }
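The relaxed assertions above accept the metadata WAL set leading the replication set by at most one entry (the WAL backing the replication table itself, which replication should not track). That "superset differing by at most one" check in isolation, as a hypothetical helper:

```java
import java.util.Set;

public class AlmostEqualSets {
  // True when `superset` contains every element of `subset` and has at most
  // one element that `subset` lacks.
  public static boolean supersetByAtMostOne(Set<String> superset, Set<String> subset) {
    return superset.containsAll(subset) && superset.size() - subset.size() <= 1;
  }
}
```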
 
+  private Set<String> getReferencesToFilesToBeReplicated(final Connector conn) throws ReplicationTableOfflineException {
+    Scanner s = ReplicationTable.getScanner(conn);
+    StatusSection.limit(s);
+    Set<String> replFiles = new HashSet<>();
+    for (Entry<Key,Value> entry : s) {
+      replFiles.add(entry.getKey().getRow().toString());
+    }
+    return replFiles;
+  }
+
   @Test
   public void combinerWorksOnMetadata() throws Exception {
     Connector conn = getConnector();
@@ -692,45 +655,17 @@
     conn.tableOperations().setProperty(table3, Property.TABLE_REPLICATION.getKey(), "true");
     conn.tableOperations().setProperty(table3, Property.TABLE_REPLICATION_TARGET.getKey() + "cluster1", "1");
 
-    // Write some data to table1
-    BatchWriter bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-    for (int rows = 0; rows < 200; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 500; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
+    writeSomeData(conn, table1, 200, 500);
+
+    writeSomeData(conn, table2, 200, 500);
+
+    writeSomeData(conn, table3, 200, 500);
+
+    // Flush everything to try to make the replication records
+    for (String table : Arrays.asList(table1, table2, table3)) {
+      conn.tableOperations().flush(table, null, null, true);
     }
 
-    bw.close();
-
-    // Write some data to table2
-    bw = conn.createBatchWriter(table2, new BatchWriterConfig());
-    for (int rows = 0; rows < 200; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 500; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
-
-    // Write some data to table3
-    bw = conn.createBatchWriter(table3, new BatchWriterConfig());
-    for (int rows = 0; rows < 200; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 500; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
-
     // Flush everything to try to make the replication records
     for (String table : Arrays.asList(table1, table2, table3)) {
       conn.tableOperations().flush(table, null, null, true);
@@ -777,10 +712,7 @@
     Set<String> wals = new HashSet<>();
     for (Entry<Key,Value> entry : s) {
       LogEntry logEntry = LogEntry.fromKeyValue(entry.getKey(), entry.getValue());
-      for (String file : logEntry.logSet) {
-        Path p = new Path(file);
-        wals.add(p.toString());
-      }
+      wals.add(new Path(logEntry.filename).toString());
     }
 
     log.warn("Found wals {}", wals);
@@ -799,7 +731,7 @@
     conn.tableOperations().flush(table, null, null, true);
 
     while (!ReplicationTable.isOnline(conn)) {
-      UtilWaitThread.sleep(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS);
+      sleepUninterruptibly(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS, TimeUnit.MILLISECONDS);
     }
 
     for (int i = 0; i < 10; i++) {
@@ -857,9 +789,7 @@
   public void singleTableWithSingleTarget() throws Exception {
     // We want to kill the GC so it doesn't come along and close Status records and mess up the comparisons
     // against expected Status messages.
-    for (ProcessReference proc : cluster.getProcesses().get(ServerType.GARBAGE_COLLECTOR)) {
-      cluster.killProcess(ServerType.GARBAGE_COLLECTOR, proc);
-    }
+    getCluster().getClusterControl().stop(ServerType.GARBAGE_COLLECTOR);
 
     Connector conn = getConnector();
     String table1 = "table1";
@@ -888,32 +818,22 @@
         if (attempts <= 0) {
           throw e;
         }
-        UtilWaitThread.sleep(2000);
+        sleepUninterruptibly(2, TimeUnit.SECONDS);
       }
     }
 
     // Write some data to table1
-    BatchWriter bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-    for (int rows = 0; rows < 2000; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 50; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table1, 2000, 50);
 
     // Make sure the replication table is online at this point
     while (!ReplicationTable.isOnline(conn)) {
-      UtilWaitThread.sleep(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS);
+      sleepUninterruptibly(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS, TimeUnit.MILLISECONDS);
     }
     Assert.assertTrue("Replication table was never created", ReplicationTable.isOnline(conn));
 
     // ACCUMULO-2743 The Observer in the tserver has to be made aware of the change to get the combiner (made by the master)
     for (int i = 0; i < 10 && !conn.tableOperations().listIterators(ReplicationTable.NAME).keySet().contains(ReplicationTable.COMBINER_NAME); i++) {
-      UtilWaitThread.sleep(2000);
+      sleepUninterruptibly(2, TimeUnit.SECONDS);
     }
 
     Assert.assertTrue("Combiner was never set on replication table",
@@ -984,17 +904,7 @@
     }
 
     // Write some more data so that we over-run the single WAL
-    bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-    for (int rows = 0; rows < 3000; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 50; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table1, 3000, 50);
 
     log.info("Issued compaction for table");
     conn.tableOperations().compact(table1, null, null, true, true);
@@ -1062,22 +972,12 @@
         if (attempts <= 0) {
           throw e;
         }
-        UtilWaitThread.sleep(500);
+        sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
       }
     }
 
     // Write some data to table1
-    BatchWriter bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-    for (int rows = 0; rows < 2000; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 50; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table1, 2000, 50);
     conn.tableOperations().flush(table1, null, null, true);
 
     String tableId = conn.tableOperations().tableIdMap().get(table1);
@@ -1085,7 +985,7 @@
 
     // Make sure the replication table exists at this point
     while (!ReplicationTable.isOnline(conn)) {
-      UtilWaitThread.sleep(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS);
+      sleepUninterruptibly(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS, TimeUnit.MILLISECONDS);
     }
     Assert.assertTrue("Replication table did not exist", ReplicationTable.isOnline(conn));
 
@@ -1127,10 +1027,7 @@
 
   @Test
   public void replicationRecordsAreClosedAfterGarbageCollection() throws Exception {
-    Collection<ProcessReference> gcProcs = cluster.getProcesses().get(ServerType.GARBAGE_COLLECTOR);
-    for (ProcessReference ref : gcProcs) {
-      cluster.killProcess(ServerType.GARBAGE_COLLECTOR, ref);
-    }
+    getCluster().getClusterControl().stop(ServerType.GARBAGE_COLLECTOR);
 
     final Connector conn = getConnector();
 
@@ -1161,7 +1058,6 @@
 
     String table1 = "table1", table2 = "table2", table3 = "table3";
 
-    BatchWriter bw;
     try {
       conn.tableOperations().create(table1);
       conn.tableOperations().setProperty(table1, Property.TABLE_REPLICATION.getKey(), "true");
@@ -1170,51 +1066,19 @@
           ReplicaSystemFactory.getPeerConfigurationValue(MockReplicaSystem.class, null));
 
       // Write some data to table1
-      bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-      for (int rows = 0; rows < 200; rows++) {
-        Mutation m = new Mutation(Integer.toString(rows));
-        for (int cols = 0; cols < 500; cols++) {
-          String value = Integer.toString(cols);
-          m.put(value, "", value);
-        }
-        bw.addMutation(m);
-      }
-
-      bw.close();
+      writeSomeData(conn, table1, 200, 500);
 
       conn.tableOperations().create(table2);
       conn.tableOperations().setProperty(table2, Property.TABLE_REPLICATION.getKey(), "true");
       conn.tableOperations().setProperty(table2, Property.TABLE_REPLICATION_TARGET.getKey() + "cluster1", "1");
 
-      // Write some data to table2
-      bw = conn.createBatchWriter(table2, new BatchWriterConfig());
-      for (int rows = 0; rows < 200; rows++) {
-        Mutation m = new Mutation(Integer.toString(rows));
-        for (int cols = 0; cols < 500; cols++) {
-          String value = Integer.toString(cols);
-          m.put(value, "", value);
-        }
-        bw.addMutation(m);
-      }
-
-      bw.close();
+      writeSomeData(conn, table2, 200, 500);
 
       conn.tableOperations().create(table3);
       conn.tableOperations().setProperty(table3, Property.TABLE_REPLICATION.getKey(), "true");
       conn.tableOperations().setProperty(table3, Property.TABLE_REPLICATION_TARGET.getKey() + "cluster1", "1");
 
-      // Write some data to table3
-      bw = conn.createBatchWriter(table3, new BatchWriterConfig());
-      for (int rows = 0; rows < 200; rows++) {
-        Mutation m = new Mutation(Integer.toString(rows));
-        for (int cols = 0; cols < 500; cols++) {
-          String value = Integer.toString(cols);
-          m.put(value, "", value);
-        }
-        bw.addMutation(m);
-      }
-
-      bw.close();
+      writeSomeData(conn, table3, 200, 500);
 
       // Flush everything to try to make the replication records
       for (String table : Arrays.asList(table1, table2, table3)) {
@@ -1228,11 +1092,8 @@
 
     // Kill the tserver(s) and restart them
     // to ensure that the WALs we previously observed all move to closed.
-    for (ProcessReference proc : cluster.getProcesses().get(ServerType.TABLET_SERVER)) {
-      cluster.killProcess(ServerType.TABLET_SERVER, proc);
-    }
-
-    cluster.exec(TabletServer.class);
+    cluster.getClusterControl().stop(ServerType.TABLET_SERVER);
+    cluster.getClusterControl().start(ServerType.TABLET_SERVER);
 
     // Make sure we can read all the tables (recovery complete)
     for (String table : Arrays.asList(table1, table2, table3)) {
@@ -1276,7 +1137,7 @@
           break;
         }
 
-        UtilWaitThread.sleep(2000);
+        sleepUninterruptibly(2, TimeUnit.SECONDS);
       }
 
       if (!allClosed) {
@@ -1311,7 +1172,7 @@
           break;
         }
 
-        UtilWaitThread.sleep(3000);
+        sleepUninterruptibly(3, TimeUnit.SECONDS);
       }
 
       if (!allClosed) {
@@ -1333,9 +1194,7 @@
   @Test
   public void replicatedStatusEntriesAreDeleted() throws Exception {
     // Just stop it now, we'll restart it after we restart the tserver
-    for (ProcessReference proc : getCluster().getProcesses().get(ServerType.GARBAGE_COLLECTOR)) {
-      getCluster().killProcess(ServerType.GARBAGE_COLLECTOR, proc);
-    }
+    getCluster().getClusterControl().stop(ServerType.GARBAGE_COLLECTOR);
 
     final Connector conn = getConnector();
     log.info("Got connector to MAC");
@@ -1363,7 +1222,7 @@
         if (attempts <= 0) {
           throw e;
         }
-        UtilWaitThread.sleep(500);
+        sleepUninterruptibly(500, TimeUnit.MILLISECONDS);
       }
     }
 
@@ -1371,36 +1230,47 @@
     Assert.assertNotNull("Could not determine table id for " + table1, tableId);
 
     // Write some data to table1
-    BatchWriter bw = conn.createBatchWriter(table1, new BatchWriterConfig());
-    for (int rows = 0; rows < 2000; rows++) {
-      Mutation m = new Mutation(Integer.toString(rows));
-      for (int cols = 0; cols < 50; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
+    writeSomeData(conn, table1, 2000, 50);
     conn.tableOperations().flush(table1, null, null, true);
 
     // Make sure the replication table exists at this point
     while (!ReplicationTable.isOnline(conn)) {
-      UtilWaitThread.sleep(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS);
+      sleepUninterruptibly(MILLIS_BETWEEN_REPLICATION_TABLE_ONLINE_CHECKS, TimeUnit.MILLISECONDS);
     }
     Assert.assertTrue("Replication table did not exist", ReplicationTable.isOnline(conn));
 
     // Grant ourselves the write permission for later
     conn.securityOperations().grantTablePermission("root", ReplicationTable.NAME, TablePermission.WRITE);
 
+    log.info("Checking for replication entries in replication");
+    // Then we need to get those records over to the replication table
+    Scanner s;
+    Set<String> entries = new HashSet<>();
+    for (int i = 0; i < 5; i++) {
+      s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
+      s.setRange(ReplicationSection.getRange());
+      entries.clear();
+      for (Entry<Key,Value> entry : s) {
+        entries.add(entry.getKey().getRow().toString());
+        log.info("{}={}", entry.getKey().toStringNoTruncate(), entry.getValue());
+      }
+      if (!entries.isEmpty()) {
+        log.info("Replication entries {}", entries);
+        break;
+      }
+      Thread.sleep(1000);
+    }
+
+    Assert.assertFalse("Did not find any replication entries in the replication table", entries.isEmpty());
+
     // Find the WorkSection record that will be created for that data we ingested
     boolean notFound = true;
-    Scanner s;
     for (int i = 0; i < 10 && notFound; i++) {
       try {
         s = ReplicationTable.getScanner(conn);
         WorkSection.limit(s);
         Entry<Key,Value> e = Iterables.getOnlyElement(s);
+        log.info("Found entry: " + e.getKey().toStringNoTruncate());
         Text expectedColqual = new ReplicationTarget("cluster1", "4", tableId).toText();
         Assert.assertEquals(expectedColqual, e.getKey().getColumnQualifier());
         notFound = false;
@@ -1451,69 +1321,61 @@
     log.info("Killing tserver");
     // Kill the tserver(s) and restart them
     // to ensure that the WALs we previously observed all move to closed.
-    for (ProcessReference proc : cluster.getProcesses().get(ServerType.TABLET_SERVER)) {
-      cluster.killProcess(ServerType.TABLET_SERVER, proc);
-    }
+    cluster.getClusterControl().stop(ServerType.TABLET_SERVER);
 
     log.info("Starting tserver");
-    cluster.exec(TabletServer.class);
+    cluster.getClusterControl().start(ServerType.TABLET_SERVER);
 
     log.info("Waiting to read tables");
+    sleepUninterruptibly(2 * 3, TimeUnit.SECONDS);
 
     // Make sure we can read all the tables (recovery complete)
     for (String table : new String[] {MetadataTable.NAME, table1}) {
       Iterators.size(conn.createScanner(table, Authorizations.EMPTY).iterator());
     }
 
-    log.info("Checking for replication entries in replication");
-    // Then we need to get those records over to the replication table
-    boolean foundResults = false;
-    for (int i = 0; i < 5; i++) {
-      s = ReplicationTable.getScanner(conn);
-      int count = 0;
-      for (Entry<Key,Value> entry : s) {
-        count++;
-        log.info("{}={}", entry.getKey().toStringNoTruncate(), entry.getValue());
-      }
-      if (count > 0) {
-        foundResults = true;
-        break;
-      }
-      Thread.sleep(1000);
+    log.info("Recovered metadata:");
+    s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
+    for (Entry<Key,Value> entry : s) {
+      log.info("{}={}", entry.getKey().toStringNoTruncate(), entry.getValue());
     }
 
-    Assert.assertTrue("Did not find any replication entries in the replication table", foundResults);
-
-    getCluster().exec(SimpleGarbageCollector.class);
+    cluster.getClusterControl().start(ServerType.GARBAGE_COLLECTOR);
 
     // Wait for a bit since the GC has to run (should be running after a one second delay)
     waitForGCLock(conn);
 
     Thread.sleep(1000);
 
+    log.info("After GC");
+    s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
+    for (Entry<Key,Value> entry : s) {
+      log.info("{}={}", entry.getKey().toStringNoTruncate(), entry.getValue());
+    }
+
     // We expect no records in the metadata table after compaction. We have to poll
     // because we have to wait for the StatusMaker's next iteration which will clean
     // up the dangling *closed* records after we create the record in the replication table.
     // We need the GC to close the file (CloseWriteAheadLogReferences) before we can remove the record
     log.info("Checking metadata table for replication entries");
-    foundResults = true;
+    Set<String> remaining = new HashSet<>();
     for (int i = 0; i < 10; i++) {
       s = conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
       s.setRange(ReplicationSection.getRange());
-      long size = 0;
+      remaining.clear();
       for (Entry<Key,Value> e : s) {
-        size++;
-        log.info("{}={}", e.getKey().toStringNoTruncate(), ProtobufUtil.toString(Status.parseFrom(e.getValue().get())));
+        remaining.add(e.getKey().getRow().toString());
       }
-      if (size == 0) {
-        foundResults = false;
+      remaining.retainAll(entries);
+      if (remaining.isEmpty()) {
         break;
       }
+      log.info("remaining {}", remaining);
       Thread.sleep(2000);
       log.info("");
     }
 
-    Assert.assertFalse("Replication status messages were not cleaned up from metadata table", foundResults);
+    Assert.assertTrue("Replication status messages were not cleaned up from metadata table", remaining.isEmpty());
 
     /**
      * After we close out and subsequently delete the metadata record, this will propagate to the replication table, which will cause those records to be
@@ -1526,10 +1388,10 @@
       recordsFound = 0;
       for (Entry<Key,Value> entry : s) {
         recordsFound++;
-        log.info(entry.getKey().toStringNoTruncate() + " " + ProtobufUtil.toString(Status.parseFrom(entry.getValue().get())));
+        log.info("{} {}", entry.getKey().toStringNoTruncate(), ProtobufUtil.toString(Status.parseFrom(entry.getValue().get())));
       }
 
-      if (0 == recordsFound) {
+      if (recordsFound <= 2) {
         break;
       } else {
         Thread.sleep(1000);
@@ -1537,6 +1399,6 @@
       }
     }
 
-    Assert.assertEquals("Found unexpected replication records in the replication table", 0, recordsFound);
+    Assert.assertTrue("Found unexpected replication records in the replication table", recordsFound <= 2);
   }
 }
diff --git a/server/master/src/test/java/org/apache/accumulo/master/ReplicationOperationsImplTest.java b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationOperationsImplIT.java
similarity index 78%
rename from server/master/src/test/java/org/apache/accumulo/master/ReplicationOperationsImplTest.java
rename to test/src/main/java/org/apache/accumulo/test/replication/ReplicationOperationsImplIT.java
index 6790858..00945a1 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/ReplicationOperationsImplTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationOperationsImplIT.java
@@ -14,9 +14,8 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.master;
+package org.apache.accumulo.test.replication;
 
-import java.util.Arrays;
 import java.util.Map.Entry;
 import java.util.Set;
 import java.util.UUID;
@@ -26,7 +25,6 @@
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.ClientConfiguration;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.TableNotFoundException;
@@ -34,7 +32,6 @@
 import org.apache.accumulo.core.client.impl.Credentials;
 import org.apache.accumulo.core.client.impl.ReplicationOperationsImpl;
 import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
-import org.apache.accumulo.core.client.mock.MockInstance;
 import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
@@ -46,40 +43,45 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.security.thrift.TCredentials;
 import org.apache.accumulo.core.tabletserver.log.LogEntry;
 import org.apache.accumulo.core.trace.thrift.TInfo;
+import org.apache.accumulo.master.Master;
+import org.apache.accumulo.master.MasterClientServiceHandler;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.io.Text;
 import org.apache.thrift.TException;
 import org.easymock.EasyMock;
 import org.junit.Assert;
 import org.junit.Before;
-import org.junit.Rule;
 import org.junit.Test;
-import org.junit.rules.TestName;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-public class ReplicationOperationsImplTest {
-  private static final Logger log = LoggerFactory.getLogger(ReplicationOperationsImplTest.class);
+public class ReplicationOperationsImplIT extends ConfigurableMacBase {
+  private static final Logger log = LoggerFactory.getLogger(ReplicationOperationsImplIT.class);
 
-  private MockInstance inst;
-
-  @Rule
-  public TestName test = new TestName();
+  private Instance inst;
+  private Connector conn;
 
   @Before
-  public void setup() {
-    inst = new MockInstance(test.getMethodName());
+  public void configureInstance() throws Exception {
+    conn = getConnector();
+    inst = conn.getInstance();
+    ReplicationTable.setOnline(conn);
+    conn.securityOperations().grantTablePermission(conn.whoami(), MetadataTable.NAME, TablePermission.WRITE);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
   }
 
   /**
    * Spoof out the Master so we can call the implementation without starting a full instance.
    */
-  private ReplicationOperationsImpl getReplicationOperations(ClientContext context) throws Exception {
+  private ReplicationOperationsImpl getReplicationOperations() throws Exception {
     Master master = EasyMock.createMock(Master.class);
-    EasyMock.expect(master.getConnector()).andReturn(inst.getConnector("root", new PasswordToken(""))).anyTimes();
+    EasyMock.expect(master.getConnector()).andReturn(conn).anyTimes();
     EasyMock.expect(master.getInstance()).andReturn(inst).anyTimes();
     EasyMock.replay(master);
 
@@ -87,13 +89,14 @@
       @Override
       protected String getTableId(Instance inst, String tableName) throws ThriftTableOperationException {
         try {
-          return inst.getConnector("root", new PasswordToken("")).tableOperations().tableIdMap().get(tableName);
+          return conn.tableOperations().tableIdMap().get(tableName);
         } catch (Exception e) {
           throw new RuntimeException(e);
         }
       }
     };
 
+    ClientContext context = new ClientContext(inst, new Credentials("root", new PasswordToken(ROOT_PASSWORD)), getClientConfig());
     return new ReplicationOperationsImpl(context) {
       @Override
       protected boolean getMasterDrain(final TInfo tinfo, final TCredentials rpcCreds, final String tableName, final Set<String> wals)
@@ -109,9 +112,8 @@
 
   @Test
   public void waitsUntilEntriesAreReplicated() throws Exception {
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
     conn.tableOperations().create("foo");
-    Text tableId = new Text(conn.tableOperations().tableIdMap().get("foo"));
+    String tableId = conn.tableOperations().tableIdMap().get("foo");
 
     String file1 = "/accumulo/wals/tserver+port/" + UUID.randomUUID(), file2 = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
     Status stat = Status.newBuilder().setBegin(0).setEnd(10000).setInfiniteEnd(false).setClosed(false).build();
@@ -130,19 +132,18 @@
 
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.put(ReplicationSection.COLF, tableId, ProtobufUtil.toValue(stat));
+    m.put(ReplicationSection.COLF, new Text(tableId), ProtobufUtil.toValue(stat));
 
     bw.addMutation(m);
 
     m = new Mutation(ReplicationSection.getRowPrefix() + file2);
-    m.put(ReplicationSection.COLF, tableId, ProtobufUtil.toValue(stat));
+    m.put(ReplicationSection.COLF, new Text(tableId), ProtobufUtil.toValue(stat));
 
     bw.close();
 
     final AtomicBoolean done = new AtomicBoolean(false);
     final AtomicBoolean exception = new AtomicBoolean(false);
-    ClientContext context = new ClientContext(inst, new Credentials("root", new PasswordToken("")), new ClientConfiguration());
-    final ReplicationOperationsImpl roi = getReplicationOperations(context);
+    final ReplicationOperationsImpl roi = getReplicationOperations();
     Thread t = new Thread(new Runnable() {
       @Override
       public void run() {
@@ -163,14 +164,14 @@
 
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.putDelete(ReplicationSection.COLF, tableId);
+    m.putDelete(ReplicationSection.COLF, new Text(tableId));
     bw.addMutation(m);
     bw.flush();
 
     Assert.assertFalse(done.get());
 
     m = new Mutation(ReplicationSection.getRowPrefix() + file2);
-    m.putDelete(ReplicationSection.COLF, tableId);
+    m.putDelete(ReplicationSection.COLF, new Text(tableId));
     bw.addMutation(m);
     bw.flush();
     bw.close();
@@ -181,14 +182,14 @@
     // Remove the replication entries too
     bw = ReplicationTable.getBatchWriter(conn);
     m = new Mutation(file1);
-    m.putDelete(StatusSection.NAME, tableId);
+    m.putDelete(StatusSection.NAME, new Text(tableId));
     bw.addMutation(m);
     bw.flush();
 
     Assert.assertFalse(done.get());
 
     m = new Mutation(file2);
-    m.putDelete(StatusSection.NAME, tableId);
+    m.putDelete(StatusSection.NAME, new Text(tableId));
     bw.addMutation(m);
     bw.flush();
 
@@ -205,12 +206,11 @@
 
   @Test
   public void unrelatedReplicationRecordsDontBlockDrain() throws Exception {
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
     conn.tableOperations().create("foo");
     conn.tableOperations().create("bar");
 
-    Text tableId1 = new Text(conn.tableOperations().tableIdMap().get("foo"));
-    Text tableId2 = new Text(conn.tableOperations().tableIdMap().get("bar"));
+    String tableId1 = conn.tableOperations().tableIdMap().get("foo");
+    String tableId2 = conn.tableOperations().tableIdMap().get("bar");
 
     String file1 = "/accumulo/wals/tserver+port/" + UUID.randomUUID(), file2 = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
     Status stat = Status.newBuilder().setBegin(0).setEnd(10000).setInfiniteEnd(false).setClosed(false).build();
@@ -229,20 +229,19 @@
 
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.put(ReplicationSection.COLF, tableId1, ProtobufUtil.toValue(stat));
+    m.put(ReplicationSection.COLF, new Text(tableId1), ProtobufUtil.toValue(stat));
 
     bw.addMutation(m);
 
     m = new Mutation(ReplicationSection.getRowPrefix() + file2);
-    m.put(ReplicationSection.COLF, tableId2, ProtobufUtil.toValue(stat));
+    m.put(ReplicationSection.COLF, new Text(tableId2), ProtobufUtil.toValue(stat));
 
     bw.close();
 
     final AtomicBoolean done = new AtomicBoolean(false);
     final AtomicBoolean exception = new AtomicBoolean(false);
-    ClientContext context = new ClientContext(inst, new Credentials("root", new PasswordToken("")), new ClientConfiguration());
 
-    final ReplicationOperationsImpl roi = getReplicationOperations(context);
+    final ReplicationOperationsImpl roi = getReplicationOperations();
 
     Thread t = new Thread(new Runnable() {
       @Override
@@ -264,7 +263,7 @@
 
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.putDelete(ReplicationSection.COLF, tableId1);
+    m.putDelete(ReplicationSection.COLF, new Text(tableId1));
     bw.addMutation(m);
     bw.flush();
 
@@ -274,7 +273,7 @@
     // Remove the replication entries too
     bw = ReplicationTable.getBatchWriter(conn);
     m = new Mutation(file1);
-    m.putDelete(StatusSection.NAME, tableId1);
+    m.putDelete(StatusSection.NAME, new Text(tableId1));
     bw.addMutation(m);
     bw.flush();
 
@@ -291,10 +290,9 @@
 
   @Test
   public void inprogressReplicationRecordsBlockExecution() throws Exception {
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
     conn.tableOperations().create("foo");
 
-    Text tableId1 = new Text(conn.tableOperations().tableIdMap().get("foo"));
+    String tableId1 = conn.tableOperations().tableIdMap().get("foo");
 
     String file1 = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
     Status stat = Status.newBuilder().setBegin(0).setEnd(10000).setInfiniteEnd(false).setClosed(false).build();
@@ -306,17 +304,11 @@
     bw.addMutation(m);
     bw.close();
 
-    LogEntry logEntry = new LogEntry();
-    logEntry.extent = new KeyExtent(new Text(tableId1), null, null);
-    logEntry.server = "tserver";
-    logEntry.filename = file1;
-    logEntry.tabletId = 1;
-    logEntry.logSet = Arrays.asList(file1);
-    logEntry.timestamp = System.currentTimeMillis();
+    LogEntry logEntry = new LogEntry(new KeyExtent(tableId1, null, null), System.currentTimeMillis(), "tserver", file1);
 
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.put(ReplicationSection.COLF, tableId1, ProtobufUtil.toValue(stat));
+    m.put(ReplicationSection.COLF, new Text(tableId1), ProtobufUtil.toValue(stat));
     bw.addMutation(m);
 
     m = new Mutation(logEntry.getRow());
@@ -327,8 +319,7 @@
 
     final AtomicBoolean done = new AtomicBoolean(false);
     final AtomicBoolean exception = new AtomicBoolean(false);
-    ClientContext context = new ClientContext(inst, new Credentials("root", new PasswordToken("")), new ClientConfiguration());
-    final ReplicationOperationsImpl roi = getReplicationOperations(context);
+    final ReplicationOperationsImpl roi = getReplicationOperations();
     Thread t = new Thread(new Runnable() {
       @Override
       public void run() {
@@ -350,7 +341,7 @@
     Status newStatus = Status.newBuilder().setBegin(1000).setEnd(2000).setInfiniteEnd(false).setClosed(true).build();
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.put(ReplicationSection.COLF, tableId1, ProtobufUtil.toValue(newStatus));
+    m.put(ReplicationSection.COLF, new Text(tableId1), ProtobufUtil.toValue(newStatus));
     bw.addMutation(m);
     bw.flush();
 
@@ -360,7 +351,7 @@
     // Remove the replication entries too
     bw = ReplicationTable.getBatchWriter(conn);
     m = new Mutation(file1);
-    m.put(StatusSection.NAME, tableId1, ProtobufUtil.toValue(newStatus));
+    m.put(StatusSection.NAME, new Text(tableId1), ProtobufUtil.toValue(newStatus));
     bw.addMutation(m);
     bw.flush();
 
@@ -377,10 +368,9 @@
 
   @Test
   public void laterCreatedLogsDontBlockExecution() throws Exception {
-    Connector conn = inst.getConnector("root", new PasswordToken(""));
     conn.tableOperations().create("foo");
 
-    Text tableId1 = new Text(conn.tableOperations().tableIdMap().get("foo"));
+    String tableId1 = conn.tableOperations().tableIdMap().get("foo");
 
     String file1 = "/accumulo/wals/tserver+port/" + UUID.randomUUID();
     Status stat = Status.newBuilder().setBegin(0).setEnd(10000).setInfiniteEnd(false).setClosed(false).build();
@@ -393,20 +383,19 @@
 
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.put(ReplicationSection.COLF, tableId1, ProtobufUtil.toValue(stat));
+    m.put(ReplicationSection.COLF, new Text(tableId1), ProtobufUtil.toValue(stat));
     bw.addMutation(m);
 
     bw.close();
 
-    System.out.println("Reading metadata first time");
+    log.info("Reading metadata first time");
     for (Entry<Key,Value> e : conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY)) {
-      System.out.println(e.getKey());
+      log.info("{}", e.getKey());
     }
 
     final AtomicBoolean done = new AtomicBoolean(false);
     final AtomicBoolean exception = new AtomicBoolean(false);
-    ClientContext context = new ClientContext(inst, new Credentials("root", new PasswordToken("")), new ClientConfiguration());
-    final ReplicationOperationsImpl roi = getReplicationOperations(context);
+    final ReplicationOperationsImpl roi = getReplicationOperations();
     Thread t = new Thread(new Runnable() {
       @Override
       public void run() {
@@ -428,21 +417,21 @@
     // Write another file, but also delete the old files
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(ReplicationSection.getRowPrefix() + "/accumulo/wals/tserver+port/" + UUID.randomUUID());
-    m.put(ReplicationSection.COLF, tableId1, ProtobufUtil.toValue(stat));
+    m.put(ReplicationSection.COLF, new Text(tableId1), ProtobufUtil.toValue(stat));
     bw.addMutation(m);
     m = new Mutation(ReplicationSection.getRowPrefix() + file1);
-    m.putDelete(ReplicationSection.COLF, tableId1);
+    m.putDelete(ReplicationSection.COLF, new Text(tableId1));
     bw.addMutation(m);
     bw.close();
 
-    System.out.println("Reading metadata second time");
+    log.info("Reading metadata second time");
     for (Entry<Key,Value> e : conn.createScanner(MetadataTable.NAME, Authorizations.EMPTY)) {
-      System.out.println(e.getKey());
+      log.info("{}", e.getKey());
     }
 
     bw = ReplicationTable.getBatchWriter(conn);
     m = new Mutation(file1);
-    m.putDelete(StatusSection.NAME, tableId1);
+    m.putDelete(StatusSection.NAME, new Text(tableId1));
     bw.addMutation(m);
     bw.close();
 
@@ -455,5 +444,4 @@
     // We should pass immediately because we aren't waiting on both files to be deleted (just the one that we did)
     Assert.assertTrue("Drain didn't finish", done.get());
   }
-
 }
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/ReplicationRandomWalkIT.java b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationRandomWalkIT.java
similarity index 94%
rename from test/src/test/java/org/apache/accumulo/test/replication/ReplicationRandomWalkIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/ReplicationRandomWalkIT.java
index 43d1f20..80bc69d 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/ReplicationRandomWalkIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/ReplicationRandomWalkIT.java
@@ -25,13 +25,13 @@
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.test.randomwalk.Environment;
 import org.apache.accumulo.test.randomwalk.concurrent.Replication;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
 
-public class ReplicationRandomWalkIT extends ConfigurableMacIT {
+public class ReplicationRandomWalkIT extends ConfigurableMacBase {
 
   @Override
   protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/SequentialWorkAssignerIT.java b/test/src/main/java/org/apache/accumulo/test/replication/SequentialWorkAssignerIT.java
new file mode 100644
index 0000000..ae277c9
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/replication/SequentialWorkAssignerIT.java
@@ -0,0 +1,368 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.replication;
+
+import static org.easymock.EasyMock.createMock;
+import static org.easymock.EasyMock.expectLastCall;
+import static org.easymock.EasyMock.replay;
+import static org.easymock.EasyMock.verify;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.protobuf.ProtobufUtil;
+import org.apache.accumulo.core.replication.ReplicationSchema.OrderSection;
+import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
+import org.apache.accumulo.core.replication.ReplicationTable;
+import org.apache.accumulo.core.replication.ReplicationTarget;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.master.replication.SequentialWorkAssigner;
+import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
+import org.apache.accumulo.server.replication.proto.Replication.Status;
+import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
+import org.apache.accumulo.server.zookeeper.ZooCache;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.io.Text;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+public class SequentialWorkAssignerIT extends ConfigurableMacBase {
+
+  private Connector conn;
+  private MockSequentialWorkAssigner assigner;
+
+  private static class MockSequentialWorkAssigner extends SequentialWorkAssigner {
+
+    public MockSequentialWorkAssigner(Connector conn) {
+      super(null, conn);
+    }
+
+    @Override
+    public void setConnector(Connector conn) {
+      super.setConnector(conn);
+    }
+
+    @Override
+    public void setQueuedWork(Map<String,Map<String,String>> queuedWork) {
+      super.setQueuedWork(queuedWork);
+    }
+
+    @Override
+    public void setWorkQueue(DistributedWorkQueue workQueue) {
+      super.setWorkQueue(workQueue);
+    }
+
+    @Override
+    public void setMaxQueueSize(int maxQueueSize) {
+      super.setMaxQueueSize(maxQueueSize);
+    }
+
+    @Override
+    public void createWork() {
+      super.createWork();
+    }
+
+    @Override
+    public void setZooCache(ZooCache zooCache) {
+      super.setZooCache(zooCache);
+    }
+
+    @Override
+    public void cleanupFinishedWork() {
+      super.cleanupFinishedWork();
+    }
+
+  }
+
+  @Before
+  public void init() throws Exception {
+    conn = getConnector();
+    assigner = new MockSequentialWorkAssigner(conn);
+    // grant ourselves read and write on the replication table
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    ReplicationTable.setOnline(conn);
+  }
+
+  @Test
+  public void createWorkForFilesInCorrectOrder() throws Exception {
+    ReplicationTarget target = new ReplicationTarget("cluster1", "table1", "1");
+    Text serializedTarget = target.toText();
+
+    // Create two mutations, both of which need replication work done
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    // We want the name of file2 to sort before file1
+    String filename1 = "z_file1", filename2 = "a_file1";
+    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
+
+    // However, file1 was closed before file2
+    Status stat1 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
+    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
+
+    Mutation m = new Mutation(file1);
+    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = new Mutation(file2);
+    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
+    OrderSection.add(m, target.getSourceTableId(), ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
+    OrderSection.add(m, target.getSourceTableId(), ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    bw.close();
+
+    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
+    Map<String,Map<String,String>> queuedWork = new HashMap<>();
+    assigner.setQueuedWork(queuedWork);
+    assigner.setWorkQueue(workQueue);
+    assigner.setMaxQueueSize(Integer.MAX_VALUE);
+
+    // Make sure we expect the invocations in the correct order (Accumulo keys are sorted)
+    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target), file1);
+    expectLastCall().once();
+
+    // file2 is *not* queued because file1 must be replicated first
+
+    replay(workQueue);
+
+    assigner.createWork();
+
+    verify(workQueue);
+
+    Assert.assertEquals(1, queuedWork.size());
+    Assert.assertTrue(queuedWork.containsKey("cluster1"));
+    Map<String,String> cluster1Work = queuedWork.get("cluster1");
+    Assert.assertEquals(1, cluster1Work.size());
+    Assert.assertTrue(cluster1Work.containsKey(target.getSourceTableId()));
+    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target), cluster1Work.get(target.getSourceTableId()));
+  }
+
+  @Test
+  public void workAcrossTablesHappensConcurrently() throws Exception {
+    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1");
+    Text serializedTarget1 = target1.toText();
+
+    ReplicationTarget target2 = new ReplicationTarget("cluster1", "table2", "2");
+    Text serializedTarget2 = target2.toText();
+
+    // Create two mutations, both of which need replication work done
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    // We want the name of file2 to sort before file1
+    String filename1 = "z_file1", filename2 = "a_file1";
+    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
+
+    // However, file1 was closed before file2
+    Status stat1 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
+    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
+
+    Mutation m = new Mutation(file1);
+    WorkSection.add(m, serializedTarget1, ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = new Mutation(file2);
+    WorkSection.add(m, serializedTarget2, ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
+    OrderSection.add(m, target1.getSourceTableId(), ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
+    OrderSection.add(m, target2.getSourceTableId(), ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    bw.close();
+
+    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
+    Map<String,Map<String,String>> queuedWork = new HashMap<>();
+    assigner.setQueuedWork(queuedWork);
+    assigner.setWorkQueue(workQueue);
+    assigner.setMaxQueueSize(Integer.MAX_VALUE);
+
+    // Make sure we expect the invocations in the correct order (Accumulo keys are sorted)
+    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), file1);
+    expectLastCall().once();
+
+    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), file2);
+    expectLastCall().once();
+
+    // Both files are queued because they replicate different source tables and can proceed concurrently
+
+    replay(workQueue);
+
+    assigner.createWork();
+
+    verify(workQueue);
+
+    Assert.assertEquals(1, queuedWork.size());
+    Assert.assertTrue(queuedWork.containsKey("cluster1"));
+
+    Map<String,String> cluster1Work = queuedWork.get("cluster1");
+    Assert.assertEquals(2, cluster1Work.size());
+    Assert.assertTrue(cluster1Work.containsKey(target1.getSourceTableId()));
+    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), cluster1Work.get(target1.getSourceTableId()));
+
+    Assert.assertTrue(cluster1Work.containsKey(target2.getSourceTableId()));
+    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), cluster1Work.get(target2.getSourceTableId()));
+  }
+
+  @Test
+  public void workAcrossPeersHappensConcurrently() throws Exception {
+    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1");
+    Text serializedTarget1 = target1.toText();
+
+    ReplicationTarget target2 = new ReplicationTarget("cluster2", "table1", "1");
+    Text serializedTarget2 = target2.toText();
+
+    // Create two mutations, both of which need replication work done
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    // We want the name of file2 to sort before file1
+    String filename1 = "z_file1", filename2 = "a_file1";
+    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
+
+    // However, file1 was closed before file2
+    Status stat1 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
+    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
+
+    Mutation m = new Mutation(file1);
+    WorkSection.add(m, serializedTarget1, ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = new Mutation(file2);
+    WorkSection.add(m, serializedTarget2, ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
+    OrderSection.add(m, target1.getSourceTableId(), ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
+    OrderSection.add(m, target2.getSourceTableId(), ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    bw.close();
+
+    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
+    Map<String,Map<String,String>> queuedWork = new HashMap<>();
+    assigner.setQueuedWork(queuedWork);
+    assigner.setWorkQueue(workQueue);
+    assigner.setMaxQueueSize(Integer.MAX_VALUE);
+
+    // Make sure we expect the invocations in the correct order (Accumulo keys are sorted)
+    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), file1);
+    expectLastCall().once();
+
+    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), file2);
+    expectLastCall().once();
+
+    // Both files are queued because they target different peers and can replicate concurrently
+
+    replay(workQueue);
+
+    assigner.createWork();
+
+    verify(workQueue);
+
+    Assert.assertEquals(2, queuedWork.size());
+    Assert.assertTrue(queuedWork.containsKey("cluster1"));
+
+    Map<String,String> cluster1Work = queuedWork.get("cluster1");
+    Assert.assertEquals(1, cluster1Work.size());
+    Assert.assertTrue(cluster1Work.containsKey(target1.getSourceTableId()));
+    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target1), cluster1Work.get(target1.getSourceTableId()));
+
+    Map<String,String> cluster2Work = queuedWork.get("cluster2");
+    Assert.assertEquals(1, cluster2Work.size());
+    Assert.assertTrue(cluster2Work.containsKey(target2.getSourceTableId()));
+    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target2), cluster2Work.get(target2.getSourceTableId()));
+  }
+
+  @Test
+  public void reprocessingOfCompletedWorkRemovesWork() throws Exception {
+    ReplicationTarget target = new ReplicationTarget("cluster1", "table1", "1");
+    Text serializedTarget = target.toText();
+
+    // Create two mutations, both of which need replication work done
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    // We want the name of file2 to sort before file1
+    String filename1 = "z_file1", filename2 = "a_file1";
+    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
+
+    // File1 was, however, created and closed before file2
+    Status stat1 = Status.newBuilder().setBegin(100).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(250).build();
+    Status stat2 = Status.newBuilder().setBegin(0).setEnd(100).setClosed(true).setInfiniteEnd(false).setCreatedTime(500).build();
+
+    Mutation m = new Mutation(file1);
+    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = new Mutation(file2);
+    WorkSection.add(m, serializedTarget, ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file1, stat1.getCreatedTime());
+    OrderSection.add(m, target.getSourceTableId(), ProtobufUtil.toValue(stat1));
+    bw.addMutation(m);
+
+    m = OrderSection.createMutation(file2, stat2.getCreatedTime());
+    OrderSection.add(m, target.getSourceTableId(), ProtobufUtil.toValue(stat2));
+    bw.addMutation(m);
+
+    bw.close();
+
+    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
+
+    // Treat filename1 as if we have already submitted it for replication
+    Map<String,Map<String,String>> queuedWork = new HashMap<>();
+    Map<String,String> queuedWorkForCluster = new HashMap<>();
+    queuedWorkForCluster.put(target.getSourceTableId(), DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename1, target));
+    queuedWork.put("cluster1", queuedWorkForCluster);
+
+    assigner.setQueuedWork(queuedWork);
+    assigner.setWorkQueue(workQueue);
+    assigner.setMaxQueueSize(Integer.MAX_VALUE);
+
+    // Make sure we expect the invocations in the correct order (Accumulo sorts keys lexicographically)
+    workQueue.addWork(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target), file2);
+    expectLastCall().once();
+
+    // file2 is queued because file1, being fully replicated, is removed from the queue
+
+    replay(workQueue);
+
+    assigner.createWork();
+
+    verify(workQueue);
+
+    Assert.assertEquals(1, queuedWork.size());
+    Assert.assertTrue(queuedWork.containsKey("cluster1"));
+    Map<String,String> cluster1Work = queuedWork.get("cluster1");
+    Assert.assertEquals(1, cluster1Work.size());
+    Assert.assertTrue(cluster1Work.containsKey(target.getSourceTableId()));
+    Assert.assertEquals(DistributedWorkQueueWorkAssignerHelper.getQueueKey(filename2, target), cluster1Work.get(target.getSourceTableId()));
+  }
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/StatusCombinerMacIT.java b/test/src/main/java/org/apache/accumulo/test/replication/StatusCombinerMacIT.java
similarity index 90%
rename from test/src/test/java/org/apache/accumulo/test/replication/StatusCombinerMacIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/StatusCombinerMacIT.java
index a15e6b6..03663a2 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/StatusCombinerMacIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/StatusCombinerMacIT.java
@@ -37,11 +37,10 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.harness.SharedMiniClusterIT;
+import org.apache.accumulo.harness.SharedMiniClusterBase;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.apache.accumulo.server.util.ReplicationTableUtil;
-import org.apache.hadoop.io.Text;
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -49,7 +48,7 @@
 
 import com.google.common.collect.Iterables;
 
-public class StatusCombinerMacIT extends SharedMiniClusterIT {
+public class StatusCombinerMacIT extends SharedMiniClusterBase {
 
   @Override
   public int defaultTimeoutSeconds() {
@@ -58,12 +57,12 @@
 
   @BeforeClass
   public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
+    SharedMiniClusterBase.startMiniCluster();
   }
 
   @AfterClass
   public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
+    SharedMiniClusterBase.stopMiniCluster();
   }
 
   @Test
@@ -79,7 +78,7 @@
     Assert.assertTrue(scopes.contains(IteratorScope.majc));
 
     Iterable<Entry<String,String>> propIter = tops.getProperties(MetadataTable.NAME);
-    HashMap<String,String> properties = new HashMap<String,String>();
+    HashMap<String,String> properties = new HashMap<>();
     for (Entry<String,String> entry : propIter) {
       properties.put(entry.getKey(), entry.getValue());
     }
@@ -102,7 +101,7 @@
     long createTime = System.currentTimeMillis();
     try {
       Mutation m = new Mutation("file:/accumulo/wal/HW10447.local+56808/93cdc17e-7521-44fa-87b5-37f45bcb92d3");
-      StatusSection.add(m, new Text("1"), StatusUtil.fileCreatedValue(createTime));
+      StatusSection.add(m, "1", StatusUtil.fileCreatedValue(createTime));
       bw.addMutation(m);
     } finally {
       bw.close();
@@ -115,7 +114,7 @@
     bw = ReplicationTable.getBatchWriter(conn);
     try {
       Mutation m = new Mutation("file:/accumulo/wal/HW10447.local+56808/93cdc17e-7521-44fa-87b5-37f45bcb92d3");
-      StatusSection.add(m, new Text("1"), ProtobufUtil.toValue(StatusUtil.replicated(Long.MAX_VALUE)));
+      StatusSection.add(m, "1", ProtobufUtil.toValue(StatusUtil.replicated(Long.MAX_VALUE)));
       bw.addMutation(m);
     } finally {
       bw.close();
diff --git a/server/master/src/test/java/org/apache/accumulo/master/replication/StatusMakerTest.java b/test/src/main/java/org/apache/accumulo/test/replication/StatusMakerIT.java
similarity index 85%
rename from server/master/src/test/java/org/apache/accumulo/master/replication/StatusMakerTest.java
rename to test/src/main/java/org/apache/accumulo/test/replication/StatusMakerIT.java
index b57fd89..cd57ae1 100644
--- a/server/master/src/test/java/org/apache/accumulo/master/replication/StatusMakerTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/StatusMakerIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.master.replication;
+package org.apache.accumulo.test.replication;
 
 import java.util.Arrays;
 import java.util.HashMap;
@@ -29,9 +29,6 @@
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.impl.Credentials;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
@@ -41,30 +38,35 @@
 import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.master.replication.StatusMaker;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
 import org.apache.accumulo.server.util.ReplicationTableUtil;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.hadoop.io.Text;
 import org.junit.Assert;
-import org.junit.Rule;
+import org.junit.Before;
 import org.junit.Test;
-import org.junit.rules.TestName;
 
 import com.google.common.collect.Iterables;
 import com.google.common.collect.Sets;
 
-public class StatusMakerTest {
+public class StatusMakerIT extends ConfigurableMacBase {
 
-  @Rule
-  public TestName test = new TestName();
+  private Connector conn;
+
+  @Before
+  public void setupInstance() throws Exception {
+    conn = getConnector();
+    ReplicationTable.setOnline(conn);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+  }
 
   @Test
   public void statusRecordsCreated() throws Exception {
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    String sourceTable = "source";
+    String sourceTable = testName.getMethodName();
     conn.tableOperations().create(sourceTable);
     ReplicationTableUtil.configureMetadataTable(conn, sourceTable);
 
@@ -96,13 +98,13 @@
 
     Scanner s = ReplicationTable.getScanner(conn);
     StatusSection.limit(s);
-    Text file = new Text(), tableId = new Text();
+    Text file = new Text();
     for (Entry<Key,Value> entry : s) {
       StatusSection.getFile(entry.getKey(), file);
-      StatusSection.getTableId(entry.getKey(), tableId);
+      String tableId = StatusSection.getTableId(entry.getKey());
 
       Assert.assertTrue("Found unexpected file: " + file, files.contains(file.toString()));
-      Assert.assertEquals(fileToTableId.get(file.toString()), new Integer(tableId.toString()));
+      Assert.assertEquals(fileToTableId.get(file.toString()), new Integer(tableId));
       timeCreated = fileToTimeCreated.get(file.toString());
       Assert.assertNotNull(timeCreated);
       Assert.assertEquals(StatusUtil.fileCreated(timeCreated), Status.parseFrom(entry.getValue().get()));
@@ -111,11 +113,7 @@
 
   @Test
   public void openMessagesAreNotDeleted() throws Exception {
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    String sourceTable = "source";
+    String sourceTable = testName.getMethodName();
     conn.tableOperations().create(sourceTable);
     ReplicationTableUtil.configureMetadataTable(conn, sourceTable);
 
@@ -151,11 +149,7 @@
 
   @Test
   public void closedMessagesAreDeleted() throws Exception {
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    String sourceTable = "source";
+    String sourceTable = testName.getMethodName();
     conn.tableOperations().create(sourceTable);
     ReplicationTableUtil.configureMetadataTable(conn, sourceTable);
 
@@ -198,11 +192,7 @@
 
   @Test
   public void closedMessagesCreateOrderRecords() throws Exception {
-    MockInstance inst = new MockInstance(test.getMethodName());
-    Credentials creds = new Credentials("root", new PasswordToken(""));
-    Connector conn = inst.getConnector(creds.getPrincipal(), creds.getToken());
-
-    String sourceTable = "source";
+    String sourceTable = testName.getMethodName();
     conn.tableOperations().create(sourceTable);
     ReplicationTableUtil.configureMetadataTable(conn, sourceTable);
 
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerIT.java b/test/src/main/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerIT.java
new file mode 100644
index 0000000..f24129e
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerIT.java
@@ -0,0 +1,238 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.replication;
+
+import static org.easymock.EasyMock.createMock;
+import static org.easymock.EasyMock.expectLastCall;
+import static org.easymock.EasyMock.replay;
+import static org.easymock.EasyMock.verify;
+
+import java.util.HashSet;
+import java.util.Set;
+import java.util.UUID;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.protobuf.ProtobufUtil;
+import org.apache.accumulo.core.replication.ReplicationSchema.OrderSection;
+import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
+import org.apache.accumulo.core.replication.ReplicationTable;
+import org.apache.accumulo.core.replication.ReplicationTarget;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.master.replication.UnorderedWorkAssigner;
+import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
+import org.apache.accumulo.server.replication.StatusUtil;
+import org.apache.accumulo.server.replication.proto.Replication.Status;
+import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
+import org.apache.accumulo.server.zookeeper.ZooCache;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.junit.Before;
+import org.junit.Test;
+
+public class UnorderedWorkAssignerIT extends ConfigurableMacBase {
+
+  private Connector conn;
+  private MockUnorderedWorkAssigner assigner;
+
+  private static class MockUnorderedWorkAssigner extends UnorderedWorkAssigner {
+    public MockUnorderedWorkAssigner(Connector conn) {
+      super(null, conn);
+    }
+
+    @Override
+    protected void setQueuedWork(Set<String> queuedWork) {
+      super.setQueuedWork(queuedWork);
+    }
+
+    @Override
+    protected void setWorkQueue(DistributedWorkQueue workQueue) {
+      super.setWorkQueue(workQueue);
+    }
+
+    @Override
+    protected boolean queueWork(Path path, ReplicationTarget target) {
+      return super.queueWork(path, target);
+    }
+
+    @Override
+    protected void initializeQueuedWork() {
+      super.initializeQueuedWork();
+    }
+
+    @Override
+    protected Set<String> getQueuedWork() {
+      return super.getQueuedWork();
+    }
+
+    @Override
+    protected void setConnector(Connector conn) {
+      super.setConnector(conn);
+    }
+
+    @Override
+    protected void setMaxQueueSize(int maxQueueSize) {
+      super.setMaxQueueSize(maxQueueSize);
+    }
+
+    @Override
+    protected void createWork() {
+      super.createWork();
+    }
+
+    @Override
+    protected void setZooCache(ZooCache zooCache) {
+      super.setZooCache(zooCache);
+    }
+
+    @Override
+    protected void cleanupFinishedWork() {
+      super.cleanupFinishedWork();
+    }
+  }
+
+  @Before
+  public void init() throws Exception {
+    conn = getConnector();
+    assigner = new MockUnorderedWorkAssigner(conn);
+    ReplicationTable.setOnline(conn);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+  }
+
+  @Test
+  public void createWorkForFilesNeedingIt() throws Exception {
+    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1"), target2 = new ReplicationTarget("cluster1", "table2", "2");
+    Text serializedTarget1 = target1.toText(), serializedTarget2 = target2.toText();
+    String keyTarget1 = target1.getPeerName() + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target1.getRemoteIdentifier()
+        + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target1.getSourceTableId(), keyTarget2 = target2.getPeerName()
+        + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target2.getRemoteIdentifier() + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR
+        + target2.getSourceTableId();
+
+    Status.Builder builder = Status.newBuilder().setBegin(0).setEnd(0).setInfiniteEnd(true).setClosed(false).setCreatedTime(5L);
+    Status status1 = builder.build();
+    builder.setCreatedTime(10L);
+    Status status2 = builder.build();
+
+    // Create two mutations, both of which need replication work done
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    String filename1 = UUID.randomUUID().toString(), filename2 = UUID.randomUUID().toString();
+    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
+    Mutation m = new Mutation(file1);
+    WorkSection.add(m, serializedTarget1, ProtobufUtil.toValue(status1));
+    bw.addMutation(m);
+    m = OrderSection.createMutation(file1, status1.getCreatedTime());
+    OrderSection.add(m, target1.getSourceTableId(), ProtobufUtil.toValue(status1));
+    bw.addMutation(m);
+
+    m = new Mutation(file2);
+    WorkSection.add(m, serializedTarget2, ProtobufUtil.toValue(status2));
+    bw.addMutation(m);
+    m = OrderSection.createMutation(file2, status2.getCreatedTime());
+    OrderSection.add(m, target2.getSourceTableId(), ProtobufUtil.toValue(status2));
+    bw.addMutation(m);
+
+    bw.close();
+
+    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
+    HashSet<String> queuedWork = new HashSet<>();
+    assigner.setQueuedWork(queuedWork);
+    assigner.setWorkQueue(workQueue);
+    assigner.setMaxQueueSize(Integer.MAX_VALUE);
+
+    // Make sure we expect the invocations in the order they were created
+    String key = filename1 + "|" + keyTarget1;
+    workQueue.addWork(key, file1);
+    expectLastCall().once();
+
+    key = filename2 + "|" + keyTarget2;
+    workQueue.addWork(key, file2);
+    expectLastCall().once();
+
+    replay(workQueue);
+
+    assigner.createWork();
+
+    verify(workQueue);
+  }
+
+  @Test
+  public void doNotCreateWorkForFilesNotNeedingIt() throws Exception {
+    ReplicationTarget target1 = new ReplicationTarget("cluster1", "table1", "1"), target2 = new ReplicationTarget("cluster1", "table2", "2");
+    Text serializedTarget1 = target1.toText(), serializedTarget2 = target2.toText();
+
+    // Create two mutations, neither of which needs replication work done
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    String filename1 = UUID.randomUUID().toString(), filename2 = UUID.randomUUID().toString();
+    String file1 = "/accumulo/wal/tserver+port/" + filename1, file2 = "/accumulo/wal/tserver+port/" + filename2;
+
+    Mutation m = new Mutation(file1);
+    WorkSection.add(m, serializedTarget1, StatusUtil.fileCreatedValue(5));
+    bw.addMutation(m);
+
+    m = new Mutation(file2);
+    WorkSection.add(m, serializedTarget2, StatusUtil.fileCreatedValue(10));
+    bw.addMutation(m);
+
+    bw.close();
+
+    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
+    HashSet<String> queuedWork = new HashSet<>();
+    assigner.setQueuedWork(queuedWork);
+    assigner.setMaxQueueSize(Integer.MAX_VALUE);
+
+    replay(workQueue);
+
+    assigner.createWork();
+
+    verify(workQueue);
+  }
+
+  @Test
+  public void workNotReAdded() throws Exception {
+    Set<String> queuedWork = new HashSet<>();
+
+    assigner.setQueuedWork(queuedWork);
+
+    ReplicationTarget target = new ReplicationTarget("cluster1", "table1", "1");
+    String serializedTarget = target.getPeerName() + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target.getRemoteIdentifier()
+        + DistributedWorkQueueWorkAssignerHelper.KEY_SEPARATOR + target.getSourceTableId();
+
+    queuedWork.add("wal1|" + serializedTarget);
+
+    // Create a single mutation whose work has already been queued
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    String file1 = "/accumulo/wal/tserver+port/wal1";
+    Mutation m = new Mutation(file1);
+    WorkSection.add(m, target.toText(), StatusUtil.openWithUnknownLengthValue());
+    bw.addMutation(m);
+
+    bw.close();
+
+    DistributedWorkQueue workQueue = createMock(DistributedWorkQueue.class);
+    assigner.setWorkQueue(workQueue);
+    assigner.setMaxQueueSize(Integer.MAX_VALUE);
+
+    replay(workQueue);
+
+    assigner.createWork();
+
+    verify(workQueue);
+  }
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerReplicationIT.java b/test/src/main/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerReplicationIT.java
similarity index 98%
rename from test/src/test/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerReplicationIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerReplicationIT.java
index 090ac83..ec0361b 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/UnorderedWorkAssignerReplicationIT.java
@@ -45,7 +45,6 @@
 import org.apache.accumulo.core.replication.ReplicationTable;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.master.replication.UnorderedWorkAssigner;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
@@ -54,7 +53,7 @@
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
 import org.apache.hadoop.conf.Configuration;
@@ -67,8 +66,9 @@
 import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.Iterators;
+import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 
-public class UnorderedWorkAssignerReplicationIT extends ConfigurableMacIT {
+public class UnorderedWorkAssignerReplicationIT extends ConfigurableMacBase {
   private static final Logger log = LoggerFactory.getLogger(UnorderedWorkAssignerReplicationIT.class);
 
   private ExecutorService executor;
@@ -122,7 +122,7 @@
     // Set the same SSL information from the primary when present
     Map<String,String> primarySiteConfig = primaryCfg.getSiteConfig();
     if ("true".equals(primarySiteConfig.get(Property.INSTANCE_RPC_SSL_ENABLED.getKey()))) {
-      Map<String,String> peerSiteConfig = new HashMap<String,String>();
+      Map<String,String> peerSiteConfig = new HashMap<>();
       peerSiteConfig.put(Property.INSTANCE_RPC_SSL_ENABLED.getKey(), "true");
       String keystorePath = primarySiteConfig.get(Property.RPC_SSL_KEYSTORE_PATH.getKey());
       Assert.assertNotNull("Keystore Path was null", keystorePath);
@@ -204,7 +204,7 @@
       connMaster.tableOperations().setProperty(masterTable, Property.TABLE_REPLICATION_TARGET.getKey() + peerClusterName, peerTableId);
 
       // Wait for zookeeper updates (configuration) to propagate
-      UtilWaitThread.sleep(3 * 1000);
+      sleepUninterruptibly(3, TimeUnit.SECONDS);
 
       // Write some data to table1
       BatchWriter bw = connMaster.createBatchWriter(masterTable, new BatchWriterConfig());
@@ -368,7 +368,7 @@
       connMaster.tableOperations().setProperty(masterTable2, Property.TABLE_REPLICATION_TARGET.getKey() + peerClusterName, peerTableId2);
 
       // Wait for zookeeper updates (configuration) to propagate
-      UtilWaitThread.sleep(3 * 1000);
+      sleepUninterruptibly(3, TimeUnit.SECONDS);
 
       // Write some data to table1
       BatchWriter bw = connMaster.createBatchWriter(masterTable1, new BatchWriterConfig());
@@ -632,7 +632,7 @@
       connMaster.tableOperations().setProperty(masterTable2, Property.TABLE_REPLICATION_TARGET.getKey() + peerClusterName, peerTableId2);
 
       // Wait for zookeeper updates (configuration) to propagate
-      UtilWaitThread.sleep(3 * 1000);
+      sleepUninterruptibly(3, TimeUnit.SECONDS);
 
       // Write some data to table1
       BatchWriter bw = connMaster.createBatchWriter(masterTable1, new BatchWriterConfig());
@@ -675,7 +675,7 @@
       // Wait until we fully replicated something
       boolean fullyReplicated = false;
       for (int i = 0; i < 10 && !fullyReplicated; i++) {
-        UtilWaitThread.sleep(timeoutFactor * 2000);
+        sleepUninterruptibly(timeoutFactor * 2, TimeUnit.SECONDS);
 
         Scanner s = ReplicationTable.getScanner(connMaster);
         WorkSection.limit(s);
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/UnusedWalDoesntCloseReplicationStatusIT.java b/test/src/main/java/org/apache/accumulo/test/replication/UnusedWalDoesntCloseReplicationStatusIT.java
similarity index 97%
rename from test/src/test/java/org/apache/accumulo/test/replication/UnusedWalDoesntCloseReplicationStatusIT.java
rename to test/src/main/java/org/apache/accumulo/test/replication/UnusedWalDoesntCloseReplicationStatusIT.java
index bdd5db5..0eae4f3 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/UnusedWalDoesntCloseReplicationStatusIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/UnusedWalDoesntCloseReplicationStatusIT.java
@@ -46,7 +46,7 @@
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
 import org.apache.accumulo.server.replication.StatusUtil;
 import org.apache.accumulo.server.replication.proto.Replication.Status;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.apache.accumulo.tserver.log.DfsLogger;
 import org.apache.accumulo.tserver.logger.LogEvents;
 import org.apache.accumulo.tserver.logger.LogFileKey;
@@ -61,7 +61,7 @@
 
 import com.google.common.collect.Iterables;
 
-public class UnusedWalDoesntCloseReplicationStatusIT extends ConfigurableMacIT {
+public class UnusedWalDoesntCloseReplicationStatusIT extends ConfigurableMacBase {
 
   @Override
   public void configure(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
@@ -113,7 +113,7 @@
     value.write(out);
 
     key.event = LogEvents.DEFINE_TABLET;
-    key.tablet = new KeyExtent(new Text(Integer.toString(fakeTableId)), null, null);
+    key.tablet = new KeyExtent(Integer.toString(fakeTableId), null, null);
     key.seq = 1l;
     key.tid = 1;
 
@@ -169,7 +169,7 @@
 
     // Add our fake WAL to the log column for this table
     String walUri = tserverWal.toURI().toString();
-    KeyExtent extent = new KeyExtent(new Text(tableId), null, null);
+    KeyExtent extent = new KeyExtent(tableId, null, null);
     bw = conn.createBatchWriter(MetadataTable.NAME, new BatchWriterConfig());
     m = new Mutation(extent.getMetadataEntry());
     m.put(MetadataSchema.TabletsSection.LogColumnFamily.NAME, new Text("localhost:12345/" + walUri), new Value((walUri + "|1").getBytes(UTF_8)));
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/WorkMakerIT.java b/test/src/main/java/org/apache/accumulo/test/replication/WorkMakerIT.java
new file mode 100644
index 0000000..6e2c833
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/replication/WorkMakerIT.java
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.replication;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+
+import org.apache.accumulo.core.client.BatchWriter;
+import org.apache.accumulo.core.client.Connector;
+import org.apache.accumulo.core.client.Scanner;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Mutation;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.replication.ReplicationSchema.StatusSection;
+import org.apache.accumulo.core.replication.ReplicationSchema.WorkSection;
+import org.apache.accumulo.core.replication.ReplicationTable;
+import org.apache.accumulo.core.replication.ReplicationTarget;
+import org.apache.accumulo.core.security.TablePermission;
+import org.apache.accumulo.master.replication.WorkMaker;
+import org.apache.accumulo.server.replication.StatusUtil;
+import org.apache.accumulo.server.replication.proto.Replication.Status;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.Text;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+
+public class WorkMakerIT extends ConfigurableMacBase {
+
+  private Connector conn;
+
+  private static class MockWorkMaker extends WorkMaker {
+
+    public MockWorkMaker(Connector conn) {
+      super(null, conn);
+    }
+
+    @Override
+    public void setBatchWriter(BatchWriter bw) {
+      super.setBatchWriter(bw);
+    }
+
+    @Override
+    public void addWorkRecord(Text file, Value v, Map<String,String> targets, String sourceTableId) {
+      super.addWorkRecord(file, v, targets, sourceTableId);
+    }
+
+    @Override
+    public boolean shouldCreateWork(Status status) {
+      return super.shouldCreateWork(status);
+    }
+
+  }
+
+  @Before
+  public void setupInstance() throws Exception {
+    conn = getConnector();
+    ReplicationTable.setOnline(conn);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.WRITE);
+    conn.securityOperations().grantTablePermission(conn.whoami(), ReplicationTable.NAME, TablePermission.READ);
+  }
+
+  @Test
+  public void singleUnitSingleTarget() throws Exception {
+    String table = testName.getMethodName();
+    conn.tableOperations().create(table);
+    String tableId = conn.tableOperations().tableIdMap().get(table);
+    String file = "hdfs://localhost:8020/accumulo/wal/123456-1234-1234-12345678";
+
+    // Create a status record for a file
+    long timeCreated = System.currentTimeMillis();
+    Mutation m = new Mutation(new Path(file).toString());
+    m.put(StatusSection.NAME, new Text(tableId), StatusUtil.fileCreatedValue(timeCreated));
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    bw.addMutation(m);
+    bw.flush();
+
+    // Assert that we have one record in the status section
+    Scanner s = ReplicationTable.getScanner(conn);
+    StatusSection.limit(s);
+    Assert.assertEquals(1, Iterables.size(s));
+
+    MockWorkMaker workMaker = new MockWorkMaker(conn);
+
+    // Invoke the addWorkRecord method to create a Work record from the Status record created earlier
+    ReplicationTarget expected = new ReplicationTarget("remote_cluster_1", "4", tableId);
+    workMaker.setBatchWriter(bw);
+    workMaker.addWorkRecord(new Text(file), StatusUtil.fileCreatedValue(timeCreated), ImmutableMap.of("remote_cluster_1", "4"), tableId);
+
+    // Scan over just the WorkSection
+    s = ReplicationTable.getScanner(conn);
+    WorkSection.limit(s);
+
+    Entry<Key,Value> workEntry = Iterables.getOnlyElement(s);
+    Key workKey = workEntry.getKey();
+    ReplicationTarget actual = ReplicationTarget.from(workKey.getColumnQualifier());
+
+    Assert.assertEquals(file, workKey.getRow().toString());
+    Assert.assertEquals(WorkSection.NAME, workKey.getColumnFamily());
+    Assert.assertEquals(expected, actual);
+    Assert.assertEquals(workEntry.getValue(), StatusUtil.fileCreatedValue(timeCreated));
+  }
+
+  @Test
+  public void singleUnitMultipleTargets() throws Exception {
+    String table = testName.getMethodName();
+    conn.tableOperations().create(table);
+
+    String tableId = conn.tableOperations().tableIdMap().get(table);
+
+    String file = "hdfs://localhost:8020/accumulo/wal/123456-1234-1234-12345678";
+
+    Mutation m = new Mutation(new Path(file).toString());
+    m.put(StatusSection.NAME, new Text(tableId), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    bw.addMutation(m);
+    bw.flush();
+
+    // Assert that we have one record in the status section
+    Scanner s = ReplicationTable.getScanner(conn);
+    StatusSection.limit(s);
+    Assert.assertEquals(1, Iterables.size(s));
+
+    MockWorkMaker workMaker = new MockWorkMaker(conn);
+
+    Map<String,String> targetClusters = ImmutableMap.of("remote_cluster_1", "4", "remote_cluster_2", "6", "remote_cluster_3", "8");
+    Set<ReplicationTarget> expectedTargets = new HashSet<>();
+    for (Entry<String,String> cluster : targetClusters.entrySet()) {
+      expectedTargets.add(new ReplicationTarget(cluster.getKey(), cluster.getValue(), tableId));
+    }
+    workMaker.setBatchWriter(bw);
+    workMaker.addWorkRecord(new Text(file), StatusUtil.fileCreatedValue(System.currentTimeMillis()), targetClusters, tableId);
+
+    s = ReplicationTable.getScanner(conn);
+    WorkSection.limit(s);
+
+    Set<ReplicationTarget> actualTargets = new HashSet<>();
+    for (Entry<Key,Value> entry : s) {
+      Assert.assertEquals(file, entry.getKey().getRow().toString());
+      Assert.assertEquals(WorkSection.NAME, entry.getKey().getColumnFamily());
+
+      ReplicationTarget target = ReplicationTarget.from(entry.getKey().getColumnQualifier());
+      actualTargets.add(target);
+    }
+
+    for (ReplicationTarget expected : expectedTargets) {
+      Assert.assertTrue("Did not find expected target: " + expected, actualTargets.contains(expected));
+      actualTargets.remove(expected);
+    }
+
+    Assert.assertTrue("Found extra replication work entries: " + actualTargets, actualTargets.isEmpty());
+  }
+
+  @Test
+  public void dontCreateWorkForEntriesWithNothingToReplicate() throws Exception {
+    String table = testName.getMethodName();
+    conn.tableOperations().create(table);
+    String tableId = conn.tableOperations().tableIdMap().get(table);
+    String file = "hdfs://localhost:8020/accumulo/wal/123456-1234-1234-12345678";
+
+    Mutation m = new Mutation(new Path(file).toString());
+    m.put(StatusSection.NAME, new Text(tableId), StatusUtil.fileCreatedValue(System.currentTimeMillis()));
+    BatchWriter bw = ReplicationTable.getBatchWriter(conn);
+    bw.addMutation(m);
+    bw.flush();
+
+    // Assert that we have one record in the status section
+    Scanner s = ReplicationTable.getScanner(conn);
+    StatusSection.limit(s);
+    Assert.assertEquals(1, Iterables.size(s));
+
+    MockWorkMaker workMaker = new MockWorkMaker(conn);
+
+    conn.tableOperations().setProperty(ReplicationTable.NAME, Property.TABLE_REPLICATION_TARGET.getKey() + "remote_cluster_1", "4");
+
+    workMaker.setBatchWriter(bw);
+
+    // If we don't short-circuit out, we should get an exception because ServerConfiguration.getTableConfiguration
+    // won't work with MockAccumulo
+    workMaker.run();
+
+    s = ReplicationTable.getScanner(conn);
+    WorkSection.limit(s);
+
+    Assert.assertEquals(0, Iterables.size(s));
+  }
+
+}
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTree.java b/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTree.java
index 03eb466..9a4b127 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTree.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTree.java
@@ -67,7 +67,7 @@
 
       // At the same level
       if (left.getLevel() == right.getLevel()) {
-        return new Pair<Integer,Integer>(i, j);
+        return new Pair<>(i, j);
       }
 
       // Peek to see if we have another element
@@ -77,14 +77,14 @@
         j++;
       } else {
         // Otherwise, the last two elements must be paired
-        return new Pair<Integer,Integer>(i, j);
+        return new Pair<>(i, j);
       }
     }
 
     if (2 < nodes.size()) {
       throw new IllegalStateException("Should not have exited loop without pairing two elements when we have at least 3 nodes");
     } else if (2 == nodes.size()) {
-      return new Pair<Integer,Integer>(0, 1);
+      return new Pair<>(0, 1);
     } else {
       throw new IllegalStateException("Must have at least two nodes to pair");
     }
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTreeNode.java b/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTreeNode.java
index 33ec056..f392f12 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTreeNode.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/MerkleTreeNode.java
@@ -58,7 +58,7 @@
 
   public MerkleTreeNode(List<MerkleTreeNode> children, String digestAlgorithm) throws NoSuchAlgorithmException {
     level = 0;
-    this.children = new ArrayList<Range>(children.size());
+    this.children = new ArrayList<>(children.size());
     MessageDigest digest = MessageDigest.getInstance(digestAlgorithm);
 
     Range childrenRange = null;
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/ComputeRootHash.java b/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/ComputeRootHash.java
index cb2761b..56a5931 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/ComputeRootHash.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/ComputeRootHash.java
@@ -76,7 +76,7 @@
   protected ArrayList<MerkleTreeNode> getLeaves(Connector conn, String tableName) throws TableNotFoundException {
     // TODO make this a bit more resilient to very large merkle trees by lazily reading more data from the table when necessary
     final Scanner s = conn.createScanner(tableName, Authorizations.EMPTY);
-    final ArrayList<MerkleTreeNode> leaves = new ArrayList<MerkleTreeNode>();
+    final ArrayList<MerkleTreeNode> leaves = new ArrayList<>();
 
     for (Entry<Key,Value> entry : s) {
       Range range = RangeSerialization.toRange(entry.getKey());
diff --git a/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/GenerateHashes.java b/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/GenerateHashes.java
index b1ef6c3..72da0ae 100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/GenerateHashes.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/cli/GenerateHashes.java
@@ -129,7 +129,7 @@
       return endRowsToRanges(endRows);
     } else {
       log.info("Using provided split points");
-      ArrayList<Text> splits = new ArrayList<Text>();
+      ArrayList<Text> splits = new ArrayList<>();
 
       String line;
       java.util.Scanner file = new java.util.Scanner(new File(splitsFile), UTF_8.name());
@@ -249,7 +249,7 @@
   }
 
   public TreeSet<Range> endRowsToRanges(Collection<Text> endRows) {
-    ArrayList<Text> sortedEndRows = new ArrayList<Text>(endRows);
+    ArrayList<Text> sortedEndRows = new ArrayList<>(endRows);
     Collections.sort(sortedEndRows);
 
     Text prevEndRow = null;
diff --git a/test/src/main/java/org/apache/accumulo/test/scalability/ScaleTest.java b/test/src/main/java/org/apache/accumulo/test/scalability/ScaleTest.java
index f908296..2f82bfa 100644
--- a/test/src/main/java/org/apache/accumulo/test/scalability/ScaleTest.java
+++ b/test/src/main/java/org/apache/accumulo/test/scalability/ScaleTest.java
@@ -70,7 +70,7 @@
     int numSplits = numTabletServers - 1;
     long distance = (Long.MAX_VALUE / numTabletServers) + 1;
     long split = distance;
-    TreeSet<Text> keys = new TreeSet<Text>();
+    TreeSet<Text> keys = new TreeSet<>();
     for (int i = 0; i < numSplits; i++) {
       keys.add(new Text(String.format("%016x", split)));
       split += distance;
diff --git a/test/src/test/java/org/apache/accumulo/server/security/SystemCredentialsIT.java b/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
similarity index 96%
rename from test/src/test/java/org/apache/accumulo/server/security/SystemCredentialsIT.java
rename to test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
index 00c5f51..9752916 100644
--- a/test/src/test/java/org/apache/accumulo/server/security/SystemCredentialsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/server/security/SystemCredentialsIT.java
@@ -14,7 +14,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.server.security;
+package org.apache.accumulo.test.server.security;
 
 import static org.junit.Assert.assertEquals;
 
@@ -38,10 +38,11 @@
 import org.apache.accumulo.core.metadata.RootTable;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.server.client.HdfsZooInstance;
-import org.apache.accumulo.test.functional.ConfigurableMacIT;
+import org.apache.accumulo.server.security.SystemCredentials;
+import org.apache.accumulo.test.functional.ConfigurableMacBase;
 import org.junit.Test;
 
-public class SystemCredentialsIT extends ConfigurableMacIT {
+public class SystemCredentialsIT extends ConfigurableMacBase {
 
   private static final int FAIL_CODE = 7, BAD_PASSWD_FAIL_CODE = 8;
 
diff --git a/test/src/test/java/org/apache/accumulo/start/KeywordStartIT.java b/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java
similarity index 95%
rename from test/src/test/java/org/apache/accumulo/start/KeywordStartIT.java
rename to test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java
index a5c28bf..9fc8927 100644
--- a/test/src/test/java/org/apache/accumulo/start/KeywordStartIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java
@@ -14,12 +14,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.accumulo.start;
+package org.apache.accumulo.test.start;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeTrue;
 
+import java.io.File;
 import java.io.IOException;
 import java.lang.reflect.Method;
 import java.lang.reflect.Modifier;
@@ -46,12 +48,14 @@
 import org.apache.accumulo.monitor.Monitor;
 import org.apache.accumulo.monitor.MonitorExecutable;
 import org.apache.accumulo.proxy.Proxy;
+import org.apache.accumulo.server.conf.ConfigSanityCheck;
 import org.apache.accumulo.server.init.Initialize;
 import org.apache.accumulo.server.util.Admin;
 import org.apache.accumulo.server.util.Info;
 import org.apache.accumulo.server.util.LoginProperties;
 import org.apache.accumulo.server.util.ZooKeeperMain;
 import org.apache.accumulo.shell.Shell;
+import org.apache.accumulo.start.Main;
 import org.apache.accumulo.start.spi.KeywordExecutable;
 import org.apache.accumulo.tracer.TraceServer;
 import org.apache.accumulo.tracer.TracerExecutable;
@@ -91,8 +95,10 @@
   // Note: this test may fail in Eclipse, if the services files haven't been generated by the AutoService annotation processor
   @Test
   public void testExpectedClasses() throws IOException {
+    assumeTrue(new File(System.getProperty("user.dir") + "/src").exists());
     TreeMap<String,Class<? extends KeywordExecutable>> expectSet = new TreeMap<>();
     expectSet.put("admin", Admin.class);
+    expectSet.put("check-server-config", ConfigSanityCheck.class);
     expectSet.put("classpath", Classpath.class);
     expectSet.put("create-token", CreateToken.class);
     expectSet.put("gc", GCExecutable.class);
diff --git a/test/src/test/java/org/apache/accumulo/test/util/CertUtils.java b/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
similarity index 99%
rename from test/src/test/java/org/apache/accumulo/test/util/CertUtils.java
rename to test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
index 2345ea7..95042d2 100644
--- a/test/src/test/java/org/apache/accumulo/test/util/CertUtils.java
+++ b/test/src/main/java/org/apache/accumulo/test/util/CertUtils.java
@@ -135,7 +135,7 @@
 
           @Override
           public Iterator<Entry<String,String>> iterator() {
-            TreeMap<String,String> map = new TreeMap<String,String>();
+            TreeMap<String,String> map = new TreeMap<>();
             for (Entry<String,String> props : DefaultConfiguration.getInstance())
               map.put(props.getKey(), props.getValue());
             for (Entry<String,String> props : xml)
diff --git a/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java b/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
new file mode 100644
index 0000000..bc3c53e
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/util/SerializationUtil.java
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.util;
+
+import org.apache.commons.codec.binary.Base64;
+import org.apache.hadoop.io.Writable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.io.OutputStream;
+import java.io.Serializable;
+import java.util.Objects;
+
+/**
+ * Partially based on {@link org.apache.commons.lang3.SerializationUtils}.
+ *
+ * <p>
+ * For serializing and de-serializing objects.
+ */
+public class SerializationUtil {
+  private static final Logger log = LoggerFactory.getLogger(SerializationUtil.class);
+
+  private SerializationUtil() {}
+
+  /**
+   * Create a new instance of the named class, cast as a subclass of the given parent class.
+   */
+  public static <E> E subclassNewInstance(String classname, Class<E> parentClass) {
+    Class<?> c;
+    try {
+      c = Class.forName(classname);
+    } catch (ClassNotFoundException e) {
+      throw new IllegalArgumentException("Can't find class: " + classname, e);
+    }
+    Class<? extends E> cm;
+    try {
+      cm = c.asSubclass(parentClass);
+    } catch (ClassCastException e) {
+      throw new IllegalArgumentException(classname + " is not a subclass of " + parentClass.getName(), e);
+    }
+    try {
+      return cm.newInstance();
+    } catch (InstantiationException | IllegalAccessException e) {
+      throw new IllegalArgumentException("can't instantiate new instance of " + cm.getName(), e);
+    }
+  }
+
+  public static String serializeWritableBase64(Writable writable) {
+    byte[] b = serializeWritable(writable);
+    return org.apache.accumulo.core.util.Base64.encodeBase64String(b);
+  }
+
+  public static void deserializeWritableBase64(Writable writable, String str) {
+    byte[] b = Base64.decodeBase64(str);
+    deserializeWritable(writable, b);
+  }
+
+  public static String serializeBase64(Serializable obj) {
+    byte[] b = serialize(obj);
+    return org.apache.accumulo.core.util.Base64.encodeBase64String(b);
+  }
+
+  public static Object deserializeBase64(String str) {
+    byte[] b = Base64.decodeBase64(str);
+    return deserialize(b);
+  }
+
+  // Interop with Hadoop Writable
+  // -----------------------------------------------------------------------
+
+  public static byte[] serializeWritable(Writable writable) {
+    ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
+    serializeWritable(writable, baos);
+    return baos.toByteArray();
+  }
+
+  public static void serializeWritable(Writable obj, OutputStream outputStream) {
+    Objects.requireNonNull(obj);
+    Objects.requireNonNull(outputStream);
+    DataOutputStream out = null;
+    try {
+      out = new DataOutputStream(outputStream);
+      obj.write(out);
+    } catch (IOException ex) {
+      throw new RuntimeException(ex);
+    } finally {
+      if (out != null)
+        try {
+          out.close();
+        } catch (IOException e) {
+          log.error("cannot close", e);
+        }
+    }
+  }
+
+  public static void deserializeWritable(Writable writable, InputStream inputStream) {
+    Objects.requireNonNull(writable);
+    Objects.requireNonNull(inputStream);
+    DataInputStream in = null;
+    try {
+      in = new DataInputStream(inputStream);
+      writable.readFields(in);
+    } catch (IOException ex) {
+      throw new RuntimeException(ex);
+    } finally {
+      if (in != null)
+        try {
+          in.close();
+        } catch (IOException e) {
+          log.error("cannot close", e);
+        }
+    }
+  }
+
+  public static void deserializeWritable(Writable writable, byte[] objectData) {
+    Objects.requireNonNull(objectData);
+    deserializeWritable(writable, new ByteArrayInputStream(objectData));
+  }
+
+  // Serialize
+  // -----------------------------------------------------------------------
+
+  /**
+   * Serializes an {@code Object} to the specified stream.
+   * <p>
+   * The stream will be closed once the object is written. This avoids the need for a finally clause, and maybe also exception handling, in the application
+   * code.
+   * <p>
+   * The stream passed in is not buffered internally within this method. This is the responsibility of your application if desired.
+   *
+   * @param obj
+   *          the object to serialize to bytes, may be null
+   * @param outputStream
+   *          the stream to write to, must not be null
+   * @throws IllegalArgumentException
+   *           if {@code outputStream} is {@code null}
+   */
+  public static void serialize(Serializable obj, OutputStream outputStream) {
+    Objects.requireNonNull(outputStream);
+    ObjectOutputStream out = null;
+    try {
+      out = new ObjectOutputStream(outputStream);
+      out.writeObject(obj);
+    } catch (IOException ex) {
+      throw new RuntimeException(ex);
+    } finally {
+      if (out != null)
+        try {
+          out.close();
+        } catch (IOException e) {
+          log.error("cannot close", e);
+        }
+    }
+  }
+
+  /**
+   * Serializes an {@code Object} to a byte array for storage/serialization.
+   *
+   * @param obj
+   *          the object to serialize to bytes
+   * @return a byte[] with the converted Serializable
+   */
+  public static byte[] serialize(Serializable obj) {
+    ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
+    serialize(obj, baos);
+    return baos.toByteArray();
+  }
+
+  // Deserialize
+  // -----------------------------------------------------------------------
+
+  /**
+   * Deserializes an {@code Object} from the specified stream.
+   * <p>
+   * The stream will be closed once the object is read. This avoids the need for a finally clause, and maybe also exception handling, in the application
+   * code.
+   * <p>
+   * The stream passed in is not buffered internally within this method. This is the responsibility of your application if desired.
+   *
+   * @param inputStream
+   *          the serialized object input stream, must not be null
+   * @return the deserialized object
+   * @throws IllegalArgumentException
+   *           if {@code inputStream} is {@code null}
+   */
+  public static Object deserialize(InputStream inputStream) {
+    Objects.requireNonNull(inputStream);
+    ObjectInputStream in = null;
+    try {
+      in = new ObjectInputStream(inputStream);
+      return in.readObject();
+    } catch (ClassNotFoundException | IOException ex) {
+      throw new RuntimeException(ex);
+    } finally {
+      if (in != null)
+        try {
+          in.close();
+        } catch (IOException e) {
+          log.error("cannot close", e);
+        }
+    }
+  }
+
+  /**
+   * Deserializes a single {@code Object} from an array of bytes.
+   *
+   * @param objectData
+   *          the serialized object, must not be null
+   * @return the deserialized object
+   * @throws IllegalArgumentException
+   *           if {@code objectData} is {@code null}
+   */
+  public static Object deserialize(byte[] objectData) {
+    Objects.requireNonNull(objectData);
+    return deserialize(new ByteArrayInputStream(objectData));
+  }
+
+}
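The `serialize`/`deserialize` pair added above wraps standard Java object serialization with stream cleanup. A minimal standalone sketch of the same round-trip pattern (a hypothetical `RoundTripDemo` class for illustration, not the `SerializationUtil` class from this patch, which additionally handles Hadoop `Writable` and Base64 variants):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoundTripDemo {

  // Serialize an object graph to a byte array, closing the stream when done.
  static byte[] serialize(Serializable obj) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream(512);
    try (ObjectOutputStream out = new ObjectOutputStream(baos)) {
      out.writeObject(obj);
    }
    return baos.toByteArray();
  }

  // Reconstruct the object from the serialized bytes.
  static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
    try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
      return in.readObject();
    }
  }

  public static void main(String[] args) throws Exception {
    String original = "status-record";
    Object copy = deserialize(serialize(original));
    if (!original.equals(copy)) {
      throw new AssertionError("round trip failed");
    }
    System.out.println("ok");
  }
}
```

Note that the patched utility converts checked `IOException`s to unchecked `RuntimeException`s instead of declaring them, which keeps call sites in tests terse.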
diff --git a/test/src/test/resources/FooConstraint.jar b/test/src/main/resources/FooConstraint.jar
similarity index 100%
rename from test/src/test/resources/FooConstraint.jar
rename to test/src/main/resources/FooConstraint.jar
Binary files differ
diff --git a/test/src/test/resources/FooFilter.jar b/test/src/main/resources/FooFilter.jar
similarity index 100%
rename from test/src/test/resources/FooFilter.jar
rename to test/src/main/resources/FooFilter.jar
Binary files differ
diff --git a/test/src/main/resources/ShellServerIT-iterators.jar b/test/src/main/resources/ShellServerIT-iterators.jar
new file mode 100644
index 0000000..7a67f21
--- /dev/null
+++ b/test/src/main/resources/ShellServerIT-iterators.jar
Binary files differ
diff --git a/test/src/test/resources/TestCombinerX.jar b/test/src/main/resources/TestCombinerX.jar
similarity index 100%
rename from test/src/test/resources/TestCombinerX.jar
rename to test/src/main/resources/TestCombinerX.jar
Binary files differ
diff --git a/test/src/test/resources/TestCombinerY.jar b/test/src/main/resources/TestCombinerY.jar
similarity index 100%
rename from test/src/test/resources/TestCombinerY.jar
rename to test/src/main/resources/TestCombinerY.jar
Binary files differ
diff --git a/test/src/test/resources/TestCompactionStrat.jar b/test/src/main/resources/TestCompactionStrat.jar
similarity index 100%
rename from test/src/test/resources/TestCompactionStrat.jar
rename to test/src/main/resources/TestCompactionStrat.jar
Binary files differ
diff --git a/test/src/test/resources/conf/accumulo-site.xml b/test/src/main/resources/conf/accumulo-site.xml
similarity index 100%
rename from test/src/test/resources/conf/accumulo-site.xml
rename to test/src/main/resources/conf/accumulo-site.xml
diff --git a/test/src/test/resources/conf/generic_logger.xml b/test/src/main/resources/conf/generic_logger.xml
similarity index 100%
rename from test/src/test/resources/conf/generic_logger.xml
rename to test/src/main/resources/conf/generic_logger.xml
diff --git a/test/src/test/resources/conf/monitor_logger.xml b/test/src/main/resources/conf/monitor_logger.xml
similarity index 100%
rename from test/src/test/resources/conf/monitor_logger.xml
rename to test/src/main/resources/conf/monitor_logger.xml
diff --git a/test/src/test/resources/log4j.properties b/test/src/main/resources/log4j.properties
similarity index 98%
rename from test/src/test/resources/log4j.properties
rename to test/src/main/resources/log4j.properties
index 42b58c8..6041ee5 100644
--- a/test/src/test/resources/log4j.properties
+++ b/test/src/main/resources/log4j.properties
@@ -43,7 +43,7 @@
 log4j.logger.org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace=WARN
 log4j.logger.BlockStateChange=WARN
 log4j.logger.org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator=INFO
-log4j.logger.org.apache.hadoop.security=DEBUG
+log4j.logger.org.apache.hadoop.security=INFO
 log4j.logger.org.apache.hadoop.minikdc=DEBUG
 log4j.logger.org.apache.directory=INFO
 log4j.logger.org.apache.directory.api.ldap=WARN
diff --git a/test/src/test/resources/unit/Basic.xml b/test/src/main/resources/unit/Basic.xml
similarity index 100%
rename from test/src/test/resources/unit/Basic.xml
rename to test/src/main/resources/unit/Basic.xml
diff --git a/test/src/test/resources/unit/Simple.xml b/test/src/main/resources/unit/Simple.xml
similarity index 100%
rename from test/src/test/resources/unit/Simple.xml
rename to test/src/main/resources/unit/Simple.xml
diff --git a/test/src/test/java/org/apache/accumulo/test/AccumuloOutputFormatIT.java b/test/src/test/java/org/apache/accumulo/test/AccumuloOutputFormatIT.java
deleted file mode 100644
index 20556ab..0000000
--- a/test/src/test/java/org/apache/accumulo/test/AccumuloOutputFormatIT.java
+++ /dev/null
@@ -1,124 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.ClientConfiguration;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.ZooKeeperInstance;
-import org.apache.accumulo.core.client.mapred.AccumuloOutputFormat;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.minicluster.MiniAccumuloCluster;
-import org.apache.accumulo.minicluster.MiniAccumuloConfig;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.RecordWriter;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.ExpectedException;
-import org.junit.rules.TemporaryFolder;
-
-/**
- * Prevent regression of ACCUMULO-3709. Exists as a mini test because mock instance doesn't produce this error when dynamically changing the table permissions.
- */
-public class AccumuloOutputFormatIT {
-
-  private static final String TABLE = "abc";
-  private MiniAccumuloCluster accumulo;
-  private String secret = "secret";
-
-  @Rule
-  public TemporaryFolder folder = new TemporaryFolder(new File(System.getProperty("user.dir") + "/target"));
-
-  @Rule
-  public ExpectedException exception = ExpectedException.none();
-
-  @Before
-  public void setUp() throws Exception {
-    folder.create();
-    MiniAccumuloConfig config = new MiniAccumuloConfig(folder.getRoot(), secret);
-    Map<String,String> configMap = new HashMap<>();
-    configMap.put(Property.TSERV_SESSION_MAXIDLE.toString(), "1");
-    config.setSiteConfig(configMap);
-    config.setNumTservers(1);
-    accumulo = new MiniAccumuloCluster(config);
-    accumulo.start();
-  }
-
-  @After
-  public void tearDown() throws Exception {
-    accumulo.stop();
-    folder.delete();
-  }
-
-  @Test
-  public void testMapred() throws Exception {
-    ClientConfiguration clientConfig = accumulo.getClientConfig();
-    ZooKeeperInstance instance = new ZooKeeperInstance(clientConfig);
-    Connector connector = instance.getConnector("root", new PasswordToken(secret));
-    // create a table and put some data in it
-    connector.tableOperations().create(TABLE);
-
-    JobConf job = new JobConf();
-    BatchWriterConfig batchConfig = new BatchWriterConfig();
-    // no flushes!!!!!
-    batchConfig.setMaxLatency(0, TimeUnit.MILLISECONDS);
-    // use a single thread to ensure our update session times out
-    batchConfig.setMaxWriteThreads(1);
-    // set the max memory so that we ensure we don't flush on the write.
-    batchConfig.setMaxMemory(Long.MAX_VALUE);
-    AccumuloOutputFormat outputFormat = new AccumuloOutputFormat();
-    AccumuloOutputFormat.setBatchWriterOptions(job, batchConfig);
-    AccumuloOutputFormat.setZooKeeperInstance(job, clientConfig);
-    AccumuloOutputFormat.setConnectorInfo(job, "root", new PasswordToken(secret));
-    RecordWriter<Text,Mutation> writer = outputFormat.getRecordWriter(null, job, "Test", null);
-
-    try {
-      for (int i = 0; i < 3; i++) {
-        Mutation m = new Mutation(new Text(String.format("%08d", i)));
-        for (int j = 0; j < 3; j++) {
-          m.put(new Text("cf1"), new Text("cq" + j), new Value((i + "_" + j).getBytes(UTF_8)));
-          writer.write(new Text(TABLE), m);
-        }
-      }
-
-    } catch (Exception e) {
-      e.printStackTrace();
-      // we don't want the exception to come from write
-    }
-
-    connector.securityOperations().revokeTablePermission("root", TABLE, TablePermission.WRITE);
-
-    exception.expect(IOException.class);
-    exception.expectMessage("PERMISSION_DENIED");
-    writer.close(null);
-  }
-}
diff --git a/test/src/test/java/org/apache/accumulo/test/ConditionalWriterIT.java b/test/src/test/java/org/apache/accumulo/test/ConditionalWriterIT.java
deleted file mode 100644
index ab6a1dd..0000000
--- a/test/src/test/java/org/apache/accumulo/test/ConditionalWriterIT.java
+++ /dev/null
@@ -1,1483 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.accumulo.test;
-
-import static java.nio.charset.StandardCharsets.UTF_8;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Random;
-import java.util.Set;
-import java.util.SortedSet;
-import java.util.TreeSet;
-import java.util.UUID;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicBoolean;
-
-import org.apache.accumulo.cluster.AccumuloCluster;
-import org.apache.accumulo.cluster.ClusterUser;
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.ClientConfiguration;
-import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
-import org.apache.accumulo.core.client.ConditionalWriter;
-import org.apache.accumulo.core.client.ConditionalWriter.Result;
-import org.apache.accumulo.core.client.ConditionalWriter.Status;
-import org.apache.accumulo.core.client.ConditionalWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.IsolatedScanner;
-import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.RowIterator;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableDeletedException;
-import org.apache.accumulo.core.client.TableExistsException;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.TableOfflineException;
-import org.apache.accumulo.core.client.admin.NewTableConfiguration;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.data.ArrayByteSequence;
-import org.apache.accumulo.core.data.ByteSequence;
-import org.apache.accumulo.core.data.Condition;
-import org.apache.accumulo.core.data.ConditionalMutation;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.iterators.IteratorEnvironment;
-import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
-import org.apache.accumulo.core.iterators.LongCombiner.Type;
-import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
-import org.apache.accumulo.core.iterators.WrappingIterator;
-import org.apache.accumulo.core.iterators.user.SummingCombiner;
-import org.apache.accumulo.core.iterators.user.VersioningIterator;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.security.SystemPermission;
-import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.core.trace.DistributedTrace;
-import org.apache.accumulo.core.trace.Span;
-import org.apache.accumulo.core.trace.Trace;
-import org.apache.accumulo.core.util.FastFormat;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.examples.simple.constraints.AlphaNumKeyConstraint;
-import org.apache.accumulo.harness.AccumuloClusterIT;
-import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
-import org.apache.accumulo.test.functional.BadIterator;
-import org.apache.accumulo.test.functional.SlowIterator;
-import org.apache.accumulo.tracer.TraceDump;
-import org.apache.accumulo.tracer.TraceDump.Printer;
-import org.apache.accumulo.tracer.TraceServer;
-import org.apache.hadoop.io.Text;
-import org.junit.Assert;
-import org.junit.Assume;
-import org.junit.Before;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.collect.Iterables;
-
-/**
- *
- */
-public class ConditionalWriterIT extends AccumuloClusterIT {
-  private static final Logger log = LoggerFactory.getLogger(ConditionalWriterIT.class);
-
-  @Override
-  protected int defaultTimeoutSeconds() {
-    return 60;
-  }
-
-  public static long abs(long l) {
-    l = Math.abs(l); // abs(Long.MIN_VALUE) == Long.MIN_VALUE...
-    if (l < 0)
-      return 0;
-    return l;
-  }
-
-  @Before
-  public void deleteUsers() throws Exception {
-    Connector conn = getConnector();
-    Set<String> users = conn.securityOperations().listLocalUsers();
-    ClusterUser user = getUser(0);
-    if (users.contains(user.getPrincipal())) {
-      conn.securityOperations().dropLocalUser(user.getPrincipal());
-    }
-  }
-
-  @Test
-  public void testBasic() throws Exception {
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName);
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig());
-
-    // mutation conditional on column tx:seq not existing
-    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq"));
-    cm0.put("name", "last", "doe");
-    cm0.put("name", "first", "john");
-    cm0.put("tx", "seq", "1");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
-    Assert.assertEquals(Status.REJECTED, cw.write(cm0).getStatus());
-
-    // mutation conditional on column tx:seq being 1
-    ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("1"));
-    cm1.put("name", "last", "Doe");
-    cm1.put("tx", "seq", "2");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
-
-    // test condition where value differs
-    ConditionalMutation cm2 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("1"));
-    cm2.put("name", "last", "DOE");
-    cm2.put("tx", "seq", "2");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm2).getStatus());
-
-    // test condition where column does not exist
-    ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("txtypo", "seq").setValue("1"));
-    cm3.put("name", "last", "deo");
-    cm3.put("tx", "seq", "2");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm3).getStatus());
-
-    // test two conditions, where one should fail
-    ConditionalMutation cm4 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("2"), new Condition("name", "last").setValue("doe"));
-    cm4.put("name", "last", "deo");
-    cm4.put("tx", "seq", "3");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm4).getStatus());
-
-    // test two conditions, where one should fail
-    ConditionalMutation cm5 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("1"), new Condition("name", "last").setValue("Doe"));
-    cm5.put("name", "last", "deo");
-    cm5.put("tx", "seq", "3");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm5).getStatus());
-
-    // ensure rejected mutations did not write
-    Scanner scanner = conn.createScanner(tableName, Authorizations.EMPTY);
-    scanner.fetchColumn(new Text("name"), new Text("last"));
-    scanner.setRange(new Range("99006"));
-    Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("Doe", entry.getValue().toString());
-
-    // test w/ two conditions that are met
-    ConditionalMutation cm6 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("2"), new Condition("name", "last").setValue("Doe"));
-    cm6.put("name", "last", "DOE");
-    cm6.put("tx", "seq", "3");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm6).getStatus());
-
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("DOE", entry.getValue().toString());
-
-    // test a conditional mutation that deletes
-    ConditionalMutation cm7 = new ConditionalMutation("99006", new Condition("tx", "seq").setValue("3"));
-    cm7.putDelete("name", "last");
-    cm7.putDelete("name", "first");
-    cm7.putDelete("tx", "seq");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm7).getStatus());
-
-    Assert.assertFalse("Did not expect to find any results", scanner.iterator().hasNext());
-
-    // add the row back
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
-    Assert.assertEquals(Status.REJECTED, cw.write(cm0).getStatus());
-
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("doe", entry.getValue().toString());
-  }
-
-  @Test
-  public void testFields() throws Exception {
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    String user = null;
-    ClientConfiguration clientConf = cluster.getClientConfig();
-    final boolean saslEnabled = clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false);
-
-    ClusterUser user1 = getUser(0);
-    user = user1.getPrincipal();
-    if (saslEnabled) {
-      // The token is pointless for kerberos
-      conn.securityOperations().createLocalUser(user, null);
-    } else {
-      conn.securityOperations().createLocalUser(user, new PasswordToken(user1.getPassword()));
-    }
-
-    Authorizations auths = new Authorizations("A", "B");
-
-    conn.securityOperations().changeUserAuthorizations(user, auths);
-    conn.securityOperations().grantSystemPermission(user, SystemPermission.CREATE_TABLE);
-
-    conn = conn.getInstance().getConnector(user, user1.getToken());
-
-    conn.tableOperations().create(tableName);
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(auths));
-
-    ColumnVisibility cva = new ColumnVisibility("A");
-    ColumnVisibility cvb = new ColumnVisibility("B");
-
-    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva));
-    cm0.put("name", "last", cva, "doe");
-    cm0.put("name", "first", cva, "john");
-    cm0.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
-
-    Scanner scanner = conn.createScanner(tableName, auths);
-    scanner.setRange(new Range("99006"));
-    // TODO verify all columns
-    scanner.fetchColumn(new Text("tx"), new Text("seq"));
-    Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("1", entry.getValue().toString());
-    long ts = entry.getKey().getTimestamp();
-
-    // test wrong colf
-    ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("txA", "seq").setVisibility(cva).setValue("1"));
-    cm1.put("name", "last", cva, "Doe");
-    cm1.put("name", "first", cva, "John");
-    cm1.put("tx", "seq", cva, "2");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm1).getStatus());
-
-    // test wrong colq
-    ConditionalMutation cm2 = new ConditionalMutation("99006", new Condition("tx", "seqA").setVisibility(cva).setValue("1"));
-    cm2.put("name", "last", cva, "Doe");
-    cm2.put("name", "first", cva, "John");
-    cm2.put("tx", "seq", cva, "2");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm2).getStatus());
-
-    // test wrong colv
-    ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"));
-    cm3.put("name", "last", cva, "Doe");
-    cm3.put("name", "first", cva, "John");
-    cm3.put("tx", "seq", cva, "2");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm3).getStatus());
-
-    // test wrong timestamp
-    ConditionalMutation cm4 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva).setTimestamp(ts + 1).setValue("1"));
-    cm4.put("name", "last", cva, "Doe");
-    cm4.put("name", "first", cva, "John");
-    cm4.put("tx", "seq", cva, "2");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm4).getStatus());
-
-    // test wrong timestamp
-    ConditionalMutation cm5 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva).setTimestamp(ts - 1).setValue("1"));
-    cm5.put("name", "last", cva, "Doe");
-    cm5.put("name", "first", cva, "John");
-    cm5.put("tx", "seq", cva, "2");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm5).getStatus());
-
-    // ensure no updates were made
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("1", entry.getValue().toString());
-
-    // set all columns correctly
-    ConditionalMutation cm6 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cva).setTimestamp(ts).setValue("1"));
-    cm6.put("name", "last", cva, "Doe");
-    cm6.put("name", "first", cva, "John");
-    cm6.put("tx", "seq", cva, "2");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm6).getStatus());
-
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("2", entry.getValue().toString());
-
-  }
-
-  @Test
-  public void testBadColVis() throws Exception {
-    // test when a user sets a col vis in a condition that can never be seen
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName);
-
-    Authorizations auths = new Authorizations("A", "B");
-
-    conn.securityOperations().changeUserAuthorizations(getAdminPrincipal(), auths);
-
-    Authorizations filteredAuths = new Authorizations("A");
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(filteredAuths));
-
-    ColumnVisibility cva = new ColumnVisibility("A");
-    ColumnVisibility cvb = new ColumnVisibility("B");
-    ColumnVisibility cvc = new ColumnVisibility("C");
-
-    // User has authorization, but didn't include it in the writer
-    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb));
-    cm0.put("name", "last", cva, "doe");
-    cm0.put("name", "first", cva, "john");
-    cm0.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm0).getStatus());
-
-    ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"));
-    cm1.put("name", "last", cva, "doe");
-    cm1.put("name", "first", cva, "john");
-    cm1.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm1).getStatus());
-
-    // User does not have the authorization
-    ConditionalMutation cm2 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvc));
-    cm2.put("name", "last", cva, "doe");
-    cm2.put("name", "first", cva, "john");
-    cm2.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm2).getStatus());
-
-    ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvc).setValue("1"));
-    cm3.put("name", "last", cva, "doe");
-    cm3.put("name", "first", cva, "john");
-    cm3.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm3).getStatus());
-
-    // if any visibility is bad, good visibilities don't override
-    ConditionalMutation cm4 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb), new Condition("tx", "seq").setVisibility(cva));
-
-    cm4.put("name", "last", cva, "doe");
-    cm4.put("name", "first", cva, "john");
-    cm4.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm4).getStatus());
-
-    ConditionalMutation cm5 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"), new Condition("tx", "seq")
-        .setVisibility(cva).setValue("1"));
-    cm5.put("name", "last", cva, "doe");
-    cm5.put("name", "first", cva, "john");
-    cm5.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm5).getStatus());
-
-    ConditionalMutation cm6 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb).setValue("1"),
-        new Condition("tx", "seq").setVisibility(cva));
-    cm6.put("name", "last", cva, "doe");
-    cm6.put("name", "first", cva, "john");
-    cm6.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm6).getStatus());
-
-    ConditionalMutation cm7 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb), new Condition("tx", "seq").setVisibility(cva)
-        .setValue("1"));
-    cm7.put("name", "last", cva, "doe");
-    cm7.put("name", "first", cva, "john");
-    cm7.put("tx", "seq", cva, "1");
-    Assert.assertEquals(Status.INVISIBLE_VISIBILITY, cw.write(cm7).getStatus());
-
-    cw.close();
-
-    // test passing auths that exceed users configured auths
-
-    Authorizations exceedingAuths = new Authorizations("A", "B", "D");
-    ConditionalWriter cw2 = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(exceedingAuths));
-
-    ConditionalMutation cm8 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvb), new Condition("tx", "seq").setVisibility(cva)
-        .setValue("1"));
-    cm8.put("name", "last", cva, "doe");
-    cm8.put("name", "first", cva, "john");
-    cm8.put("tx", "seq", cva, "1");
-
-    try {
-      Status status = cw2.write(cm8).getStatus();
-      Assert.fail("Writing mutation with Authorizations the user doesn't have should fail. Got status: " + status);
-    } catch (AccumuloSecurityException ase) {
-      // expected, check specific failure?
-    } finally {
-      cw2.close();
-    }
-  }
-
-  @Test
-  public void testConstraints() throws Exception {
-    // ensure constraint violations are properly reported
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName);
-    conn.tableOperations().addConstraint(tableName, AlphaNumKeyConstraint.class.getName());
-    conn.tableOperations().clone(tableName, tableName + "_clone", true, new HashMap<String,String>(), new HashSet<String>());
-
-    Scanner scanner = conn.createScanner(tableName + "_clone", new Authorizations());
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName + "_clone", new ConditionalWriterConfig());
-
-    ConditionalMutation cm0 = new ConditionalMutation("99006+", new Condition("tx", "seq"));
-    cm0.put("tx", "seq", "1");
-
-    Assert.assertEquals(Status.VIOLATED, cw.write(cm0).getStatus());
-    Assert.assertFalse("Should find no results in the table if mutation result was violated", scanner.iterator().hasNext());
-
-    ConditionalMutation cm1 = new ConditionalMutation("99006", new Condition("tx", "seq"));
-    cm1.put("tx", "seq", "1");
-
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
-    Assert.assertTrue("Accepted result should be returned when reading table", scanner.iterator().hasNext());
-
-    cw.close();
-  }
-
-  @Test
-  public void testIterators() throws Exception {
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName, new NewTableConfiguration().withoutDefaultIterators());
-
-    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
-
-    Mutation m = new Mutation("ACCUMULO-1000");
-    m.put("count", "comments", "1");
-    bw.addMutation(m);
-    bw.addMutation(m);
-    bw.addMutation(m);
-
-    m = new Mutation("ACCUMULO-1001");
-    m.put("count2", "comments", "1");
-    bw.addMutation(m);
-    bw.addMutation(m);
-
-    m = new Mutation("ACCUMULO-1002");
-    m.put("count2", "comments", "1");
-    bw.addMutation(m);
-    bw.addMutation(m);
-
-    bw.close();
-
-    IteratorSetting iterConfig = new IteratorSetting(10, SummingCombiner.class);
-    SummingCombiner.setEncodingType(iterConfig, Type.STRING);
-    SummingCombiner.setColumns(iterConfig, Collections.singletonList(new IteratorSetting.Column("count")));
-
-    IteratorSetting iterConfig2 = new IteratorSetting(10, SummingCombiner.class);
-    SummingCombiner.setEncodingType(iterConfig2, Type.STRING);
-    SummingCombiner.setColumns(iterConfig2, Collections.singletonList(new IteratorSetting.Column("count2", "comments")));
-
-    IteratorSetting iterConfig3 = new IteratorSetting(5, VersioningIterator.class);
-    VersioningIterator.setMaxVersions(iterConfig3, 1);
-
-    Scanner scanner = conn.createScanner(tableName, new Authorizations());
-    scanner.addScanIterator(iterConfig);
-    scanner.setRange(new Range("ACCUMULO-1000"));
-    scanner.fetchColumn(new Text("count"), new Text("comments"));
-
-    Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("3", entry.getValue().toString());
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig());
-
-    ConditionalMutation cm0 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setValue("3"));
-    cm0.put("count", "comments", "1");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm0).getStatus());
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("3", entry.getValue().toString());
-
-    ConditionalMutation cm1 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(iterConfig).setValue("3"));
-    cm1.put("count", "comments", "1");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("4", entry.getValue().toString());
-
-    ConditionalMutation cm2 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setValue("4"));
-    cm2.put("count", "comments", "1");
-    Assert.assertEquals(Status.REJECTED, cw.write(cm2).getStatus());
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("4", entry.getValue().toString());
-
-    // run test with multiple iterators passed in same batch and condition with two iterators
-
-    ConditionalMutation cm3 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(iterConfig).setValue("4"));
-    cm3.put("count", "comments", "1");
-
-    ConditionalMutation cm4 = new ConditionalMutation("ACCUMULO-1001", new Condition("count2", "comments").setIterators(iterConfig2).setValue("2"));
-    cm4.put("count2", "comments", "1");
-
-    ConditionalMutation cm5 = new ConditionalMutation("ACCUMULO-1002", new Condition("count2", "comments").setIterators(iterConfig2, iterConfig3).setValue("2"));
-    cm5.put("count2", "comments", "1");
-
-    Iterator<Result> results = cw.write(Arrays.asList(cm3, cm4, cm5).iterator());
-    Map<String,Status> actual = new HashMap<String,Status>();
-
-    while (results.hasNext()) {
-      Result result = results.next();
-      String k = new String(result.getMutation().getRow());
-      Assert.assertFalse("Did not expect to see multiple results for the row: " + k, actual.containsKey(k));
-      actual.put(k, result.getStatus());
-    }
-
-    Map<String,Status> expected = new HashMap<String,Status>();
-    expected.put("ACCUMULO-1000", Status.ACCEPTED);
-    expected.put("ACCUMULO-1001", Status.ACCEPTED);
-    expected.put("ACCUMULO-1002", Status.REJECTED);
-
-    Assert.assertEquals(expected, actual);
-
-    cw.close();
-  }
-
-  public static class AddingIterator extends WrappingIterator {
-    long amount = 0;
-
-    @Override
-    public Value getTopValue() {
-      Value val = super.getTopValue();
-      long l = Long.parseLong(val.toString());
-      String newVal = (l + amount) + "";
-      return new Value(newVal.getBytes(UTF_8));
-    }
-
-    @Override
-    public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
-      this.setSource(source);
-      amount = Long.parseLong(options.get("amount"));
-    }
-  }
-
-  public static class MultiplyingIterator extends WrappingIterator {
-    long amount = 0;
-
-    @Override
-    public Value getTopValue() {
-      Value val = super.getTopValue();
-      long l = Long.parseLong(val.toString());
-      String newVal = l * amount + "";
-      return new Value(newVal.getBytes(UTF_8));
-    }
-
-    @Override
-    public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
-      this.setSource(source);
-      amount = Long.parseLong(options.get("amount"));
-    }
-  }
-
-  @Test
-  public void testTableAndConditionIterators() throws Exception {
-
-    // test w/ table that has iterators configured
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    IteratorSetting aiConfig1 = new IteratorSetting(30, "AI1", AddingIterator.class);
-    aiConfig1.addOption("amount", "2");
-    IteratorSetting aiConfig2 = new IteratorSetting(35, "MI1", MultiplyingIterator.class);
-    aiConfig2.addOption("amount", "3");
-    IteratorSetting aiConfig3 = new IteratorSetting(40, "AI2", AddingIterator.class);
-    aiConfig3.addOption("amount", "5");
-
-    conn.tableOperations().create(tableName);
-
-    BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
-
-    Mutation m = new Mutation("ACCUMULO-1000");
-    m.put("count", "comments", "6");
-    bw.addMutation(m);
-
-    m = new Mutation("ACCUMULO-1001");
-    m.put("count", "comments", "7");
-    bw.addMutation(m);
-
-    m = new Mutation("ACCUMULO-1002");
-    m.put("count", "comments", "8");
-    bw.addMutation(m);
-
-    bw.close();
-
-    conn.tableOperations().attachIterator(tableName, aiConfig1, EnumSet.of(IteratorScope.scan));
-    conn.tableOperations().offline(tableName, true);
-    conn.tableOperations().online(tableName, true);
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig());
-
-    ConditionalMutation cm6 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setValue("8"));
-    cm6.put("count", "comments", "7");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm6).getStatus());
-
-    Scanner scanner = conn.createScanner(tableName, new Authorizations());
-    scanner.setRange(new Range("ACCUMULO-1000"));
-    scanner.fetchColumn(new Text("count"), new Text("comments"));
-
-    Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("9", entry.getValue().toString());
-
-    ConditionalMutation cm7 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(aiConfig2).setValue("27"));
-    cm7.put("count", "comments", "8");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm7).getStatus());
-
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("10", entry.getValue().toString());
-
-    ConditionalMutation cm8 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(aiConfig2, aiConfig3).setValue("35"));
-    cm8.put("count", "comments", "9");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm8).getStatus());
-
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("11", entry.getValue().toString());
-
-    ConditionalMutation cm3 = new ConditionalMutation("ACCUMULO-1000", new Condition("count", "comments").setIterators(aiConfig2).setValue("33"));
-    cm3.put("count", "comments", "3");
-
-    ConditionalMutation cm4 = new ConditionalMutation("ACCUMULO-1001", new Condition("count", "comments").setIterators(aiConfig3).setValue("14"));
-    cm4.put("count", "comments", "3");
-
-    ConditionalMutation cm5 = new ConditionalMutation("ACCUMULO-1002", new Condition("count", "comments").setIterators(aiConfig3).setValue("10"));
-    cm5.put("count", "comments", "3");
-
-    Iterator<Result> results = cw.write(Arrays.asList(cm3, cm4, cm5).iterator());
-    Map<String,Status> actual = new HashMap<String,Status>();
-
-    while (results.hasNext()) {
-      Result result = results.next();
-      String k = new String(result.getMutation().getRow());
-      Assert.assertFalse("Did not expect to see multiple results for the row: " + k, actual.containsKey(k));
-      actual.put(k, result.getStatus());
-    }
-
-    cw.close();
-
-    Map<String,Status> expected = new HashMap<String,Status>();
-    expected.put("ACCUMULO-1000", Status.ACCEPTED);
-    expected.put("ACCUMULO-1001", Status.ACCEPTED);
-    expected.put("ACCUMULO-1002", Status.REJECTED);
-
-    Assert.assertEquals(expected, actual);
-
-    cw.close();
-  }
-
-  @Test
-  public void testBatch() throws Exception {
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName);
-
-    conn.securityOperations().changeUserAuthorizations(getAdminPrincipal(), new Authorizations("A", "B"));
-
-    ColumnVisibility cvab = new ColumnVisibility("A|B");
-
-    ArrayList<ConditionalMutation> mutations = new ArrayList<ConditionalMutation>();
-
-    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvab));
-    cm0.put("name", "last", cvab, "doe");
-    cm0.put("name", "first", cvab, "john");
-    cm0.put("tx", "seq", cvab, "1");
-    mutations.add(cm0);
-
-    ConditionalMutation cm1 = new ConditionalMutation("59056", new Condition("tx", "seq").setVisibility(cvab));
-    cm1.put("name", "last", cvab, "doe");
-    cm1.put("name", "first", cvab, "jane");
-    cm1.put("tx", "seq", cvab, "1");
-    mutations.add(cm1);
-
-    ConditionalMutation cm2 = new ConditionalMutation("19059", new Condition("tx", "seq").setVisibility(cvab));
-    cm2.put("name", "last", cvab, "doe");
-    cm2.put("name", "first", cvab, "jack");
-    cm2.put("tx", "seq", cvab, "1");
-    mutations.add(cm2);
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(new Authorizations("A")));
-    Iterator<Result> results = cw.write(mutations.iterator());
-    int count = 0;
-    while (results.hasNext()) {
-      Result result = results.next();
-      Assert.assertEquals(Status.ACCEPTED, result.getStatus());
-      count++;
-    }
-
-    Assert.assertEquals(3, count);
-
-    Scanner scanner = conn.createScanner(tableName, new Authorizations("A"));
-    scanner.fetchColumn(new Text("tx"), new Text("seq"));
-
-    for (String row : new String[] {"99006", "59056", "19059"}) {
-      scanner.setRange(new Range(row));
-      Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-      Assert.assertEquals("1", entry.getValue().toString());
-    }
-
-    TreeSet<Text> splits = new TreeSet<Text>();
-    splits.add(new Text("7"));
-    splits.add(new Text("3"));
-    conn.tableOperations().addSplits(tableName, splits);
-
-    mutations.clear();
-
-    ConditionalMutation cm3 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvab).setValue("1"));
-    cm3.put("name", "last", cvab, "Doe");
-    cm3.put("tx", "seq", cvab, "2");
-    mutations.add(cm3);
-
-    ConditionalMutation cm4 = new ConditionalMutation("59056", new Condition("tx", "seq").setVisibility(cvab));
-    cm4.put("name", "last", cvab, "Doe");
-    cm4.put("tx", "seq", cvab, "1");
-    mutations.add(cm4);
-
-    ConditionalMutation cm5 = new ConditionalMutation("19059", new Condition("tx", "seq").setVisibility(cvab).setValue("2"));
-    cm5.put("name", "last", cvab, "Doe");
-    cm5.put("tx", "seq", cvab, "3");
-    mutations.add(cm5);
-
-    results = cw.write(mutations.iterator());
-    int accepted = 0;
-    int rejected = 0;
-    while (results.hasNext()) {
-      Result result = results.next();
-      if (new String(result.getMutation().getRow()).equals("99006")) {
-        Assert.assertEquals(Status.ACCEPTED, result.getStatus());
-        accepted++;
-      } else {
-        Assert.assertEquals(Status.REJECTED, result.getStatus());
-        rejected++;
-      }
-    }
-
-    Assert.assertEquals("Expected only one accepted conditional mutation", 1, accepted);
-    Assert.assertEquals("Expected two rejected conditional mutations", 2, rejected);
-
-    for (String row : new String[] {"59056", "19059"}) {
-      scanner.setRange(new Range(row));
-      Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-      Assert.assertEquals("1", entry.getValue().toString());
-    }
-
-    scanner.setRange(new Range("99006"));
-    Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("2", entry.getValue().toString());
-
-    scanner.clearColumns();
-    scanner.fetchColumn(new Text("name"), new Text("last"));
-    entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("Doe", entry.getValue().toString());
-
-    cw.close();
-  }
-
-  @Test
-  public void testBigBatch() throws Exception {
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName);
-    conn.tableOperations().addSplits(tableName, nss("2", "4", "6"));
-
-    UtilWaitThread.sleep(2000);
-
-    int num = 100;
-
-    ArrayList<byte[]> rows = new ArrayList<byte[]>(num);
-    ArrayList<ConditionalMutation> cml = new ArrayList<ConditionalMutation>(num);
-
-    Random r = new Random();
-    byte[] e = new byte[0];
-
-    for (int i = 0; i < num; i++) {
-      rows.add(FastFormat.toZeroPaddedString(abs(r.nextLong()), 16, 16, e));
-    }
-
-    for (int i = 0; i < num; i++) {
-      ConditionalMutation cm = new ConditionalMutation(rows.get(i), new Condition("meta", "seq"));
-
-      cm.put("meta", "seq", "1");
-      cm.put("meta", "tx", UUID.randomUUID().toString());
-
-      cml.add(cm);
-    }
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig());
-
-    Iterator<Result> results = cw.write(cml.iterator());
-
-    int count = 0;
-
-    // TODO verify that each row was returned
-    while (results.hasNext()) {
-      Result result = results.next();
-      Assert.assertEquals(Status.ACCEPTED, result.getStatus());
-      count++;
-    }
-
-    Assert.assertEquals("Did not receive the expected number of results", num, count);
-
-    ArrayList<ConditionalMutation> cml2 = new ArrayList<ConditionalMutation>(num);
-
-    for (int i = 0; i < num; i++) {
-      ConditionalMutation cm = new ConditionalMutation(rows.get(i), new Condition("meta", "seq").setValue("1"));
-
-      cm.put("meta", "seq", "2");
-      cm.put("meta", "tx", UUID.randomUUID().toString());
-
-      cml2.add(cm);
-    }
-
-    count = 0;
-
-    results = cw.write(cml2.iterator());
-
-    while (results.hasNext()) {
-      Result result = results.next();
-      Assert.assertEquals(Status.ACCEPTED, result.getStatus());
-      count++;
-    }
-
-    Assert.assertEquals("Did not receive the expected number of results", num, count);
-
-    cw.close();
-  }
-
-  @Test
-  public void testBatchErrors() throws Exception {
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName);
-    conn.tableOperations().addConstraint(tableName, AlphaNumKeyConstraint.class.getName());
-    conn.tableOperations().clone(tableName, tableName + "_clone", true, new HashMap<String,String>(), new HashSet<String>());
-
-    conn.securityOperations().changeUserAuthorizations(getAdminPrincipal(), new Authorizations("A", "B"));
-
-    ColumnVisibility cvaob = new ColumnVisibility("A|B");
-    ColumnVisibility cvaab = new ColumnVisibility("A&B");
-
-    switch ((new Random()).nextInt(3)) {
-      case 1:
-        conn.tableOperations().addSplits(tableName, nss("6"));
-        break;
-      case 2:
-        conn.tableOperations().addSplits(tableName, nss("2", "95"));
-        break;
-    }
-
-    ArrayList<ConditionalMutation> mutations = new ArrayList<ConditionalMutation>();
-
-    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq").setVisibility(cvaob));
-    cm0.put("name+", "last", cvaob, "doe");
-    cm0.put("name", "first", cvaob, "john");
-    cm0.put("tx", "seq", cvaob, "1");
-    mutations.add(cm0);
-
-    ConditionalMutation cm1 = new ConditionalMutation("59056", new Condition("tx", "seq").setVisibility(cvaab));
-    cm1.put("name", "last", cvaab, "doe");
-    cm1.put("name", "first", cvaab, "jane");
-    cm1.put("tx", "seq", cvaab, "1");
-    mutations.add(cm1);
-
-    ConditionalMutation cm2 = new ConditionalMutation("19059", new Condition("tx", "seq").setVisibility(cvaob));
-    cm2.put("name", "last", cvaob, "doe");
-    cm2.put("name", "first", cvaob, "jack");
-    cm2.put("tx", "seq", cvaob, "1");
-    mutations.add(cm2);
-
-    ConditionalMutation cm3 = new ConditionalMutation("90909", new Condition("tx", "seq").setVisibility(cvaob).setValue("1"));
-    cm3.put("name", "last", cvaob, "doe");
-    cm3.put("name", "first", cvaob, "john");
-    cm3.put("tx", "seq", cvaob, "2");
-    mutations.add(cm3);
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig().setAuthorizations(new Authorizations("A")));
-    Iterator<Result> results = cw.write(mutations.iterator());
-    HashSet<String> rows = new HashSet<String>();
-    while (results.hasNext()) {
-      Result result = results.next();
-      String row = new String(result.getMutation().getRow());
-      if (row.equals("19059")) {
-        Assert.assertEquals(Status.ACCEPTED, result.getStatus());
-      } else if (row.equals("59056")) {
-        Assert.assertEquals(Status.INVISIBLE_VISIBILITY, result.getStatus());
-      } else if (row.equals("99006")) {
-        Assert.assertEquals(Status.VIOLATED, result.getStatus());
-      } else if (row.equals("90909")) {
-        Assert.assertEquals(Status.REJECTED, result.getStatus());
-      }
-      rows.add(row);
-    }
-
-    Assert.assertEquals(4, rows.size());
-
-    Scanner scanner = conn.createScanner(tableName, new Authorizations("A"));
-    scanner.fetchColumn(new Text("tx"), new Text("seq"));
-
-    Entry<Key,Value> entry = Iterables.getOnlyElement(scanner);
-    Assert.assertEquals("1", entry.getValue().toString());
-
-    cw.close();
-  }
-
-  @Test
-  public void testSameRow() throws Exception {
-    // test multiple mutations for same row in same batch
-
-    Connector conn = getConnector();
-    String tableName = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(tableName);
-
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig());
-
-    ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
-    cm1.put("tx", "seq", "1");
-    cm1.put("data", "x", "a");
-
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm1).getStatus());
-
-    ConditionalMutation cm2 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
-    cm2.put("tx", "seq", "2");
-    cm2.put("data", "x", "b");
-
-    ConditionalMutation cm3 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
-    cm3.put("tx", "seq", "2");
-    cm3.put("data", "x", "c");
-
-    ConditionalMutation cm4 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
-    cm4.put("tx", "seq", "2");
-    cm4.put("data", "x", "d");
-
-    Iterator<Result> results = cw.write(Arrays.asList(cm2, cm3, cm4).iterator());
-
-    int accepted = 0;
-    int rejected = 0;
-    int total = 0;
-
-    while (results.hasNext()) {
-      Status status = results.next().getStatus();
-      if (status == Status.ACCEPTED)
-        accepted++;
-      if (status == Status.REJECTED)
-        rejected++;
-      total++;
-    }
-
-    Assert.assertEquals("Expected one accepted result", 1, accepted);
-    Assert.assertEquals("Expected two rejected results", 2, rejected);
-    Assert.assertEquals("Expected three total results", 3, total);
-
-    cw.close();
-  }
-
-  private static class Stats {
-
-    ByteSequence row = null;
-    int seq;
-    long sum;
-    int data[] = new int[10];
-
-    public Stats(Iterator<Entry<Key,Value>> iterator) {
-      while (iterator.hasNext()) {
-        Entry<Key,Value> entry = iterator.next();
-
-        if (row == null)
-          row = entry.getKey().getRowData();
-
-        String cf = entry.getKey().getColumnFamilyData().toString();
-        String cq = entry.getKey().getColumnQualifierData().toString();
-
-        if (cf.equals("data")) {
-          data[Integer.parseInt(cq)] = Integer.parseInt(entry.getValue().toString());
-        } else if (cf.equals("meta")) {
-          if (cq.equals("sum")) {
-            sum = Long.parseLong(entry.getValue().toString());
-          } else if (cq.equals("seq")) {
-            seq = Integer.parseInt(entry.getValue().toString());
-          }
-        }
-      }
-
-      long sum2 = 0;
-
-      for (int datum : data) {
-        sum2 += datum;
-      }
-
-      Assert.assertEquals(sum2, sum);
-    }
-
-    public Stats(ByteSequence row) {
-      this.row = row;
-      for (int i = 0; i < data.length; i++) {
-        this.data[i] = 0;
-      }
-      this.seq = -1;
-      this.sum = 0;
-    }
-
-    void set(int index, int value) {
-      sum -= data[index];
-      sum += value;
-      data[index] = value;
-    }
-
-    ConditionalMutation toMutation() {
-      Condition cond = new Condition("meta", "seq");
-      if (seq >= 0)
-        cond.setValue(seq + "");
-
-      ConditionalMutation cm = new ConditionalMutation(row, cond);
-
-      cm.put("meta", "seq", (seq + 1) + "");
-      cm.put("meta", "sum", (sum) + "");
-
-      for (int i = 0; i < data.length; i++) {
-        cm.put("data", i + "", data[i] + "");
-      }
-
-      return cm;
-    }
-
-    @Override
-    public String toString() {
-      return row + " " + seq + " " + sum;
-    }
-  }
-
-  private static class MutatorTask implements Runnable {
-    String table;
-    ArrayList<ByteSequence> rows;
-    ConditionalWriter cw;
-    Connector conn;
-    AtomicBoolean failed;
-
-    public MutatorTask(String table, Connector conn, ArrayList<ByteSequence> rows, ConditionalWriter cw, AtomicBoolean failed) {
-      this.table = table;
-      this.rows = rows;
-      this.conn = conn;
-      this.cw = cw;
-      this.failed = failed;
-    }
-
-    @Override
-    public void run() {
-      try {
-        Random rand = new Random();
-
-        Scanner scanner = new IsolatedScanner(conn.createScanner(table, Authorizations.EMPTY));
-
-        for (int i = 0; i < 20; i++) {
-          int numRows = rand.nextInt(10) + 1;
-
-          ArrayList<ByteSequence> changes = new ArrayList<ByteSequence>(numRows);
-          ArrayList<ConditionalMutation> mutations = new ArrayList<ConditionalMutation>();
-
-          for (int j = 0; j < numRows; j++)
-            changes.add(rows.get(rand.nextInt(rows.size())));
-
-          for (ByteSequence row : changes) {
-            scanner.setRange(new Range(row.toString()));
-            Stats stats = new Stats(scanner.iterator());
-            stats.set(rand.nextInt(10), rand.nextInt(Integer.MAX_VALUE));
-            mutations.add(stats.toMutation());
-          }
-
-          ArrayList<ByteSequence> changed = new ArrayList<ByteSequence>(numRows);
-          Iterator<Result> results = cw.write(mutations.iterator());
-          while (results.hasNext()) {
-            Result result = results.next();
-            changed.add(new ArrayByteSequence(result.getMutation().getRow()));
-          }
-
-          Collections.sort(changes);
-          Collections.sort(changed);
-
-          Assert.assertEquals(changes, changed);
-
-        }
-
-      } catch (Exception e) {
-        log.error("{}", e.getMessage(), e);
-        failed.set(true);
-      }
-    }
-  }
-
-  @Test
-  public void testThreads() throws Exception {
-    // test multiple threads using a single conditional writer
-
-    String table = getUniqueNames(1)[0];
-    Connector conn = getConnector();
-
-    conn.tableOperations().create(table);
-
-    Random rand = new Random();
-
-    switch (rand.nextInt(3)) {
-      case 1:
-        conn.tableOperations().addSplits(table, nss("4"));
-        break;
-      case 2:
-        conn.tableOperations().addSplits(table, nss("3", "5"));
-        break;
-    }
-
-    ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig());
-
-    ArrayList<ByteSequence> rows = new ArrayList<ByteSequence>();
-
-    for (int i = 0; i < 1000; i++) {
-      rows.add(new ArrayByteSequence(FastFormat.toZeroPaddedString(abs(rand.nextLong()), 16, 16, new byte[0])));
-    }
-
-    ArrayList<ConditionalMutation> mutations = new ArrayList<ConditionalMutation>();
-
-    for (ByteSequence row : rows)
-      mutations.add(new Stats(row).toMutation());
-
-    ArrayList<ByteSequence> rows2 = new ArrayList<ByteSequence>();
-    Iterator<Result> results = cw.write(mutations.iterator());
-    while (results.hasNext()) {
-      Result result = results.next();
-      Assert.assertEquals(Status.ACCEPTED, result.getStatus());
-      rows2.add(new ArrayByteSequence(result.getMutation().getRow()));
-    }
-
-    Collections.sort(rows);
-    Collections.sort(rows2);
-
-    Assert.assertEquals(rows, rows2);
-
-    AtomicBoolean failed = new AtomicBoolean(false);
-
-    ExecutorService tp = Executors.newFixedThreadPool(5);
-    for (int i = 0; i < 5; i++) {
-      tp.submit(new MutatorTask(table, conn, rows, cw, failed));
-    }
-
-    tp.shutdown();
-
-    while (!tp.isTerminated()) {
-      tp.awaitTermination(1, TimeUnit.MINUTES);
-    }
-
-    Assert.assertFalse("A MutatorTask failed with an exception", failed.get());
-
-    Scanner scanner = conn.createScanner(table, Authorizations.EMPTY);
-
-    RowIterator rowIter = new RowIterator(scanner);
-
-    while (rowIter.hasNext()) {
-      Iterator<Entry<Key,Value>> row = rowIter.next();
-      new Stats(row);
-    }
-  }
-
-  private SortedSet<Text> nss(String... splits) {
-    TreeSet<Text> ret = new TreeSet<Text>();
-    for (String split : splits)
-      ret.add(new Text(split));
-
-    return ret;
-  }
-
-  @Test
-  public void testSecurity() throws Exception {
-    // test against tables the user does not have read and/or write permissions for
-    Connector conn = getConnector();
-    String user = null;
-    ClientConfiguration clientConf = cluster.getClientConfig();
-    final boolean saslEnabled = clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false);
-
-    // Create a new user
-    ClusterUser user1 = getUser(0);
-    user = user1.getPrincipal();
-    if (saslEnabled) {
-      conn.securityOperations().createLocalUser(user, null);
-    } else {
-      conn.securityOperations().createLocalUser(user, new PasswordToken(user1.getPassword()));
-    }
-
-    String[] tables = getUniqueNames(3);
-    String table1 = tables[0], table2 = tables[1], table3 = tables[2];
-
-    // Create three tables
-    conn.tableOperations().create(table1);
-    conn.tableOperations().create(table2);
-    conn.tableOperations().create(table3);
-
-    // Grant R on table1, W on table2, R/W on table3
-    conn.securityOperations().grantTablePermission(user, table1, TablePermission.READ);
-    conn.securityOperations().grantTablePermission(user, table2, TablePermission.WRITE);
-    conn.securityOperations().grantTablePermission(user, table3, TablePermission.READ);
-    conn.securityOperations().grantTablePermission(user, table3, TablePermission.WRITE);
-
-    // Login as the user
-    Connector conn2 = conn.getInstance().getConnector(user, user1.getToken());
-
-    ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
-    cm1.put("tx", "seq", "1");
-    cm1.put("data", "x", "a");
-
-    ConditionalWriter cw1 = conn2.createConditionalWriter(table1, new ConditionalWriterConfig());
-    ConditionalWriter cw2 = conn2.createConditionalWriter(table2, new ConditionalWriterConfig());
-    ConditionalWriter cw3 = conn2.createConditionalWriter(table3, new ConditionalWriterConfig());
-
-    // Should be able to conditional-update a table we have R/W on
-    Assert.assertEquals(Status.ACCEPTED, cw3.write(cm1).getStatus());
-
-    // Conditional-update to a table we only have read on should fail
-    try {
-      Status status = cw1.write(cm1).getStatus();
-      Assert.fail("Expected exception writing conditional mutation to table the user doesn't have write access to, Got status: " + status);
-    } catch (AccumuloSecurityException ase) {
-
-    }
-
-    // Conditional-update to a table we only have write on should fail
-    try {
-      Status status = cw2.write(cm1).getStatus();
-      Assert.fail("Expected exception writing conditional mutation to table the user doesn't have read access to. Got status: " + status);
-    } catch (AccumuloSecurityException ase) {
-
-    }
-  }
-
-  @Test
-  public void testTimeout() throws Exception {
-    Connector conn = getConnector();
-
-    String table = getUniqueNames(1)[0];
-
-    conn.tableOperations().create(table);
-
-    ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig().setTimeout(3, TimeUnit.SECONDS));
-
-    ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
-    cm1.put("tx", "seq", "1");
-    cm1.put("data", "x", "a");
-
-    Assert.assertEquals(cw.write(cm1).getStatus(), Status.ACCEPTED);
-
-    IteratorSetting is = new IteratorSetting(5, SlowIterator.class);
-    SlowIterator.setSeekSleepTime(is, 5000);
-
-    ConditionalMutation cm2 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1").setIterators(is));
-    cm2.put("tx", "seq", "2");
-    cm2.put("data", "x", "b");
-
-    Assert.assertEquals(cw.write(cm2).getStatus(), Status.UNKNOWN);
-
-    Scanner scanner = conn.createScanner(table, Authorizations.EMPTY);
-
-    for (Entry<Key,Value> entry : scanner) {
-      String cf = entry.getKey().getColumnFamilyData().toString();
-      String cq = entry.getKey().getColumnQualifierData().toString();
-      String val = entry.getValue().toString();
-
-      if (cf.equals("tx") && cq.equals("seq"))
-        Assert.assertEquals("Unexpected value in tx:seq", "1", val);
-      else if (cf.equals("data") && cq.equals("x"))
-        Assert.assertEquals("Unexpected value in data:x", "a", val);
-      else
-        Assert.fail("Saw unexpected column family and qualifier: " + entry);
-    }
-
-    ConditionalMutation cm3 = new ConditionalMutation("r1", new Condition("tx", "seq").setValue("1"));
-    cm3.put("tx", "seq", "2");
-    cm3.put("data", "x", "b");
-
-    Assert.assertEquals(cw.write(cm3).getStatus(), Status.ACCEPTED);
-
-    cw.close();
-  }
-
-  @Test
-  public void testDeleteTable() throws Exception {
-    String table = getUniqueNames(1)[0];
-    Connector conn = getConnector();
-
-    try {
-      conn.createConditionalWriter(table, new ConditionalWriterConfig());
-      Assert.fail("Creating conditional writer for table that doesn't exist should fail");
-    } catch (TableNotFoundException e) {}
-
-    conn.tableOperations().create(table);
-
-    ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig());
-
-    conn.tableOperations().delete(table);
-
-    ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
-    cm1.put("tx", "seq", "1");
-    cm1.put("data", "x", "a");
-
-    Result result = cw.write(cm1);
-
-    try {
-      Status status = result.getStatus();
-      Assert.fail("Expected exception writing conditional mutation to deleted table. Got status: " + status);
-    } catch (AccumuloException ae) {
-      Assert.assertEquals(TableDeletedException.class, ae.getCause().getClass());
-    }
-  }
-
-  @Test
-  public void testOffline() throws Exception {
-    String table = getUniqueNames(1)[0];
-    Connector conn = getConnector();
-
-    conn.tableOperations().create(table);
-
-    ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig());
-
-    conn.tableOperations().offline(table, true);
-
-    ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq"));
-    cm1.put("tx", "seq", "1");
-    cm1.put("data", "x", "a");
-
-    Result result = cw.write(cm1);
-
-    try {
-      Status status = result.getStatus();
-      Assert.fail("Expected exception writing conditional mutation to offline table. Got status: " + status);
-    } catch (AccumuloException ae) {
-      Assert.assertEquals(TableOfflineException.class, ae.getCause().getClass());
-    }
-
-    cw.close();
-
-    try {
-      conn.createConditionalWriter(table, new ConditionalWriterConfig());
-      Assert.fail("Expected exception creating conditional writer to offline table");
-    } catch (TableOfflineException e) {}
-  }
-
-  @Test
-  public void testError() throws Exception {
-    String table = getUniqueNames(1)[0];
-    Connector conn = getConnector();
-
-    conn.tableOperations().create(table);
-
-    ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig());
-
-    IteratorSetting iterSetting = new IteratorSetting(5, BadIterator.class);
-
-    ConditionalMutation cm1 = new ConditionalMutation("r1", new Condition("tx", "seq").setIterators(iterSetting));
-    cm1.put("tx", "seq", "1");
-    cm1.put("data", "x", "a");
-
-    Result result = cw.write(cm1);
-
-    try {
-      Status status = result.getStatus();
-      Assert.fail("Expected exception using iterator which throws an error, Got status: " + status);
-    } catch (AccumuloException ae) {
-
-    }
-
-    cw.close();
-  }
-
-  @Test(expected = IllegalArgumentException.class)
-  public void testNoConditions() throws AccumuloException, AccumuloSecurityException, TableExistsException, TableNotFoundException {
-    String table = getUniqueNames(1)[0];
-    Connector conn = getConnector();
-
-    conn.tableOperations().create(table);
-
-    ConditionalWriter cw = conn.createConditionalWriter(table, new ConditionalWriterConfig());
-
-    ConditionalMutation cm1 = new ConditionalMutation("r1");
-    cm1.put("tx", "seq", "1");
-    cm1.put("data", "x", "a");
-
-    cw.write(cm1);
-  }
-
-  @Test
-  public void testTrace() throws Exception {
-    // Need to add a getClientConfig() to AccumuloCluster
-    Assume.assumeTrue(getClusterType() == ClusterType.MINI);
-    Process tracer = null;
-    Connector conn = getConnector();
-    AccumuloCluster cluster = getCluster();
-    MiniAccumuloClusterImpl mac = (MiniAccumuloClusterImpl) cluster;
-    if (!conn.tableOperations().exists("trace")) {
-      tracer = mac.exec(TraceServer.class);
-      while (!conn.tableOperations().exists("trace")) {
-        UtilWaitThread.sleep(1000);
-      }
-    }
-
-    String tableName = getUniqueNames(1)[0];
-    conn.tableOperations().create(tableName);
-
-    DistributedTrace.enable("localhost", "testTrace", mac.getClientConfig());
-    Span root = Trace.on("traceTest");
-    ConditionalWriter cw = conn.createConditionalWriter(tableName, new ConditionalWriterConfig());
-
-    // mutation conditional on column tx:seq not existing
-    ConditionalMutation cm0 = new ConditionalMutation("99006", new Condition("tx", "seq"));
-    cm0.put("name", "last", "doe");
-    cm0.put("name", "first", "john");
-    cm0.put("tx", "seq", "1");
-    Assert.assertEquals(Status.ACCEPTED, cw.write(cm0).getStatus());
-    root.stop();
-
-    final Scanner scanner = conn.createScanner("trace", Authorizations.EMPTY);
-    scanner.setRange(new Range(new Text(Long.toHexString(root.traceId()))));
-    loop: while (true) {
-      final StringBuilder finalBuffer = new StringBuilder();
-      int traceCount = TraceDump.printTrace(scanner, new Printer() {
-        @Override
-        public void print(final String line) {
-          try {
-            finalBuffer.append(line).append("\n");
-          } catch (Exception ex) {
-            throw new RuntimeException(ex);
-          }
-        }
-      });
-      String traceOutput = finalBuffer.toString();
-      log.info("Trace output:" + traceOutput);
-      if (traceCount > 0) {
-        int lastPos = 0;
-        for (String part : "traceTest, startScan,startConditionalUpdate,conditionalUpdate,Check conditions,apply conditional mutations".split(",")) {
-          log.info("Looking in trace output for '" + part + "'");
-          int pos = traceOutput.indexOf(part);
-          if (-1 == pos) {
-            log.info("Trace output doesn't contain '" + part + "'");
-            Thread.sleep(1000);
-            break loop;
-          }
-          assertTrue("Did not find '" + part + "' in output", pos > 0);
-          assertTrue("'" + part + "' occurred earlier than the previous element unexpectedly", pos > lastPos);
-          lastPos = pos;
-        }
-        break;
-      } else {
-        log.info("Ignoring trace output as traceCount not greater than zero: " + traceCount);
-        Thread.sleep(1000);
-      }
-    }
-    if (tracer != null) {
-      tracer.destroy();
-    }
-  }
-}
diff --git a/test/src/test/java/org/apache/accumulo/test/NoMutationRecoveryIT.java b/test/src/test/java/org/apache/accumulo/test/NoMutationRecoveryIT.java
deleted file mode 100644
index 8b315d6..0000000
--- a/test/src/test/java/org/apache/accumulo/test/NoMutationRecoveryIT.java
+++ /dev/null
@@ -1,179 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-import java.util.Map.Entry;
-
-import org.apache.accumulo.cluster.ClusterControl;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.PartialKey;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.metadata.MetadataTable;
-import org.apache.accumulo.core.metadata.RootTable;
-import org.apache.accumulo.core.metadata.schema.MetadataSchema;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.harness.AccumuloClusterIT;
-import org.apache.accumulo.minicluster.ServerType;
-import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.RawLocalFileSystem;
-import org.apache.hadoop.io.Text;
-import org.junit.After;
-import org.junit.Before;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.collect.Iterables;
-
-// Verify that a recovery of a log without any mutations removes the log reference
-public class NoMutationRecoveryIT extends AccumuloClusterIT {
-  private static final Logger log = LoggerFactory.getLogger(NoMutationRecoveryIT.class);
-
-  @Override
-  public int defaultTimeoutSeconds() {
-    return 10 * 60;
-  }
-
-  @Override
-  public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
-    cfg.setNumTservers(1);
-    hadoopCoreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
-  }
-
-  @Before
-  public void takeTraceTableOffline() throws Exception {
-    Connector conn = getConnector();
-    if (conn.tableOperations().exists("trace")) {
-      conn.tableOperations().offline("trace", true);
-    }
-  }
-
-  @After
-  public void takeTraceTableOnline() throws Exception {
-    Connector conn = getConnector();
-    if (conn.tableOperations().exists("trace")) {
-      conn.tableOperations().online("trace", true);
-    }
-  }
-
-  public boolean equals(Entry<Key,Value> a, Entry<Key,Value> b) {
-    // comparison, without timestamp
-    Key akey = a.getKey();
-    Key bkey = b.getKey();
-    log.info("Comparing {} to {}", akey.toStringNoTruncate(), bkey.toStringNoTruncate());
-    return akey.compareTo(bkey, PartialKey.ROW_COLFAM_COLQUAL_COLVIS) == 0 && a.getValue().equals(b.getValue());
-  }
-
-  @Test
-  public void test() throws Exception {
-    Connector conn = getConnector();
-    final String table = getUniqueNames(1)[0];
-    conn.tableOperations().create(table);
-    String tableId = conn.tableOperations().tableIdMap().get(table);
-
-    log.info("Created {} with id {}", table, tableId);
-
-    // Add a record to the table
-    update(conn, table, new Text("row"), new Text("cf"), new Text("cq"), new Value("value".getBytes()));
-
-    // Get the WAL reference used by the table we just added the update to
-    Entry<Key,Value> logRef = getLogRef(conn, MetadataTable.NAME);
-
-    log.info("Log reference in metadata table {} {}", logRef.getKey().toStringNoTruncate(), logRef.getValue());
-
-    // Flush the record to disk
-    conn.tableOperations().flush(table, null, null, true);
-
-    Range range = Range.prefix(tableId);
-    log.info("Fetching WAL references over " + table);
-    assertEquals("should not have any refs", 0, Iterables.size(getLogRefs(conn, MetadataTable.NAME, range)));
-
-    // Grant permission to the admin user to write to the Metadata table
-    conn.securityOperations().grantTablePermission(conn.whoami(), MetadataTable.NAME, TablePermission.WRITE);
-
-    // Add the wal record back to the metadata table
-    update(conn, MetadataTable.NAME, logRef);
-
-    // Assert that we can get the bogus update back out again
-    assertTrue(equals(logRef, getLogRef(conn, MetadataTable.NAME)));
-
-    conn.tableOperations().flush(MetadataTable.NAME, null, null, true);
-    conn.tableOperations().flush(RootTable.NAME, null, null, true);
-
-    ClusterControl control = cluster.getClusterControl();
-    control.stopAllServers(ServerType.TABLET_SERVER);
-    control.startAllServers(ServerType.TABLET_SERVER);
-
-    // Verify that we can read the original record we wrote
-    Scanner s = conn.createScanner(table, Authorizations.EMPTY);
-    int count = 0;
-    for (Entry<Key,Value> e : s) {
-      assertEquals(e.getKey().getRow().toString(), "row");
-      assertEquals(e.getKey().getColumnFamily().toString(), "cf");
-      assertEquals(e.getKey().getColumnQualifier().toString(), "cq");
-      assertEquals(e.getValue().toString(), "value");
-      count++;
-    }
-    assertEquals(1, count);
-
-    // Verify that the bogus log reference we wrote is gone
-    for (Entry<Key,Value> ref : getLogRefs(conn, MetadataTable.NAME)) {
-      assertFalse("Unexpectedly found a reference to bogus log entry: " + ref.getKey().toStringNoTruncate() + " " + ref.getValue(), equals(ref, logRef));
-    }
-  }
-
-  private void update(Connector conn, String name, Entry<Key,Value> logRef) throws Exception {
-    Key k = logRef.getKey();
-    update(conn, name, k.getRow(), k.getColumnFamily(), k.getColumnQualifier(), logRef.getValue());
-  }
-
-  private Iterable<Entry<Key,Value>> getLogRefs(Connector conn, String table) throws Exception {
-    return getLogRefs(conn, table, new Range());
-  }
-
-  private Iterable<Entry<Key,Value>> getLogRefs(Connector conn, String table, Range r) throws Exception {
-    Scanner s = conn.createScanner(table, Authorizations.EMPTY);
-    s.fetchColumnFamily(MetadataSchema.TabletsSection.LogColumnFamily.NAME);
-    s.setRange(r);
-    return s;
-  }
-
-  private Entry<Key,Value> getLogRef(Connector conn, String table) throws Exception {
-    return getLogRefs(conn, table).iterator().next();
-  }
-
-  private void update(Connector conn, String table, Text row, Text cf, Text cq, Value value) throws Exception {
-    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
-    Mutation m = new Mutation(row);
-    m.put(cf, cq, value);
-    bw.addMutation(m);
-    bw.close();
-  }
-
-}
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/AccumuloInputFormatIT.java b/test/src/test/java/org/apache/accumulo/test/functional/AccumuloInputFormatIT.java
deleted file mode 100644
index 054f9a4..0000000
--- a/test/src/test/java/org/apache/accumulo/test/functional/AccumuloInputFormatIT.java
+++ /dev/null
@@ -1,210 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.functional;
-
-import static java.lang.System.currentTimeMillis;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.fail;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.List;
-import java.util.TreeSet;
-
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.ClientConfiguration;
-import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.TableNotFoundException;
-import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
-import org.apache.accumulo.core.client.mapreduce.impl.BatchInputSplit;
-import org.apache.accumulo.core.conf.AccumuloConfiguration;
-import org.apache.accumulo.core.conf.ConfigurationCopy;
-import org.apache.accumulo.core.conf.DefaultConfiguration;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Range;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.util.UtilWaitThread;
-import org.apache.accumulo.harness.AccumuloClusterIT;
-import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapreduce.InputSplit;
-import org.apache.hadoop.mapreduce.Job;
-import org.junit.Before;
-import org.junit.Test;
-
-public class AccumuloInputFormatIT extends AccumuloClusterIT {
-
-  AccumuloInputFormat inputFormat;
-
-  @Override
-  protected int defaultTimeoutSeconds() {
-    return 4 * 60;
-  }
-
-  @Override
-  public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {
-    cfg.setNumTservers(1);
-  }
-
-  @Before
-  public void before() {
-    inputFormat = new AccumuloInputFormat();
-  }
-
-  /**
-   * Tests several different paths through the getSplits() method by setting different properties and verifying the results.
-   */
-  @Test
-  public void testGetSplits() throws Exception {
-    Connector conn = getConnector();
-    String table = getUniqueNames(1)[0];
-    conn.tableOperations().create(table);
-    insertData(table, currentTimeMillis());
-
-    ClientConfiguration clientConf = cluster.getClientConfig();
-    AccumuloConfiguration clusterClientConf = new ConfigurationCopy(new DefaultConfiguration());
-
-    // Pass SSL and CredentialProvider options into the ClientConfiguration given to AccumuloInputFormat
-    boolean sslEnabled = Boolean.valueOf(clusterClientConf.get(Property.INSTANCE_RPC_SSL_ENABLED));
-    if (sslEnabled) {
-      ClientProperty[] sslProperties = new ClientProperty[] {ClientProperty.INSTANCE_RPC_SSL_ENABLED, ClientProperty.INSTANCE_RPC_SSL_CLIENT_AUTH,
-          ClientProperty.RPC_SSL_KEYSTORE_PATH, ClientProperty.RPC_SSL_KEYSTORE_TYPE, ClientProperty.RPC_SSL_KEYSTORE_PASSWORD,
-          ClientProperty.RPC_SSL_TRUSTSTORE_PATH, ClientProperty.RPC_SSL_TRUSTSTORE_TYPE, ClientProperty.RPC_SSL_TRUSTSTORE_PASSWORD,
-          ClientProperty.RPC_USE_JSSE, ClientProperty.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS};
-
-      for (ClientProperty prop : sslProperties) {
-        // The default property is returned if it's not in the ClientConfiguration, so we don't have to check whether the value is actually defined
-        clientConf.setProperty(prop, clusterClientConf.get(prop.getKey()));
-      }
-    }
-
-    Job job = Job.getInstance();
-    AccumuloInputFormat.setInputTableName(job, table);
-    AccumuloInputFormat.setZooKeeperInstance(job, clientConf);
-    AccumuloInputFormat.setConnectorInfo(job, getAdminPrincipal(), getAdminToken());
-
-    // split table
-    TreeSet<Text> splitsToAdd = new TreeSet<Text>();
-    for (int i = 0; i < 10000; i += 1000)
-      splitsToAdd.add(new Text(String.format("%09d", i)));
-    conn.tableOperations().addSplits(table, splitsToAdd);
-    UtilWaitThread.sleep(500); // wait for splits to be propagated
-
-    // get splits without setting any range
-    Collection<Text> actualSplits = conn.tableOperations().listSplits(table);
-    List<InputSplit> splits = inputFormat.getSplits(job);
-    assertEquals(actualSplits.size() + 1, splits.size()); // No ranges set on the job so it'll start with -inf
-
-    // set ranges and get splits
-    List<Range> ranges = new ArrayList<Range>();
-    for (Text text : actualSplits)
-      ranges.add(new Range(text));
-    AccumuloInputFormat.setRanges(job, ranges);
-    splits = inputFormat.getSplits(job);
-    assertEquals(actualSplits.size(), splits.size());
-
-    // offline mode
-    AccumuloInputFormat.setOfflineTableScan(job, true);
-    try {
-      inputFormat.getSplits(job);
-      fail("An exception should have been thrown");
-    } catch (IOException e) {}
-
-    conn.tableOperations().offline(table, true);
-    splits = inputFormat.getSplits(job);
-    assertEquals(actualSplits.size(), splits.size());
-
-    // auto adjust ranges
-    ranges = new ArrayList<Range>();
-    for (int i = 0; i < 5; i++)
-      // overlapping ranges
-      ranges.add(new Range(String.format("%09d", i), String.format("%09d", i + 2)));
-    AccumuloInputFormat.setRanges(job, ranges);
-    splits = inputFormat.getSplits(job);
-    assertEquals(2, splits.size());
-
-    AccumuloInputFormat.setAutoAdjustRanges(job, false);
-    splits = inputFormat.getSplits(job);
-    assertEquals(ranges.size(), splits.size());
-
-    // BatchScan not available for offline scans
-    AccumuloInputFormat.setBatchScan(job, true);
-    // Reset auto-adjust ranges too
-    AccumuloInputFormat.setAutoAdjustRanges(job, true);
-
-    AccumuloInputFormat.setOfflineTableScan(job, true);
-    try {
-      inputFormat.getSplits(job);
-      fail("An exception should have been thrown");
-    } catch (IllegalArgumentException e) {}
-
-    conn.tableOperations().online(table, true);
-    AccumuloInputFormat.setOfflineTableScan(job, false);
-
-    // test for resumption of success
-    splits = inputFormat.getSplits(job);
-    assertEquals(2, splits.size());
-
-    // BatchScan not available with isolated iterators
-    AccumuloInputFormat.setScanIsolation(job, true);
-    try {
-      inputFormat.getSplits(job);
-      fail("An exception should have been thrown");
-    } catch (IllegalArgumentException e) {}
-    AccumuloInputFormat.setScanIsolation(job, false);
-
-    // test for resumption of success
-    splits = inputFormat.getSplits(job);
-    assertEquals(2, splits.size());
-
-    // BatchScan not available with local iterators
-    AccumuloInputFormat.setLocalIterators(job, true);
-    try {
-      inputFormat.getSplits(job);
-      fail("An exception should have been thrown");
-    } catch (IllegalArgumentException e) {}
-    AccumuloInputFormat.setLocalIterators(job, false);
-
-    // Check that we are getting back the correct type of split
-    conn.tableOperations().online(table);
-    splits = inputFormat.getSplits(job);
-    for (InputSplit split : splits)
-      assert (split instanceof BatchInputSplit);
-
-    // We should divide along tablet lines, similar to using `setAutoAdjustRanges(job, true)`
-    assertEquals(2, splits.size());
-  }
-
-  private void insertData(String tableName, long ts) throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
-    BatchWriter bw = getConnector().createBatchWriter(tableName, null);
-
-    for (int i = 0; i < 10000; i++) {
-      String row = String.format("%09d", i);
-
-      Mutation m = new Mutation(new Text(row));
-      m.put(new Text("cf1"), new Text("cq1"), ts, new Value(("" + i).getBytes()));
-      bw.addMutation(m);
-    }
-    bw.close();
-  }
-}
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/SimpleMacIT.java b/test/src/test/java/org/apache/accumulo/test/functional/SimpleMacIT.java
deleted file mode 100644
index 1e80c8d..0000000
--- a/test/src/test/java/org/apache/accumulo/test/functional/SimpleMacIT.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.functional;
-
-import org.apache.accumulo.harness.SharedMiniClusterIT;
-import org.junit.AfterClass;
-import org.junit.BeforeClass;
-
-/**
- * @deprecated since 1.6.2; use {@link SharedMiniClusterIT} instead
- */
-@Deprecated
-public class SimpleMacIT extends SharedMiniClusterIT {
-
-  @BeforeClass
-  public static void setup() throws Exception {
-    SharedMiniClusterIT.startMiniCluster();
-  }
-
-  @AfterClass
-  public static void teardown() throws Exception {
-    SharedMiniClusterIT.stopMiniCluster();
-  }
-}
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ValueReversingIterator.java b/test/src/test/java/org/apache/accumulo/test/functional/ValueReversingIterator.java
new file mode 100644
index 0000000..e606f5a
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/functional/ValueReversingIterator.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.functional;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.commons.lang3.ArrayUtils;
+
+/**
+ * Iterator used in ScannerContextIT that reverses the bytes of the value.
+ *
+ */
+public class ValueReversingIterator implements SortedKeyValueIterator<Key,Value> {
+
+  protected SortedKeyValueIterator<Key,Value> source;
+
+  public ValueReversingIterator deepCopy(IteratorEnvironment env) {
+    throw new UnsupportedOperationException();
+  }
+
+  public Key getTopKey() {
+    return source.getTopKey();
+  }
+
+  public Value getTopValue() {
+    byte[] buf = source.getTopValue().get();
+    ArrayUtils.reverse(buf);
+    return new Value(buf);
+  }
+
+  public boolean hasTop() {
+    return source.hasTop();
+  }
+
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+    this.source = source;
+  }
+
+  public void next() throws IOException {
+    source.next();
+  }
+
+  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
+    source.seek(range, columnFamilies, inclusive);
+  }
+}
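Outside the patch, the reversal done in `getTopValue()` can be sketched as a plain in-place byte swap; this standalone class (`ReverseSketch` is a hypothetical name, not part of Accumulo) shows what `ArrayUtils.reverse` does to the value bytes:

```java
// Minimal sketch of the byte reversal performed by ValueReversingIterator.getTopValue().
public class ReverseSketch {
    // Reverse the array in place, as commons-lang3 ArrayUtils.reverse does.
    public static byte[] reverse(byte[] buf) {
        for (int i = 0, j = buf.length - 1; i < j; i++, j--) {
            byte tmp = buf[i];
            buf[i] = buf[j];
            buf[j] = tmp;
        }
        return buf;
    }

    public static void main(String[] args) {
        byte[] v = "value".getBytes();
        System.out.println(new String(reverse(v))); // prints "eulav"
    }
}
```

Because the reversal is in place, an iterator using this pattern mutates whatever array it is handed; the iterator above wraps the result in a new `Value` before returning it.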
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/AgeOffFilterTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/AgeOffFilterTest.java
new file mode 100644
index 0000000..e78d8a9
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/AgeOffFilterTest.java
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.iterator;
+
+import static org.junit.Assert.assertNotNull;
+
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.user.AgeOffFilter;
+import org.apache.accumulo.iteratortest.IteratorTestCaseFinder;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.junit4.BaseJUnit4IteratorTest;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.base.Predicate;
+import com.google.common.collect.Iterables;
+
+/**
+ * Iterator test harness tests for AgeOffFilter
+ */
+public class AgeOffFilterTest extends BaseJUnit4IteratorTest {
+  public static long NOW;
+  public static long TTL;
+
+  @Parameters
+  public static Object[][] parameters() {
+    // Test ageoff after 30 seconds.
+    NOW = System.currentTimeMillis();
+    TTL = 30 * 1000;
+
+    IteratorTestInput input = getIteratorInput();
+    IteratorTestOutput output = getIteratorOutput();
+    List<IteratorTestCase> tests = IteratorTestCaseFinder.findAllTestCases();
+    return BaseJUnit4IteratorTest.createParameters(input, output, tests);
+  }
+
+  private static final TreeMap<Key,Value> INPUT_DATA = createInputData();
+  private static final TreeMap<Key,Value> OUTPUT_DATA = createOutputData();
+
+  private static TreeMap<Key,Value> createInputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+    final Value value = new Value(new byte[] {'a'});
+
+    data.put(new Key("1", "a", "a", nowDelta(25)), value);
+    data.put(new Key("2", "a", "a", nowDelta(35)), value);
+    data.put(new Key("3", "a", "a", nowDelta(55)), value);
+    data.put(new Key("4", "a", "a", nowDelta(0)), value);
+    data.put(new Key("5", "a", "a", nowDelta(-29)), value);
+    data.put(new Key("6", "a", "a", nowDelta(-28)), value);
+    // Dropped
+    data.put(new Key("7", "a", "a", nowDelta(-40)), value);
+    // Dropped (comparison is not inclusive)
+    data.put(new Key("8", "a", "a", nowDelta(-30)), value);
+    // Dropped
+    data.put(new Key("9", "a", "a", nowDelta(-31)), value);
+
+    // Dropped
+    data.put(new Key("a", "", "", nowDelta(-50)), value);
+    data.put(new Key("a", "a", "", nowDelta(-20)), value);
+    data.put(new Key("a", "a", "a", nowDelta(50)), value);
+    data.put(new Key("a", "a", "b", nowDelta(-15)), value);
+    // Dropped
+    data.put(new Key("a", "a", "c", nowDelta(-32)), value);
+    // Dropped
+    data.put(new Key("a", "a", "d", nowDelta(-32)), value);
+
+    return data;
+  }
+
+  /**
+   * Compute a timestamp (milliseconds) based on {@link #NOW} plus the <code>seconds</code> argument.
+   *
+   * @param seconds
+   *          The number of seconds to add to <code>NOW</code>.
+   * @return A Key timestamp that is the provided number of seconds after <code>NOW</code>.
+   */
+  private static long nowDelta(long seconds) {
+    return NOW + (seconds * 1000);
+  }
+
+  private static TreeMap<Key,Value> createOutputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+
+    Iterable<Entry<Key,Value>> filtered = Iterables.filter(INPUT_DATA.entrySet(), new Predicate<Entry<Key,Value>>() {
+
+      @Override
+      public boolean apply(Entry<Key,Value> input) {
+        assertNotNull(input);
+        // Keep entries whose age is strictly less than the TTL; older entries are dropped.
+        return NOW - input.getKey().getTimestamp() < TTL;
+      }
+
+    });
+
+    for (Entry<Key,Value> entry : filtered) {
+      data.put(entry.getKey(), entry.getValue());
+    }
+
+    return data;
+  }
+
+  private static IteratorTestInput getIteratorInput() {
+    IteratorSetting setting = new IteratorSetting(50, AgeOffFilter.class);
+    AgeOffFilter.setCurrentTime(setting, NOW);
+    AgeOffFilter.setTTL(setting, TTL);
+    return new IteratorTestInput(AgeOffFilter.class, setting.getOptions(), new Range(), INPUT_DATA);
+  }
+
+  private static IteratorTestOutput getIteratorOutput() {
+    return new IteratorTestOutput(OUTPUT_DATA);
+  }
+
+  public AgeOffFilterTest(IteratorTestInput input, IteratorTestOutput expectedOutput, IteratorTestCase testCase) {
+    super(input, expectedOutput, testCase);
+  }
+
+}
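The keep/drop rule the test data above encodes can be stated in one line: a key survives while its age is strictly less than the TTL. This sketch (hypothetical class and method names, assuming the strict comparison implied by the "comparison is not inclusive" comments) makes the boundary behavior explicit:

```java
// Sketch of the age-off predicate implied by AgeOffFilterTest's input comments.
public class AgeOffSketch {
    // Keep a key while its age is strictly less than the TTL;
    // an age exactly equal to the TTL is dropped.
    public static boolean keep(long now, long timestamp, long ttl) {
        return now - timestamp < ttl;
    }

    public static void main(String[] args) {
        long now = 1_000_000L, ttl = 30_000L;
        System.out.println(keep(now, now - 29_000L, ttl)); // true: 29s old, under TTL
        System.out.println(keep(now, now - 30_000L, ttl)); // false: exactly TTL
        System.out.println(keep(now, now - 31_000L, ttl)); // false: over TTL
    }
}
```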
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/CfCqSliceFilterTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/CfCqSliceFilterTest.java
new file mode 100644
index 0000000..fc0f672
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/CfCqSliceFilterTest.java
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.iterator;
+
+import static org.junit.Assert.assertNotNull;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.user.CfCqSliceFilter;
+import org.apache.accumulo.core.iterators.user.CfCqSliceOpts;
+import org.apache.accumulo.iteratortest.IteratorTestCaseFinder;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.junit4.BaseJUnit4IteratorTest;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.base.Predicate;
+import com.google.common.collect.Iterables;
+
+/**
+ * Iterator test harness tests for CfCqSliceFilter
+ */
+public class CfCqSliceFilterTest extends BaseJUnit4IteratorTest {
+
+  // Default is inclusive on min and max
+  public static final String MIN_CF = "f";
+  public static final String MAX_CF = "m";
+  public static final String MIN_CQ = "q";
+  public static final String MAX_CQ = "y";
+
+  @Parameters
+  public static Object[][] parameters() {
+    IteratorTestInput input = getIteratorInput();
+    IteratorTestOutput output = getIteratorOutput();
+    List<IteratorTestCase> tests = IteratorTestCaseFinder.findAllTestCases();
+    return BaseJUnit4IteratorTest.createParameters(input, output, tests);
+  }
+
+  private static final TreeMap<Key,Value> INPUT_DATA = createInputData();
+  private static final TreeMap<Key,Value> OUTPUT_DATA = createOutputData();
+
+  private static TreeMap<Key,Value> createInputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+    Value value = new Value(new byte[] {'a'});
+
+    // Dropped
+    data.put(new Key("1", "a", "g"), value);
+    data.put(new Key("1", "f", "q"), value);
+    data.put(new Key("1", "f", "t"), value);
+    data.put(new Key("1", "g", "q"), value);
+    data.put(new Key("1", "g", "y"), value);
+    // Dropped
+    data.put(new Key("1", "g", "z"), value);
+
+    // Dropped
+    data.put(new Key("2", "m", "a"), value);
+
+    data.put(new Key("3", "j", "u"), value);
+
+    data.put(new Key("4", "h", "w"), value);
+    data.put(new Key("4", "h", "x"), value);
+    data.put(new Key("4", "h", "y"), value);
+    data.put(new Key("4", "l", "r"), value);
+    // Dropped
+    data.put(new Key("4", "l", "z"), value);
+    data.put(new Key("4", "m", "y"), value);
+
+    return data;
+  }
+
+  private static TreeMap<Key,Value> createOutputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+
+    Iterable<Entry<Key,Value>> filtered = Iterables.filter(INPUT_DATA.entrySet(), new Predicate<Entry<Key,Value>>() {
+
+      @Override
+      public boolean apply(Entry<Key,Value> entry) {
+        assertNotNull(entry);
+        String cf = entry.getKey().getColumnFamily().toString();
+        String cq = entry.getKey().getColumnQualifier().toString();
+        return MIN_CF.compareTo(cf) <= 0 && MAX_CF.compareTo(cf) >= 0 && MIN_CQ.compareTo(cq) <= 0 && MAX_CQ.compareTo(cq) >= 0;
+      }
+
+    });
+
+    for (Entry<Key,Value> entry : filtered) {
+      data.put(entry.getKey(), entry.getValue());
+    }
+
+    return data;
+  }
+
+  private static IteratorTestInput getIteratorInput() {
+    HashMap<String,String> options = new HashMap<>();
+    options.put(CfCqSliceOpts.OPT_MIN_CF, MIN_CF);
+    options.put(CfCqSliceOpts.OPT_MAX_CF, MAX_CF);
+    options.put(CfCqSliceOpts.OPT_MIN_CQ, MIN_CQ);
+    options.put(CfCqSliceOpts.OPT_MAX_CQ, MAX_CQ);
+
+    return new IteratorTestInput(CfCqSliceFilter.class, options, new Range(), INPUT_DATA);
+  }
+
+  private static IteratorTestOutput getIteratorOutput() {
+    return new IteratorTestOutput(OUTPUT_DATA);
+  }
+
+  public CfCqSliceFilterTest(IteratorTestInput input, IteratorTestOutput expectedOutput, IteratorTestCase testCase) {
+    super(input, expectedOutput, testCase);
+  }
+
+}
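The slice test above keys off inclusive lexicographic bounds on both the column family and qualifier. As a standalone sketch (hypothetical names, mirroring the predicate in `createOutputData()` with the test's bounds hard-coded):

```java
// Sketch of the inclusive [MIN_CF, MAX_CF] x [MIN_CQ, MAX_CQ] slice check.
public class SliceSketch {
    // Bounds from CfCqSliceFilterTest: cf in ["f", "m"], cq in ["q", "y"], both inclusive.
    public static boolean inSlice(String cf, String cq) {
        return "f".compareTo(cf) <= 0 && "m".compareTo(cf) >= 0
            && "q".compareTo(cq) <= 0 && "y".compareTo(cq) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(inSlice("g", "y")); // true: both within bounds
        System.out.println(inSlice("g", "z")); // false: cq above max
        System.out.println(inSlice("a", "q")); // false: cf below min
    }
}
```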
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java
index 82c89be..4b2de22 100644
--- a/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/RegExTest.java
@@ -17,40 +17,31 @@
 package org.apache.accumulo.test.iterator;
 
 import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
 
-import org.apache.accumulo.core.client.BatchScanner;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.ScannerBase;
-import org.apache.accumulo.core.client.mock.MockInstance;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
+import org.apache.accumulo.core.data.ByteSequence;
 import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.iterators.SortedMapIterator;
 import org.apache.accumulo.core.iterators.user.RegExFilter;
-import org.apache.accumulo.core.security.Authorizations;
 import org.apache.hadoop.io.Text;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
+import com.google.common.collect.ImmutableSet;
+
 public class RegExTest {
 
-  Instance inst = new MockInstance();
-  Connector conn;
+  private static TreeMap<Key,Value> data = new TreeMap<>();
 
-  @Test
-  public void runTest() throws Exception {
-    conn = inst.getConnector("user", new PasswordToken("pass"));
-    conn.tableOperations().create("ret");
-    BatchWriter bw = conn.createBatchWriter("ret", new BatchWriterConfig());
+  @BeforeClass
+  public static void setupTests() throws Exception {
 
-    ArrayList<Character> chars = new ArrayList<Character>();
+    ArrayList<Character> chars = new ArrayList<>();
     for (char c = 'a'; c <= 'z'; c++)
       chars.add(c);
 
@@ -59,24 +50,14 @@
 
     // insert some data into accumulo
     for (Character rc : chars) {
-      Mutation m = new Mutation(new Text("r" + rc));
+      String row = "r" + rc;
       for (Character cfc : chars) {
         for (Character cqc : chars) {
           Value v = new Value(("v" + rc + cfc + cqc).getBytes());
-          m.put(new Text("cf" + cfc), new Text("cq" + cqc), v);
+          data.put(new Key(row, "cf" + cfc, "cq" + cqc, "", 9), v);
         }
       }
-
-      bw.addMutation(m);
     }
-
-    bw.close();
-
-    runTest1();
-    runTest2();
-    runTest3();
-    runTest4();
-    runTest5();
   }
 
   private void check(String regex, String val) throws Exception {
@@ -92,31 +73,36 @@
     check(regex, val.toString());
   }
 
-  private void runTest1() throws Exception {
+  @Test
+  public void runTest1() throws Exception {
     // try setting all regex
     Range range = new Range(new Text("rf"), true, new Text("rl"), true);
     runTest(range, "r[g-k]", "cf[1-5]", "cq[x-z]", "v[g-k][1-5][t-y]", 5 * 5 * (3 - 1));
   }
 
-  private void runTest2() throws Exception {
+  @Test
+  public void runTest2() throws Exception {
     // try setting only a row regex
     Range range = new Range(new Text("rf"), true, new Text("rl"), true);
     runTest(range, "r[g-k]", null, null, null, 5 * 36 * 36);
   }
 
-  private void runTest3() throws Exception {
+  @Test
+  public void runTest3() throws Exception {
     // try setting only a col fam regex
     Range range = new Range((Key) null, (Key) null);
     runTest(range, null, "cf[a-f]", null, null, 36 * 6 * 36);
   }
 
-  private void runTest4() throws Exception {
+  @Test
+  public void runTest4() throws Exception {
     // try setting only a col qual regex
     Range range = new Range((Key) null, (Key) null);
     runTest(range, null, null, "cq[1-7]", null, 36 * 36 * 7);
   }
 
-  private void runTest5() throws Exception {
+  @Test
+  public void runTest5() throws Exception {
     // try setting only a value regex
     Range range = new Range((Key) null, (Key) null);
     runTest(range, null, null, null, "v[a-c][d-f][g-i]", 3 * 3 * 3);
@@ -124,42 +110,29 @@
 
   private void runTest(Range range, String rowRegEx, String cfRegEx, String cqRegEx, String valRegEx, int expected) throws Exception {
 
-    Scanner s = conn.createScanner("ret", Authorizations.EMPTY);
-    s.setRange(range);
-    setRegexs(s, rowRegEx, cfRegEx, cqRegEx, valRegEx);
-    runTest(s, rowRegEx, cfRegEx, cqRegEx, valRegEx, expected);
-
-    BatchScanner bs = conn.createBatchScanner("ret", Authorizations.EMPTY, 1);
-    bs.setRanges(Collections.singletonList(range));
-    setRegexs(bs, rowRegEx, cfRegEx, cqRegEx, valRegEx);
-    runTest(bs, rowRegEx, cfRegEx, cqRegEx, valRegEx, expected);
-    bs.close();
+    SortedKeyValueIterator<Key,Value> source = new SortedMapIterator(data);
+    Set<ByteSequence> es = ImmutableSet.of();
+    IteratorSetting is = new IteratorSetting(50, "regex", RegExFilter.class);
+    RegExFilter.setRegexs(is, rowRegEx, cfRegEx, cqRegEx, valRegEx, false);
+    RegExFilter iter = new RegExFilter();
+    iter.init(source, is.getOptions(), null);
+    iter.seek(range, es, false);
+    runTest(iter, rowRegEx, cfRegEx, cqRegEx, valRegEx, expected);
   }
 
-  private void setRegexs(ScannerBase scanner, String rowRegEx, String cfRegEx, String cqRegEx, String valRegEx) {
-    IteratorSetting regex = new IteratorSetting(50, "regex", RegExFilter.class);
-    if (rowRegEx != null)
-      regex.addOption(RegExFilter.ROW_REGEX, rowRegEx);
-    if (cfRegEx != null)
-      regex.addOption(RegExFilter.COLF_REGEX, cfRegEx);
-    if (cqRegEx != null)
-      regex.addOption(RegExFilter.COLQ_REGEX, cqRegEx);
-    if (valRegEx != null)
-      regex.addOption(RegExFilter.VALUE_REGEX, valRegEx);
-    scanner.addScanIterator(regex);
-  }
-
-  private void runTest(Iterable<Entry<Key,Value>> scanner, String rowRegEx, String cfRegEx, String cqRegEx, String valRegEx, int expected) throws Exception {
+  private void runTest(RegExFilter scanner, String rowRegEx, String cfRegEx, String cqRegEx, String valRegEx, int expected) throws Exception {
 
     int counter = 0;
 
-    for (Entry<Key,Value> entry : scanner) {
-      Key k = entry.getKey();
+    while (scanner.hasTop()) {
+      Key k = scanner.getTopKey();
 
       check(rowRegEx, k.getRow());
       check(cfRegEx, k.getColumnFamily());
       check(cqRegEx, k.getColumnQualifier());
-      check(valRegEx, entry.getValue());
+      check(valRegEx, scanner.getTopValue());
+
+      scanner.next();
 
       counter++;
     }
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/SummingCombinerTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/SummingCombinerTest.java
new file mode 100644
index 0000000..cac2334
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/SummingCombinerTest.java
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.iterator;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static java.util.Objects.requireNonNull;
+
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.client.IteratorSetting;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.PartialKey;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.Combiner;
+import org.apache.accumulo.core.iterators.LongCombiner;
+import org.apache.accumulo.core.iterators.user.SummingCombiner;
+import org.apache.accumulo.iteratortest.IteratorTestCaseFinder;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.junit4.BaseJUnit4IteratorTest;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.junit.runners.Parameterized.Parameters;
+
+/**
+ * Iterator test harness tests for SummingCombiner
+ */
+public class SummingCombinerTest extends BaseJUnit4IteratorTest {
+
+  @Parameters
+  public static Object[][] parameters() {
+    IteratorTestInput input = getIteratorInput();
+    IteratorTestOutput output = getIteratorOutput();
+    List<IteratorTestCase> tests = IteratorTestCaseFinder.findAllTestCases();
+    return BaseJUnit4IteratorTest.createParameters(input, output, tests);
+  }
+
+  private static final TreeMap<Key,Value> INPUT_DATA = createInputData();
+  private static final TreeMap<Key,Value> OUTPUT_DATA = createOutputData();
+
+  private static TreeMap<Key,Value> createInputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+
+    // 3
+    data.put(new Key("1", "a", "a", 1), new Value(bytes("1")));
+    data.put(new Key("1", "a", "a", 5), new Value(bytes("1")));
+    data.put(new Key("1", "a", "a", 10), new Value(bytes("1")));
+    // 7
+    data.put(new Key("1", "a", "b", 1), new Value(bytes("5")));
+    data.put(new Key("1", "a", "b", 5), new Value(bytes("2")));
+    // 0
+    data.put(new Key("1", "a", "f", 1), new Value(bytes("0")));
+    // -10
+    data.put(new Key("1", "a", "g", 5), new Value(bytes("1")));
+    data.put(new Key("1", "a", "g", 10), new Value(bytes("-11")));
+    // -5
+    data.put(new Key("1", "b", "d", 10), new Value(bytes("-5")));
+    // MAX_VALUE
+    data.put(new Key("1", "b", "e", 10), new Value(bytes(Long.toString(Long.MAX_VALUE))));
+    // MIN_VALUE
+    data.put(new Key("1", "d", "d", 10), new Value(bytes(Long.toString(Long.MIN_VALUE))));
+    // 30
+    data.put(new Key("2", "a", "a", 1), new Value(bytes("5")));
+    data.put(new Key("2", "a", "a", 5), new Value(bytes("10")));
+    data.put(new Key("2", "a", "a", 10), new Value(bytes("15")));
+
+    return data;
+  }
+
+  private static byte[] bytes(String value) {
+    return requireNonNull(value).getBytes(UTF_8);
+  }
+
+  private static TreeMap<Key,Value> createOutputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+
+    Key lastKey = null;
+    long sum = 0;
+    for (Entry<Key,Value> entry : INPUT_DATA.entrySet()) {
+      if (null == lastKey) {
+        lastKey = entry.getKey();
+        sum += Long.parseLong(entry.getValue().toString());
+      } else {
+        if (0 != lastKey.compareTo(entry.getKey(), PartialKey.ROW_COLFAM_COLQUAL_COLVIS)) {
+          // Different key, store the running sum.
+          data.put(lastKey, new Value(Long.toString(sum).getBytes(UTF_8)));
+          // Reset lastKey and the sum
+          lastKey = entry.getKey();
+          sum = 0;
+        }
+
+        sum += Long.parseLong(entry.getValue().toString());
+      }
+    }
+
+    data.put(lastKey, new Value(Long.toString(sum).getBytes(UTF_8)));
+
+    return data;
+  }
+
+  private static IteratorTestInput getIteratorInput() {
+    IteratorSetting setting = new IteratorSetting(50, SummingCombiner.class);
+    LongCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
+    Combiner.setCombineAllColumns(setting, true);
+    return new IteratorTestInput(SummingCombiner.class, setting.getOptions(), new Range(), INPUT_DATA);
+  }
+
+  private static IteratorTestOutput getIteratorOutput() {
+    return new IteratorTestOutput(OUTPUT_DATA);
+  }
+
+  public SummingCombinerTest(IteratorTestInput input, IteratorTestOutput expectedOutput, IteratorTestCase testCase) {
+    super(input, expectedOutput, testCase);
+  }
+
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/iterator/WholeRowIteratorTest.java b/test/src/test/java/org/apache/accumulo/test/iterator/WholeRowIteratorTest.java
new file mode 100644
index 0000000..5e26106
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/iterator/WholeRowIteratorTest.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.iterator;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map.Entry;
+import java.util.TreeMap;
+
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.user.WholeRowIterator;
+import org.apache.accumulo.iteratortest.IteratorTestCaseFinder;
+import org.apache.accumulo.iteratortest.IteratorTestInput;
+import org.apache.accumulo.iteratortest.IteratorTestOutput;
+import org.apache.accumulo.iteratortest.junit4.BaseJUnit4IteratorTest;
+import org.apache.accumulo.iteratortest.testcases.IteratorTestCase;
+import org.apache.hadoop.io.Text;
+import org.junit.runners.Parameterized.Parameters;
+
+/**
+ * Framework tests for {@link WholeRowIterator}.
+ */
+public class WholeRowIteratorTest extends BaseJUnit4IteratorTest {
+
+  @Parameters
+  public static Object[][] parameters() {
+    IteratorTestInput input = getIteratorInput();
+    IteratorTestOutput output = getIteratorOutput();
+    List<IteratorTestCase> tests = IteratorTestCaseFinder.findAllTestCases();
+    return BaseJUnit4IteratorTest.createParameters(input, output, tests);
+  }
+
+  private static final TreeMap<Key,Value> INPUT_DATA = createInputData();
+  private static final TreeMap<Key,Value> OUTPUT_DATA = createOutputData();
+
+  private static TreeMap<Key,Value> createInputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+
+    data.put(new Key("1", "", "a"), new Value("1a".getBytes()));
+    data.put(new Key("1", "", "b"), new Value("1b".getBytes()));
+    data.put(new Key("1", "a", "a"), new Value("1aa".getBytes()));
+    data.put(new Key("1", "a", "b"), new Value("1ab".getBytes()));
+    data.put(new Key("1", "b", "a"), new Value("1ba".getBytes()));
+
+    data.put(new Key("2", "a", "a"), new Value("2aa".getBytes()));
+    data.put(new Key("2", "a", "b"), new Value("2ab".getBytes()));
+    data.put(new Key("2", "a", "c"), new Value("2ac".getBytes()));
+    data.put(new Key("2", "c", "c"), new Value("2cc".getBytes()));
+
+    data.put(new Key("3", "a", ""), new Value("3a".getBytes()));
+
+    data.put(new Key("4", "a", "b"), new Value("4ab".getBytes()));
+
+    data.put(new Key("5", "a", "a"), new Value("5aa".getBytes()));
+    data.put(new Key("5", "a", "b"), new Value("5ab".getBytes()));
+    data.put(new Key("5", "a", "c"), new Value("5ac".getBytes()));
+    data.put(new Key("5", "a", "d"), new Value("5ad".getBytes()));
+
+    data.put(new Key("6", "", "a"), new Value("6a".getBytes()));
+    data.put(new Key("6", "", "b"), new Value("6b".getBytes()));
+    data.put(new Key("6", "", "c"), new Value("6c".getBytes()));
+    data.put(new Key("6", "", "d"), new Value("6d".getBytes()));
+    data.put(new Key("6", "", "e"), new Value("6e".getBytes()));
+    data.put(new Key("6", "1", "a"), new Value("61a".getBytes()));
+    data.put(new Key("6", "1", "b"), new Value("61b".getBytes()));
+    data.put(new Key("6", "1", "c"), new Value("61c".getBytes()));
+    data.put(new Key("6", "1", "d"), new Value("61d".getBytes()));
+    data.put(new Key("6", "1", "e"), new Value("61e".getBytes()));
+
+    return data;
+  }
+
+  private static TreeMap<Key,Value> createOutputData() {
+    TreeMap<Key,Value> data = new TreeMap<>();
+
+    Text row = null;
+    List<Key> keys = new ArrayList<>();
+    List<Value> values = new ArrayList<>();
+
+    // Generate the output data from the input data
+    for (Entry<Key,Value> entry : INPUT_DATA.entrySet()) {
+      if (null == row) {
+        row = entry.getKey().getRow();
+      }
+
+      if (!row.equals(entry.getKey().getRow())) {
+        // Moved to the next row
+        try {
+          // Serialize and save
+          Value encoded = WholeRowIterator.encodeRow(keys, values);
+          data.put(new Key(row), encoded);
+        } catch (IOException e) {
+          throw new RuntimeException(e);
+        }
+
+        // Empty the aggregated k-v's
+        keys = new ArrayList<>();
+        values = new ArrayList<>();
+        // Set the new current row
+        row = entry.getKey().getRow();
+      }
+
+      // Aggregate the current row
+      keys.add(entry.getKey());
+      values.add(entry.getValue());
+    }
+
+    if (!keys.isEmpty()) {
+      try {
+        Value encoded = WholeRowIterator.encodeRow(keys, values);
+        data.put(new Key(row), encoded);
+      } catch (IOException e) {
+        throw new RuntimeException(e);
+      }
+    }
+
+    return data;
+  }
+
+  private static IteratorTestInput getIteratorInput() {
+    return new IteratorTestInput(WholeRowIterator.class, Collections.<String,String> emptyMap(), new Range(), INPUT_DATA);
+  }
+
+  private static IteratorTestOutput getIteratorOutput() {
+    return new IteratorTestOutput(OUTPUT_DATA);
+  }
+
+  public WholeRowIteratorTest(IteratorTestInput input, IteratorTestOutput expectedOutput, IteratorTestCase testCase) {
+    super(input, expectedOutput, testCase);
+  }
+
+}
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java b/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
deleted file mode 100644
index be9e320..0000000
--- a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
+++ /dev/null
@@ -1,233 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.test.replication;
-
-import java.util.Map.Entry;
-import java.util.Set;
-
-import org.apache.accumulo.cluster.ClusterUser;
-import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.security.tokens.KerberosToken;
-import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-import org.apache.accumulo.core.conf.Property;
-import org.apache.accumulo.core.data.Key;
-import org.apache.accumulo.core.data.Mutation;
-import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.TablePermission;
-import org.apache.accumulo.harness.AccumuloIT;
-import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
-import org.apache.accumulo.harness.MiniClusterHarness;
-import org.apache.accumulo.harness.TestingKdc;
-import org.apache.accumulo.master.replication.SequentialWorkAssigner;
-import org.apache.accumulo.minicluster.ServerType;
-import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
-import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
-import org.apache.accumulo.minicluster.impl.ProcessReference;
-import org.apache.accumulo.server.replication.ReplicaSystemFactory;
-import org.apache.accumulo.test.functional.KerberosIT;
-import org.apache.accumulo.tserver.TabletServer;
-import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
-import org.apache.hadoop.fs.RawLocalFileSystem;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.collect.Iterators;
-
-/**
- * Ensure that replication occurs using keytabs instead of password (not to mention SASL)
- */
-public class KerberosReplicationIT extends AccumuloIT {
-  private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
-
-  private static TestingKdc kdc;
-  private static String krbEnabledForITs = null;
-  private static ClusterUser rootUser;
-
-  @BeforeClass
-  public static void startKdc() throws Exception {
-    kdc = new TestingKdc();
-    kdc.start();
-    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
-    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
-      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
-    }
-    rootUser = kdc.getRootUser();
-  }
-
-  @AfterClass
-  public static void stopKdc() throws Exception {
-    if (null != kdc) {
-      kdc.stop();
-    }
-    if (null != krbEnabledForITs) {
-      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
-    }
-  }
-
-  private MiniAccumuloClusterImpl primary, peer;
-  private String PRIMARY_NAME = "primary", PEER_NAME = "peer";
-
-  @Override
-  protected int defaultTimeoutSeconds() {
-    return 60 * 3;
-  }
-
-  private MiniClusterConfigurationCallback getConfigCallback(final String name) {
-    return new MiniClusterConfigurationCallback() {
-      @Override
-      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
-        cfg.setNumTservers(1);
-        cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
-        cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "2M");
-        cfg.setProperty(Property.GC_CYCLE_START, "1s");
-        cfg.setProperty(Property.GC_CYCLE_DELAY, "5s");
-        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNMENT_SLEEP, "1s");
-        cfg.setProperty(Property.MASTER_REPLICATION_SCAN_INTERVAL, "1s");
-        cfg.setProperty(Property.REPLICATION_NAME, name);
-        cfg.setProperty(Property.REPLICATION_MAX_UNIT_SIZE, "8M");
-        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNER, SequentialWorkAssigner.class.getName());
-        cfg.setProperty(Property.TSERV_TOTAL_MUTATION_QUEUE_MAX, "1M");
-        coreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
-      }
-    };
-  }
-
-  @Before
-  public void setup() throws Exception {
-    MiniClusterHarness harness = new MiniClusterHarness();
-
-    // Create a primary and a peer instance, both with the same "root" user
-    primary = harness.create(getClass().getName(), testName.getMethodName(), new PasswordToken("unused"), getConfigCallback(PRIMARY_NAME), kdc);
-    primary.start();
-
-    peer = harness.create(getClass().getName(), testName.getMethodName() + "_peer", new PasswordToken("unused"), getConfigCallback(PEER_NAME), kdc);
-    peer.start();
-
-    // Enable kerberos auth
-    Configuration conf = new Configuration(false);
-    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
-    UserGroupInformation.setConfiguration(conf);
-  }
-
-  @After
-  public void teardown() throws Exception {
-    if (null != peer) {
-      peer.stop();
-    }
-    if (null != primary) {
-      primary.stop();
-    }
-  }
-
-  @Test
-  public void dataReplicatedToCorrectTable() throws Exception {
-    // Login as the root user
-    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
-
-    final KerberosToken token = new KerberosToken();
-    final Connector primaryConn = primary.getConnector(rootUser.getPrincipal(), token);
-    final Connector peerConn = peer.getConnector(rootUser.getPrincipal(), token);
-
-    ClusterUser replicationUser = kdc.getClientPrincipal(0);
-
-    // Create user for replication to the peer
-    peerConn.securityOperations().createLocalUser(replicationUser.getPrincipal(), null);
-
-    primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_USER.getKey() + PEER_NAME, replicationUser.getPrincipal());
-    primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_KEYTAB.getKey() + PEER_NAME, replicationUser.getKeytab().getAbsolutePath());
-
-    // ...peer = AccumuloReplicaSystem,instanceName,zookeepers
-    primaryConn.instanceOperations().setProperty(
-        Property.REPLICATION_PEERS.getKey() + PEER_NAME,
-        ReplicaSystemFactory.getPeerConfigurationValue(AccumuloReplicaSystem.class,
-            AccumuloReplicaSystem.buildConfiguration(peerConn.getInstance().getInstanceName(), peerConn.getInstance().getZooKeepers())));
-
-    String primaryTable1 = "primary", peerTable1 = "peer";
-
-    // Create tables
-    primaryConn.tableOperations().create(primaryTable1);
-    String masterTableId1 = primaryConn.tableOperations().tableIdMap().get(primaryTable1);
-    Assert.assertNotNull(masterTableId1);
-
-    peerConn.tableOperations().create(peerTable1);
-    String peerTableId1 = peerConn.tableOperations().tableIdMap().get(peerTable1);
-    Assert.assertNotNull(peerTableId1);
-
-    // Grant write permission
-    peerConn.securityOperations().grantTablePermission(replicationUser.getPrincipal(), peerTable1, TablePermission.WRITE);
-
-    // Replicate this table to the peerClusterName in a table with the peerTableId table id
-    primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION.getKey(), "true");
-    primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION_TARGET.getKey() + PEER_NAME, peerTableId1);
-
-    // Write some data to table1
-    BatchWriter bw = primaryConn.createBatchWriter(primaryTable1, new BatchWriterConfig());
-    long masterTable1Records = 0l;
-    for (int rows = 0; rows < 2500; rows++) {
-      Mutation m = new Mutation(primaryTable1 + rows);
-      for (int cols = 0; cols < 100; cols++) {
-        String value = Integer.toString(cols);
-        m.put(value, "", value);
-        masterTable1Records++;
-      }
-      bw.addMutation(m);
-    }
-
-    bw.close();
-
-    log.info("Wrote all data to primary cluster");
-
-    Set<String> filesFor1 = primaryConn.replicationOperations().referencedFiles(primaryTable1);
-
-    // Restart the tserver to force a close on the WAL
-    for (ProcessReference proc : primary.getProcesses().get(ServerType.TABLET_SERVER)) {
-      primary.killProcess(ServerType.TABLET_SERVER, proc);
-    }
-    primary.exec(TabletServer.class);
-
-    log.info("Restarted the tserver");
-
-    // Read the data -- the tserver is back up and running and tablets are assigned
-    Iterators.size(primaryConn.createScanner(primaryTable1, Authorizations.EMPTY).iterator());
-
-    // Wait for both tables to be replicated
-    log.info("Waiting for {} for {}", filesFor1, primaryTable1);
-    primaryConn.replicationOperations().drain(primaryTable1, filesFor1);
-
-    long countTable = 0l;
-    for (Entry<Key,Value> entry : peerConn.createScanner(peerTable1, Authorizations.EMPTY)) {
-      countTable++;
-      Assert.assertTrue("Found unexpected key-value" + entry.getKey().toStringNoTruncate() + " " + entry.getValue(), entry.getKey().getRow().toString()
-          .startsWith(primaryTable1));
-    }
-
-    log.info("Found {} records in {}", countTable, peerTable1);
-    Assert.assertEquals(masterTable1Records, countTable);
-  }
-}
diff --git a/test/system/agitator/.gitignore b/test/system/agitator/.gitignore
new file mode 100644
index 0000000..3429b01
--- /dev/null
+++ b/test/system/agitator/.gitignore
@@ -0,0 +1,3 @@
+*~
+*.ini
+*.pyc
diff --git a/test/system/agitator/README.md b/test/system/agitator/README.md
new file mode 100644
index 0000000..fdff65b
--- /dev/null
+++ b/test/system/agitator/README.md
@@ -0,0 +1,39 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements.  See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+Agitator: randomly kill processes
+===========================
+
+The agitator is used to randomly select processes for termination during
+system tests.
+
+Configure the agitator using the provided agitator.ini.example file as a template.
+
+Create a list of hosts to be agitated:
+
+	$ cp ../../../conf/slaves hosts
+	$ echo master >> hosts
+	$ echo namenode >> hosts
+
+The agitator can kill and restart any part of the Accumulo ecosystem:
+ZooKeepers, the namenode, datanodes, tablet servers, and the master.
+You can choose to agitate them all with the "--all" option:
+
+	$ ./agitator.py --all --hosts=hosts --config=agitator.ini --log DEBUG
+
+You will need passwordless ssh access to all of your hosts, as a user
+that can kill and start the services.
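+
+For example, to agitate only the tablet servers and datanodes:
+
+	$ ./agitator.py --tservers --datanodes --hosts=hosts --config=agitator.ini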
diff --git a/test/system/agitator/agitator.ini.example b/test/system/agitator/agitator.ini.example
new file mode 100644
index 0000000..3512561
--- /dev/null
+++ b/test/system/agitator/agitator.ini.example
@@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+[DEFAULT]
+install=%(env.pwd)s/../../../..
+user=%(env.user)s
+
+[agitator]
+kill=kill -9
+ssh=ssh -q -A -o StrictHostKeyChecking=no
+sleep=300
+sleep.restart=30
+sleep.jitter=30
+
+[accumulo]
+home=%(install)s/accumulo
+tserver.kill.min=1
+tserver.kill.max=1
+tserver.frequency=0.8
+
+master.kill.min=1
+master.kill.max=1
+master.frequency=0.1
+
+gc.kill.min=1
+gc.kill.max=1
+gc.frequency=0.1
+
+[hadoop]
+home=%(install)s/hadoop
+bin=%(home)s/bin
+datanode.frequency=0.8
+datanode.kill.min=1
+datanode.kill.max=1
+namenode.frequency=0.05
+namenode.kill.min=1
+namenode.kill.max=1
+secondarynamenode.frequency=0.05
+secondarynamenode.kill.min=1
+secondarynamenode.kill.max=1
+
+[zookeeper]
+home=%(install)s/zookeeper
+frequency=0.05
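+
+# Note: on each agitation cycle a process type is considered for a kill
+# with probability given by its "frequency"; "kill.min" and "kill.max"
+# bound how many hosts have a process of that type killed per cycle.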
diff --git a/test/system/agitator/agitator.py b/test/system/agitator/agitator.py
new file mode 100755
index 0000000..db94546
--- /dev/null
+++ b/test/system/agitator/agitator.py
@@ -0,0 +1,241 @@
+#! /usr/bin/python
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import random
+import logging
+import ConfigParser
+
+# add the environment variables as default settings
+import os
+defaults=dict([('env.' + k, v) for k, v in os.environ.iteritems()])
+config = ConfigParser.ConfigParser(defaults)
+
+# things you can do to a particular kind of process
+class Proc:
+   program = 'Unknown'
+   _frequencyToKill = 1.0
+
+   def start(self, host):
+       pass
+
+   def find(self, host):
+       pass
+
+   def numberToKill(self):
+       return (1, 1)
+
+   def frequencyToKill(self):
+       return self._frequencyToKill
+
+   def user(self):
+       return config.get(self.program, 'user')
+
+   def kill(self, host, pid):
+      kill = config.get('agitator', 'kill').split()
+      code, stdout, stderr = self.runOn(host, kill + [pid])
+      if code != 0:
+         logging.warn("Unable to kill %d on %s (%s)", pid, host, stderr)
+
+   def runOn(self, host, cmd):
+      ssh = config.get('agitator', 'ssh').split()
+      return self.run(ssh + ["%s@%s" % (self.user(), host)] + cmd)
+
+   def run(self, cmd):
+      import subprocess
+      cmd = map(str, cmd)
+      logging.debug('Running %s', cmd)
+      p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+      stdout, stderr = p.communicate()
+      if stdout.strip():
+         logging.debug("%s", stdout.strip())
+      if stderr.strip():
+         logging.error("%s", stderr.strip())
+      if p.returncode != 0:
+         logging.error("Problem running %s", ' '.join(cmd))
+      return p.returncode, stdout, stderr
+
+   def __repr__(self):
+      return self.program
+
+class Zookeeper(Proc):
+   program = 'zookeeper'
+   def __init__(self):
+      self._frequencyToKill = config.getfloat(self.program, 'frequency')
+
+   def start(self, host):
+      self.runOn(host, [config.get(self.program, 'home') + '/bin/zkServer.sh start'])
+
+   def find(self, host):
+      code, stdout, stderr = self.runOn(host, ['pgrep -f [Q]uorumPeerMain || true'])
+      return map(int, [line for line in stdout.split("\n") if line])
+
+class Hadoop(Proc):
+   section = 'hadoop'
+   def __init__(self, program):
+      self.program = program
+      self._frequencyToKill = config.getfloat(self.section, program + '.frequency')
+      self.minimumToKill = config.getint(self.section, program + '.kill.min')
+      self.maximumToKill = config.getint(self.section, program + '.kill.max')
+
+   def start(self, host):
+      binDir = config.get(self.section, 'bin')
+      self.runOn(host, ['nohup %s/hdfs %s < /dev/null >/dev/null 2>&1 &' %(binDir, self.program)])
+
+   def find(self, host):
+      code, stdout, stderr = self.runOn(host, ["pgrep -f 'proc[_]%s' || true" % (self.program,)])
+      return map(int, [line for line in stdout.split("\n") if line])
+
+   def numberToKill(self):
+      return (self.minimumToKill, self.maximumToKill)
+
+   def user(self):
+      return config.get(self.section, 'user')
+
+class Accumulo(Hadoop):
+   section = 'accumulo'
+   def start(self, host):
+      home = config.get(self.section, 'home')
+      self.runOn(host, ['nohup %s/bin/accumulo %s </dev/null >/dev/null 2>&1 & ' %(home, self.program)])
+
+   def find(self, host):
+      code, stdout, stderr = self.runOn(host, ["pgrep -f 'app[=]%s' || true" % self.program])
+      return map(int, [line for line in stdout.split("\n") if line])
+
+def fail(msg):
+   import sys
+   logging.critical(msg)
+   sys.exit(1)
+
+def jitter(n):
+   return random.random() * n - n / 2
+
+def sleep(n):
+   if n > 0:
+       logging.info("Sleeping %.2f", n)
+       import time
+       time.sleep(n)
+
+def agitate(hosts, procs):
+   starters = []
+
+   logging.info("Agitating %s on %d hosts", procs, len(hosts))
+
+   section = 'agitator'
+
+   # repeatedly...
+   while True:
+      if starters:
+         # start up services that were previously killed
+         t = max(0, config.getfloat(section, 'sleep.restart') + jitter(config.getfloat(section, 'sleep.jitter')))
+         sleep(t)
+         for host, proc in starters:
+            logging.info('Starting %s on %s', proc, host)
+            proc.start(host)
+         starters = []
+
+      # wait some time
+      t = max(0, config.getfloat(section, 'sleep') + jitter(config.getfloat(section, 'sleep.jitter')))
+      sleep(t)
+
+      # for some processes
+      for p in procs:
+
+         # roll dice: should it be killed?
+         if random.random() < p.frequencyToKill():
+
+            # find them
+            # Use a thread pool: multiprocessing.Pool cannot pickle a
+            # local closure, and Pool is not a context manager on Python 2
+            from multiprocessing.pool import ThreadPool
+            def finder(host):
+               return host, p.find(host)
+            pool = ThreadPool(5)
+            try:
+               result = pool.map(finder, hosts)
+            finally:
+               pool.close()
+            candidates = {}
+            for host, pids in result:
+               if pids:
+                  candidates[host] = pids
+
+            # how many?
+            minKill, maxKill = p.numberToKill()
+            count = min(random.randrange(minKill, maxKill + 1), len(candidates))
+
+            # pick the victims
+            doomedHosts = random.sample(list(candidates), count)
+
+            # kill them
+            logging.info("Killing %s on %s", p, doomedHosts)
+            for doomedHost in doomedHosts:
+               pids = candidates[doomedHost]
+               if not pids:
+                  logging.error("Unable to kill any %s on %s: no processes of that type are running", p, doomedHost)
+               else:
+                  pid = random.choice(pids)
+                  logging.debug("Killing %s (%d) on %s", p, pid, doomedHost)
+                  p.kill(doomedHost, pid)
+                  # remember to restart them later
+                  starters.append((doomedHost, p))
+
+def main():
+   import argparse
+   parser = argparse.ArgumentParser(description='Kill random processes')
+   parser.add_argument('--log', help='set the log level', default='INFO')
+   parser.add_argument('--namenodes', help='randomly kill namenodes', action="store_true")
+   parser.add_argument('--secondary', help='randomly kill secondary namenode', action="store_true")
+   parser.add_argument('--datanodes', help='randomly kill datanodes', action="store_true")
+   parser.add_argument('--tservers', help='randomly kill tservers', action="store_true")
+   parser.add_argument('--masters', help='randomly kill masters', action="store_true")
+   parser.add_argument('--zookeepers', help='randomly kill zookeepers', action="store_true")
+   parser.add_argument('--gc', help='randomly kill the file garbage collector', action="store_true")
+   parser.add_argument('--all',
+                       help='kill any of the tservers, masters, datanodes, namenodes or zookeepers',
+                       action='store_true')
+   parser.add_argument('--hosts', type=argparse.FileType('r'), required=True)
+   parser.add_argument('--config', type=argparse.FileType('r'), required=True)
+   args = parser.parse_args()
+
+   config.read_file(args.config)
+
+   level = getattr(logging, args.log.upper(), None)
+   if not isinstance(level, int):
+      fail('Invalid log level: %s' % args.log)
+   logging.basicConfig(level=level)
+
+   procs = []
+   def addIf(flag, proc):
+      if flag or args.all:
+         procs.append(proc)
+
+   addIf(args.namenodes,  Hadoop('namenode'))
+   addIf(args.datanodes,  Hadoop('datanode'))
+   addIf(args.secondary,  Hadoop('secondarynamenode'))
+   addIf(args.tservers,   Accumulo('tserver'))
+   addIf(args.masters,    Accumulo('master'))
+   addIf(args.gc,         Accumulo('gc'))
+   addIf(args.zookeepers, Zookeeper())
+   if not procs:
+      fail("No processes to agitate!")
+
+   hosts = []
+   for line in args.hosts:
+      line = line.strip()
+      if line and line[0] != '#':
+         hosts.append(line)
+   if not hosts:
+      fail('No hosts to agitate!')
+
+   agitate(hosts, procs)
+
+if __name__ == '__main__':
+   main()
diff --git a/test/system/agitator/hosts.example b/test/system/agitator/hosts.example
new file mode 100644
index 0000000..63fb8bb
--- /dev/null
+++ b/test/system/agitator/hosts.example
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+localhost
diff --git a/test/system/upgrade_test.sh b/test/system/upgrade_test.sh
index 590c07c..651755d 100755
--- a/test/system/upgrade_test.sh
+++ b/test/system/upgrade_test.sh
@@ -28,21 +28,22 @@
 
 #TODO could support multinode configs, this script assumes single node config
 
-PREV=../../../../accumulo-1.5.0
+PREV=../../../accumulo-1.7.1
 CURR=../../
 DIR=/accumulo
 BULK=/tmp/upt
+INSTANCE=testUp
 
 pkill -f accumulo.start
 hadoop fs -rmr "$DIR"
 hadoop fs -rmr "$BULK"
-hadoop fs -mkdir "$BULK/fail"
+hadoop fs -mkdir -p "$BULK/fail"
 
-"$PREV/bin/accumulo" init --clear-instance-name --instance-name testUp --password secret
+"$PREV/bin/accumulo" init --clear-instance-name --instance-name $INSTANCE --password secret
 "$PREV/bin/start-all.sh"
 
-"$PREV/bin/accumulo" org.apache.accumulo.test.TestIngest -u root -p secret --timestamp 1 --size 50 --random 56 --rows 200000 --start 0 --cols 1  --createTable --splits 10
-"$PREV/bin/accumulo" org.apache.accumulo.test.TestIngest --rfile $BULK/bulk/test --timestamp 1 --size 50 --random 56 --rows 200000 --start 200000 --cols 1
+"$PREV/bin/accumulo" org.apache.accumulo.test.TestIngest -i $INSTANCE -u root -p secret --timestamp 1 --size 50 --random 56 --rows 200000 --start 0 --cols 1  --createTable --splits 10
+"$PREV/bin/accumulo" org.apache.accumulo.test.TestIngest -i $INSTANCE -u root -p secret --rfile $BULK/bulk/test --timestamp 1 --size 50 --random 56 --rows 200000 --start 200000 --cols 1
 
 echo -e "table test_ingest\nimportdirectory $BULK/bulk $BULK/fail false" | $PREV/bin/accumulo shell -u root -p secret
 if [[ $1 == dirty ]]; then
@@ -54,23 +55,23 @@
 echo "==== Starting Current ==="
 
 "$CURR/bin/start-all.sh"
-"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 1 --random 56 --rows 400000 --start 0 --cols 1 -u root -p secret
+"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 1 --random 56 --rows 400000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
 echo "compact -t test_ingest -w" | $CURR/bin/accumulo shell -u root -p secret
-"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 1 --random 56 --rows 400000 --start 0 --cols 1 -u root -p secret
+"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 1 --random 56 --rows 400000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
 
 
-"$CURR/bin/accumulo" org.apache.accumulo.test.TestIngest --timestamp 2 --size 50 --random 57 --rows 500000 --start 0 --cols 1 -u root -p secret
-"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -u root -p secret
+"$CURR/bin/accumulo" org.apache.accumulo.test.TestIngest --timestamp 2 --size 50 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
+"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
 echo "compact -t test_ingest -w" | $CURR/bin/accumulo shell -u root -p secret
-"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -u root -p secret
+"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
 
 "$CURR/bin/stop-all.sh"
 "$CURR/bin/start-all.sh"
 
-"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -u root -p secret
+"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
 
 pkill -9 -f accumulo.start
 "$CURR/bin/start-all.sh"
 
-"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -u root -p secret
+"$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
 
diff --git a/trace/pom.xml b/trace/pom.xml
index 2b79288..d8c5ef4 100644
--- a/trace/pom.xml
+++ b/trace/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.accumulo</groupId>
     <artifactId>accumulo-project</artifactId>
-    <version>1.7.3-SNAPSHOT</version>
+    <version>1.8.0-SNAPSHOT</version>
   </parent>
   <artifactId>accumulo-trace</artifactId>
   <name>Apache Accumulo Trace</name>
@@ -34,5 +34,10 @@
       <groupId>org.apache.htrace</groupId>
       <artifactId>htrace-core</artifactId>
     </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>